diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index d24ec4ae6f..30e551a122 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -1,34 +1,60 @@ --- -name: Bug report -about: Create a report to help us improve - +name: Report a Bug +about: Found an issue? Let us fix it. --- -**Describe the bug** -A clear and concise description of what the bug is. +Please ensure you do the following when reporting a bug: + +- [ ] Provide a concise description of what the bug is. +- [ ] Provide information about your environment. +- [ ] Provide clear steps to reproduce the bug. +- [ ] Attach applicable logs. Please do not attach screenshots showing logs unless you are unable to copy and paste the log data. +- [ ] Ensure any code / output examples are [properly formatted](https://docs.github.com/en/github/writing-on-github/basic-writing-and-formatting-syntax#quoting-code) for legibility. + +Note that some logs needed to troubleshoot may be found in the `/pgdata//pg_log` directory on your Postgres instance. + +An incomplete bug report can lead to delays in resolving the issue or the closing of a ticket, so please be as detailed as possible. + +If you are looking for [general support](https://access.crunchydata.com/documentation/postgres-operator/latest/support/), please view the [support](https://access.crunchydata.com/documentation/postgres-operator/latest/support/) page for where you can ask questions. + +Thanks for reporting the issue, we're looking forward to helping you! + +## Overview + +Add a concise description of what the bug is. + +## Environment + +Please provide the following details: + +- Platform: (`Kubernetes`, `OpenShift`, `Rancher`, `GKE`, `EKS`, `AKS` etc.) +- Platform Version: (e.g. `1.20.3`, `4.7.0`) +- PGO Image Tag: (e.g. `ubi8-5.x.y-0`) +- Postgres Version (e.g. `15`) +- Storage: (e.g. `hostpath`, `nfs`, or the name of your storage class) + +## Steps to Reproduce + +### REPRO + +Provide steps to get to the error condition: + +1. Run `...` +1. Do `...` +1. Try `...` + +### EXPECTED + +1. Provide the behavior that you expected. -**To Reproduce** -Steps to reproduce the behavior: -1. Go to '...' -2. Click on '....' -3. Scroll down to '....' -4. See error +### ACTUAL -**Expected behavior** -A clear and concise description of what you expected to happen. +1. Describe what actually happens -**Screenshots** -If applicable, add screenshots to help explain your problem. +## Logs -**Please tell us about your environment:** +Please provided appropriate log output or any configuration files that may help troubleshoot the issue. **DO NOT** include sensitive information, such as passwords. -* Operating System: -* Where is this running ( Local, Cloud Provider) -* Storage being used (NFS, Hostpath, Gluster, etc): -* Container Image Tag: -* PostgreSQL Version: -* Platform (Docker, Kubernetes, OpenShift): -* Platform Version: +## Additional Information -**Additional context** -Add any other context about the problem here. +Please provide any additional information that may be helpful. diff --git a/.github/ISSUE_TEMPLATE/enhancement-request.md b/.github/ISSUE_TEMPLATE/enhancement-request.md deleted file mode 100644 index 766b47d052..0000000000 --- a/.github/ISSUE_TEMPLATE/enhancement-request.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -name: Enhancement request -about: Suggest an improvement to a current an existing feature - ---- - -**What is the motivation or use case for the change? 
** - -**Describe the solution you'd like** -A clear and concise description of what you want to happen. - -**Please tell us about your environment:** - -* Operating System: -* Where is this running ( Local, Cloud Provider) -* Storage being used (NFS, Hostpath, Gluster, etc): -* Container Image Tag: -* PostgreSQL Version: -* Platform (Docker, Kubernetes, OpenShift): -* Platform Version: - -**Additional context** -Add any other context or screenshots about the enhancement request here. diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md index 487abdabe6..4de2077c77 100644 --- a/.github/ISSUE_TEMPLATE/feature_request.md +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -1,26 +1,42 @@ --- -name: Feature request -about: Suggest an idea for this project - +name: Feature Request +about: Help us improve PGO! --- -**What is the motivation or use case for the feature? ** +Have an idea to improve PGO? We'd love to hear it! We're going to need some information from you to learn more about your feature requests. + +Please be sure you've done the following: + +- [ ] Provide a concise description of your feature request. +- [ ] Describe your use case. Detail the problem you are trying to solve. +- [ ] Describe how you envision that the feature would work. +- [ ] Provide general information about your current PGO environment. + +## Overview + +Provide a concise description of your feature request. + +## Use Case + +Describe your use case. Why do you want this feature? What problem will it solve? Why will it help you? Why will it make it easier to use PGO? + +## Desired Behavior + +Describe how the feature would work. How do you envision interfacing with it? + +## Environment -**Describe the solution you'd like** -A clear and concise description of what you want to happen. +Tell us about your environment: -**Describe any alternatives you've considered** -A clear and concise description of any alternative solutions or features you've considered. +Please provide the following details: -**Please tell us about your environment:** +- Platform: (`Kubernetes`, `OpenShift`, `Rancher`, `GKE`, `EKS`, `AKS` etc.) +- Platform Version: (e.g. `1.20.3`, `4.7.0`) +- PGO Image Tag: (e.g. `ubi8-5.x.y-0`) +- Postgres Version (e.g. `15`) +- Storage: (e.g. `hostpath`, `nfs`, or the name of your storage class) +- Number of Postgres clusters: (`XYZ`) -* Operating System: -* Where is this running ( Local , Cloud Provider) -* Storage being used (NFS, Hostpath, Gluster, etc): -* Container Image Tag: -* PostgreSQL Version: -* Platform (Docker, Kubernetes, OpenShift): -* Platform Version: +## Additional Information -**Additional context** -Add any other context or screenshots about the feature request here. +Please provide any additional information that may be helpful. diff --git a/.github/ISSUE_TEMPLATE/support---question-and-answer.md b/.github/ISSUE_TEMPLATE/support---question-and-answer.md index dbf1c86be2..271caa9029 100644 --- a/.github/ISSUE_TEMPLATE/support---question-and-answer.md +++ b/.github/ISSUE_TEMPLATE/support---question-and-answer.md @@ -1,29 +1,35 @@ --- -name: Support - Question and Answer -about: " Have a quick question, let us know." - +name: Support +about: "Learn how to interact with the PGO community" --- -** Which example are you working with? 
** +If you believe you have found have found a bug, please open up [Bug Report](https://github.com/CrunchyData/postgres-operator/issues/new?template=bug_report.md) + +If you have a feature request, please open up a [Feature Request](https://github.com/CrunchyData/postgres-operator/issues/new?template=feature_request.md) + +You can find information about general PGO [support](https://access.crunchydata.com/documentation/postgres-operator/latest/support/) at: + +[https://access.crunchydata.com/documentation/postgres-operator/latest/support/](https://access.crunchydata.com/documentation/postgres-operator/latest/support/) + +## Questions + +For questions that are neither bugs nor feature requests, please be sure to -**What is the current behavior?** +- [ ] Provide information about your environment (see below for more information). +- [ ] Provide any steps or other relevant details related to your question. +- [ ] Attach logs, where applicable. Please do not attach screenshots showing logs unless you are unable to copy and paste the log data. +- [ ] Ensure any code / output examples are [properly formatted](https://docs.github.com/en/github/writing-on-github/basic-writing-and-formatting-syntax#quoting-code) for legibility. -**What is the expected behavior?** +Besides Pod logs, logs may also be found in the `/pgdata/pg/log` directory on your Postgres instance. -**Other information** (e.g. detailed explanation, related issues, etc) +If you are looking for [general support](https://access.crunchydata.com/documentation/postgres-operator/latest/support/), please view the [support](https://access.crunchydata.com/documentation/postgres-operator/latest/support/) page for where you can ask questions. -**Please tell us about your environment:** +### Environment -* Operating System: -* Where is this running ( Local , Cloud Provider) -* Storage being used (NFS, Hostpath, Gluster, etc): -* Container Image Tag: -* PostgreSQL Version: -* Platform (Docker, Kubernetes, OpenShift): -* Platform Version: +Please provide the following details: -If possible please run the following on the kubernetes or OpenShift (oc) commands and provide the result: - kubectl describe yourPodName - kubectl describe pvc - kubectl get nodes - kubectl log yourPodName +- Platform: (`Kubernetes`, `OpenShift`, `Rancher`, `GKE`, `EKS`, `AKS` etc.) +- Platform Version: (e.g. `1.20.3`, `4.7.0`) +- PGO Image Tag: (e.g. `ubi8-5.x.y-0`) +- Postgres Version (e.g. `15`) +- Storage: (e.g. 
`hostpath`, `nfs`, or the name of your storage class) diff --git a/.github/actions/awk-matcher.json b/.github/actions/awk-matcher.json new file mode 100644 index 0000000000..852a723577 --- /dev/null +++ b/.github/actions/awk-matcher.json @@ -0,0 +1,13 @@ +{ + "problemMatcher": [ + { + "owner": "awk", + "pattern": [ + { + "regexp": "^([^:]+):([^ ]+) (([^:]+):.*)$", + "file": 1, "line": 2, "message": 3, "severity": 4 + } + ] + } + ] +} diff --git a/.github/actions/k3d/action.yaml b/.github/actions/k3d/action.yaml new file mode 100644 index 0000000000..395d5f1116 --- /dev/null +++ b/.github/actions/k3d/action.yaml @@ -0,0 +1,94 @@ +name: k3d +description: Start k3s using k3d +inputs: + k3d-tag: + default: latest + required: true + description: > + Git tag from https://github.com/k3d-io/k3d/releases or "latest" + k3s-channel: + default: latest + required: true + description: > + https://docs.k3s.io/upgrades/manual#release-channels + prefetch-images: + required: true + description: > + Each line is the name of an image to fetch onto all Kubernetes nodes + prefetch-timeout: + default: 90s + required: true + description: > + Amount of time to wait for images to be fetched + +outputs: + k3d-version: + value: ${{ steps.k3d.outputs.k3d }} + description: > + K3d version + kubernetes-version: + value: ${{ steps.k3s.outputs.server }} + description: > + Kubernetes server version, as reported by the Kubernetes API + pause-image: + value: ${{ steps.k3s.outputs.pause-image }} + description: > + Pause image for prefetch images DaemonSet + +runs: + using: composite + steps: + - id: k3d + name: Install k3d + shell: bash + env: + K3D_TAG: ${{ inputs.k3d-tag }} + run: | + curl --fail --silent https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | + TAG="${K3D_TAG#latest}" bash + k3d version | awk '{ print "${tolower($1)}=${$3}" >> $GITHUB_OUTPUT }' + + - id: k3s + name: Start k3s + shell: bash + run: | + k3d cluster create --image '+${{ inputs.k3s-channel }}' --no-lb --timeout=2m --wait + kubectl version | awk '{ print "${tolower($1)}=${$3}" >> $GITHUB_OUTPUT }' + + PAUSE_IMAGE=$(docker exec $(k3d node list --output json | jq --raw-output 'first.name') \ + k3s agent --help | awk '$1 == "--pause-image" { + match($0, /default: "[^"]*"/); + print substr($0, RSTART+10, RLENGTH-11) + }') + echo "pause-image=${PAUSE_IMAGE}" >> $GITHUB_OUTPUT + + - name: Prefetch container images + shell: bash + env: + INPUT_IMAGES: ${{ inputs.prefetch-images }} + INPUT_TIMEOUT: ${{ inputs.prefetch-timeout }} + run: | + jq <<< "$INPUT_IMAGES" --raw-input 'select(. 
!= "")' | + jq --slurp \ + --arg pause '${{ steps.k3s.outputs.pause-image }}' \ + --argjson labels '{"name":"image-prefetch"}' \ + --argjson name '"image-prefetch"' \ + '{ + apiVersion: "apps/v1", kind: "DaemonSet", + metadata: { name: $name, labels: $labels }, + spec: { + selector: { matchLabels: $labels }, + template: { + metadata: { labels: $labels }, + spec: { + initContainers: to_entries | map({ + name: "c\(.key)", image: .value, command: ["true"], + }), + containers: [{ name: "pause", image: $pause }] + } + } + } + }' | + kubectl create --filename=- + kubectl rollout status daemonset.apps/image-prefetch --timeout "$INPUT_TIMEOUT" || + kubectl describe daemonset.apps/image-prefetch diff --git a/.github/dependabot.yml b/.github/dependabot.yml new file mode 100644 index 0000000000..639a059edc --- /dev/null +++ b/.github/dependabot.yml @@ -0,0 +1,16 @@ +# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file +# https://docs.github.com/code-security/dependabot/dependabot-version-updates/customizing-dependency-updates +# +# See: https://www.github.com/dependabot/dependabot-core/issues/4605 +--- +# yaml-language-server: $schema=https://json.schemastore.org/dependabot-2.0.json +version: 2 +updates: + - package-ecosystem: github-actions + directory: / + schedule: + interval: weekly + day: tuesday + groups: + all-github-actions: + patterns: ['*'] diff --git a/.github/issue_template.md b/.github/issue_template.md deleted file mode 100644 index b7f775c0e5..0000000000 --- a/.github/issue_template.md +++ /dev/null @@ -1,42 +0,0 @@ -**I'm submitting a ...** - - - - [ ] bug report - - [ ] feature request - - [ ] support request - - - -**Do you want to request a *feature* or report a *bug*?** - - - -**What is the current behavior?** - - - -**If the current behavior is a bug, please provide the steps to reproduce:** - - - -**What is the expected behavior?** - - - -**What is the motivation or use case for changing the behavior?** - - - -**Other information** (e.g. detailed explanation, related issues, suggestions how to fix, etc) - - - -**Please tell us about your environment:** - - - Operating System: - - Container Image Tag: - - Operator Version: - - Storage (NFS, hostpath, storage class): - - PostgreSQL Version: - - Platform (Docker, Kubernetes, OpenShift): - - Platform Version: diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 009442f462..b03369bf09 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -4,24 +4,27 @@ - [ ] Have you added an explanation of what your changes do and why you'd like them to be included? - [ ] Have you updated or added documentation for the change, as applicable? - [ ] Have you tested your changes on all related environments with successful results, as applicable? + - [ ] Have you added automated tests? **Type of Changes:** - - [ ] Bug fix (non-breaking change which fixes an issue) - - [ ] New feature (non-breaking change which adds functionality) - - [ ] Breaking change (fix or feature that would cause existing functionality to change) + - [ ] New feature + - [ ] Bug fix + - [ ] Documentation + - [ ] Testing enhancement + - [ ] Other - -**What is the current behavior? 
(link to any open issues here)** +**What is the current behavior (link to any open issues here)?** **What is the new behavior (if this is a feature change)?** +- [ ] Breaking change (fix or feature that would cause existing functionality to change) -**Other information**: +**Other Information**: diff --git a/.github/stale.yml b/.github/stale.yml deleted file mode 100644 index c7e328933c..0000000000 --- a/.github/stale.yml +++ /dev/null @@ -1,58 +0,0 @@ -# Configuration for probot-stale - https://github.com/probot/stale - -# Number of days of inactivity before an Issue or Pull Request becomes stale -daysUntilStale: 60 - -# Number of days of inactivity before an Issue or Pull Request with the stale label is closed. -# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale. -daysUntilClose: 7 - -# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable -exemptLabels: - - pinned - - security - - "[Status] Maybe Later" - -# Set to true to ignore issues in a project (defaults to false) -exemptProjects: false - -# Set to true to ignore issues in a milestone (defaults to false) -exemptMilestones: false - -# Set to true to ignore issues with an assignee (defaults to false) -exemptAssignees: false - -# Label to use when marking as stale -staleLabel: wontfix - -# Comment to post when marking as stale. Set to `false` to disable -markComment: > - This issue has been automatically marked as stale because it has not had - recent activity. It will be closed if no further activity occurs. Thank you - for your contributions. - -# Comment to post when removing the stale label. -# unmarkComment: > -# Your comment here. - -# Comment to post when closing a stale Issue or Pull Request. -# closeComment: > -# Your comment here. - -# Limit the number of actions per hour, from 1-30. Default is 30 -limitPerRun: 30 - -# Limit to only `issues` or `pulls` -# only: issues - -# Optionally, specify configuration settings that are specific to just 'issues' or 'pulls': -# pulls: -# daysUntilStale: 30 -# markComment: > -# This pull request has been automatically marked as stale because it has not had -# recent activity. It will be closed if no further activity occurs. Thank you -# for your contributions. - -# issues: -# exemptLabels: -# - confirmed diff --git a/.github/workflows/codeql-analysis.yaml b/.github/workflows/codeql-analysis.yaml new file mode 100644 index 0000000000..ae4d24d122 --- /dev/null +++ b/.github/workflows/codeql-analysis.yaml @@ -0,0 +1,40 @@ +name: CodeQL + +on: + pull_request: + push: + branches: + - main + schedule: + - cron: '10 18 * * 2' + +env: + # Use the Go toolchain installed by setup-go + # https://github.com/actions/setup-go/issues/457 + GOTOOLCHAIN: local + +jobs: + analyze: + runs-on: ubuntu-latest + permissions: + actions: read + contents: read + security-events: write + + if: ${{ github.repository == 'CrunchyData/postgres-operator' }} + + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-go@v5 + with: { go-version: stable } + + - name: Initialize CodeQL + uses: github/codeql-action/init@v3 + with: { languages: go } + + - name: Autobuild + # This action calls `make` which runs our "help" target. 
+ uses: github/codeql-action/autobuild@v3 + + - name: Perform CodeQL Analysis + uses: github/codeql-action/analyze@v3 diff --git a/.github/workflows/lint.yaml b/.github/workflows/lint.yaml new file mode 100644 index 0000000000..c715f2a1d7 --- /dev/null +++ b/.github/workflows/lint.yaml @@ -0,0 +1,39 @@ +name: Linters + +on: + pull_request: + +env: + # Use the Go toolchain installed by setup-go + # https://github.com/actions/setup-go/issues/457 + GOTOOLCHAIN: local + +jobs: + golangci-lint: + runs-on: ubuntu-latest + permissions: + contents: read + checks: write + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-go@v5 + with: { go-version: stable } + + - uses: golangci/golangci-lint-action@v6 + with: + version: latest + args: --timeout=5m + + # Count issues reported by disabled linters. The command always + # exits zero to ensure it does not fail the pull request check. + - name: Count non-blocking issues + run: | + golangci-lint run --config .golangci.next.yaml \ + --issues-exit-code 0 \ + --max-issues-per-linter 0 \ + --max-same-issues 0 \ + --out-format json | + jq --sort-keys 'reduce .Issues[] as $i ({}; .[$i.FromLinter] += 1)' | + awk >> "${GITHUB_STEP_SUMMARY}" ' + NR == 1 { print "```json" } { print } END { if (NR > 0) print "```" } + ' || true diff --git a/.github/workflows/test.yaml b/.github/workflows/test.yaml new file mode 100644 index 0000000000..e8174e4f95 --- /dev/null +++ b/.github/workflows/test.yaml @@ -0,0 +1,209 @@ +name: Tests + +on: + pull_request: + push: + branches: + - main + +env: + # Use the Go toolchain installed by setup-go + # https://github.com/actions/setup-go/issues/457 + GOTOOLCHAIN: local + +jobs: + go-test: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-go@v5 + with: { go-version: stable } + - run: make check + - run: make check-generate + + - name: Ensure go.mod is tidy + run: go mod tidy && git diff --exit-code -- go.mod + + kubernetes-api: + runs-on: ubuntu-latest + needs: [go-test] + strategy: + fail-fast: false + matrix: + kubernetes: ['default'] + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-go@v5 + with: { go-version: stable } + - run: go mod download + - run: ENVTEST_K8S_VERSION="${KUBERNETES#default}" make check-envtest + env: + KUBERNETES: "${{ matrix.kubernetes }}" + GO_TEST: go test --coverprofile 'envtest.coverage' --coverpkg ./internal/... + + # Upload coverage to GitHub + - run: gzip envtest.coverage + - uses: actions/upload-artifact@v4 + with: + name: "~coverage~kubernetes-api=${{ matrix.kubernetes }}" + path: envtest.coverage.gz + retention-days: 1 + + kubernetes-k3d: + if: "${{ github.repository == 'CrunchyData/postgres-operator' }}" + runs-on: ubuntu-latest + needs: [go-test] + strategy: + fail-fast: false + matrix: + kubernetes: [v1.31, v1.28] + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-go@v5 + with: { go-version: stable } + + - name: Start k3s + uses: ./.github/actions/k3d + with: + k3s-channel: "${{ matrix.kubernetes }}" + prefetch-images: | + registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.53.1-0 + registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.23-0 + registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.4-2 + + - run: make createnamespaces check-envtest-existing + env: + PGO_TEST_TIMEOUT_SCALE: 1.2 + GO_TEST: go test --coverprofile 'envtest-existing.coverage' --coverpkg ./internal/... 
+ + # Upload coverage to GitHub + - run: gzip envtest-existing.coverage + - uses: actions/upload-artifact@v4 + with: + name: "~coverage~kubernetes-k3d=${{ matrix.kubernetes }}" + path: envtest-existing.coverage.gz + retention-days: 1 + + kuttl-k3d: + runs-on: ubuntu-latest + needs: [go-test] + strategy: + fail-fast: false + matrix: + kubernetes: [v1.31, v1.30, v1.29, v1.28] + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-go@v5 + with: { go-version: stable } + + - name: Start k3s + uses: ./.github/actions/k3d + with: + k3s-channel: "${{ matrix.kubernetes }}" + prefetch-images: | + registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-4.30-31 + registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.53.1-0 + registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.23-0 + registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:latest + registry.developers.crunchydata.com/crunchydata/crunchy-upgrade:latest + registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.4-2 + registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-16.4-3.3-2 + registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-16.4-3.4-2 + registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-17.0-0 + registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-17.0-3.4-0 + - run: go mod download + - name: Build executable + run: PGO_VERSION='${{ github.sha }}' make build-postgres-operator + + - name: Get pgMonitor files. + run: make get-pgmonitor + env: + PGMONITOR_DIR: "${{ github.workspace }}/hack/tools/pgmonitor" + QUERIES_CONFIG_DIR: "${{ github.workspace }}/hack/tools/queries" + + # Start a Docker container with the working directory mounted. 
+ - name: Start PGO + run: | + kubectl apply --server-side -k ./config/namespace + kubectl apply --server-side -k ./config/dev + hack/create-kubeconfig.sh postgres-operator pgo + docker run --detach --network host --read-only \ + --volume "$(pwd):/mnt" --workdir '/mnt' --env 'PATH=/mnt/bin' \ + --env 'CHECK_FOR_UPGRADES=false' \ + --env 'QUERIES_CONFIG_DIR=/mnt/hack/tools/queries' \ + --env 'KUBECONFIG=hack/.kube/postgres-operator/pgo' \ + --env 'RELATED_IMAGE_PGADMIN=registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-4.30-31' \ + --env 'RELATED_IMAGE_PGBACKREST=registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.53.1-0' \ + --env 'RELATED_IMAGE_PGBOUNCER=registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.23-0' \ + --env 'RELATED_IMAGE_PGEXPORTER=registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:latest' \ + --env 'RELATED_IMAGE_PGUPGRADE=registry.developers.crunchydata.com/crunchydata/crunchy-upgrade:latest' \ + --env 'RELATED_IMAGE_POSTGRES_16=registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.4-2' \ + --env 'RELATED_IMAGE_POSTGRES_16_GIS_3.3=registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-16.4-3.3-2' \ + --env 'RELATED_IMAGE_POSTGRES_16_GIS_3.4=registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-16.4-3.4-2' \ + --env 'RELATED_IMAGE_POSTGRES_17=registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-17.0-0' \ + --env 'RELATED_IMAGE_POSTGRES_17_GIS_3.4=registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-17.0-3.4-0' \ + --env 'RELATED_IMAGE_STANDALONE_PGADMIN=registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-8.12-0' \ + --env 'PGO_FEATURE_GATES=TablespaceVolumes=true' \ + --name 'postgres-operator' ubuntu \ + postgres-operator + - name: Install kuttl + run: | + curl -Lo /usr/local/bin/kubectl-kuttl https://github.com/kudobuilder/kuttl/releases/download/v0.13.0/kubectl-kuttl_0.13.0_linux_x86_64 + chmod +x /usr/local/bin/kubectl-kuttl + + - run: make generate-kuttl + env: + KUTTL_PG_UPGRADE_FROM_VERSION: '16' + KUTTL_PG_UPGRADE_TO_VERSION: '17' + KUTTL_PG_VERSION: '16' + KUTTL_POSTGIS_VERSION: '3.4' + KUTTL_PSQL_IMAGE: 'registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.4-2' + - run: | + make check-kuttl && exit + failed=$? + echo '::group::PGO logs'; docker logs 'postgres-operator'; echo '::endgroup::' + exit $failed + env: + KUTTL_TEST: kubectl-kuttl test + - name: Stop PGO + run: docker stop 'postgres-operator' || true + + coverage-report: + if: ${{ success() || contains(needs.*.result, 'success') }} + runs-on: ubuntu-latest + needs: + - kubernetes-api + - kubernetes-k3d + steps: + - uses: actions/checkout@v4 + - uses: actions/setup-go@v5 + with: { go-version: stable } + - uses: actions/download-artifact@v4 + with: { path: download } + + # Combine the coverage profiles by taking the mode line from any one file + # and the data from all files. Write a list of functions with less than + # 100% coverage to the job summary, and upload a complete HTML report. + - name: Generate report + run: | + gunzip --keep download/*/*.gz + ( sed -e '1q' download/*/*.coverage + tail -qn +2 download/*/*.coverage ) > total.coverage + go tool cover --func total.coverage -o total-coverage.txt + go tool cover --html total.coverage -o total-coverage.html + + awk < total-coverage.txt ' + END { print "
<details><summary>Total Coverage: <code>" $3 " " $2 "</code></summary>" } + ' >> "${GITHUB_STEP_SUMMARY}" + + sed < total-coverage.txt -e '/100.0%/d' -e "s,$(go list -m)/,," | column -t | awk ' + NR == 1 { print "\n\n```" } { print } END { if (NR > 0) print "```\n\n"; print "</details>
" } + ' >> "${GITHUB_STEP_SUMMARY}" + + # Upload coverage to GitHub + - run: gzip total-coverage.html + - uses: actions/upload-artifact@v4 + with: + name: coverage-report=html + path: total-coverage.html.gz + retention-days: 15 diff --git a/.github/workflows/trivy.yaml b/.github/workflows/trivy.yaml new file mode 100644 index 0000000000..2a16e4929c --- /dev/null +++ b/.github/workflows/trivy.yaml @@ -0,0 +1,75 @@ +name: Trivy + +on: + pull_request: + push: + branches: + - main + +env: + # Use the Go toolchain installed by setup-go + # https://github.com/actions/setup-go/issues/457 + GOTOOLCHAIN: local + +jobs: + licenses: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + + # Trivy needs a populated Go module cache to detect Go module licenses. + - uses: actions/setup-go@v5 + with: { go-version: stable } + - run: go mod download + + # Report success only when detected licenses are listed in [/trivy.yaml]. + - name: Scan licenses + uses: aquasecurity/trivy-action@0.28.0 + env: + TRIVY_DEBUG: true + with: + scan-type: filesystem + scanners: license + exit-code: 1 + + vulnerabilities: + if: ${{ github.repository == 'CrunchyData/postgres-operator' }} + + permissions: + # for github/codeql-action/upload-sarif to upload SARIF results + security-events: write + + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v4 + + # Run trivy and log detected and fixed vulnerabilities + # This report should match the uploaded code scan report below + # and is a convenience/redundant effort for those who prefer to + # read logs and/or if anything goes wrong with the upload. + - name: Log all detected vulnerabilities + uses: aquasecurity/trivy-action@0.28.0 + with: + scan-type: filesystem + hide-progress: true + ignore-unfixed: true + scanners: secret,vuln + + # Upload actionable results to the GitHub Security tab. + # Pull request checks fail according to repository settings. + # - https://docs.github.com/en/code-security/code-scanning/integrating-with-code-scanning/uploading-a-sarif-file-to-github + # - https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning + - name: Report actionable vulnerabilities + uses: aquasecurity/trivy-action@0.28.0 + with: + scan-type: filesystem + ignore-unfixed: true + format: 'sarif' + output: 'trivy-results.sarif' + scanners: secret,vuln + + - name: Upload Trivy scan results to GitHub Security tab + uses: github/codeql-action/upload-sarif@v3 + with: + sarif_file: 'trivy-results.sarif' diff --git a/.gitignore b/.gitignore index 210f4ef69a..dcfd7074a3 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,4 @@ .DS_Store /vendor/ -tools +/testing/kuttl/e2e-generated*/ +gke_gcloud_auth_plugin_cache diff --git a/.gitmodules b/.gitmodules deleted file mode 100644 index b8907ec067..0000000000 --- a/.gitmodules +++ /dev/null @@ -1,3 +0,0 @@ -[submodule "hugo/themes/crunchy-hugo-theme"] - path = docs/themes/crunchy-hugo-theme - url = https://github.com/crunchydata/crunchy-hugo-theme diff --git a/.golangci.next.yaml b/.golangci.next.yaml new file mode 100644 index 0000000000..95b3f63347 --- /dev/null +++ b/.golangci.next.yaml @@ -0,0 +1,40 @@ +# https://golangci-lint.run/usage/configuration/ +# +# This file is for linters that might be interesting to enforce in the future. +# Rules that should be enforced immediately belong in [.golangci.yaml]. +# +# Both files are used by [.github/workflows/lint.yaml]. 
+ +linters: + disable-all: true + enable: + - contextcheck + - err113 + - errchkjson + - gocritic + - godot + - godox + - gofumpt + - gosec # exclude-use-default + - nilnil + - nolintlint + - predeclared + - revive + - staticcheck # exclude-use-default + - tenv + - thelper + - tparallel + - wastedassign + +issues: + # https://github.com/golangci/golangci-lint/issues/2239 + exclude-use-default: false + +linters-settings: + errchkjson: + check-error-free-encoding: true + + thelper: + # https://github.com/kulti/thelper/issues/27 + tb: { begin: true, first: true } + test: { begin: true, first: true, name: true } diff --git a/.golangci.yaml b/.golangci.yaml new file mode 100644 index 0000000000..87a6ed0464 --- /dev/null +++ b/.golangci.yaml @@ -0,0 +1,87 @@ +# https://golangci-lint.run/usage/configuration/ + +linters: + disable: + - contextcheck + - errchkjson + - gci + - gofumpt + enable: + - depguard + - goheader + - gomodguard + - gosimple + - importas + - misspell + - unconvert + presets: + - bugs + - format + - unused + +linters-settings: + depguard: + rules: + everything: + deny: + - pkg: io/ioutil + desc: > + Use the "io" and "os" packages instead. + See https://go.dev/doc/go1.16#ioutil + + not-tests: + files: ['!$test'] + deny: + - pkg: net/http/httptest + desc: Should be used only in tests. + + - pkg: testing/* + desc: The "testing" packages should be used only in tests. + + - pkg: github.com/crunchydata/postgres-operator/internal/testing/* + desc: The "internal/testing" packages should be used only in tests. + + exhaustive: + default-signifies-exhaustive: true + + goheader: + template: |- + Copyright {{ DATES }} Crunchy Data Solutions, Inc. + + SPDX-License-Identifier: Apache-2.0 + values: + regexp: + DATES: '((201[7-9]|202[0-3]) - 2024|2024)' + + goimports: + local-prefixes: github.com/crunchydata/postgres-operator + + gomodguard: + blocked: + modules: + - gopkg.in/yaml.v2: { recommendations: [sigs.k8s.io/yaml] } + - gopkg.in/yaml.v3: { recommendations: [sigs.k8s.io/yaml] } + - gotest.tools: { recommendations: [gotest.tools/v3] } + - k8s.io/kubernetes: + reason: > + k8s.io/kubernetes is for managing dependencies of the Kubernetes + project, i.e. building kubelet and kubeadm. + + gosec: + excludes: + # Flags for potentially-unsafe casting of ints, similar problem to globally-disabled G103 + - G115 + + importas: + alias: + - pkg: k8s.io/api/(\w+)/(v[\w\w]+) + alias: $1$2 + - pkg: k8s.io/apimachinery/pkg/apis/(\w+)/(v[\w\d]+) + alias: $1$2 + - pkg: k8s.io/apimachinery/pkg/api/errors + alias: apierrors + no-unaliased: true + +issues: + exclude-dirs: + - pkg/generated diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index d4aa43bc59..e209f4e5a7 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -13,15 +13,11 @@ Thanks! We look forward to your contribution. # General Contributing Guidelines All ongoing development for an upcoming release gets committed to the -**`master`** branch. The `master` branch technically serves as the "development" -branch as well, but all code that is committed to the `master` branch should be +**`main`** branch. The `main` branch technically serves as the "development" +branch as well, but all code that is committed to the `main` branch should be considered _stable_, even if it is part of an ongoing release cycle. -All fixes for a supported release should be committed to the supported release -branch. For example, the 4.3 release is maintained on the `REL_4_3` branch. -Please see the section on _Supported Releases_ for more information. 
- -Ensure any changes are clear and well-documented. When we say "well-documented": +Ensure any changes are clear and well-documented: - If the changes include code, ensure all additional code has corresponding documentation in and around it. This includes documenting the definition of @@ -32,10 +28,7 @@ summarize how. Avoid simply repeating details from declarations,. When in doubt, favor overexplaining to underexplaining. - Code comments should be consistent with their language conventions. For -example, please use GoDoc conventions for Go source code. - -- Any new features must have corresponding user documentation. Any removed -features must have their user documentation removed from the documents. +example, please use `gofmt` [conventions](https://go.dev/doc/comment) for Go source code. - Do not submit commented-out code. If the code does not need to be used anymore, please remove it. @@ -62,12 +55,7 @@ All commits must either be rebased in atomic order or squashed (if the squashed commit is considered atomic). Merge commits are not accepted. All conflicts must be resolved prior to pushing changes. -**All pull requests should be made from the `master` branch** unless it is a fix -for a specific supported release. - -Once a major or minor release is made, no new features are added into the -release branch, only bug fixes. Any new features are added to the `master` -branch until the time that said new features are released. +**All pull requests should be made from the `main` branch.** # Commit Messages @@ -86,12 +74,11 @@ possible as to what the changes are. Good things to include: understand. ``` -If you wish to tag a Github issue or another project management tracker, please +If you wish to tag a GitHub issue or another project management tracker, please do so at the bottom of the commit message, and make it clearly labeled like so: ``` -Issue: #123 -Issue: [ch1234] +Issue: CrunchyData/postgres-operator#123 ``` # Submitting Pull Requests @@ -100,102 +87,23 @@ All work should be made in your own repository fork. When you believe your work is ready to be committed, please follow the guidance below for creating a pull request. -## Upcoming Releases / Features - -Ongoing work for new features should occur in branches off of the `master` -branch. It is suggested, but not required, that the branch name should reflect -that this is for an upcoming release, i.e. `upstream/branch-name` where the -`branch-name` is something descriptive for what you're working on. - -## Supported Releases / Fixes - -While not required, it is recommended to make your branch name along the lines -of: `REL_X_Y/branch-name` where the `branch-name` is something descriptive -for what you're working on. - -# Releases & Versioning - -Overall, release tags attempt to follow the -[semantic versioning](https://semver.org) scheme. - -"Supported releases" (described in the next section) occur on "minor" release -branches (e.g. the `x.y` portion of the `x.y.z`). - -One or more "patch" releases can occur after a minor release. A patch release is -used to fix bugs and other issues that may be found after a supported release. - -Fixes found on the `master` branch can be backported to a support release -branch. Any fixes for a supported release must have a pull request off of the -supported release branch, which is detailed below. - -## Supported Releases +## Upcoming Features -When a "minor" release is made, the release is stamped using the `vx.y.0` format -as denoted above, and a branch is created with the name `REL_X_Y`. 
Once a -minor release occurs, no new features are added to the `REL_X_Y` branch. -However, bug fixes can (and if found, should) be added to this branch. +Ongoing work for new features should occur in branches off of the `main` +branch. -To contribute a bug fix to a supported release, please make a pull request off -of the supported release branch. For instance, if you find a bug in the 4.3 -release, then you would make a pull request off of the `REL_4_3` branch. +## Unsupported Branches -## Unsupported Releases - -When a release is no longer supported, the branch will be renamed following the +When a release branch is no longer supported, it will be renamed following the pattern `REL_X_Y_FINAL` with the key suffix being _FINAL_. For example, `REL_3_2_FINAL` indicates that the 3.2 release is no longer supported. Nothing should ever be pushed to a `REL_X_Y_FINAL` branch once `FINAL` is on the branch name. -## Alpha, Beta, Release Candidate Releases - -At any point in the release cycle for a new release, there could exist one or -more alpha, beta, or release candidate (RC) release. Alpha, beta, and release -candidates **should not be used in production environments**. - -Alpha is the early stage of a release cycle and is typically made to test the -mechanics of an upcoming release. These should be considered relatively -unstable. The format for an alpha release tag is `v4.3.0-alpha.1`, which in this -case indicates it is the first alpha release for 4.3. - -Beta occurs during the later stage of a release cycle. At this point, the -release should be considered feature complete and the beta is used to -distribute, test, and collect feedback on the upcoming release. The betas should -be considered unstable, but as mentioned feature complete. The format for an -beta release tag is `v4.3.0-beta.1`, which in this case indicates it is the -first beta release for 4.3. - -Release candidates (RCs) occur just before a release. A release candidate should -be considered stable, and is typically used for a final round of bug checking -and testing. Multiple release candidates can occur in the event of serious bugs. -The format for a release candidate tag is `v4.3.0-rc.1`, which in this -case indicates it is the first release candidate for 4.3. - -**After a major or minor release, no alpha, beta, or release candidate releases -are supported**. In fact, any newer release of an alpha, beta, or RC immediately -deprecates any older alpha, beta or RC. (Naturally, a beta deprecates an alpha, -and a RC deprecates a beta). - -If you are testing on an older alpha, beta or RC, bug reports will not be -accepted. Please ensure you are testing on the latest version. - # Testing -We greatly appreciate any and all testing for the project. When testing, please -be sure you do the following: - -- If testing against a release, ensure your tests are performed against the -latest minor version (the last number in the release denotes the minor version, -e.g. the "3" in the 4.3.3) -- If testing against a pre-release (alpha, beta, RC), ensure your tests are -performed against latest version -- If testing against a development (`master`) or release (`REL_X_Y`) branch, -ensure your tests are performed against the latest commit - -Please do not test against unsupported versions (e.g. any release that is marked -final). - +We greatly appreciate any and all testing for the project. 
There are several ways to help with the testing effort: - Manual testing: testing particular features with a series of manual commands diff --git a/Gopkg.lock b/Gopkg.lock deleted file mode 100644 index ebbdb30834..0000000000 --- a/Gopkg.lock +++ /dev/null @@ -1,1045 +0,0 @@ -# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'. - - -[[projects]] - digest = "1:805922a698719105a020d35c21420478beab6505764348ab994cb443cdc1ac4b" - name = "cloud.google.com/go" - packages = ["compute/metadata"] - pruneopts = "UT" - revision = "69e77e66e59741e053c986d469791a983ee1d6a5" - version = "v0.56.0" - -[[projects]] - digest = "1:a2682518d905d662d984ef9959984ef87cecb777d379bfa9d9fe40e78069b3e4" - name = "github.com/PuerkitoBio/purell" - packages = ["."] - pruneopts = "UT" - revision = "44968752391892e1b0d0b821ee79e9a85fa13049" - version = "v1.1.1" - -[[projects]] - branch = "master" - digest = "1:c739832d67eb1e9cc478a19cc1a1ccd78df0397bf8a32978b759152e205f644b" - name = "github.com/PuerkitoBio/urlesc" - packages = ["."] - pruneopts = "UT" - revision = "de5bf2ad457846296e2031421a34e2568e304e35" - -[[projects]] - digest = "1:7cb4fdca4c251b3ef8027c90ea35f70c7b661a593b9eeae34753c65499098bb1" - name = "github.com/cpuguy83/go-md2man" - packages = ["md2man"] - pruneopts = "UT" - revision = "7762f7e404f8416dfa1d9bb6a8c192aa9acb4d19" - version = "v1.0.10" - -[[projects]] - digest = "1:ffe9824d294da03b391f44e1ae8281281b4afc1bdaa9588c9097785e3af10cec" - name = "github.com/davecgh/go-spew" - packages = ["spew"] - pruneopts = "UT" - revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73" - version = "v1.1.1" - -[[projects]] - branch = "master" - digest = "1:ecdc8e0fe3bc7d549af1c9c36acf3820523b707d6c071b6d0c3860882c6f7b42" - name = "github.com/docker/spdystream" - packages = [ - ".", - "spdy", - ] - pruneopts = "UT" - revision = "6480d4af844c189cf5dd913db24ddd339d3a4f85" - -[[projects]] - digest = "1:8bf756db54ad444d058320222daf307b2ce712b98a80af9a1c67285c9036b299" - name = "github.com/emicklei/go-restful" - packages = [ - ".", - "log", - ] - pruneopts = "UT" - revision = "7c2556bbbff7dc38dc0673ce61902903671c0b55" - version = "v2.12.0" - -[[projects]] - digest = "1:55672a73a23bb7e4f59691db22eb3939e748d410320fd63d3bfc328c0037ef70" - name = "github.com/evanphx/json-patch" - packages = ["."] - pruneopts = "UT" - revision = "63b09d42374be6de7cff01f036f467438066a6eb" - version = "v4.7.0" - -[[projects]] - digest = "1:bde2a1ce1de28312f165a88926c12f4acd19dc4d7da96a445dfa5a29eae07a93" - name = "github.com/fatih/color" - packages = ["."] - pruneopts = "UT" - revision = "daf2830f2741ebb735b21709a520c5f37d642d85" - version = "v1.9.0" - -[[projects]] - digest = "1:ed15647db08b6d63666bf9755d337725960c302bbfa5e23754b4b915a4797e42" - name = "github.com/go-openapi/jsonpointer" - packages = ["."] - pruneopts = "UT" - revision = "ed123515f087412cd7ef02e49b0b0a5e6a79a360" - version = "v0.19.3" - -[[projects]] - digest = "1:451fe53c19443c6941be5d4295edc973a3eb16baccb940efee94284024be03b0" - name = "github.com/go-openapi/jsonreference" - packages = ["."] - pruneopts = "UT" - revision = "82f31475a8f7a12bc26962f6e26ceade8ea6f66a" - version = "v0.19.3" - -[[projects]] - digest = "1:9e6992ea4c7005c711e3460e90d20e0c40cce6743bbdc65d12e8fe45e5b6beaf" - name = "github.com/go-openapi/spec" - packages = ["."] - pruneopts = "UT" - revision = "1297e9a4ddf9325269fe013d7c1300aac3985f92" - version = "v0.19.7" - -[[projects]] - digest = "1:8b59d79dc97889c333cdad6fcbd9c47d4d9abc19a5ec0683bfa40e5a36d1b1b7" - name = 
"github.com/go-openapi/swag" - packages = ["."] - pruneopts = "UT" - revision = "59a9232e9392613952a0a4c90523c40c99140043" - version = "v0.19.8" - -[[projects]] - digest = "1:582e25eccee928dc12416ea4c23b6dae8f3b5687730632aa1473ebebe80a2359" - name = "github.com/gogo/protobuf" - packages = [ - "proto", - "sortkeys", - ] - pruneopts = "UT" - revision = "5628607bb4c51c3157aacc3a50f0ab707582b805" - version = "v1.3.1" - -[[projects]] - digest = "1:ecd73c8c5c5e48f9079e042ae733c3f3ab021218d6c4da3411d82727fd5a412a" - name = "github.com/golang/protobuf" - packages = [ - "proto", - "ptypes", - "ptypes/any", - "ptypes/duration", - "ptypes/timestamp", - ] - pruneopts = "UT" - revision = "84668698ea25b64748563aa20726db66a6b8d299" - version = "v1.3.5" - -[[projects]] - digest = "1:e4f5819333ac698d294fe04dbf640f84719658d5c7ce195b10060cc37292ce79" - name = "github.com/golang/snappy" - packages = ["."] - pruneopts = "UT" - revision = "2a8bb927dd31d8daada140a5d09578521ce5c36a" - version = "v0.0.1" - -[[projects]] - digest = "1:0aeda02073125667ac6c9df50c7921cb22c08a4accdc54589c697a7e76be65c2" - name = "github.com/google/go-cmp" - packages = [ - "cmp", - "cmp/internal/diff", - "cmp/internal/flags", - "cmp/internal/function", - "cmp/internal/value", - ] - pruneopts = "UT" - revision = "5a6f75716e1203a923a78c9efb94089d857df0f6" - version = "v0.4.0" - -[[projects]] - digest = "1:a840b166971a2e76fcc4fbafaa181ea109b92d461e7f9b608a49b70be2765bac" - name = "github.com/google/gofuzz" - packages = ["."] - pruneopts = "UT" - revision = "db92cf7ae75e4a7a28abc005addab2b394362888" - version = "v1.1.0" - -[[projects]] - digest = "1:e35285db21b7d730a52b891a959783e567ca516ea64f2748070f1f917fbccd82" - name = "github.com/googleapis/gnostic" - packages = [ - "OpenAPIv2", - "compiler", - "extensions", - ] - pruneopts = "UT" - revision = "99384834bf8c58ce7ab88db353283bedcb53e1ca" - version = "v0.4.0" - -[[projects]] - digest = "1:25ebe6496abb289ef977c081b2d49f56dd97c32db4ca083d37f95923909ced02" - name = "github.com/gorilla/mux" - packages = ["."] - pruneopts = "UT" - revision = "75dcda0896e109a2a22c9315bca3bb21b87b2ba5" - version = "v1.7.4" - -[[projects]] - digest = "1:e631368e174090a276fc00b48283f92ac4ccfbbb1945bcfcee083f5f9210dc00" - name = "github.com/hashicorp/golang-lru" - packages = [ - ".", - "simplelru", - ] - pruneopts = "UT" - revision = "14eae340515388ca95aa8e7b86f0de668e981f54" - version = "v0.5.4" - -[[projects]] - digest = "1:1a7059d684f8972987e4b6f0703083f207d63f63da0ea19610ef2e6bb73db059" - name = "github.com/imdario/mergo" - packages = ["."] - pruneopts = "UT" - revision = "66f88b4ae75f5edcc556623b96ff32c06360fbb7" - version = "v0.3.9" - -[[projects]] - digest = "1:870d441fe217b8e689d7949fef6e43efbc787e50f200cb1e70dbca9204a1d6be" - name = "github.com/inconshreveable/mousetrap" - packages = ["."] - pruneopts = "UT" - revision = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75" - version = "v1.0" - -[[projects]] - digest = "1:7cd2924a44ecf80a319cfa2378529fabd348d011b739fb4eccc565f65e3296c4" - name = "github.com/json-iterator/go" - packages = ["."] - pruneopts = "UT" - revision = "acfec88f7a0d5140ace3dcdbee10184e3684a9e1" - version = "v1.1.9" - -[[projects]] - digest = "1:31e761d97c76151dde79e9d28964a812c46efc5baee4085b86f68f0c654450de" - name = "github.com/konsorten/go-windows-terminal-sequences" - packages = ["."] - pruneopts = "UT" - revision = "f55edac94c9bbba5d6182a4be46d86a2c9b5b50e" - version = "v1.0.2" - -[[projects]] - digest = "1:7bbccd3dd7998f2a180264ec1d12e362ed8e02f55ea7b82ac0d0f48ffa2d8888" - name = 
"github.com/mailru/easyjson" - packages = [ - "buffer", - "jlexer", - "jwriter", - ] - pruneopts = "UT" - revision = "8edcc4e51f39ddbd3505a3386aff3f435a7fd028" - version = "v0.7.1" - -[[projects]] - digest = "1:0109cf4321a15313ec895f42e723e1f76121c6975ea006abfa20012272ec0937" - name = "github.com/mattn/go-colorable" - packages = ["."] - pruneopts = "UT" - revision = "68e95eba382c972aafde02ead2cd2426a8a92480" - version = "v0.1.6" - -[[projects]] - digest = "1:0c58d31abe2a2ccb429c559b6292e7df89dcda675456fecc282fa90aa08273eb" - name = "github.com/mattn/go-isatty" - packages = ["."] - pruneopts = "UT" - revision = "7b513a986450394f7bbf1476909911b3aa3a55ce" - version = "v0.0.12" - -[[projects]] - digest = "1:33422d238f147d247752996a26574ac48dcf472976eda7f5134015f06bf16563" - name = "github.com/modern-go/concurrent" - packages = ["."] - pruneopts = "UT" - revision = "bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94" - version = "1.0.3" - -[[projects]] - digest = "1:e32bdbdb7c377a07a9a46378290059822efdce5c8d96fe71940d87cb4f918855" - name = "github.com/modern-go/reflect2" - packages = ["."] - pruneopts = "UT" - revision = "4b7aa43c6742a2c18fdef89dd197aaae7dac7ccd" - version = "1.0.1" - -[[projects]] - digest = "1:a20520c30001f1d49ca07e26eafe5d8612ce5d7dac5abbdab754e14bb83c1b4e" - name = "github.com/nsqio/go-nsq" - packages = ["."] - pruneopts = "UT" - revision = "d7acddb4babdf3329ad415cc02b605a239011b4b" - version = "v1.0.8" - -[[projects]] - digest = "1:9e1d37b58d17113ec3cb5608ac0382313c5b59470b94ed97d0976e69c7022314" - name = "github.com/pkg/errors" - packages = ["."] - pruneopts = "UT" - revision = "614d223910a179a466c1767a985424175c39b465" - version = "v0.9.1" - -[[projects]] - digest = "1:32db15a47b5be06a5d40e863ab0ebed107741845fd2d8676121f788284d26923" - name = "github.com/robfig/cron" - packages = ["."] - pruneopts = "UT" - revision = "ccba498c397bb90a9c84945bbb0f7af2d72b6309" - version = "v3.0.1" - -[[projects]] - digest = "1:b36a0ede02c4c2aef7df7f91cbbb7bb88a98b5d253509d4f997dda526e50c88c" - name = "github.com/russross/blackfriday" - packages = ["."] - pruneopts = "UT" - revision = "05f3235734ad95d0016f6a23902f06461fcf567a" - version = "v1.5.2" - -[[projects]] - digest = "1:0599141a8403114d34f1e546604ad6c5361b70dfa80e80c635f438cdbf71b43a" - name = "github.com/sirupsen/logrus" - packages = ["."] - pruneopts = "UT" - revision = "d417be0fe654de640a82370515129985b407c7e3" - version = "v1.5.0" - -[[projects]] - digest = "1:abe9f3f23399646a6263682cacc9e86969f6c7e768f0ef036449926aa24cbbef" - name = "github.com/spf13/cobra" - packages = [ - ".", - "doc", - ] - pruneopts = "UT" - revision = "f2b07da1e2c38d5f12845a4f607e2e1018cbb1f5" - version = "v0.0.5" - -[[projects]] - digest = "1:524b71991fc7d9246cc7dc2d9e0886ccb97648091c63e30eef619e6862c955dd" - name = "github.com/spf13/pflag" - packages = ["."] - pruneopts = "UT" - revision = "2e9d26c8c37aae03e3f9d4e90b7116f5accb7cab" - version = "v1.0.5" - -[[projects]] - digest = "1:577b99fc55ee19cbf1ffd072808c91ced78cd732bd9d0517bec9b90e532069cb" - name = "github.com/xdg/stringprep" - packages = ["."] - pruneopts = "UT" - revision = "bd625b8dc1e3b0f57412280ccbcc317f0c69d8db" - version = "v1.0.0" - -[[projects]] - branch = "master" - digest = "1:17da86687c09ab43a342bf91c2b2e5feba1dda6b7cce9626d64201fbf0113264" - name = "golang.org/x/crypto" - packages = [ - "blowfish", - "chacha20", - "curve25519", - "ed25519", - "ed25519/internal/edwards25519", - "internal/subtle", - "pbkdf2", - "poly1305", - "ssh", - "ssh/internal/bcrypt_pbkdf", - "ssh/terminal", - ] - pruneopts 
= "UT" - revision = "0ec3e9974c59449edd84298612e9f16fa13368e8" - -[[projects]] - digest = "1:467bb8fb8fa786448b8d486cd0bb7c1a5577dcd7310441aa02a20110cd9f727d" - name = "golang.org/x/mod" - packages = [ - "module", - "semver", - ] - pruneopts = "UT" - revision = "ed3ec21bb8e252814c380df79a80f366440ddb2d" - version = "v0.2.0" - -[[projects]] - branch = "master" - digest = "1:1d07884b4c6e30ca158be7bb0190ebf35b9650f39b02848e66de3b40b46e7680" - name = "golang.org/x/net" - packages = [ - "context", - "context/ctxhttp", - "http/httpguts", - "http2", - "http2/hpack", - "idna", - ] - pruneopts = "UT" - revision = "d3edc9973b7eb1fb302b0ff2c62357091cea9a30" - -[[projects]] - branch = "master" - digest = "1:79edde3241bb55de9f4143d5083bfcff722e550c3cb8db94084eab50d0e440b5" - name = "golang.org/x/oauth2" - packages = [ - ".", - "google", - "internal", - "jws", - "jwt", - ] - pruneopts = "UT" - revision = "bf48bf16ab8d622ce64ec6ce98d2c98f916b6303" - -[[projects]] - branch = "master" - digest = "1:4692f916cb72b2c295f04841036d85a3f13e96d1cc9e8e4c2c30edebac518053" - name = "golang.org/x/sync" - packages = ["semaphore"] - pruneopts = "UT" - revision = "43a5402ce75a95522677f77c619865d66b8c57ab" - -[[projects]] - branch = "master" - digest = "1:3dcef1d66750aa78c2c57361b88e715b70a2486ab2a9f6c70a7cfb5e0712e9c3" - name = "golang.org/x/sys" - packages = [ - "cpu", - "unix", - "windows", - ] - pruneopts = "UT" - revision = "c3d80250170dec19bf61949c81233cede5ddaf61" - -[[projects]] - digest = "1:66a2f252a58b4fbbad0e4e180e1d85a83c222b6bce09c3dcdef3dc87c72eda7c" - name = "golang.org/x/text" - packages = [ - "collate", - "collate/build", - "internal/colltab", - "internal/gen", - "internal/language", - "internal/language/compact", - "internal/tag", - "internal/triegen", - "internal/ucd", - "language", - "secure/bidirule", - "transform", - "unicode/bidi", - "unicode/cldr", - "unicode/norm", - "unicode/rangetable", - "width", - ] - pruneopts = "UT" - revision = "342b2e1fbaa52c93f31447ad2c6abc048c63e475" - version = "v0.3.2" - -[[projects]] - branch = "master" - digest = "1:a2f668c709f9078828e99cb1768cb02e876cb81030545046a32b54b2ac2a9ea8" - name = "golang.org/x/time" - packages = ["rate"] - pruneopts = "UT" - revision = "555d28b269f0569763d25dbe1a237ae74c6bcc82" - -[[projects]] - branch = "master" - digest = "1:f09beeb593a009d70c55af474c713e5fdc13c0843896396db9e063c73fe572e4" - name = "golang.org/x/tools" - packages = [ - "go/ast/astutil", - "imports", - "internal/fastwalk", - "internal/gocommand", - "internal/gopathwalk", - "internal/imports", - ] - pruneopts = "UT" - revision = "69646383afec4980001de1c52d6a3a6f3073ff6a" - -[[projects]] - branch = "master" - digest = "1:918a46e4a2fb83df33f668f5a6bd51b2996775d073fce1800d3ec01b0a5ddd2b" - name = "golang.org/x/xerrors" - packages = [ - ".", - "internal", - ] - pruneopts = "UT" - revision = "9bdfabe68543c54f90421aeb9a60ef8061b5b544" - -[[projects]] - digest = "1:1db6b0636d0815e5cce3118bba811946c88c2c1b72a18021da3763c4181aeb06" - name = "gonum.org/v1/gonum" - packages = [ - "blas", - "blas/blas64", - "blas/cblas128", - "blas/gonum", - "floats", - "graph", - "graph/internal/linear", - "graph/internal/ordered", - "graph/internal/set", - "graph/internal/uid", - "graph/iterator", - "graph/simple", - "graph/topo", - "graph/traverse", - "internal/asm/c128", - "internal/asm/c64", - "internal/asm/f32", - "internal/asm/f64", - "internal/cmplx64", - "internal/math32", - "lapack", - "lapack/gonum", - "lapack/lapack64", - "mat", - ] - pruneopts = "UT" - revision = 
"960a37950cca5ec8d1efe76ed4eec9dfe3065bd6" - version = "v0.7.0" - -[[projects]] - digest = "1:3c03b58f57452764a4499c55c582346c0ee78c8a5033affe5bdfd9efd3da5bd1" - name = "google.golang.org/appengine" - packages = [ - ".", - "internal", - "internal/app_identity", - "internal/base", - "internal/datastore", - "internal/log", - "internal/modules", - "internal/remote_api", - "internal/urlfetch", - "urlfetch", - ] - pruneopts = "UT" - revision = "971852bfffca25b069c31162ae8f247a3dba083b" - version = "v1.6.5" - -[[projects]] - digest = "1:2d1fbdc6777e5408cabeb02bf336305e724b925ff4546ded0fa8715a7267922a" - name = "gopkg.in/inf.v0" - packages = ["."] - pruneopts = "UT" - revision = "d2d2541c53f18d2a059457998ce2876cc8e67cbf" - version = "v0.9.1" - -[[projects]] - digest = "1:55b110c99c5fdc4f14930747326acce56b52cfce60b24b1c03ef686ac0e46bb1" - name = "gopkg.in/yaml.v2" - packages = ["."] - pruneopts = "UT" - revision = "53403b58ad1b561927d19068c655246f2db79d48" - version = "v2.2.8" - -[[projects]] - digest = "1:cd7322a2669aba7fe506383dc31ece3881fc3e39ccac5334360c50f946a8ade4" - name = "k8s.io/api" - packages = [ - "admissionregistration/v1", - "admissionregistration/v1beta1", - "apps/v1", - "apps/v1beta1", - "apps/v1beta2", - "auditregistration/v1alpha1", - "authentication/v1", - "authentication/v1beta1", - "authorization/v1", - "authorization/v1beta1", - "autoscaling/v1", - "autoscaling/v2beta1", - "autoscaling/v2beta2", - "batch/v1", - "batch/v1beta1", - "batch/v2alpha1", - "certificates/v1beta1", - "coordination/v1", - "coordination/v1beta1", - "core/v1", - "discovery/v1alpha1", - "discovery/v1beta1", - "events/v1beta1", - "extensions/v1beta1", - "flowcontrol/v1alpha1", - "networking/v1", - "networking/v1beta1", - "node/v1alpha1", - "node/v1beta1", - "policy/v1beta1", - "rbac/v1", - "rbac/v1alpha1", - "rbac/v1beta1", - "scheduling/v1", - "scheduling/v1alpha1", - "scheduling/v1beta1", - "settings/v1alpha1", - "storage/v1", - "storage/v1alpha1", - "storage/v1beta1", - ] - pruneopts = "UT" - revision = "d3c87f2f52e31c74a4ccb065b706b688f4e6341d" - version = "v0.17.4" - -[[projects]] - digest = "1:cdd77e0255f0aab8559125ec721158115d242d01fe69fe4c8e29712407930f58" - name = "k8s.io/apimachinery" - packages = [ - "pkg/api/errors", - "pkg/api/meta", - "pkg/api/resource", - "pkg/apis/meta/internalversion", - "pkg/apis/meta/v1", - "pkg/apis/meta/v1/unstructured", - "pkg/apis/meta/v1beta1", - "pkg/conversion", - "pkg/conversion/queryparams", - "pkg/fields", - "pkg/labels", - "pkg/runtime", - "pkg/runtime/schema", - "pkg/runtime/serializer", - "pkg/runtime/serializer/json", - "pkg/runtime/serializer/protobuf", - "pkg/runtime/serializer/recognizer", - "pkg/runtime/serializer/streaming", - "pkg/runtime/serializer/versioning", - "pkg/selection", - "pkg/types", - "pkg/util/cache", - "pkg/util/clock", - "pkg/util/diff", - "pkg/util/errors", - "pkg/util/framer", - "pkg/util/httpstream", - "pkg/util/httpstream/spdy", - "pkg/util/intstr", - "pkg/util/json", - "pkg/util/mergepatch", - "pkg/util/naming", - "pkg/util/net", - "pkg/util/remotecommand", - "pkg/util/runtime", - "pkg/util/sets", - "pkg/util/strategicpatch", - "pkg/util/validation", - "pkg/util/validation/field", - "pkg/util/wait", - "pkg/util/yaml", - "pkg/version", - "pkg/watch", - "third_party/forked/golang/json", - "third_party/forked/golang/netutil", - "third_party/forked/golang/reflect", - ] - pruneopts = "UT" - revision = "731dcecc205498f52a21b12e311af095efb4b188" - version = "v0.17.4" - -[[projects]] - digest = 
"1:001147e1be74745e35dd215ac066fe4c497249a7730ec9cee9faf8222cafbb2a" - name = "k8s.io/client-go" - packages = [ - "discovery", - "discovery/fake", - "informers", - "informers/admissionregistration", - "informers/admissionregistration/v1", - "informers/admissionregistration/v1beta1", - "informers/apps", - "informers/apps/v1", - "informers/apps/v1beta1", - "informers/apps/v1beta2", - "informers/auditregistration", - "informers/auditregistration/v1alpha1", - "informers/autoscaling", - "informers/autoscaling/v1", - "informers/autoscaling/v2beta1", - "informers/autoscaling/v2beta2", - "informers/batch", - "informers/batch/v1", - "informers/batch/v1beta1", - "informers/batch/v2alpha1", - "informers/certificates", - "informers/certificates/v1beta1", - "informers/coordination", - "informers/coordination/v1", - "informers/coordination/v1beta1", - "informers/core", - "informers/core/v1", - "informers/discovery", - "informers/discovery/v1alpha1", - "informers/discovery/v1beta1", - "informers/events", - "informers/events/v1beta1", - "informers/extensions", - "informers/extensions/v1beta1", - "informers/flowcontrol", - "informers/flowcontrol/v1alpha1", - "informers/internalinterfaces", - "informers/networking", - "informers/networking/v1", - "informers/networking/v1beta1", - "informers/node", - "informers/node/v1alpha1", - "informers/node/v1beta1", - "informers/policy", - "informers/policy/v1beta1", - "informers/rbac", - "informers/rbac/v1", - "informers/rbac/v1alpha1", - "informers/rbac/v1beta1", - "informers/scheduling", - "informers/scheduling/v1", - "informers/scheduling/v1alpha1", - "informers/scheduling/v1beta1", - "informers/settings", - "informers/settings/v1alpha1", - "informers/storage", - "informers/storage/v1", - "informers/storage/v1alpha1", - "informers/storage/v1beta1", - "kubernetes", - "kubernetes/fake", - "kubernetes/scheme", - "kubernetes/typed/admissionregistration/v1", - "kubernetes/typed/admissionregistration/v1/fake", - "kubernetes/typed/admissionregistration/v1beta1", - "kubernetes/typed/admissionregistration/v1beta1/fake", - "kubernetes/typed/apps/v1", - "kubernetes/typed/apps/v1/fake", - "kubernetes/typed/apps/v1beta1", - "kubernetes/typed/apps/v1beta1/fake", - "kubernetes/typed/apps/v1beta2", - "kubernetes/typed/apps/v1beta2/fake", - "kubernetes/typed/auditregistration/v1alpha1", - "kubernetes/typed/auditregistration/v1alpha1/fake", - "kubernetes/typed/authentication/v1", - "kubernetes/typed/authentication/v1/fake", - "kubernetes/typed/authentication/v1beta1", - "kubernetes/typed/authentication/v1beta1/fake", - "kubernetes/typed/authorization/v1", - "kubernetes/typed/authorization/v1/fake", - "kubernetes/typed/authorization/v1beta1", - "kubernetes/typed/authorization/v1beta1/fake", - "kubernetes/typed/autoscaling/v1", - "kubernetes/typed/autoscaling/v1/fake", - "kubernetes/typed/autoscaling/v2beta1", - "kubernetes/typed/autoscaling/v2beta1/fake", - "kubernetes/typed/autoscaling/v2beta2", - "kubernetes/typed/autoscaling/v2beta2/fake", - "kubernetes/typed/batch/v1", - "kubernetes/typed/batch/v1/fake", - "kubernetes/typed/batch/v1beta1", - "kubernetes/typed/batch/v1beta1/fake", - "kubernetes/typed/batch/v2alpha1", - "kubernetes/typed/batch/v2alpha1/fake", - "kubernetes/typed/certificates/v1beta1", - "kubernetes/typed/certificates/v1beta1/fake", - "kubernetes/typed/coordination/v1", - "kubernetes/typed/coordination/v1/fake", - "kubernetes/typed/coordination/v1beta1", - "kubernetes/typed/coordination/v1beta1/fake", - "kubernetes/typed/core/v1", - "kubernetes/typed/core/v1/fake", - 
"kubernetes/typed/discovery/v1alpha1", - "kubernetes/typed/discovery/v1alpha1/fake", - "kubernetes/typed/discovery/v1beta1", - "kubernetes/typed/discovery/v1beta1/fake", - "kubernetes/typed/events/v1beta1", - "kubernetes/typed/events/v1beta1/fake", - "kubernetes/typed/extensions/v1beta1", - "kubernetes/typed/extensions/v1beta1/fake", - "kubernetes/typed/flowcontrol/v1alpha1", - "kubernetes/typed/flowcontrol/v1alpha1/fake", - "kubernetes/typed/networking/v1", - "kubernetes/typed/networking/v1/fake", - "kubernetes/typed/networking/v1beta1", - "kubernetes/typed/networking/v1beta1/fake", - "kubernetes/typed/node/v1alpha1", - "kubernetes/typed/node/v1alpha1/fake", - "kubernetes/typed/node/v1beta1", - "kubernetes/typed/node/v1beta1/fake", - "kubernetes/typed/policy/v1beta1", - "kubernetes/typed/policy/v1beta1/fake", - "kubernetes/typed/rbac/v1", - "kubernetes/typed/rbac/v1/fake", - "kubernetes/typed/rbac/v1alpha1", - "kubernetes/typed/rbac/v1alpha1/fake", - "kubernetes/typed/rbac/v1beta1", - "kubernetes/typed/rbac/v1beta1/fake", - "kubernetes/typed/scheduling/v1", - "kubernetes/typed/scheduling/v1/fake", - "kubernetes/typed/scheduling/v1alpha1", - "kubernetes/typed/scheduling/v1alpha1/fake", - "kubernetes/typed/scheduling/v1beta1", - "kubernetes/typed/scheduling/v1beta1/fake", - "kubernetes/typed/settings/v1alpha1", - "kubernetes/typed/settings/v1alpha1/fake", - "kubernetes/typed/storage/v1", - "kubernetes/typed/storage/v1/fake", - "kubernetes/typed/storage/v1alpha1", - "kubernetes/typed/storage/v1alpha1/fake", - "kubernetes/typed/storage/v1beta1", - "kubernetes/typed/storage/v1beta1/fake", - "listers/admissionregistration/v1", - "listers/admissionregistration/v1beta1", - "listers/apps/v1", - "listers/apps/v1beta1", - "listers/apps/v1beta2", - "listers/auditregistration/v1alpha1", - "listers/autoscaling/v1", - "listers/autoscaling/v2beta1", - "listers/autoscaling/v2beta2", - "listers/batch/v1", - "listers/batch/v1beta1", - "listers/batch/v2alpha1", - "listers/certificates/v1beta1", - "listers/coordination/v1", - "listers/coordination/v1beta1", - "listers/core/v1", - "listers/discovery/v1alpha1", - "listers/discovery/v1beta1", - "listers/events/v1beta1", - "listers/extensions/v1beta1", - "listers/flowcontrol/v1alpha1", - "listers/networking/v1", - "listers/networking/v1beta1", - "listers/node/v1alpha1", - "listers/node/v1beta1", - "listers/policy/v1beta1", - "listers/rbac/v1", - "listers/rbac/v1alpha1", - "listers/rbac/v1beta1", - "listers/scheduling/v1", - "listers/scheduling/v1alpha1", - "listers/scheduling/v1beta1", - "listers/settings/v1alpha1", - "listers/storage/v1", - "listers/storage/v1alpha1", - "listers/storage/v1beta1", - "pkg/apis/clientauthentication", - "pkg/apis/clientauthentication/v1alpha1", - "pkg/apis/clientauthentication/v1beta1", - "pkg/version", - "plugin/pkg/client/auth/exec", - "plugin/pkg/client/auth/gcp", - "rest", - "rest/watch", - "testing", - "third_party/forked/golang/template", - "tools/auth", - "tools/cache", - "tools/clientcmd", - "tools/clientcmd/api", - "tools/clientcmd/api/latest", - "tools/clientcmd/api/v1", - "tools/metrics", - "tools/pager", - "tools/reference", - "tools/remotecommand", - "transport", - "transport/spdy", - "util/cert", - "util/connrotation", - "util/exec", - "util/flowcontrol", - "util/homedir", - "util/jsonpath", - "util/keyutil", - "util/retry", - "util/workqueue", - ] - pruneopts = "UT" - revision = "9927afa2880713c4332723b7f0865adee5e63a7b" - version = "v0.17.4" - -[[projects]] - digest = 
"1:956d50c8f39e1dc12669a7fe88986fd38dba7990e6b0d445939ea43de1d6ab25" - name = "k8s.io/code-generator" - packages = [ - ".", - "cmd/client-gen", - "cmd/client-gen/args", - "cmd/client-gen/generators", - "cmd/client-gen/generators/fake", - "cmd/client-gen/generators/scheme", - "cmd/client-gen/generators/util", - "cmd/client-gen/path", - "cmd/client-gen/types", - "cmd/conversion-gen", - "cmd/conversion-gen/args", - "cmd/conversion-gen/generators", - "cmd/deepcopy-gen", - "cmd/deepcopy-gen/args", - "cmd/defaulter-gen", - "cmd/defaulter-gen/args", - "cmd/go-to-protobuf", - "cmd/go-to-protobuf/protobuf", - "cmd/import-boss", - "cmd/informer-gen", - "cmd/informer-gen/args", - "cmd/informer-gen/generators", - "cmd/lister-gen", - "cmd/lister-gen/args", - "cmd/lister-gen/generators", - "cmd/openapi-gen", - "cmd/register-gen", - "cmd/register-gen/args", - "cmd/register-gen/generators", - "cmd/set-gen", - "pkg/namer", - "pkg/util", - "third_party/forked/golang/reflect", - ] - pruneopts = "UT" - revision = "4ae19cfe9b46bf48d232c065a9078d1dff3de06c" - version = "v0.17.4" - -[[projects]] - branch = "master" - digest = "1:e30d632a7bc319d28928972cd57cc24c2b7c614136e7c4d173951b3c36ad01fb" - name = "k8s.io/gengo" - packages = [ - "args", - "examples/deepcopy-gen/generators", - "examples/defaulter-gen/generators", - "examples/import-boss/generators", - "examples/set-gen/generators", - "examples/set-gen/sets", - "generator", - "namer", - "parser", - "types", - ] - pruneopts = "UT" - revision = "e0e292d8aa122d458174e1bef5f142b4d0a67a05" - -[[projects]] - digest = "1:93e82f25d75aba18436ad1ac042cb49493f096011f2541075721ed6f9e05c044" - name = "k8s.io/klog" - packages = ["."] - pruneopts = "UT" - revision = "2ca9ad30301bf30a8a6e0fa2110db6b8df699a91" - version = "v1.0.0" - -[[projects]] - branch = "release-1.17" - digest = "1:3f1b53acc37422a3481e5fd0980eaf41d69832c9566212083513165947d0b5b0" - name = "k8s.io/kube-openapi" - packages = [ - "cmd/openapi-gen/args", - "pkg/common", - "pkg/generators", - "pkg/generators/rules", - "pkg/util/proto", - "pkg/util/sets", - ] - pruneopts = "UT" - revision = "82d701f24f9def7a89b13f95b48f9375d96254c4" - -[[projects]] - branch = "master" - digest = "1:2d3f59daa4b479ff4e100a2e1d8fea6780040fdadc177869531fe4cc29407f55" - name = "k8s.io/utils" - packages = [ - "buffer", - "integer", - "trace", - ] - pruneopts = "UT" - revision = "6496210b90e852b26b227eaedea39b286063fae6" - -[[projects]] - digest = "1:4fefb4676ce9bae0f81a5122e4fe6638ca51843567d0bdecce400cae401d4514" - name = "sigs.k8s.io/controller-runtime" - packages = ["pkg/manager/signals"] - pruneopts = "UT" - revision = "1c83ff6f06bc764c95dd69b0f743740c064c4bf6" - version = "v0.6.0" - -[[projects]] - digest = "1:36d2b2cb1fa6e4a731e38c3582c203213cdbc52c5f202af07db6dc6eeaec88dc" - name = "sigs.k8s.io/yaml" - packages = ["."] - pruneopts = "UT" - revision = "9fc95527decd95bb9d28cc2eab08179b2d0f6971" - version = "v1.2.0" - -[solve-meta] - analyzer-name = "dep" - analyzer-version = 1 - input-imports = [ - "github.com/evanphx/json-patch", - "github.com/fatih/color", - "github.com/gorilla/mux", - "github.com/nsqio/go-nsq", - "github.com/robfig/cron", - "github.com/sirupsen/logrus", - "github.com/spf13/cobra", - "github.com/spf13/cobra/doc", - "github.com/spf13/pflag", - "github.com/xdg/stringprep", - "golang.org/x/crypto/pbkdf2", - "golang.org/x/crypto/ssh", - "golang.org/x/sync/semaphore", - "k8s.io/api/apps/v1", - "k8s.io/api/authorization/v1", - "k8s.io/api/batch/v1", - "k8s.io/api/core/v1", - "k8s.io/api/rbac/v1", - 
"k8s.io/apimachinery/pkg/api/errors", - "k8s.io/apimachinery/pkg/api/meta", - "k8s.io/apimachinery/pkg/api/resource", - "k8s.io/apimachinery/pkg/apis/meta/v1", - "k8s.io/apimachinery/pkg/fields", - "k8s.io/apimachinery/pkg/labels", - "k8s.io/apimachinery/pkg/runtime", - "k8s.io/apimachinery/pkg/runtime/schema", - "k8s.io/apimachinery/pkg/runtime/serializer", - "k8s.io/apimachinery/pkg/types", - "k8s.io/apimachinery/pkg/util/runtime", - "k8s.io/apimachinery/pkg/util/sets", - "k8s.io/apimachinery/pkg/util/validation", - "k8s.io/apimachinery/pkg/util/wait", - "k8s.io/apimachinery/pkg/watch", - "k8s.io/client-go/discovery", - "k8s.io/client-go/discovery/fake", - "k8s.io/client-go/informers", - "k8s.io/client-go/informers/batch/v1", - "k8s.io/client-go/informers/core/v1", - "k8s.io/client-go/kubernetes", - "k8s.io/client-go/kubernetes/fake", - "k8s.io/client-go/kubernetes/scheme", - "k8s.io/client-go/listers/core/v1", - "k8s.io/client-go/plugin/pkg/client/auth/gcp", - "k8s.io/client-go/rest", - "k8s.io/client-go/testing", - "k8s.io/client-go/tools/cache", - "k8s.io/client-go/tools/clientcmd", - "k8s.io/client-go/tools/remotecommand", - "k8s.io/client-go/transport/spdy", - "k8s.io/client-go/util/flowcontrol", - "k8s.io/client-go/util/workqueue", - "k8s.io/code-generator", - "sigs.k8s.io/controller-runtime/pkg/manager/signals", - "sigs.k8s.io/yaml", - ] - solver-name = "gps-cdcl" - solver-version = 1 diff --git a/Gopkg.toml b/Gopkg.toml deleted file mode 100644 index ac819e5cdc..0000000000 --- a/Gopkg.toml +++ /dev/null @@ -1,78 +0,0 @@ -ignored = [ - # The following four packages are imported by the testing directory which is - # a separate Go module. - "github.com/jackc/pgx/v4/*", - "github.com/stretchr/testify/*", - "k8s.io/apiserver/pkg/storage/names", - "k8s.io/client-go/tools/portforward", -] - -required = [ - "k8s.io/code-generator", -] - -[[constraint]] - name = "github.com/evanphx/json-patch" - version = "4.6.0" - -[[constraint]] - name = "github.com/fatih/color" - version = "1.9.0" - -[[constraint]] - name = "github.com/gorilla/mux" - version = "1.7.4" - -[[constraint]] - name = "github.com/nsqio/go-nsq" - version = "1.0.8" - -[[constraint]] - name = "github.com/robfig/cron" - version = "3.0.1" - -[[constraint]] - name = "github.com/sirupsen/logrus" - version = "1.4.2" - -[[constraint]] - name = "github.com/spf13/cobra" - version = "0.0.5" - -[[constraint]] - name = "github.com/spf13/pflag" - version = "1.0.5" - -[[constraint]] - branch = "master" - name = "golang.org/x/crypto" - -[[constraint]] - name = "k8s.io/api" - version = "0.17.4" - -[[constraint]] - name = "k8s.io/apimachinery" - version = "0.17.4" - -[[constraint]] - name = "k8s.io/client-go" - version = "0.17.4" - -[[constraint]] - name = "k8s.io/code-generator" - version = "0.17.4" - -[[constraint]] - name = "sigs.k8s.io/controller-runtime" - version = "0.6.0" - -# The following override is for generating the docs using the -# "generatedocs" program in pgo -[[override]] - name = "github.com/russross/blackfriday" - version = "1.5.2" - -[prune] - go-tests = true - unused-packages = true diff --git a/LICENSE.md b/LICENSE.md index 90fe0562e2..8d57ad6f2e 100644 --- a/LICENSE.md +++ b/LICENSE.md @@ -176,7 +176,7 @@ END OF TERMS AND CONDITIONS - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. + Copyright 2017 - 2024 Crunchy Data Solutions, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
diff --git a/Makefile b/Makefile index d0caa5ed40..37aca1a37e 100644 --- a/Makefile +++ b/Makefile @@ -1,254 +1,336 @@ -GOPATH ?= $(HOME)/odev/go -GOBIN ?= $(GOPATH)/bin - -# Default values if not already set -ANSIBLE_VERSION ?= 2.9.* -PGOROOT ?= $(GOPATH)/src/github.com/crunchydata/postgres-operator -PGO_BASEOS ?= centos7 -PGO_CMD ?= kubectl -PGO_IMAGE_PREFIX ?= crunchydata -PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION) -PGO_OPERATOR_NAMESPACE ?= pgo -PGO_VERSION ?= 4.5.0 -PGO_PG_VERSION ?= 12 -PGO_PG_FULLVERSION ?= 12.4 -PGO_BACKREST_VERSION ?= 2.29 -PACKAGER ?= yum - -RELTMPDIR=/tmp/release.$(PGO_VERSION) -RELFILE=/tmp/postgres-operator.$(PGO_VERSION).tar.gz - -# Valid values: buildah (default), docker -IMGBUILDER ?= buildah -# Determines whether or not rootless builds are enabled -IMG_ROOTLESS_BUILD ?= false -# The utility to use when pushing/pulling to and from an image repo (e.g. docker or buildah) -IMG_PUSHER_PULLER ?= docker -# Determines whether or not images should be pushed to the local docker daemon when building with -# a tool other than docker (e.g. when building with buildah) -IMG_PUSH_TO_DOCKER_DAEMON ?= true -# Defines the sudo command that should be prepended to various build commands when rootless builds are -# not enabled -IMGCMDSUDO= -ifneq ("$(IMG_ROOTLESS_BUILD)", "true") - IMGCMDSUDO=sudo --preserve-env -endif -IMGCMDSTEM=$(IMGCMDSUDO) buildah bud --layers $(SQUASH) -DFSET=$(PGO_BASEOS) - -# Default the buildah format to docker to ensure it is possible to pull the images from a docker -# repository using docker (otherwise the images may not be recognized) -export BUILDAH_FORMAT ?= docker - -DOCKERBASEREGISTRY=registry.access.redhat.com/ - -# Allows simplification of IMGBUILDER switching -ifeq ("$(IMGBUILDER)","docker") - IMGCMDSTEM=docker build -endif - -# Allows consolidation of ubi/rhel/centos Dockerfile sets -ifeq ("$(PGO_BASEOS)", "rhel7") - DFSET=rhel -endif - -ifeq ("$(PGO_BASEOS)", "ubi7") - DFSET=rhel -endif - -ifeq ("$(PGO_BASEOS)", "ubi8") - DFSET=rhel - PACKAGER=dnf -endif - -ifeq ("$(PGO_BASEOS)", "centos7") - DFSET=centos - DOCKERBASEREGISTRY=centos: -endif - -ifeq ("$(PGO_BASEOS)", "centos8") - DFSET=centos - PACKAGER=dnf - DOCKERBASEREGISTRY=centos: -endif - -DEBUG_BUILD ?= false -GCFLAGS= -# Disable optimizations if creating a debug build -ifeq ("$(DEBUG_BUILD)", "true") - GCFLAGS=all=-N -l -endif - -# To build a specific image, run 'make -image' (e.g. 
'make pgo-apiserver-image') -images = pgo-apiserver \ - pgo-backrest \ - pgo-backrest-repo \ - pgo-backrest-repo-sync \ - pgo-backrest-restore \ - pgo-event \ - pgo-rmdata \ - pgo-scheduler \ - pgo-sqlrunner \ - pgo-client \ - pgo-deployer \ - crunchy-postgres-exporter \ - postgres-operator - -.PHONY: all installrbac setup setupnamespaces cleannamespaces bounce \ - deployoperator runmain runapiserver cli-docs clean push pull \ - release default - - -#======= Main functions ======= -all: linuxpgo $(images:%=%-image) - -installrbac: - cd deploy && ./install-rbac.sh - -setup: - ./bin/get-deps.sh - -setupnamespaces: - cd deploy && ./setupnamespaces.sh - -cleannamespaces: - cd deploy && ./cleannamespaces.sh - -bounce: - $(PGO_CMD) \ - --namespace=$(PGO_OPERATOR_NAMESPACE) \ - get pod \ - --selector=name=postgres-operator \ - -o=jsonpath="{.items[0].metadata.name}" \ - | xargs $(PGO_CMD) --namespace=$(PGO_OPERATOR_NAMESPACE) delete pod - -deployoperator: - cd deploy && ./deploy.sh - - -#======= Binary builds ======= -build-pgo-apiserver: - go install -gcflags='$(GCFLAGS)' apiserver.go - cp $(GOBIN)/apiserver bin/ - -build-pgo-backrest: - go install -gcflags='$(GCFLAGS)' pgo-backrest/pgo-backrest.go - cp $(GOBIN)/pgo-backrest bin/pgo-backrest/ - -build-pgo-rmdata: - go install -gcflags='$(GCFLAGS)' pgo-rmdata/pgo-rmdata.go - cp $(GOBIN)/pgo-rmdata bin/pgo-rmdata/ - -build-pgo-scheduler: - go install -gcflags='$(GCFLAGS)' pgo-scheduler/pgo-scheduler.go - cp $(GOBIN)/pgo-scheduler bin/pgo-scheduler/ - -build-postgres-operator: - go install -gcflags='$(GCFLAGS)' postgres-operator.go - cp $(GOBIN)/postgres-operator bin/postgres-operator/ - -build-pgo-client: - go install -gcflags='$(GCFLAGS)' pgo/pgo.go - cp $(GOBIN)/pgo bin/pgo - -build-pgo-%: - $(info No binary build needed for $@) - -build-crunchy-postgres-exporter: - $(info No binary build needed for $@) - -linuxpgo: build-pgo-client - -macpgo: - cd pgo && env GOOS=darwin GOARCH=amd64 go build pgo.go && mv pgo $(GOBIN)/pgo-mac - -winpgo: - cd pgo && env GOOS=windows GOARCH=386 go build pgo.go && mv pgo.exe $(GOBIN)/pgo.exe - - -#======= Image builds ======= -$(PGOROOT)/build/%/Dockerfile: - $(error No Dockerfile found for $* naming pattern: [$@]) - -%-img-build: pgo-base-$(IMGBUILDER) build-% $(PGOROOT)/build/%/Dockerfile - $(IMGCMDSTEM) \ - -f $(PGOROOT)/build/$*/Dockerfile \ - -t $(PGO_IMAGE_PREFIX)/$*:$(PGO_IMAGE_TAG) \ - --build-arg BASEOS=$(PGO_BASEOS) \ - --build-arg BASEVER=$(PGO_VERSION) \ - --build-arg PREFIX=$(PGO_IMAGE_PREFIX) \ - --build-arg PGVERSION=$(PGO_PG_VERSION) \ - --build-arg BACKREST_VERSION=$(PGO_BACKREST_VERSION) \ - --build-arg ANSIBLE_VERSION=$(ANSIBLE_VERSION) \ - --build-arg DFSET=$(DFSET) \ - --build-arg PACKAGER=$(PACKAGER) \ - $(PGOROOT) - -%-img-buildah: %-img-build ; -# only push to docker daemon if variable PGO_PUSH_TO_DOCKER_DAEMON is set to "true" -ifeq ("$(IMG_PUSH_TO_DOCKER_DAEMON)", "true") - $(IMGCMDSUDO) buildah push $(PGO_IMAGE_PREFIX)/$*:$(PGO_IMAGE_TAG) docker-daemon:$(PGO_IMAGE_PREFIX)/$*:$(PGO_IMAGE_TAG) -endif - -%-img-docker: %-img-build ; - -%-image: %-img-$(IMGBUILDER) ; - -pgo-base: pgo-base-$(IMGBUILDER) - -pgo-base-build: $(PGOROOT)/build/pgo-base/Dockerfile - $(IMGCMDSTEM) \ - -f $(PGOROOT)/build/pgo-base/Dockerfile \ - -t $(PGO_IMAGE_PREFIX)/pgo-base:$(PGO_IMAGE_TAG) \ - --build-arg BASEOS=$(PGO_BASEOS) \ - --build-arg RELVER=$(PGO_VERSION) \ - --build-arg PGVERSION=$(PGO_PG_VERSION) \ - --build-arg PG_FULL=$(PGO_PG_FULLVERSION) \ - --build-arg DFSET=$(DFSET) \ - --build-arg 
PACKAGER=$(PACKAGER) \ - --build-arg DOCKERBASEREGISTRY=$(DOCKERBASEREGISTRY) \ - $(PGOROOT) - -pgo-base-buildah: pgo-base-build ; -# only push to docker daemon if variable PGO_PUSH_TO_DOCKER_DAEMON is set to "true" -ifeq ("$(IMG_PUSH_TO_DOCKER_DAEMON)", "true") - $(IMGCMDSUDO) buildah push $(PGO_IMAGE_PREFIX)/pgo-base:$(PGO_IMAGE_TAG) docker-daemon:$(PGO_IMAGE_PREFIX)/pgo-base:$(PGO_IMAGE_TAG) -endif - -pgo-base-docker: pgo-base-build - - -#======== Utility ======= -cli-docs: - cd $(PGOROOT)/docs/content/operatorcli/cli && go run $(PGOROOT)/pgo/generatedocs.go - -clean: - rm -rf $(GOPATH)/pkg/* $(GOBIN)/postgres-operator $(GOBIN)/apiserver $(GOBIN)/*pgo - -push: $(images:%=push-%) ; - -push-%: - $(IMG_PUSHER_PULLER) push $(PGO_IMAGE_PREFIX)/$*:$(PGO_IMAGE_TAG) - -pull: $(images:%=pull-%) ; - -pull-%: - $(IMG_PUSHER_PULLER) pull $(PGO_IMAGE_PREFIX)/$*:$(PGO_IMAGE_TAG) - -release: linuxpgo macpgo winpgo - rm -rf $(RELTMPDIR) $(RELFILE) - mkdir $(RELTMPDIR) - cp -r $(PGOROOT)/examples $(RELTMPDIR) - cp -r $(PGOROOT)/deploy $(RELTMPDIR) - cp -r $(PGOROOT)/conf $(RELTMPDIR) - cp $(GOBIN)/pgo $(RELTMPDIR) - cp $(GOBIN)/pgo-mac $(RELTMPDIR) - cp $(GOBIN)/pgo.exe $(RELTMPDIR) - cp $(PGOROOT)/examples/pgo-bash-completion $(RELTMPDIR) - tar czvf $(RELFILE) -C $(RELTMPDIR) . - -update-codegen: - $(PGOROOT)/hack/update-codegen.sh - -verify-codegen: - $(PGOROOT)/hack/verify-codegen.sh +PGO_IMAGE_NAME ?= postgres-operator +PGO_IMAGE_MAINTAINER ?= Crunchy Data +PGO_IMAGE_SUMMARY ?= Crunchy PostgreSQL Operator +PGO_IMAGE_DESCRIPTION ?= $(PGO_IMAGE_SUMMARY) +PGO_IMAGE_URL ?= https://www.crunchydata.com/products/crunchy-postgresql-for-kubernetes +PGO_IMAGE_PREFIX ?= localhost + +PGMONITOR_DIR ?= hack/tools/pgmonitor +PGMONITOR_VERSION ?= v5.1.1 +QUERIES_CONFIG_DIR ?= hack/tools/queries + +EXTERNAL_SNAPSHOTTER_DIR ?= hack/tools/external-snapshotter +EXTERNAL_SNAPSHOTTER_VERSION ?= v8.0.1 + +# Buildah's "build" used to be "bud". Use the alias to be compatible for a while. +BUILDAH_BUILD ?= buildah bud + +GO ?= go +GO_BUILD = $(GO) build +GO_TEST ?= $(GO) test +KUTTL ?= kubectl-kuttl +KUTTL_TEST ?= $(KUTTL) test + +##@ General + +# The help target prints out all targets with their descriptions organized +# beneath their categories. The categories are represented by '##@' and the +# target descriptions by '##'. The awk command is responsible for reading the +# entire set of makefiles included in this invocation, looking for lines of the +# file as xyz: ## something, and then pretty-formatting the target and help. Then, +# if there's a line with ##@ something, that gets pretty-printed as a category. +# More info on the usage of ANSI control characters for terminal formatting: +# https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_parameters +# More info on the awk command: +# http://linuxcommand.org/lc3_adv_awk.php + +.PHONY: help +help: ## Display this help. 
+ @awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST) + +.PHONY: all +all: ## Build all images +all: build-postgres-operator-image + +.PHONY: setup +setup: ## Run Setup needed to build images +setup: get-pgmonitor + +.PHONY: get-pgmonitor +get-pgmonitor: + git -C '$(dir $(PGMONITOR_DIR))' clone https://github.com/CrunchyData/pgmonitor.git || git -C '$(PGMONITOR_DIR)' fetch origin + @git -C '$(PGMONITOR_DIR)' checkout '$(PGMONITOR_VERSION)' + @git -C '$(PGMONITOR_DIR)' config pull.ff only + [ -d '${QUERIES_CONFIG_DIR}' ] || mkdir -p '${QUERIES_CONFIG_DIR}' + cp -r '$(PGMONITOR_DIR)/postgres_exporter/common/.' '${QUERIES_CONFIG_DIR}' + cp '$(PGMONITOR_DIR)/postgres_exporter/linux/queries_backrest.yml' '${QUERIES_CONFIG_DIR}' + +.PHONY: get-external-snapshotter +get-external-snapshotter: + git -C '$(dir $(EXTERNAL_SNAPSHOTTER_DIR))' clone https://github.com/kubernetes-csi/external-snapshotter.git || git -C '$(EXTERNAL_SNAPSHOTTER_DIR)' fetch origin + @git -C '$(EXTERNAL_SNAPSHOTTER_DIR)' checkout '$(EXTERNAL_SNAPSHOTTER_VERSION)' + @git -C '$(EXTERNAL_SNAPSHOTTER_DIR)' config pull.ff only + +.PHONY: clean +clean: ## Clean resources +clean: clean-deprecated + rm -f bin/postgres-operator + rm -rf licenses/*/ + [ ! -d testing/kuttl/e2e-generated ] || rm -r testing/kuttl/e2e-generated + [ ! -d testing/kuttl/e2e-generated-other ] || rm -r testing/kuttl/e2e-generated-other + [ ! -f hack/tools/setup-envtest ] || rm hack/tools/setup-envtest + [ ! -d hack/tools/envtest ] || { chmod -R u+w hack/tools/envtest && rm -r hack/tools/envtest; } + [ ! -d hack/tools/pgmonitor ] || rm -rf hack/tools/pgmonitor + [ ! -d hack/tools/external-snapshotter ] || rm -rf hack/tools/external-snapshotter + [ ! -n "$$(ls hack/tools)" ] || rm -r hack/tools/* + [ ! -d hack/.kube ] || rm -r hack/.kube + +.PHONY: clean-deprecated +clean-deprecated: ## Clean deprecated resources + @# packages used to be downloaded into the vendor directory + [ ! -d vendor ] || rm -r vendor + @# executables used to be compiled into the $GOBIN directory + [ ! -n '$(GOBIN)' ] || rm -f $(GOBIN)/postgres-operator $(GOBIN)/apiserver $(GOBIN)/*pgo + @# executables used to be in subdirectories + [ ! -d bin/pgo-rmdata ] || rm -r bin/pgo-rmdata + [ ! -d bin/pgo-backrest ] || rm -r bin/pgo-backrest + [ ! -d bin/pgo-scheduler ] || rm -r bin/pgo-scheduler + [ ! -d bin/postgres-operator ] || rm -r bin/postgres-operator + @# keys used to be generated before install + [ ! -d conf/pgo-backrest-repo ] || rm -r conf/pgo-backrest-repo + [ ! -d conf/postgres-operator ] || rm -r conf/postgres-operator + @# crunchy-postgres-exporter used to live in this repo + [ ! -d bin/crunchy-postgres-exporter ] || rm -r bin/crunchy-postgres-exporter + [ ! -d build/crunchy-postgres-exporter ] || rm -r build/crunchy-postgres-exporter + @# CRDs used to require patching + [ ! 
-d build/crd ] || rm -r build/crd + + +##@ Deployment +.PHONY: createnamespaces +createnamespaces: ## Create operator and target namespaces + kubectl apply -k ./config/namespace + +.PHONY: deletenamespaces +deletenamespaces: ## Delete operator and target namespaces + kubectl delete -k ./config/namespace + +.PHONY: install +install: ## Install the postgrescluster CRD + kubectl apply --server-side -k ./config/crd + +.PHONY: uninstall +uninstall: ## Delete the postgrescluster CRD + kubectl delete -k ./config/crd + +.PHONY: deploy +deploy: ## Deploy the PostgreSQL Operator (enables the postgrescluster controller) + kubectl apply --server-side -k ./config/default + +.PHONY: undeploy +undeploy: ## Undeploy the PostgreSQL Operator + kubectl delete -k ./config/default + +.PHONY: deploy-dev +deploy-dev: ## Deploy the PostgreSQL Operator locally +deploy-dev: PGO_FEATURE_GATES ?= "TablespaceVolumes=true,VolumeSnapshots=true" +deploy-dev: get-pgmonitor +deploy-dev: build-postgres-operator +deploy-dev: createnamespaces + kubectl apply --server-side -k ./config/dev + hack/create-kubeconfig.sh postgres-operator pgo + env \ + QUERIES_CONFIG_DIR="${QUERIES_CONFIG_DIR}" \ + CRUNCHY_DEBUG=true \ + PGO_FEATURE_GATES="${PGO_FEATURE_GATES}" \ + CHECK_FOR_UPGRADES='$(if $(CHECK_FOR_UPGRADES),$(CHECK_FOR_UPGRADES),false)' \ + KUBECONFIG=hack/.kube/postgres-operator/pgo \ + PGO_NAMESPACE='postgres-operator' \ + PGO_INSTALLER='deploy-dev' \ + PGO_INSTALLER_ORIGIN='postgres-operator-repo' \ + BUILD_SOURCE='build-postgres-operator' \ + $(shell kubectl kustomize ./config/dev | \ + sed -ne '/^kind: Deployment/,/^---/ { \ + /RELATED_IMAGE_/ { N; s,.*\(RELATED_[^[:space:]]*\).*value:[[:space:]]*\([^[:space:]]*\),\1="\2",; p; }; \ + }') \ + $(foreach v,$(filter RELATED_IMAGE_%,$(.VARIABLES)),$(v)="$($(v))") \ + bin/postgres-operator + +##@ Build - Binary +.PHONY: build-postgres-operator +build-postgres-operator: ## Build the postgres-operator binary + CGO_ENABLED=1 $(GO_BUILD) $(\ + ) --ldflags '-X "main.versionString=$(PGO_VERSION)"' $(\ + ) --trimpath -o bin/postgres-operator ./cmd/postgres-operator + +##@ Build - Images +.PHONY: build-postgres-operator-image +build-postgres-operator-image: ## Build the postgres-operator image +build-postgres-operator-image: PGO_IMAGE_REVISION := $(shell git rev-parse HEAD) +build-postgres-operator-image: PGO_IMAGE_TIMESTAMP := $(shell date -u +%FT%TZ) +build-postgres-operator-image: build-postgres-operator +build-postgres-operator-image: build/postgres-operator/Dockerfile + $(if $(shell (echo 'buildah version 1.24'; $(word 1,$(BUILDAH_BUILD)) --version) | sort -Vc 2>&1), \ + $(warning WARNING: old buildah does not invalidate its cache for changed labels: \ + https://github.com/containers/buildah/issues/3517)) + $(if $(IMAGE_TAG),, $(error missing IMAGE_TAG)) + $(strip $(BUILDAH_BUILD)) \ + --tag $(BUILDAH_TRANSPORT)$(PGO_IMAGE_PREFIX)/$(PGO_IMAGE_NAME):$(IMAGE_TAG) \ + --label name='$(PGO_IMAGE_NAME)' \ + --label build-date='$(PGO_IMAGE_TIMESTAMP)' \ + --label description='$(PGO_IMAGE_DESCRIPTION)' \ + --label maintainer='$(PGO_IMAGE_MAINTAINER)' \ + --label summary='$(PGO_IMAGE_SUMMARY)' \ + --label url='$(PGO_IMAGE_URL)' \ + --label vcs-ref='$(PGO_IMAGE_REVISION)' \ + --label vendor='$(PGO_IMAGE_MAINTAINER)' \ + --label io.k8s.display-name='$(PGO_IMAGE_NAME)' \ + --label io.k8s.description='$(PGO_IMAGE_DESCRIPTION)' \ + --label io.openshift.tags="postgresql,postgres,sql,nosql,crunchy" \ + --annotation org.opencontainers.image.authors='$(PGO_IMAGE_MAINTAINER)' \ + --annotation 
org.opencontainers.image.vendor='$(PGO_IMAGE_MAINTAINER)' \ + --annotation org.opencontainers.image.created='$(PGO_IMAGE_TIMESTAMP)' \ + --annotation org.opencontainers.image.description='$(PGO_IMAGE_DESCRIPTION)' \ + --annotation org.opencontainers.image.revision='$(PGO_IMAGE_REVISION)' \ + --annotation org.opencontainers.image.title='$(PGO_IMAGE_SUMMARY)' \ + --annotation org.opencontainers.image.url='$(PGO_IMAGE_URL)' \ + $(if $(PGO_VERSION),$(strip \ + --label release='$(PGO_VERSION)' \ + --label version='$(PGO_VERSION)' \ + --annotation org.opencontainers.image.version='$(PGO_VERSION)' \ + )) \ + --file $< --format docker --layers . + +##@ Test +.PHONY: check +check: ## Run basic go tests with coverage output +check: get-pgmonitor + QUERIES_CONFIG_DIR="$(CURDIR)/${QUERIES_CONFIG_DIR}" $(GO_TEST) -cover ./... + +# Available versions: curl -s 'https://storage.googleapis.com/kubebuilder-tools/' | grep -o '[^<]*' +# - KUBEBUILDER_ATTACH_CONTROL_PLANE_OUTPUT=true +.PHONY: check-envtest +check-envtest: ## Run check using envtest and a mock kube api +check-envtest: ENVTEST_USE = $(ENVTEST) --bin-dir=$(CURDIR)/hack/tools/envtest use $(ENVTEST_K8S_VERSION) +check-envtest: SHELL = bash +check-envtest: get-pgmonitor tools/setup-envtest get-external-snapshotter + @$(ENVTEST_USE) --print=overview && echo + source <($(ENVTEST_USE) --print=env) && PGO_NAMESPACE="postgres-operator" QUERIES_CONFIG_DIR="$(CURDIR)/${QUERIES_CONFIG_DIR}" \ + $(GO_TEST) -count=1 -cover ./... + +# The "PGO_TEST_TIMEOUT_SCALE" environment variable (default: 1) can be set to a +# positive number that extends test timeouts. The following runs tests with +# timeouts that are 20% longer than normal: +# make check-envtest-existing PGO_TEST_TIMEOUT_SCALE=1.2 +.PHONY: check-envtest-existing +check-envtest-existing: ## Run check using envtest and an existing kube api +check-envtest-existing: get-pgmonitor get-external-snapshotter +check-envtest-existing: createnamespaces + kubectl apply --server-side -k ./config/dev + USE_EXISTING_CLUSTER=true PGO_NAMESPACE="postgres-operator" QUERIES_CONFIG_DIR="$(CURDIR)/${QUERIES_CONFIG_DIR}" \ + $(GO_TEST) -count=1 -cover -p=1 ./... + kubectl delete -k ./config/dev + +# Expects operator to be running +.PHONY: check-kuttl +check-kuttl: ## Run kuttl end-to-end tests +check-kuttl: ## example command: make check-kuttl KUTTL_TEST=' + ${KUTTL_TEST} \ + --config testing/kuttl/kuttl-test.yaml + +.PHONY: generate-kuttl +generate-kuttl: export KUTTL_PG_UPGRADE_FROM_VERSION ?= 15 +generate-kuttl: export KUTTL_PG_UPGRADE_TO_VERSION ?= 16 +generate-kuttl: export KUTTL_PG_VERSION ?= 16 +generate-kuttl: export KUTTL_POSTGIS_VERSION ?= 3.4 +generate-kuttl: export KUTTL_PSQL_IMAGE ?= registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.3-1 +generate-kuttl: export KUTTL_TEST_DELETE_NAMESPACE ?= kuttl-test-delete-namespace +generate-kuttl: ## Generate kuttl tests + [ ! -d testing/kuttl/e2e-generated ] || rm -r testing/kuttl/e2e-generated + [ ! 
-d testing/kuttl/e2e-generated-other ] || rm -r testing/kuttl/e2e-generated-other + bash -ceu ' \ + case $(KUTTL_PG_VERSION) in \ + 16 ) export KUTTL_BITNAMI_IMAGE_TAG=16.0.0-debian-11-r3 ;; \ + 15 ) export KUTTL_BITNAMI_IMAGE_TAG=15.0.0-debian-11-r4 ;; \ + 14 ) export KUTTL_BITNAMI_IMAGE_TAG=14.5.0-debian-11-r37 ;; \ + 13 ) export KUTTL_BITNAMI_IMAGE_TAG=13.8.0-debian-11-r39 ;; \ + 12 ) export KUTTL_BITNAMI_IMAGE_TAG=12.12.0-debian-11-r40 ;; \ + esac; \ + render() { envsubst '"'"' \ + $$KUTTL_PG_UPGRADE_FROM_VERSION $$KUTTL_PG_UPGRADE_TO_VERSION \ + $$KUTTL_PG_VERSION $$KUTTL_POSTGIS_VERSION $$KUTTL_PSQL_IMAGE \ + $$KUTTL_BITNAMI_IMAGE_TAG $$KUTTL_TEST_DELETE_NAMESPACE'"'"'; }; \ + while [ $$# -gt 0 ]; do \ + source="$${1}" target="$${1/e2e/e2e-generated}"; \ + mkdir -p "$${target%/*}"; render < "$${source}" > "$${target}"; \ + shift; \ + done' - testing/kuttl/e2e/*/*.yaml testing/kuttl/e2e-other/*/*.yaml testing/kuttl/e2e/*/*/*.yaml testing/kuttl/e2e-other/*/*/*.yaml + +##@ Generate + +.PHONY: check-generate +check-generate: ## Check crd, deepcopy functions, and rbac generation +check-generate: generate-crd +check-generate: generate-deepcopy +check-generate: generate-rbac + git diff --exit-code -- config/crd + git diff --exit-code -- config/rbac + git diff --exit-code -- pkg/apis + +.PHONY: generate +generate: ## Generate crd, deepcopy functions, and rbac +generate: generate-crd +generate: generate-deepcopy +generate: generate-rbac + +.PHONY: generate-crd +generate-crd: ## Generate Custom Resource Definitions (CRDs) +generate-crd: tools/controller-gen + $(CONTROLLER) \ + crd:crdVersions='v1' \ + paths='./pkg/apis/...' \ + output:dir='config/crd/bases' # {directory}/{group}_{plural}.yaml + +.PHONY: generate-deepcopy +generate-deepcopy: ## Generate DeepCopy functions +generate-deepcopy: tools/controller-gen + $(CONTROLLER) \ + object:headerFile='hack/boilerplate.go.txt' \ + paths='./pkg/apis/postgres-operator.crunchydata.com/...' + +.PHONY: generate-rbac +generate-rbac: ## Generate RBAC +generate-rbac: tools/controller-gen + $(CONTROLLER) \ + rbac:roleName='postgres-operator' \ + paths='./cmd/...' paths='./internal/...' \ + output:dir='config/rbac' # {directory}/role.yaml + +##@ Tools + +.PHONY: tools +tools: ## Download tools like controller-gen and kustomize if necessary. + +# go-get-tool will 'go install' any package $2 and install it to $1. +define go-get-tool +@[ -f '$(1)' ] || { echo Downloading '$(2)'; GOBIN='$(abspath $(dir $(1)))' $(GO) install '$(2)'; } +endef + +CONTROLLER ?= hack/tools/controller-gen +tools: tools/controller-gen +tools/controller-gen: + $(call go-get-tool,$(CONTROLLER),sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.4) + +ENVTEST ?= hack/tools/setup-envtest +tools: tools/setup-envtest +tools/setup-envtest: + $(call go-get-tool,$(ENVTEST),sigs.k8s.io/controller-runtime/tools/setup-envtest@latest) + +##@ Release + +.PHONY: license licenses +license: licenses +licenses: ## Aggregate license files + ./bin/license_aggregator.sh ./cmd/... 
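The `tools`, `generate*`, `check*`, and image-build targets defined in this new Makefile form the usual regeneration and verification loop for the operator. A minimal sketch of how they are commonly chained, assuming a local Go toolchain, network access (several targets clone pgmonitor), and buildah for the image build; `IMAGE_TAG=dev` is only an example value:

```sh
make tools             # download controller-gen and setup-envtest into hack/tools/
make generate          # regenerate CRDs, DeepCopy functions, and RBAC manifests
make check-generate    # fail if config/crd, config/rbac, or pkg/apis drift from the source
make check             # run the unit tests with coverage
make build-postgres-operator-image IMAGE_TAG=dev   # IMAGE_TAG is required by the Makefile
```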
+
+.PHONY: release-postgres-operator-image release-postgres-operator-image-labels
+release-postgres-operator-image: ## Build the postgres-operator image and all its prerequisites
+release-postgres-operator-image: release-postgres-operator-image-labels
+release-postgres-operator-image: licenses
+release-postgres-operator-image: build-postgres-operator-image
+release-postgres-operator-image-labels:
+	$(if $(PGO_IMAGE_DESCRIPTION),, $(error missing PGO_IMAGE_DESCRIPTION))
+	$(if $(PGO_IMAGE_MAINTAINER),, $(error missing PGO_IMAGE_MAINTAINER))
+	$(if $(PGO_IMAGE_NAME),, $(error missing PGO_IMAGE_NAME))
+	$(if $(PGO_IMAGE_SUMMARY),, $(error missing PGO_IMAGE_SUMMARY))
+	$(if $(PGO_VERSION),, $(error missing PGO_VERSION))
diff --git a/README.md b/README.md
index 86704fd5ba..357734566e 100644
--- a/README.md
+++ b/README.md
@@ -1,15 +1,53 @@

[README header image: title/alt text "Crunchy Data PostgreSQL Operator" and "Crunchy Data" replaced with "PGO: The Postgres Operator from Crunchy Data"]

[![Go Report Card](https://goreportcard.com/badge/github.com/CrunchyData/postgres-operator)](https://goreportcard.com/report/github.com/CrunchyData/postgres-operator) +![GitHub Repo stars](https://img.shields.io/github/stars/CrunchyData/postgres-operator) +[![License](https://img.shields.io/github/license/CrunchyData/postgres-operator)](LICENSE.md) +[![Discord](https://img.shields.io/discord/1068276526740676708?label=discord&logo=discord)](https://discord.gg/a7vWKG8Ec9) -# Run your own production-grade PostgreSQL-as-a-Service on Kubernetes! +# Production Postgres Made Easy -The [Crunchy PostgreSQL Operator][documentation] automates and simplifies deploying and managing -open source PostgreSQL clusters on Kubernetes and other Kubernetes-enabled Platforms by providing -the essential features you need to keep your PostgreSQL clusters up and running, including: +[PGO](https://github.com/CrunchyData/postgres-operator), the [Postgres Operator](https://github.com/CrunchyData/postgres-operator) from [Crunchy Data](https://www.crunchydata.com), gives you a **declarative Postgres** solution that automatically manages your [PostgreSQL](https://www.postgresql.org) clusters. + +Designed for your GitOps workflows, it is [easy to get started](https://access.crunchydata.com/documentation/postgres-operator/v5/quickstart/) with Postgres on Kubernetes with PGO. Within a few moments, you can have a production-grade Postgres cluster complete with high availability, disaster recovery, and monitoring, all over secure TLS communications. Even better, PGO lets you easily customize your Postgres cluster to tailor it to your workload! + +With conveniences like cloning Postgres clusters to using rolling updates to roll out disruptive changes with minimal downtime, PGO is ready to support your Postgres data at every stage of your release pipeline. Built for resiliency and uptime, PGO will keep your Postgres cluster in its desired state, so you do not need to worry about it. + +PGO is developed with many years of production experience in automating Postgres management on Kubernetes, providing a seamless cloud native Postgres solution to keep your data always available. + +Have questions or looking for help? [Join our Discord group](https://discord.gg/a7vWKG8Ec9). + +# Installation + +Crunchy Data makes PGO available as the orchestration behind Crunchy Postgres for Kubernetes. Crunchy Postgres for Kubernetes is the integrated product that includes PostgreSQL, PGO and a collection of PostgreSQL tools and extensions that includes the various [open source components listed in the documentation](https://access.crunchydata.com/documentation/postgres-operator/latest/references/components). + +We recommend following our [Quickstart](https://access.crunchydata.com/documentation/postgres-operator/v5/quickstart/) for how to install and get up and running. However, if you can't wait to try it out, here are some instructions to get Postgres up and running on Kubernetes: + +1. [Fork the Postgres Operator examples repository](https://github.com/CrunchyData/postgres-operator-examples/fork) and clone it to your host machine. For example: + +```sh +YOUR_GITHUB_UN="" +git clone --depth 1 "git@github.com:${YOUR_GITHUB_UN}/postgres-operator-examples.git" +cd postgres-operator-examples +``` + +2. 
Run the following commands + +```sh +kubectl apply -k kustomize/install/namespace +kubectl apply --server-side -k kustomize/install/default +``` + +For more information please read the [Quickstart](https://access.crunchydata.com/documentation/postgres-operator/v5/quickstart/) and [Tutorial](https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/). + +These installation instructions provide the steps necessary to install PGO along with Crunchy Data's Postgres distribution, Crunchy Postgres, as Crunchy Postgres for Kubernetes. In doing so the installation downloads a series of container images from Crunchy Data's Developer Portal. For more information on the use of container images downloaded from the Crunchy Data Developer Portal or other third party sources, please see 'License and Terms' below. The installation and use of PGO outside of the use of Crunchy Postgres for Kubernetes will require modifications of these installation instructions and creation of the necessary PostgreSQL and related containers. + +# Cloud Native Postgres for Kubernetes + +PGO, the Postgres Operator from Crunchy Data, comes with all of the features you need for a complete cloud native Postgres experience on Kubernetes! #### PostgreSQL Cluster [Provisioning][provisioning] @@ -18,7 +56,7 @@ Pods and PostgreSQL configuration! #### [High Availability][high-availability] -Safe, automated failover backed by a [distributed consensus based high-availability solution][high-availability]. +Safe, automated failover backed by a [distributed consensus high availability solution][high-availability]. Uses [Pod Anti-Affinity][k8s-anti-affinity] to help resiliency; you can configure how aggressive this can be! Failed primaries automatically heal, allowing for faster recovery time. @@ -26,173 +64,105 @@ Support for [standby PostgreSQL clusters][multiple-cluster] that work both withi #### [Disaster Recovery][disaster-recovery] -Backups and restores leverage the open source [pgBackRest][] utility and -[includes support for full, incremental, and differential backups as well as efficient delta restores][disaster-recovery]. -Set how long you want your backups retained for. Works great with very large databases! +[Backups][backups] and [restores][disaster-recovery] leverage the open source [pgBackRest][] utility and +[includes support for full, incremental, and differential backups as well as efficient delta restores][backups]. +Set how long you to retain your backups. Works great with very large databases! -#### TLS +#### Security and [TLS][tls] -Secure communication between your applications and data servers by [enabling TLS for your PostgreSQL servers][pgo-task-tls], -including the ability to enforce that all of your connections to use TLS. +PGO enforces that all connections are over [TLS][tls]. You can also [bring your own TLS infrastructure][tls] if you do not want to use the defaults provided by PGO. + +PGO runs containers with locked-down settings and provides Postgres credentials in a secure, convenient way for connecting your applications to your data. #### [Monitoring][monitoring] [Track the health of your PostgreSQL clusters][monitoring] using the open source [pgMonitor][] library. -#### PostgreSQL User Management - -Quickly add and remove users from your PostgreSQL clusters with powerful commands. Manage password -expiration policies or use your preferred PostgreSQL authentication scheme. 
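The security notes above mention that PGO provides Postgres credentials in a convenient way for connecting applications. In practice each Postgres user gets a Kubernetes Secret; a minimal sketch of reading it, where the cluster name `hippo`, user `hippo`, namespace `postgres-operator`, and the `<cluster>-pguser-<user>` naming are assumptions to check against your own installation:

```sh
# Hypothetical names: cluster "hippo" with a default user "hippo" in namespace "postgres-operator".
kubectl -n postgres-operator get secret hippo-pguser-hippo \
  -o go-template='{{ .data.uri | base64decode }}'   # prints a ready-to-use connection URI
```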
- -#### Upgrade Management +#### [Upgrade Management][update-postgres] -Safely apply PostgreSQL updates with minimal availability impact to your PostgreSQL clusters. +Safely [apply PostgreSQL updates][update-postgres] with minimal impact to the availability of your PostgreSQL clusters. #### Advanced Replication Support -Choose between [asynchronous replication][high-availability] and [synchronous replication][high-availability-sync] +Choose between [asynchronous][high-availability] and synchronous replication for workloads that are sensitive to losing transactions. -#### Clone - -Create new clusters from your existing clusters or backups with [`pgo create cluster --restore-from`][pgo-create-cluster]. - -#### Connection Pooling +#### [Clone][clone] -Use [pgBouncer][] for connection pooling +[Create new clusters from your existing clusters or backups][clone] with efficient data cloning. -#### Node Affinity +#### [Connection Pooling][pool] -Have your PostgreSQL clusters deployed to [Kubernetes Nodes][k8s-nodes] of your preference +Advanced [connection pooling][pool] support using [pgBouncer][]. -#### Scheduled Backups +#### Pod Anti-Affinity, Node Affinity, Pod Tolerations -Choose the type of backup (full, incremental, differential) and [how frequently you want it to occur][disaster-recovery-scheduling] on each PostgreSQL cluster. +Have your PostgreSQL clusters deployed to [Kubernetes Nodes][k8s-nodes] of your preference. Set your [pod anti-affinity][k8s-anti-affinity], node affinity, Pod tolerations, and more rules to customize your deployment topology! -#### Backup to S3 +#### [Scheduled Backups][backup-management] -[Store your backups in Amazon S3][disaster-recovery-s3] or any object storage system that supports -the S3 protocol. The PostgreSQL Operator can backup, restore, and create new clusters from these backups. +Choose the type of backup (full, incremental, differential) and [how frequently you want it to occur][backup-management] on each PostgreSQL cluster. -#### Multi-Namespace Support +#### Backup to Local Storage, [S3][backups-s3], [GCS][backups-gcs], [Azure][backups-azure], or a Combo! -You can control how the PostgreSQL Operator leverages [Kubernetes Namespaces][k8s-namespaces] with several different deployment models: +[Store your backups in Amazon S3][backups-s3] or any object storage system that supports +the S3 protocol. You can also store backups in [Google Cloud Storage][backups-gcs] and [Azure Blob Storage][backups-azure]. -- Deploy the PostgreSQL Operator and all PostgreSQL clusters to the same namespace -- Deploy the PostgreSQL Operator to one namespaces, and all PostgreSQL clusters to a different namespace -- Deploy the PostgreSQL Operator to one namespace, and have your PostgreSQL clusters managed across multiple namespaces -- Dynamically add and remove namespaces managed by the PostgreSQL Operator using the `pgo create namespace` and `pgo delete namespace` commands +You can also [mix-and-match][backups-multi]: PGO lets you [store backups in multiple locations][backups-multi]. -#### Full Customizability +#### [Full Customizability][customize-cluster] -The Crunchy PostgreSQL Operator makes it easy to get your own PostgreSQL-as-a-Service up and running on Kubernetes-enabled platforms, but we know that there are further customizations that you can make. 
As such, the Crunchy PostgreSQL Operator allows you to further customize your deployments, including: +PGO makes it easy to fully customize your Postgres cluster to tailor to your workload: -- Selecting different storage classes for your primary, replica, and backup storage -- Select your own container resources class for each PostgreSQL cluster deployment; differentiate between resources applied for primary and replica clusters! -- Use your own container image repository, including support `imagePullSecrets` and private repositories -- [Customize your PostgreSQL configuration](https://access.crunchydata.com/documentation/postgres-operator/latest/advanced/custom-configuration/) -- Bring your own trusted certificate authority (CA) for use with the Operator API server -- Override your PostgreSQL configuration for each cluster +- Choose the resources for your Postgres cluster: [container resources and storage size][resize-cluster]. [Resize at any time][resize-cluster] with minimal disruption. +- - Use your own container image repository, including support `imagePullSecrets` and private repositories +- [Customize your PostgreSQL configuration][customize-cluster] +#### [Namespaces][k8s-namespaces] -[disaster-recovery]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/ -[disaster-recovery-s3]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/#using-s3 -[disaster-recovery-scheduling]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/#scheduling-backups -[high-availability]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/ -[high-availability-sync]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/#synchronous-replication-guarding-against-transactions-loss -[monitoring]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/monitoring/ -[multiple-cluster]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/multi-cluster-kubernetes/ -[pgo-create-cluster]: https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_create_cluster/ -[pgo-task-tls]: https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/common-tasks/#enable-tls -[provisioning]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/provisioning/ +Deploy PGO to watch Postgres clusters in all of your [namespaces][k8s-namespaces], or [restrict which namespaces][single-namespace] you want PGO to manage Postgres clusters in! 
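To make the declarative and customization claims above concrete, here is a minimal sketch of a `PostgresCluster` manifest. The API group matches the `postgres-operator.crunchydata.com` packages generated by this repository; the version, names, replica count, and storage sizes are illustrative assumptions, so check the current CRD reference before relying on them:

```sh
# Illustrative only; adjust postgresVersion, storage, and replicas for your workload.
kubectl apply -f - <<'EOF'
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  namespace: postgres-operator
spec:
  postgresVersion: 16
  instances:
    - name: instance1
      replicas: 2
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
EOF
```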
+[backups]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/backups-disaster-recovery/backups +[backups-s3]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/backups-disaster-recovery/backups#using-s3 +[backups-gcs]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/backups-disaster-recovery/backups#using-google-cloud-storage-gcs +[backups-azure]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/backups-disaster-recovery/backups#using-azure-blob-storage +[backups-multi]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/backups-disaster-recovery/backups#set-up-multiple-backup-repositories +[backup-management]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/backups-disaster-recovery/backup-management +[clone]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/backups-disaster-recovery/disaster-recovery#clone-a-postgres-cluster +[customize-cluster]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/day-two/customize-cluster +[disaster-recovery]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/backups-disaster-recovery/disaster-recovery +[high-availability]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/day-two/high-availability/ +[monitoring]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/day-two/monitoring/ +[multiple-cluster]: https://access.crunchydata.com/documentation/postgres-operator/v5/architecture/disaster-recovery/#standby-cluster-overview +[pool]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/basic-setup/connection-pooling/ +[provisioning]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/basic-setup/create-cluster/ +[resize-cluster]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/cluster-management/resize-cluster/ +[single-namespace]: https://access.crunchydata.com/documentation/postgres-operator/v5/installation/kustomize/#installation-mode +[tls]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/day-two/customize-cluster#customize-tls +[update-postgres]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/cluster-management/update-cluster [k8s-anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity [k8s-namespaces]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ [k8s-nodes]: https://kubernetes.io/docs/concepts/architecture/nodes/ - [pgBackRest]: https://www.pgbackrest.org -[pgBouncer]: https://access.crunchydata.com/documentation/pgbouncer/ +[pgBouncer]: https://access.crunchydata.com/documentation/postgres-operator/v5/tutorials/basic-setup/connection-pooling/ [pgMonitor]: https://github.com/CrunchyData/pgmonitor - -## Deployment Requirements - -The PostgreSQL Operator is validated for deployment on Kubernetes, OpenShift, and VMware Enterprise PKS clusters. Some form of storage is required, NFS, hostPath, and Storage Classes are currently supported. - -The PostgreSQL Operator includes various components that get deployed to your -Kubernetes cluster as shown in the following diagram and detailed -in the Design section of the documentation for the version you are running. 
- -![Reference](https://access.crunchydata.com/documentation/postgres-operator/latest/Operator-Architecture.png) - -The PostgreSQL Operator is developed and tested on CentOS and RHEL linux platforms but is known to run on other Linux variants. - -### Supported Platforms - -The Crunchy PostgreSQL Operator is tested on the following Platforms: - -- Kubernetes 1.13+ -- OpenShift 3.11+ -- Google Kubernetes Engine (GKE), including Anthos -- Amazon EKS -- VMware Enterprise PKS 1.3+ - -### Storage - -The Crunchy PostgreSQL Operator is tested with a variety of different types of Kubernetes storage and Storage Classes, including: - -- Google Compute Engine persistent volumes -- HostPath -- NFS -- Rook -- StorageOS - -and more. - -We know there are a variety of different types of [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) available for Kubernetes and we do our best to test each one, but due to the breadth of this area we are unable to verify PostgreSQL Operator functionality in each one. With that said, the PostgreSQL Operator is designed to be storage class agnostic and has been demonstrated to work with additional Storage Classes. - -## Installation - -### PostgreSQL Operator Installation - -The PostgreSQL Operator provides a few different methods for installation based on your use case. - -Based on your storage settings in your Kubernetes environment, you may be able to start as quickly as: - -```shell -kubectl create namespace pgo -kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.5.0/installers/kubectl/postgres-operator.yml -``` - -Otherwise, we highly recommend following the instructions from our [Quickstart](https://access.crunchydata.com/documentation/postgres-operator/latest/quickstart/). - -Installations methods include: - -- [Quickstart](https://access.crunchydata.com/documentation/postgres-operator/latest/quickstart/) -- [PostgreSQL Operator Installer](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/postgres-operator/) -- [Ansible](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/other/ansible/) -- [OperatorHub](https://operatorhub.io/operator/postgresql) -- [Developer Installation](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/other/bash/) - -### `pgo` Client Installation - -If you have the PostgreSQL Operator installed in your environment, and are interested in installation of the client interface, please start here: - -- [pgo Client Install](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/pgo-client/) - -There is also a `pgo-client` container if you wish to deploy the client directly to your Kubernetes environment. 
- -### Included Components +## Included Components [PostgreSQL containers](https://github.com/CrunchyData/crunchy-containers) deployed with the PostgreSQL Operator include the following components: - [PostgreSQL](https://www.postgresql.org) - [PostgreSQL Contrib Modules](https://www.postgresql.org/docs/current/contrib.html) - [PL/Python + PL/Python 3](https://www.postgresql.org/docs/current/plpython.html) + - [PL/Perl](https://www.postgresql.org/docs/current/plperl.html) + - [PL/Tcl](https://www.postgresql.org/docs/current/pltcl.html) - [pgAudit](https://www.pgaudit.org/) - [pgAudit Analyze](https://github.com/pgaudit/pgaudit_analyze) + - [pg_cron](https://github.com/citusdata/pg_cron) + - [pg_partman](https://github.com/pgpartman/pg_partman) - [pgnodemx](https://github.com/CrunchyData/pgnodemx) - [set_user](https://github.com/pgaudit/set_user) + - [TimescaleDB](https://github.com/timescale/timescaledb) (Apache-licensed community edition) - [wal2json](https://github.com/eulerto/wal2json) - [pgBackRest](https://pgbackrest.org/) - [pgBouncer](http://pgbouncer.github.io/) @@ -205,73 +175,79 @@ In addition to the above, the geospatially enhanced PostgreSQL + PostGIS contain - [PostGIS](http://postgis.net/) - [pgRouting](https://pgrouting.org/) -- [PL/R](https://github.com/postgres-plr/plr) -[PostgreSQL Operator Monitoring](https://crunchydata.github.io/postgres-operator/latest/architecture/monitoring/) uses the following components: +[PostgreSQL Operator Monitoring](https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/monitoring/) uses the following components: - [pgMonitor](https://github.com/CrunchyData/pgmonitor) - [Prometheus](https://github.com/prometheus/prometheus) - [Grafana](https://github.com/grafana/grafana) - [Alertmanager](https://github.com/prometheus/alertmanager) -Additional containers that are not directly integrated with the PostgreSQL Operator but can work alongside it include: - -- [pgPool II](https://access.crunchydata.com/documentation/crunchy-postgres-containers/latest/container-specifications/crunchy-pgpool/) -- [pg_upgrade](https://access.crunchydata.com/documentation/crunchy-postgres-containers/latest/container-specifications/crunchy-upgrade/) -- [pgBench](https://access.crunchydata.com/documentation/crunchy-postgres-containers/latest/container-specifications/crunchy-pgbench/) +For more information about which versions of the PostgreSQL Operator include which components, please visit the [compatibility](https://access.crunchydata.com/documentation/postgres-operator/v5/references/components/) section of the documentation. -For more information about which versions of the PostgreSQL Operator include which components, please visit the [compatibility](https://access.crunchydata.com/documentation/postgres-operator/latest/configuration/compatibility/) section of the documentation. 
+## [Supported Platforms](https://access.crunchydata.com/documentation/postgres-operator/latest/overview/supported-platforms)
 
-## Using the PostgreSQL Operator
+PGO, the Postgres Operator from Crunchy Data, is tested on the following platforms:
 
-If you have the PostgreSQL and Client Interface installed in your environment and are interested in guidance on the use of the Crunchy PostgreSQL Operator, please start here:
-
-- [PostgreSQL Operator Documentation](https://access.crunchydata.com/documentation/postgres-operator/)
-- [`pgo` Client User Guide](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/)
+- Kubernetes
+- OpenShift
+- Rancher
+- Google Kubernetes Engine (GKE), including Anthos
+- Amazon EKS
+- Microsoft AKS
+- VMware Tanzu
 
-## Contributing to the Project
+# Contributing to the Project
 
 Want to contribute to the PostgreSQL Operator project? Great! We've put together
-as set of contributing guidelines that you can review here:
+a set of contributing guidelines that you can review here:
 
 - [Contributing Guidelines](CONTRIBUTING.md)
 
-If you want to learn how to get up your development environment, please read our
-documentation here:
-
- - [Developer Setup](https://access.crunchydata.com/documentation/postgres-operator/latest/contributing/developer-setup/)
-
 Once you are ready to submit a Pull Request, please ensure you do the following:
 
-1. Reviewing the [contributing guidelines](CONTRIBUTING.md) and ensure your
-that you have followed the commit message format, added testing where
-appropriate, documented your changes, etc.
+1. Review the [contributing guidelines](CONTRIBUTING.md) and ensure
+   that you have followed the commit message format, added testing where
+   appropriate, documented your changes, etc.
 1. Open up a pull request based upon the guidelines. If you are adding a new
-feature, please open up the pull request on the `master` branch. If you have
-a bug fix for a supported version, open up a pull request against the supported
-version branch (e.g. `REL_4_2` for 4.2)
+   feature, please open up the pull request on the `main` branch.
 1. Please be as descriptive in your pull request as possible. If you are
-referencing an issue, please be sure to include the issue in your pull request
+   referencing an issue, please be sure to include the issue in your pull request.
 
 ## Support
 
-If you believe you have found a bug or have detailed feature request, please open a GitHub issue and follow the guidelines for submitting a bug.
+If you believe you have found a bug or have a detailed feature request, please open a GitHub issue and follow the guidelines for submitting a bug.
 
-For general questions or community support, we welcome you to join the PostgreSQL Operator community mailing list at [postgres-operator@crunchydata.com](mailto:postgres-operator@crunchydata.com) and ask your question there.
+For general questions or community support, we welcome you to join our [community Discord](https://discord.gg/a7vWKG8Ec9) and ask your questions there.
 
 For other information, please visit the [Support](https://access.crunchydata.com/documentation/postgres-operator/latest/support/) section of the documentation.
 
-## Documentation
+# Documentation
 
-For additional information regarding design, configuration and operation of the
+For additional information regarding the design, configuration, and operation of the
 PostgreSQL Operator, please see the [Official Project Documentation][documentation].
-If you are looking for the [nightly builds of the documentation](https://crunchydata.github.io/postgres-operator/latest/), you can view them at:
+[documentation]: https://access.crunchydata.com/documentation/postgres-operator/latest/
+
+## Past Versions
 
-https://crunchydata.github.io/postgres-operator/latest/
+Documentation for previous releases can be found at the [Crunchy Data Access Portal](https://access.crunchydata.com/documentation/).
 
-[documentation]: https://access.crunchydata.com/documentation/postgres-operator/
+# Releases
 
-## Past Versions
+When a PostgreSQL Operator general availability (GA) release occurs, the container images are distributed on the following platforms in order:
+
+- [Crunchy Data Customer Portal](https://access.crunchydata.com/)
+- [Crunchy Data Developer Portal](https://www.crunchydata.com/developers)
+
+The image rollout can occur over the course of several days.
+
+To stay up-to-date on when releases are made available in the [Crunchy Data Developer Portal](https://www.crunchydata.com/developers), please sign up for the [Crunchy Data Developer Program Newsletter](https://www.crunchydata.com/developers#email). You can also [join the PGO project community discord](https://discord.gg/a7vWKG8Ec9).
+
+# FAQs, License and Terms
+
+For more information regarding PGO, the Postgres Operator project from Crunchy Data, and Crunchy Postgres for Kubernetes, please see the [frequently asked questions](https://access.crunchydata.com/documentation/postgres-operator/latest/faq).
+
+The installation instructions provided in this repo are designed for the use of PGO along with Crunchy Data's Postgres distribution, Crunchy Postgres, as Crunchy Postgres for Kubernetes. The unmodified use of these installation instructions will result in downloading container images from Crunchy Data repositories - specifically the Crunchy Data Developer Portal. The use of container images downloaded from the Crunchy Data Developer Portal is subject to the [Crunchy Data Developer Program terms](https://www.crunchydata.com/developers/terms-of-use).
 
-Documentation for previous releases can be found at the [Crunchy Data Access Portal](https://access.crunchydata.com/documentation/)
+The PGO Postgres Operator project source code is available subject to the [Apache 2.0 license](LICENSE.md) with the PGO logo and branding assets covered by [our trademark guidelines](docs/static/logos/TRADEMARKS.md).
diff --git a/apiserver.go b/apiserver.go
deleted file mode 100644
index 2c3858bb63..0000000000
--- a/apiserver.go
+++ /dev/null
@@ -1,174 +0,0 @@
-package main
-
-/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/ - -import ( - "crypto/tls" - "crypto/x509" - "net/http" - "os" - "strconv" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/apiserver/routing" - crunchylog "github.com/crunchydata/postgres-operator/internal/logging" - "github.com/crunchydata/postgres-operator/internal/tlsutil" - - "github.com/gorilla/mux" - log "github.com/sirupsen/logrus" -) - -// Created as part of the apiserver.WriteTLSCert call -const serverCertPath = "/tmp/server.crt" -const serverKeyPath = "/tmp/server.key" - -func main() { - // Environment-overridden variables - srvPort := "8443" - tlsDisabled := false - tlsNoVerify := false - tlsTrustedCAs := x509.NewCertPool() - - // NOAUTH_ROUTES identifies a comma-separated list of URL routes - // which will have authentication disabled, both system-to-system - // via TLS and HTTP Basic used to power RBAC - skipAuthRoutes := strings.TrimSpace(os.Getenv("NOAUTH_ROUTES")) - - // PORT overrides the server listening port - if p, ok := os.LookupEnv("PORT"); ok && p != "" { - srvPort = p - } - - // CRUNCHY_DEBUG sets the logging level to Debug (more verbose) - if debug, _ := strconv.ParseBool(os.Getenv("CRUNCHY_DEBUG")); debug { - log.SetLevel(log.DebugLevel) - log.Debug("debug flag set to true") - } else { - log.Info("debug flag set to false") - } - - // TLS_NO_VERIFY disables verification of SSL client certificates - if noVerify, _ := strconv.ParseBool(os.Getenv("TLS_NO_VERIFY")); noVerify { - tlsNoVerify = noVerify - } - log.Debugf("TLS_NO_VERIFY set as %t", tlsNoVerify) - - // DISABLE_TLS configures the server to listen over HTTP - if noTLS, _ := strconv.ParseBool(os.Getenv("DISABLE_TLS")); noTLS { - tlsDisabled = noTLS - } - log.Debugf("DISABLE_TLS set as %t", tlsDisabled) - - if !tlsDisabled { - // ADD_OS_TRUSTSTORE causes the API server to trust clients with a - // cert issued by CAs the underlying OS already trusts - if osTrust, _ := strconv.ParseBool(os.Getenv("ADD_OS_TRUSTSTORE")); osTrust { - if osCAs, err := x509.SystemCertPool(); err != nil { - log.Errorf("unable to read OS truststore - [%v], ignoring option", err) - } else { - tlsTrustedCAs = osCAs - } - } - - // TLS_CA_TRUST identifies a PEM-encoded file containing certificate - // authorities trusted to identify client SSL connections - if tp, ok := os.LookupEnv("TLS_CA_TRUST"); ok && tp != "" { - if trustFile, err := os.Open(tp); err != nil { - log.Errorf("unable to load TLS trust from %s - [%v], ignoring option", tp, err) - } else { - err = tlsutil.ExtendTrust(tlsTrustedCAs, trustFile) - if err != nil { - log.Errorf("error reading %s - %v, ignoring option", tp, err) - } - trustFile.Close() - } - } - } - - // init crunchy-formatted logger - crunchylog.CrunchyLogger(crunchylog.SetParameters()) - - // give time for pgo-event to start up - time.Sleep(time.Duration(5) * time.Second) - - log.Infoln("postgres-operator apiserver starts") - apiserver.Initialize() - - r := mux.NewRouter() - routing.RegisterAllRoutes(r) - - var srv *http.Server - if !tlsDisabled { - // Set up deferred enforcement of certs, given Verify...IfGiven setting - skipAuth := []string{ - "/healthz", // Required for kube probes - } - if len(skipAuthRoutes) > 0 { - skipAuth = append(skipAuth, strings.Split(skipAuthRoutes, ",")...) 
- } - certEnforcer, err := apiserver.NewCertEnforcer(skipAuth) - if err != nil { - // Since disabling authentication would break functionality - // dependent on the user identity, only certain routes may be - // configured in NOAUTH_ROUTES. - log.Fatalf("NOAUTH_ROUTES configured incorrectly: %s", err) - } - r.Use(certEnforcer.Enforce) - - // Cert files are used for http.ListenAndServeTLS - err = apiserver.WriteTLSCert(serverCertPath, serverKeyPath) - if err != nil { - log.Fatalf("unable to open server cert at %s - %v", serverKeyPath, err) - } - - // Add server cert to trust root, necessarily includes server - // certificate issuer chain (intermediate and root CAs) - if svrCertFile, err := os.Open(serverCertPath); err != nil { - log.Fatalf("unable to open %s for reading - %v", serverCertPath, err) - } else { - if err = tlsutil.ExtendTrust(tlsTrustedCAs, svrCertFile); err != nil { - log.Fatalf("error reading server cert at %s - %v", serverCertPath, err) - } - svrCertFile.Close() - } - - cfg := &tls.Config{ - //specify pgo-apiserver in the CN....then, add ServerName: "pgo-apiserver", - ServerName: "pgo-apiserver", - ClientAuth: tls.VerifyClientCertIfGiven, - InsecureSkipVerify: tlsNoVerify, - ClientCAs: tlsTrustedCAs, - MinVersion: tls.VersionTLS11, - } - - srv = &http.Server{ - Addr: ":" + srvPort, - Handler: r, - TLSConfig: cfg, - } - log.Info("listening on port " + srvPort) - log.Fatal(srv.ListenAndServeTLS(serverCertPath, serverKeyPath)) - } else { - srv = &http.Server{ - Addr: ":" + srvPort, - Handler: r, - } - log.Info("listening on port " + srvPort) - log.Fatal(srv.ListenAndServe()) - } -} diff --git a/bin/.gitignore b/bin/.gitignore index 1e38065bf7..b138e9d063 100644 --- a/bin/.gitignore +++ b/bin/.gitignore @@ -1,2 +1,5 @@ -apiserver -pgo +/apiserver +/pgo +/pgo-mac +/pgo.exe +/postgres-operator diff --git a/bin/crunchy-postgres-exporter/.gitignore b/bin/crunchy-postgres-exporter/.gitignore deleted file mode 100644 index bd718d9cd9..0000000000 --- a/bin/crunchy-postgres-exporter/.gitignore +++ /dev/null @@ -1 +0,0 @@ -collectserver diff --git a/bin/crunchy-postgres-exporter/common_lib.sh b/bin/crunchy-postgres-exporter/common_lib.sh deleted file mode 100755 index 283352062b..0000000000 --- a/bin/crunchy-postgres-exporter/common_lib.sh +++ /dev/null @@ -1,48 +0,0 @@ -#!/bin/bash - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -RED="\033[0;31m" -GREEN="\033[0;32m" -YELLOW="\033[0;33m" -RESET="\033[0m" - -function enable_debugging() { - if [[ ${CRUNCHY_DEBUG:-false} == "true" ]] - then - echo_info "Turning debugging on.." - export PS4='+(${BASH_SOURCE}:${LINENO})> ${FUNCNAME[0]:+${FUNCNAME[0]}(): }' - set -x - fi -} - -function env_check_err() { - if [[ -z ${!1} ]] - then - echo_err "$1 environment variable is not set, aborting." 
- exit 1 - fi -} - -function echo_err() { - echo -e "${RED?}$(date) ERROR: ${1?}${RESET?}" -} - -function echo_info() { - echo -e "${GREEN?}$(date) INFO: ${1?}${RESET?}" -} - -function echo_warn() { - echo -e "${YELLOW?}$(date) WARN: ${1?}${RESET?}" -} diff --git a/bin/crunchy-postgres-exporter/start.sh b/bin/crunchy-postgres-exporter/start.sh deleted file mode 100755 index f8e02e4094..0000000000 --- a/bin/crunchy-postgres-exporter/start.sh +++ /dev/null @@ -1,201 +0,0 @@ -#!/bin/bash - -# Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -source /opt/cpm/bin/common_lib.sh -enable_debugging - -export PG_EXP_HOME=$(find /opt/cpm/bin/ -type d -name 'postgres_exporter*') -export PG_DIR=$(find /usr/ -type d -name 'pgsql-*') -POSTGRES_EXPORTER_PIDFILE=/tmp/postgres_exporter.pid -CONFIG_DIR='/opt/cpm/conf' -QUERIES=( - queries_backrest - queries_common - queries_per_db - queries_nodemx -) - -function trap_sigterm() { - echo_info "Doing trap logic.." - - echo_warn "Clean shutdown of postgres-exporter.." - kill -SIGINT $(head -1 ${POSTGRES_EXPORTER_PIDFILE?}) -} - -# Set default env vars for the postgres exporter container -set_default_postgres_exporter_env() { - if [[ ! -v POSTGRES_EXPORTER_PORT ]] - then - export POSTGRES_EXPORTER_PORT="9187" - default_exporter_env_vars+=("POSTGRES_EXPORTER_PORT=${POSTGRES_EXPORTER_PORT}") - fi -} - -# Set default PG env vars for the exporter container -set_default_pg_exporter_env() { - - if [[ ! -v EXPORTER_PG_HOST ]] - then - export EXPORTER_PG_HOST="127.0.0.1" - default_exporter_env_vars+=("EXPORTER_PG_HOST=${EXPORTER_PG_HOST}") - fi - - if [[ ! -v EXPORTER_PG_PORT ]] - then - export EXPORTER_PG_PORT="5432" - default_exporter_env_vars+=("EXPORTER_PG_PORT=${EXPORTER_PG_PORT}") - fi - - if [[ ! -v EXPORTER_PG_DATABASE ]] - then - export EXPORTER_PG_DATABASE="postgres" - default_exporter_env_vars+=("EXPORTER_PG_DATABASE=${EXPORTER_PG_DATABASE}") - fi - - if [[ ! -v EXPORTER_PG_USER ]] - then - export EXPORTER_PG_USER="ccp_monitoring" - default_exporter_env_vars+=("EXPORTER_PG_USER=${EXPORTER_PG_USER}") - fi - - env_check_err "EXPORTER_PG_PASSWORD" -} - -trap 'trap_sigterm' SIGINT SIGTERM - -set_default_postgres_exporter_env - -if [[ ! -v DATA_SOURCE_NAME ]] -then - set_default_pg_exporter_env - if [[ ! -z "${EXPORTER_PG_PARAMS}" ]] - then - EXPORTER_PG_PARAMS="?${EXPORTER_PG_PARAMS}" - fi - export DATA_SOURCE_NAME="postgresql://${EXPORTER_PG_USER}:${EXPORTER_PG_PASSWORD}\ -@${EXPORTER_PG_HOST}:${EXPORTER_PG_PORT}/${EXPORTER_PG_DATABASE}${EXPORTER_PG_PARAMS}" -fi - - - -if [[ ! ${#default_exporter_env_vars[@]} -eq 0 ]] -then - echo_info "Defaults have been set for the following exporter env vars:" - echo_info "[${default_exporter_env_vars[*]}]" -fi - -# Check that postgres is accepting connections. -echo_info "Waiting for PostgreSQL to be ready.." -while true; do - ${PG_DIR?}/bin/pg_isready -d ${DATA_SOURCE_NAME} - if [ $? 
-eq 0 ]; then - break - fi - sleep 2 -done - -echo_info "Checking if PostgreSQL is accepting queries.." -while true; do - ${PG_DIR?}/bin/psql "${DATA_SOURCE_NAME}" -c "SELECT now();" - if [ $? -eq 0 ]; then - break - fi - sleep 2 -done - -if [[ -f /conf/queries.yml ]] -then - echo_info "Custom queries configuration detected.." - QUERY_DIR='/conf' -else - echo_info "No custom queries detected. Applying default configuration.." - QUERY_DIR='/tmp' - - touch ${QUERY_DIR?}/queries.yml && > ${QUERY_DIR?}/queries.yml - for query in "${QUERIES[@]}" - do - if [[ -f ${CONFIG_DIR?}/${query?}.yml ]] - then - cat ${CONFIG_DIR?}/${query?}.yml >> /tmp/queries.yml - else - echo_err "Custom Query file ${query?}.yml does not exist (it should).." - exit 1 - fi - done - - VERSION=$(${PG_DIR?}/bin/psql "${DATA_SOURCE_NAME}" -qtAX -c "SELECT current_setting('server_version_num')") - if (( ${VERSION?} >= 90500 )) && (( ${VERSION?} < 90600 )) - then - if [[ -f ${CONFIG_DIR?}/queries_pg95.yml ]] - then - cat ${CONFIG_DIR?}/queries_pg95.yml >> /tmp/queries.yml - else - echo_err "Custom Query file queries_pg95.yml does not exist (it should).." - fi - elif (( ${VERSION?} >= 90600 )) && (( ${VERSION?} < 100000 )) - then - if [[ -f ${CONFIG_DIR?}/queries_pg96.yml ]] - then - cat ${CONFIG_DIR?}/queries_pg96.yml >> /tmp/queries.yml - else - echo_err "Custom Query file queries_pg96.yml does not exist (it should).." - fi - elif (( ${VERSION?} >= 100000 )) && (( ${VERSION?} < 110000 )) - then - if [[ -f ${CONFIG_DIR?}/queries_pg10.yml ]] - then - cat ${CONFIG_DIR?}/queries_pg10.yml >> /tmp/queries.yml - else - echo_err "Custom Query file queries_pg10.yml does not exist (it should).." - fi - elif (( ${VERSION?} >= 110000 )) && (( ${VERSION?} < 120000 )) - then - if [[ -f ${CONFIG_DIR?}/queries_pg11.yml ]] - then - cat ${CONFIG_DIR?}/queries_pg11.yml >> /tmp/queries.yml - else - echo_err "Custom Query file queries_pg11.yml does not exist (it should).." - fi - elif (( ${VERSION?} >= 120000 )) && (( ${VERSION?} < 130000 )) - then - if [[ -f ${CONFIG_DIR?}/queries_pg12.yml ]] - then - cat ${CONFIG_DIR?}/queries_pg12.yml >> /tmp/queries.yml - else - echo_err "Custom Query file queries_pg12.yml does not exist (it should).." - fi - elif (( ${VERSION?} >= 130000 )) - then - if [[ -f ${CONFIG_DIR?}/queries_pg13.yml ]] - then - cat ${CONFIG_DIR?}/queries_pg13.yml >> /tmp/queries.yml - else - echo_err "Custom Query file queries_pg12.yml does not exist (it should).." - fi - else - echo_err "Unknown or unsupported version of PostgreSQL. Exiting.." - exit 1 - fi -fi - -sed -i "s/#PGBACKREST_INFO_THROTTLE_MINUTES#/${PGBACKREST_INFO_THROTTLE_MINUTES:-10}/g" /tmp/queries.yml - -PG_OPTIONS="--extend.query-path=${QUERY_DIR?}/queries.yml --web.listen-address=:${POSTGRES_EXPORTER_PORT}" - -echo_info "Starting postgres-exporter.." -${PG_EXP_HOME?}/postgres_exporter ${PG_OPTIONS?} >>/dev/stdout 2>&1 & -echo $! > $POSTGRES_EXPORTER_PIDFILE - -wait diff --git a/bin/get-deps.sh b/bin/get-deps.sh deleted file mode 100755 index a0fddd048e..0000000000 --- a/bin/get-deps.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash -e - -# Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -echo "Ensuring project dependencies..." -BINDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" -EVTDIR="$BINDIR/pgo-event" -POSTGRES_EXPORTER_VERSION=0.8.0 - -# Precondition checks -if [ "$GOPATH" = "" ]; then - # Alternatively, take dep approach of go env GOPATH later in the process - echo "GOPATH not defined, exiting..." >&2 - exit 1 -fi -if ! (echo $PATH | egrep -q "$GOPATH/bin") ; then - echo '$GOPATH/bin not part of $PATH, exiting...' >&2 - exit 2 -fi - - -# Idempotent installations -if (yum repolist | egrep -q '^epel/') ; then - echo "Confirmed EPEL repo exists..." -else - echo "=== Installing EPEL ===" - # Prefer distro-managed epel-release if it exists (e.g. CentOS) - if (yum -q list epel-release 2>/dev/null); then - sudo yum -y install epel-release - else - sudo yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm - fi -fi - -if which go; then - echo -n " Found: " && go version -else - echo "=== Installing golang ===" - sudo yum -y install golang -fi - -if ! [ -f $EVTDIR/nsqd -a -f $EVTDIR/nsqadmin ]; then - echo "=== Installing NSQ binaries ===" - NSQ=nsq-1.1.0.linux-amd64.go1.10.3 - curl -S https://s3.amazonaws.com/bitly-downloads/nsq/$NSQ.tar.gz | \ - tar xz --strip=2 -C $EVTDIR/ '*/bin/*' -fi - -if which docker; then - # Suppress errors for this call, as docker returns non-zero when it can't talk to the daemon - set +e - echo -n " Found: " && docker version --format '{{.Client.Version}}' 2>/dev/null - set -e -else - echo "=== Installing docker ===" - if [ -f /etc/centos-release ]; then - sudo yum -y install docker - else - sudo yum -y install docker --enablerepo=rhel-7-server-extras-rpms - fi -fi - -if which buildah; then - echo -n " Found: " && buildah --version -else - echo "=== Installing buildah ===" - if [ -f /etc/centos-release ]; then - sudo yum -y install buildah - else - sudo yum -y install buildah --enablerepo=rhel-7-server-extras-rpms - fi -fi - -if which dep; then - echo -n " Found: " && (dep version | egrep '^ version') -else - echo "=== Installing dep ===" - curl -S https://raw.githubusercontent.com/golang/dep/master/install.sh | sh -fi - -# Download Postgres Exporter, only required to build the Crunchy Postgres Exporter container -wget -O $PGOROOT/postgres_exporter.tar.gz https://github.com/wrouesnel/postgres_exporter/releases/download/v${POSTGRES_EXPORTER_VERSION?}/postgres_exporter_v${POSTGRES_EXPORTER_VERSION?}_linux-amd64.tar.gz - -# pgMonitor Setup -source $BINDIR/get-pgmonitor.sh diff --git a/bin/get-pgmonitor.sh b/bin/get-pgmonitor.sh deleted file mode 100755 index 6bfb720b3e..0000000000 --- a/bin/get-pgmonitor.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -e - -# Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -echo "Getting pgMonitor..." -PGMONITOR_COMMIT='v4.4-RC6' - -# pgMonitor Setup -if [[ -d ${PGOROOT?}/tools/pgmonitor ]] -then - rm -rf ${PGOROOT?}/tools/pgmonitor -fi - -git clone https://github.com/CrunchyData/pgmonitor.git ${PGOROOT?}/tools/pgmonitor -cd ${PGOROOT?}/tools/pgmonitor -git checkout ${PGMONITOR_COMMIT?} diff --git a/bin/license_aggregator.sh b/bin/license_aggregator.sh new file mode 100755 index 0000000000..66f7284a97 --- /dev/null +++ b/bin/license_aggregator.sh @@ -0,0 +1,45 @@ +#!/usr/bin/env bash + +# Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +set -eu + +# Inputs / outputs +IN_PACKAGES=("$@") +OUT_DIR=licenses + +# Clean up before we start our work +rm -rf ${OUT_DIR:?}/*/ + +# Download dependencies of the requested packages, excluding the main module. +# - https://golang.org/ref/mod#glos-main-module +module=$(go list -m) +modules=$(go list -deps -f '{{with .Module}}{{.Path}}{{"\t"}}{{.Dir}}{{end}}' "${IN_PACKAGES[@]}") +dependencies=$(grep -v "^${module}" <<< "${modules}") + +while IFS=$'\t' read -r module directory; do + licenses=$(find "${directory}" -type f -ipath '*license*' -not -name '*.go') + [ -n "${licenses}" ] || continue + + while IFS= read -r license; do + # Replace the local module directory with the module path. + # - https://golang.org/ref/mod#module-path + relative="${module}${license:${#directory}}" + + # Copy the license file with the same layout as the module. + destination="${OUT_DIR}/${relative%/*}" + install -d "${destination}" + install -m 0644 "${license}" "${destination}" + done <<< "${licenses}" +done <<< "${dependencies}" diff --git a/bin/pgo-backrest-repo-sync/pgo-backrest-repo-sync.sh b/bin/pgo-backrest-repo-sync/pgo-backrest-repo-sync.sh deleted file mode 100644 index 53e98e3a2e..0000000000 --- a/bin/pgo-backrest-repo-sync/pgo-backrest-repo-sync.sh +++ /dev/null @@ -1,95 +0,0 @@ -#!/bin/bash -x - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -function trap_sigterm() { - echo "Signal trap triggered, beginning shutdown.." 
- killall sshd -} - -trap 'trap_sigterm' SIGINT SIGTERM - -# First enable sshd prior to running rsync if using pgbackrest with a repository -# host -enable_sshd() { - SSHD_CONFIG=/sshd - - mkdir ~/.ssh/ - cp $SSHD_CONFIG/config ~/.ssh/ - cp $SSHD_CONFIG/id_ed25519 /tmp - chmod 400 /tmp/id_ed25519 ~/.ssh/config - - # start sshd which is used by pgbackrest for remote connections - /usr/sbin/sshd -D -f $SSHD_CONFIG/sshd_config & - - echo "sleep 5 secs to let sshd come up before running rsync command" - sleep 5 -} - -# Runs rync to sync from a specified source directory to a target directory -rsync_repo() { - echo "rsync pgbackrest from ${1} to ${2}" - # note, the "/" after the repo path is important, as we do not want to sync - # the top level directory - rsync -a --progress "${1}" "${2}" - echo "finished rsync" -} - -# Use the aws cli sync command to sync files from a source location to a target -# location. The this includes syncing files between who s3 locations, -# syncing a local directory to s3, or syncing from s3 to a local directory. -aws_sync_repo() { - export AWS_CA_BUNDLE="${PGBACKREST_REPO1_S3_CA_FILE}" - export AWS_ACCESS_KEY_ID="${PGBACKREST_REPO1_S3_KEY}" - export AWS_SECRET_ACCESS_KEY="${PGBACKREST_REPO1_S3_KEY_SECRET}" - export AWS_DEFAULT_REGION="${PGBACKREST_REPO1_S3_REGION}" - - echo "Executing aws s3 sync from source ${1} to target ${2}" - aws s3 sync "${1}" "${2}" - echo "Finished aws s3 sync" -} - -# If s3 is identifed as the data source, then the aws cli will be utilized to -# sync the repo to the target location in s3. If local storage is also enabled -# (along with s3) for the cluster, then also use the aws cli to sync the repo -# from s3 to the target volume locally. -# -# If the data source is local (the default if not specified at all), then first -# rsync the repo to the target directory locally. Then, if s3 storage is also -# enabled (along with local), use the aws cli to sync the local repo to the -# target s3 location. -if [[ "${BACKREST_STORAGE_SOURCE}" == "s3" ]] -then - aws_source="s3://${PGBACKREST_REPO1_S3_BUCKET}${PGBACKREST_REPO1_PATH}/" - aws_target="s3://${PGBACKREST_REPO1_S3_BUCKET}${NEW_PGBACKREST_REPO}/" - aws_sync_repo "${aws_source}" "${aws_target}" - if [[ "${PGHA_PGBACKREST_LOCAL_S3_STORAGE}" == "true" ]] - then - aws_source="s3://${PGBACKREST_REPO1_S3_BUCKET}${PGBACKREST_REPO1_PATH}/" - aws_target="${NEW_PGBACKREST_REPO}/" - aws_sync_repo "${aws_source}" "${aws_target}" - fi -else - enable_sshd # enable sshd for rsync - - rsync_source="${PGBACKREST_REPO1_HOST}:${PGBACKREST_REPO1_PATH}/" - rsync_target="$NEW_PGBACKREST_REPO" - rsync_repo "${rsync_source}" "${rsync_target}" - if [[ "${PGHA_PGBACKREST_LOCAL_S3_STORAGE}" == "true" ]] - then - aws_source="${NEW_PGBACKREST_REPO}/" - aws_target="s3://${PGBACKREST_REPO1_S3_BUCKET}${NEW_PGBACKREST_REPO}/" - aws_sync_repo "${aws_source}" "${aws_target}" - fi -fi diff --git a/bin/pgo-backrest-repo/archive-push-s3.sh b/bin/pgo-backrest-repo/archive-push-s3.sh deleted file mode 100755 index 2cafa76d90..0000000000 --- a/bin/pgo-backrest-repo/archive-push-s3.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/bash - -pgbackrest "$@" diff --git a/bin/pgo-backrest-repo/pgo-backrest-repo.sh b/bin/pgo-backrest-repo/pgo-backrest-repo.sh deleted file mode 100755 index 25fdec5f69..0000000000 --- a/bin/pgo-backrest-repo/pgo-backrest-repo.sh +++ /dev/null @@ -1,81 +0,0 @@ -#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. 
-# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -function trap_sigterm() { - echo "Signal trap triggered, beginning shutdown.." - killall sshd -} - -trap 'trap_sigterm' SIGINT SIGTERM - -echo "Starting the pgBackRest repo" - -CONFIG=/sshd -REPO=/backrestrepo - -if [ ! -d $PGBACKREST_REPO1_PATH ]; then - echo "creating " $PGBACKREST_REPO1_PATH - mkdir -p $PGBACKREST_REPO1_PATH -fi - -# This is a workaround for changes introduced in pgBackRest v2.24. Specifically, a pg1-path -# setting must now be visible when another container executes a pgBackRest command via SSH. -# Since env vars, and therefore the PGBACKREST_DB_PATH setting, is not visible when another -# container executes a command via SSH, this adds the pg1-path setting to the pgBackRest config -# file instead, ensuring the setting is always available in the environment during SSH calls. -# Additionally, since the value for pg1-path setting in the repository container is irrelevant -# (i.e. the value specified by the container running the command via SSH is used instead), it is -# simply set to a dummy directory within the config file. -# If the URI style is set to 'path' instead of the default 'host' value, pgBackRest will -# connect to S3 by prependinging bucket names to URIs instead of the default 'bucket.endpoint' style -# Finally, if TLS verification is set to 'n', pgBackRest disables verification of the S3 server -# certificate. -mkdir -p /tmp/pg1path -if ! 
grep -Fxq "[${PGBACKREST_STANZA}]" "/etc/pgbackrest/pgbackrest.conf" 2> /dev/null -then - - printf "[%s]\npg1-path=/tmp/pg1path\n" "$PGBACKREST_STANZA" > /etc/pgbackrest/pgbackrest.conf - - # Additionally, if the PGBACKREST S3 variables are set, add them here - if [[ "${PGBACKREST_REPO1_S3_KEY}" != "" ]] - then - printf "repo1-s3-key=%s\n" "${PGBACKREST_REPO1_S3_KEY}" >> /etc/pgbackrest/pgbackrest.conf - fi - - if [[ "${PGBACKREST_REPO1_S3_KEY_SECRET}" != "" ]] - then - printf "repo1-s3-key-secret=%s\n" "${PGBACKREST_REPO1_S3_KEY_SECRET}" >> /etc/pgbackrest/pgbackrest.conf - fi - - if [[ "${PGBACKREST_REPO1_S3_URI_STYLE}" != "" ]] - then - printf "repo1-s3-uri-style=%s\n" "${PGBACKREST_REPO1_S3_URI_STYLE}" >> /etc/pgbackrest/pgbackrest.conf - fi - -fi - -mkdir -p ~/.ssh/ -cp $CONFIG/config ~/.ssh/ -#cp $CONFIG/authorized_keys ~/.ssh/ -cp $CONFIG/id_ed25519 /tmp -chmod 400 /tmp/id_ed25519 ~/.ssh/config - -# start sshd which is used by pgbackrest for remote connections -/usr/sbin/sshd -D -f $CONFIG/sshd_config & - -echo "The pgBackRest repo has been started" - -wait diff --git a/bin/pgo-backrest-restore/.gitignore b/bin/pgo-backrest-restore/.gitignore deleted file mode 100644 index 979f43f2d0..0000000000 --- a/bin/pgo-backrest-restore/.gitignore +++ /dev/null @@ -1 +0,0 @@ -pgo-backrest-restore diff --git a/bin/pgo-backrest-restore/pgo-backrest-restore.sh b/bin/pgo-backrest-restore/pgo-backrest-restore.sh deleted file mode 100755 index 89f3888fff..0000000000 --- a/bin/pgo-backrest-restore/pgo-backrest-restore.sh +++ /dev/null @@ -1,47 +0,0 @@ -#!/bin/bash -x - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -function trap_sigterm() { - echo "Signal trap triggered, beginning shutdown.." 
- killall sshd -} - -trap 'trap_sigterm' SIGINT SIGTERM - -CONFIG=/sshd - -mkdir ~/.ssh/ -cp $CONFIG/config ~/.ssh/ -cp $CONFIG/id_ed25519 /tmp -chmod 400 /tmp/id_ed25519 ~/.ssh/config - -# start sshd which is used by pgbackrest for remote connections -/usr/sbin/sshd -D -f $CONFIG/sshd_config & - -# create the directory the restore will go into -mkdir $PGBACKREST_DB_PATH - -echo "sleep 5 secs to let sshd come up before running pgbackrest command" -sleep 5 - -if [ "$PITR_TARGET" = "" ] -then - echo "PITR_TARGET is empty" - pgbackrest restore $COMMAND_OPTS -else - echo PITR_TARGET is not empty [$PITR_TARGET] - pgbackrest restore $COMMAND_OPTS "--target=$PITR_TARGET" -fi diff --git a/bin/pgo-backrest/.gitignore b/bin/pgo-backrest/.gitignore deleted file mode 100644 index 230c647366..0000000000 --- a/bin/pgo-backrest/.gitignore +++ /dev/null @@ -1 +0,0 @@ -pgo-backrest diff --git a/bin/pgo-backrest/README.txt b/bin/pgo-backrest/README.txt deleted file mode 100644 index 23f92ef4a4..0000000000 --- a/bin/pgo-backrest/README.txt +++ /dev/null @@ -1,3 +0,0 @@ -pgo-backrest binary goes in this directory and gets -copied into the pgo-backrest image, .gitignore is here -to keep the binary from making its way into github diff --git a/bin/pgo-backrest/pgo-backrest.sh b/bin/pgo-backrest/pgo-backrest.sh deleted file mode 100755 index fda20af57c..0000000000 --- a/bin/pgo-backrest/pgo-backrest.sh +++ /dev/null @@ -1,23 +0,0 @@ -#!/bin/sh - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -/opt/cpm/bin/pgo-backrest - -echo $UID "is the UID in the script" - -chown -R $UID:$UID $PGBACKREST_DB_PATH - -chmod -R o+rx $PGBACKREST_DB_PATH diff --git a/bin/pgo-event/.gitignore b/bin/pgo-event/.gitignore deleted file mode 100644 index 8f4ebcc036..0000000000 --- a/bin/pgo-event/.gitignore +++ /dev/null @@ -1 +0,0 @@ -*nsq* diff --git a/bin/pgo-event/pgo-event.sh b/bin/pgo-event/pgo-event.sh deleted file mode 100755 index cddcb2e708..0000000000 --- a/bin/pgo-event/pgo-event.sh +++ /dev/null @@ -1,40 +0,0 @@ -#!/bin/bash -x - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -function trap_sigterm() { - echo "Signal trap triggered, beginning shutdown.." 
- kill -9 $(pidof nsqd) - kill -9 $(pidof nsqadmin) -} - -echo "pgo-event starting" - -trap 'trap_sigterm' SIGINT SIGTERM - -echo "pgo-event starting nsqadmin" - -/usr/local/bin/nsqadmin --http-address=0.0.0.0:4171 --nsqd-http-address=0.0.0.0:4151 & - -sleep 3 - -echo "pgo-event starting nsqd" - -/usr/local/bin/nsqd --data-path=/tmp --http-address=0.0.0.0:4151 --tcp-address=0.0.0.0:4150 --log-level=warn & - -echo "pgo-event waiting till sigterm" - -wait - -echo "end of pgo-event" diff --git a/bin/pgo-rmdata/.gitignore b/bin/pgo-rmdata/.gitignore deleted file mode 100644 index 23a0d5a0ec..0000000000 --- a/bin/pgo-rmdata/.gitignore +++ /dev/null @@ -1 +0,0 @@ -pgo-rmdata diff --git a/bin/pgo-rmdata/start.sh b/bin/pgo-rmdata/start.sh deleted file mode 100755 index 95a4903289..0000000000 --- a/bin/pgo-rmdata/start.sh +++ /dev/null @@ -1,23 +0,0 @@ -#!/bin/bash -x - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -/usr/local/bin/pgo-rmdata -pg-cluster=$PG_CLUSTER \ - -replica-name=$REPLICA_NAME \ - -namespace=$NAMESPACE \ - -remove-data=$REMOVE_DATA \ - -remove-backup=$REMOVE_BACKUP \ - -is-backup=$IS_BACKUP \ - -is-replica=$IS_REPLICA \ - -pgha-scope=$PGHA_SCOPE diff --git a/bin/pgo-scheduler/.gitignore b/bin/pgo-scheduler/.gitignore deleted file mode 100644 index 8d5d1740b1..0000000000 --- a/bin/pgo-scheduler/.gitignore +++ /dev/null @@ -1 +0,0 @@ -pgo-scheduler diff --git a/bin/pgo-scheduler/start.sh b/bin/pgo-scheduler/start.sh deleted file mode 100755 index 4a32cf8bc3..0000000000 --- a/bin/pgo-scheduler/start.sh +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -function trap_sigterm() { - echo "Signal trap triggered, beginning shutdown.." - - if ! pgrep pgo-scheduler > /dev/null - then - kill -9 $(pidof pgo-scheduler) - fi -} - -trap 'trap_sigterm' SIGINT SIGTERM - -/opt/cpm/bin/pgo-scheduler & - -wait diff --git a/bin/pgo-sqlrunner/start.sh b/bin/pgo-sqlrunner/start.sh deleted file mode 100755 index 0b2eb6d417..0000000000 --- a/bin/pgo-sqlrunner/start.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -set -e -u - -export PGPASSFILE=/tmp/pgpass - -cat >> "${PGPASSFILE?}" <<-EOF -${PG_HOST?}:${PG_PORT?}:${PG_DATABASE?}:${PG_USER?}:${PG_PASSWORD?} -EOF -chmod 0600 ${PGPASSFILE?} - -for sql in /pgconf/*.sql -do - psql -d ${PG_DATABASE?} -U ${PG_USER?} \ - -p ${PG_PORT?} -h ${PG_HOST?} \ - -f ${sql?} -done - -exit 0 diff --git a/bin/postgres-operator/.gitignore b/bin/postgres-operator/.gitignore deleted file mode 100644 index bcb8a2a57f..0000000000 --- a/bin/postgres-operator/.gitignore +++ /dev/null @@ -1 +0,0 @@ -postgres-operator diff --git a/bin/postgres-operator/README.txt b/bin/postgres-operator/README.txt deleted file mode 100644 index 95b36e29c6..0000000000 --- a/bin/postgres-operator/README.txt +++ /dev/null @@ -1 +0,0 @@ -nothing should be here other than supporting scripts, no binaries diff --git a/bin/pre-pull-crunchy-containers.sh b/bin/pre-pull-crunchy-containers.sh deleted file mode 100755 index 5a7031f8e9..0000000000 --- a/bin/pre-pull-crunchy-containers.sh +++ /dev/null @@ -1,19 +0,0 @@ -#!/bin/bash - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -for CNAME in crunchy-postgres crunchy-pgbadger crunchy-pgbouncer -do - docker pull crunchydata/$CNAME:$CCP_IMAGE_TAG -done diff --git a/bin/pull-ccp-from-gcr.sh b/bin/pull-ccp-from-gcr.sh deleted file mode 100755 index 0e6dc20aea..0000000000 --- a/bin/pull-ccp-from-gcr.sh +++ /dev/null @@ -1,34 +0,0 @@ -#!/bin/bash - -set -e -u - -REGISTRY='us.gcr.io/container-suite' -VERSION=$CCP_IMAGE_TAG -IMAGES=( - crunchy-postgres-ha - crunchy-pgbadger - crunchy-pgbouncer - crunchy-pgdump - crunchy-pgrestore -) - -function echo_green() { - echo -e "\033[0;32m" - echo "$1" - echo -e "\033[0m" -} - -gcloud auth login -gcloud config set project container-suite -gcloud auth configure-docker - -for image in "${IMAGES[@]}" -do - echo_green "=> Pulling ${REGISTRY?}/${image?}:${VERSION?}.." - docker pull ${REGISTRY?}/${image?}:${VERSION?} - docker tag ${REGISTRY?}/${image?}:${VERSION?} crunchydata/${image?}:${VERSION?} -done - -echo_green "=> Done!" - -exit 0 diff --git a/bin/pull-from-gcr.sh b/bin/pull-from-gcr.sh deleted file mode 100755 index ad25336aab..0000000000 --- a/bin/pull-from-gcr.sh +++ /dev/null @@ -1,55 +0,0 @@ -#!/bin/bash - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -set -e -u - -REGISTRY='us.gcr.io/container-suite' -VERSION=$PGO_IMAGE_TAG -IMAGES=( - pgo-event - pgo-backrest-repo - pgo-backrest-repo-sync - pgo-backrest-restore - pgo-scheduler - pgo-sqlrunner - postgres-operator - pgo-apiserver - pgo-rmdata - pgo-backrest - pgo-client - pgo-deployer - crunchy-postgres-exporter -) - -function echo_green() { - echo -e "\033[0;32m" - echo "$1" - echo -e "\033[0m" -} - -gcloud auth login -gcloud config set project container-suite -gcloud auth configure-docker - -for image in "${IMAGES[@]}" -do - echo_green "=> Pulling ${REGISTRY?}/${image?}:${VERSION?}.." - docker pull ${REGISTRY?}/${image?}:${VERSION?} - docker tag ${REGISTRY?}/${image?}:${VERSION?} crunchydata/${image?}:${VERSION?} -done - -echo_green "=> Done!" - -exit 0 diff --git a/bin/push-ccp-to-gcr.sh b/bin/push-ccp-to-gcr.sh deleted file mode 100755 index 3b9de84ed0..0000000000 --- a/bin/push-ccp-to-gcr.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -GCR_IMAGE_PREFIX=gcr.io/crunchy-dev-test - -CCP_IMAGE_PREFIX=crunchydata -CCP_IMAGE_TAG=centos7-12.4-4.5.0 - -IMAGES=( -crunchy-prometheus -crunchy-grafana -crunchy-pgbadger -crunchy-backup -crunchy-postgres -crunchy-pgbouncer -) - -for image in "${IMAGES[@]}" -do - docker tag $CCP_IMAGE_PREFIX/$image:$CCP_IMAGE_TAG \ - $GCR_IMAGE_PREFIX/$image:$CCP_IMAGE_TAG - gcloud docker -- push $GCR_IMAGE_PREFIX/$image:$CCP_IMAGE_TAG -done diff --git a/bin/push-to-gcr.sh b/bin/push-to-gcr.sh deleted file mode 100755 index f8293d9159..0000000000 --- a/bin/push-to-gcr.sh +++ /dev/null @@ -1,39 +0,0 @@ -#!/bin/bash - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -GCR_IMAGE_PREFIX=gcr.io/crunchy-dev-test - -IMAGES=( -pgo-event -pgo-backrest-repo -pgo-backrest-repo-sync -pgo-backrest-restore -pgo-scheduler -pgo-sqlrunner -postgres-operator -pgo-apiserver -pgo-rmdata -pgo-backrest -pgo-client -pgo-deployer -crunchy-postgres-exporter -) - -for image in "${IMAGES[@]}" -do - docker tag $PGO_IMAGE_PREFIX/$image:$PGO_IMAGE_TAG \ - $GCR_IMAGE_PREFIX/$image:$PGO_IMAGE_TAG - gcloud docker -- push $GCR_IMAGE_PREFIX/$image:$PGO_IMAGE_TAG -done diff --git a/bin/uid_daemon.sh b/bin/uid_daemon.sh deleted file mode 100755 index 83d8aca5e2..0000000000 --- a/bin/uid_daemon.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/usr/bin/bash - -# Copyright 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -if ! whoami &> /dev/null -then - if [[ -w /etc/passwd ]] - then - sed "/daemon:x:2:/d" /etc/passwd >> /tmp/uid.tmp - cp /tmp/uid.tmp /etc/passwd - rm -f /tmp/uid.tmp - echo "${USER_NAME:-daemon}:x:$(id -u):0:${USER_NAME:-daemon} user:${HOME}:/bin/bash" >> /etc/passwd - fi -fi -exec "$@" diff --git a/bin/uid_pgbackrest.sh b/bin/uid_pgbackrest.sh deleted file mode 100755 index 3f9c9d1957..0000000000 --- a/bin/uid_pgbackrest.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/bin/bash - -if ! whoami &> /dev/null -then - if [[ -w /etc/passwd ]] - then - sed "/pgbackrest:x:2000:/d" /etc/passwd >> /tmp/uid.tmp - cp /tmp/uid.tmp /etc/passwd - rm -f /tmp/uid.tmp - echo "${USER_NAME:-pgbackrest}:x:$(id -u):0:${USER_NAME:-pgbackrest} user:${HOME}:/bin/bash" >> /etc/passwd - fi - - if [[ -w /etc/group ]] - then - sed "/pgbackrest:x:2000/d" /etc/group >> /tmp/gid.tmp - cp /tmp/gid.tmp /etc/group - rm -f /tmp/gid.tmp - echo "nfsnobody:x:65534:" >> /etc/group - echo "pgbackrest:x:$(id -g):pgbackrest" >> /etc/group - fi -fi -exec "$@" diff --git a/bin/uid_postgres.sh b/bin/uid_postgres.sh deleted file mode 100755 index 8a79ea9a35..0000000000 --- a/bin/uid_postgres.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/bin/bash - -if ! whoami &> /dev/null -then - if [[ -w /etc/passwd ]] - then - sed "/postgres:x:26:/d" /etc/passwd >> /tmp/uid.tmp - cp /tmp/uid.tmp /etc/passwd - rm -f /tmp/uid.tmp - echo "${USER_NAME:-postgres}:x:$(id -u):0:${USER_NAME:-postgres} user:${HOME}:/bin/bash" >> /etc/passwd - fi - - if [[ -w /etc/group ]] - then - sed "/postgres:x:26/d" /etc/group >> /tmp/gid.tmp - cp /tmp/gid.tmp /etc/group - rm -f /tmp/gid.tmp - echo "nfsnobody:x:65534:" >> /etc/group - echo "postgres:x:$(id -g):postgres" >> /etc/group - fi -fi -exec "$@" diff --git a/bin/upgrade-secret.sh b/bin/upgrade-secret.sh deleted file mode 100755 index ee93af1377..0000000000 --- a/bin/upgrade-secret.sh +++ /dev/null @@ -1,65 +0,0 @@ -#!/bin/bash - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# this is a script used to upgrade a database user credential from -# the old pre-2.5 release format to the 2.5 post format -# it will prompt the user along the way - -echo "CLUSTER is " $1 - -CLUSTER=$1 - -CURRENT_POSTGRES_PASSWORD=`kubectl get secret $CLUSTER-root-secret -o jsonpath="{.data.password}"` -echo "current decoded postgres password is..." -POSTGRES_PASSWORD=`echo -n $CURRENT_POSTGRES_PASSWORD | base64 --decode` -echo $POSTGRES_PASSWORD - -USERNAME=postgres - -kubectl create secret generic $CLUSTER-$USERNAME-secret \ - --from-literal=username=$USERNAME \ - --from-literal=password=$POSTGRES_PASSWORD - -kubectl label secret $CLUSTER-$USERNAME-secret pg-cluster=$CLUSTER - -# do the same for the primaryuser - -CURRENT_PASSWORD=`kubectl get secret $CLUSTER-primary-secret -o jsonpath="{.data.password}"` -echo "current decoded primaryuser password is..." -POSTGRES_PASSWORD=`echo -n $CURRENT_PASSWORD | base64 --decode` -echo $POSTGRES_PASSWORD - -USERNAME=primaryuser - -kubectl create secret generic $CLUSTER-$USERNAME-secret \ - --from-literal=username=$USERNAME \ - --from-literal=password=$POSTGRES_PASSWORD - -kubectl label secret $CLUSTER-$USERNAME-secret pg-cluster=$CLUSTER - -# do the same for the testuser - -USERNAME=testuser - -CURRENT_PASSWORD=`kubectl get secret $CLUSTER-user-secret -o jsonpath="{.data.password}"` -echo "current decoded testuser password is..." -POSTGRES_PASSWORD=`echo -n $CURRENT_PASSWORD | base64 --decode` -echo $POSTGRES_PASSWORD - -kubectl create secret generic $CLUSTER-$USERNAME-secret \ - --from-literal=username=$USERNAME \ - --from-literal=password=$POSTGRES_PASSWORD - -kubectl label secret $CLUSTER-$USERNAME-secret pg-cluster=$CLUSTER diff --git a/btn.png b/btn.png deleted file mode 100644 index ff0c0db182..0000000000 Binary files a/btn.png and /dev/null differ diff --git a/build/crunchy-postgres-exporter/Dockerfile b/build/crunchy-postgres-exporter/Dockerfile deleted file mode 100644 index f5104b924c..0000000000 --- a/build/crunchy-postgres-exporter/Dockerfile +++ /dev/null @@ -1,55 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG PGVERSION -ARG PACKAGER -ARG DFSET - -LABEL name="crunchy-postgres-exporter" \ - summary="Metrics exporter for PostgreSQL" \ - description="When run with the crunchy-postgres family of containers, crunchy-postgres-exporter reads the PostgreSQL data directory and has a SQL interface to a database to allow for metrics collection." 
\ - io.k8s.description="Crunchy PostgreSQL Exporter" \ - io.k8s.display-name="Crunchy PostgreSQL Exporter" \ - io.openshift.tags="postgresql,postgres,monitoring,database,crunchy" - -RUN if [ "$DFSET" = "centos" ] ; then \ - ${PACKAGER} -y install epel-release \ - && ${PACKAGER} install -y \ - --setopt=skip_missing_names_on_install=False \ - postgresql${PGVERSION} \ - && ${PACKAGER} -y clean all ; \ -fi - -RUN if [ "$DFSET" = "rhel" ] ; then \ - ${PACKAGER} install -y \ - --setopt=skip_missing_names_on_install=False \ - postgresql${PGVERSION} \ - && ${PACKAGER} -y clean all ; \ -fi - -RUN mkdir -p /opt/cpm/bin /opt/cpm/conf - -ADD postgres_exporter.tar.gz /opt/cpm/bin -ADD tools/pgmonitor/exporter/postgres /opt/cpm/conf -ADD bin/crunchy-postgres-exporter /opt/cpm/bin -ADD bin/uid_daemon.sh /opt/cpm/bin - -RUN chgrp -R 0 /opt/cpm/bin /opt/cpm/conf && \ - chmod -R g=u /opt/cpm/bin/ opt/cpm/conf - -# postgres_exporter -EXPOSE 9187 - -RUN chmod g=u /etc/passwd - -# The VOLUME directive must appear after all RUN directives to ensure the proper -# volume permissions are applied when building the image -VOLUME ["/conf"] - -ENTRYPOINT ["/opt/cpm/bin/uid_daemon.sh"] - -USER 2 - -CMD ["/opt/cpm/bin/start.sh"] diff --git a/build/pgo-apiserver/Dockerfile b/build/pgo-apiserver/Dockerfile deleted file mode 100644 index a2ec3c3b3a..0000000000 --- a/build/pgo-apiserver/Dockerfile +++ /dev/null @@ -1,24 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -ARG PGVERSION -ARG BACKREST_VERSION -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -LABEL name="pgo-apiserver" \ - summary="Crunchy PostgreSQL Operator - Apiserver" \ - description="Crunchy PostgreSQL Operator - Apiserver" - -RUN yum -y install \ - --setopt=skip_missing_names_on_install=False \ - postgresql${PGVERSION} \ - hostname \ - && yum -y clean all - -ADD bin/apiserver /usr/local/bin -ADD installers/ansible/roles/pgo-operator/files/pgo-configs /default-pgo-config -ADD conf/postgres-operator/pgo.yaml /default-pgo-config/pgo.yaml - -USER 2 - -ENTRYPOINT ["/usr/local/bin/apiserver"] diff --git a/build/pgo-backrest-repo-sync/Dockerfile b/build/pgo-backrest-repo-sync/Dockerfile deleted file mode 100644 index a40aad72c0..0000000000 --- a/build/pgo-backrest-repo-sync/Dockerfile +++ /dev/null @@ -1,109 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG BASEOS -ARG PGVERSION -ARG BACKREST_VERSION -ARG PACKAGER -ARG DFSET - -LABEL name="pgo-backrest-repo-sync" \ - summary="Crunchy PostgreSQL Operator - pgBackRest Repo Sync" \ - description="Synchronizes the contents between two pgBackRest repositories." 
- -RUN if [ "$BASEOS" = "centos7" ] ; then \ - ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - crunchy-backrest-"${BACKREST_VERSION}" \ - openssh-clients \ - openssh-server \ - postgresql${PGVERSION}-server \ - procps-ng \ - psmisc \ - rsync \ - awscli \ - && ${PACKAGER} -y clean all ; \ -fi - -RUN if [ "$BASEOS" = "centos8" ] ; then \ - ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - crunchy-backrest-"${BACKREST_VERSION}" \ - openssh-clients \ - openssh-server \ - postgresql${PGVERSION}-server \ - procps-ng \ - psmisc \ - rsync \ - python3-pip \ - && pip3 install --upgrade awscli \ - && ${PACKAGER} -y clean all ; \ -fi - -RUN if [ "$BASEOS" = "rhel7" ] ; then \ - ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - --enablerepo=rhel-ha-for-rhel-7-server-rpms \ - crunchy-backrest-"${BACKREST_VERSION}" \ - openssh-clients \ - openssh-server \ - postgresql${PGVERSION}-server \ - procps-ng \ - psmisc \ - rsync \ - awscli \ - && ${PACKAGER} -y --enablerepo=rhel-ha-for-rhel-7-server-rpms clean all ; \ -fi - -RUN if [ "$BASEOS" = "ubi7" ] ; then \ - ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - --enablerepo=rhel-ha-for-rhel-7-server-rpms \ - crunchy-backrest-"${BACKREST_VERSION}" \ - openssh-clients \ - openssh-server \ - postgresql${PGVERSION}-server \ - procps-ng \ - psmisc \ - rsync \ - awscli \ - && ${PACKAGER} -y --enablerepo=rhel-ha-for-rhel-7-server-rpms clean all ; \ -fi - -RUN if [ "$BASEOS" = "ubi8" ] ; then \ - ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - crunchy-backrest-"${BACKREST_VERSION}" \ - openssh-clients \ - openssh-server \ - postgresql${PGVERSION}-server \ - procps-ng \ - psmisc \ - rsync \ - python3-pip \ - && pip3 install --upgrade awscli \ - && ${PACKAGER} -y clean all ; \ -fi - -RUN groupadd pgbackrest -g 2000 && useradd pgbackrest -u 2000 -g 2000 -ADD bin/pgo-backrest-repo-sync/ /usr/local/bin -RUN chmod +x /usr/local/bin/pgo-backrest-repo-sync.sh && \ - mkdir -p /opt/cpm/bin /backrestrepo && \ - chown -R pgbackrest:pgbackrest /opt/cpm /backrestrepo - -ADD bin/uid_pgbackrest.sh /opt/cpm/bin - -RUN chmod g=u /etc/passwd && \ - chmod g=u /etc/group - -RUN mkdir /.ssh && chown pgbackrest:pgbackrest /.ssh && chmod o+rwx /.ssh - -VOLUME ["/sshd", "/backrestrepo"] - -USER 2000 - -ENTRYPOINT ["/opt/cpm/bin/uid_pgbackrest.sh"] - -CMD ["pgo-backrest-repo-sync.sh"] diff --git a/build/pgo-backrest-repo/Dockerfile b/build/pgo-backrest-repo/Dockerfile deleted file mode 100644 index 0d0e1dd6b8..0000000000 --- a/build/pgo-backrest-repo/Dockerfile +++ /dev/null @@ -1,46 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG BACKREST_VERSION -ARG PACKAGER -ARG DFSET - -LABEL name="pgo-backrest-repo" \ - summary="Crunchy PostgreSQL Operator - pgBackRest Repository" \ - description="Crunchy PostgreSQL Operator - pgBackRest Repository" - -RUN ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - crunchy-backrest-"${BACKREST_VERSION}" \ - hostname \ - openssh-clients \ - openssh-server \ - procps-ng \ - psmisc \ - rsync \ - && ${PACKAGER} -y clean all - -RUN groupadd pgbackrest -g 2000 && useradd pgbackrest -u 2000 -g 2000 -ADD bin/pgo-backrest-repo /usr/local/bin -RUN chmod +x /usr/local/bin/pgo-backrest-repo.sh /usr/local/bin/archive-push-s3.sh \ - && mkdir -p /opt/cpm/bin /etc/pgbackrest \ - && chown -R pgbackrest:pgbackrest /opt/cpm \ - && chown -R pgbackrest /etc/pgbackrest - -ADD 
bin/uid_pgbackrest.sh /opt/cpm/bin - -RUN chmod g=u /etc/passwd \ - && chmod g=u /etc/group \ - && chmod -R g=u /etc/pgbackrest \ - && rm -f /run/nologin - -RUN mkdir /.ssh && chown pgbackrest:pgbackrest /.ssh && chmod o+rwx /.ssh - -USER 2000 - -ENTRYPOINT ["/opt/cpm/bin/uid_pgbackrest.sh"] -VOLUME ["/sshd", "/backrestrepo" ] - -CMD ["pgo-backrest-repo.sh"] diff --git a/build/pgo-backrest-restore/Dockerfile b/build/pgo-backrest-restore/Dockerfile deleted file mode 100644 index f0ba6bdd24..0000000000 --- a/build/pgo-backrest-restore/Dockerfile +++ /dev/null @@ -1,42 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG PGVERSION -ARG BACKREST_VERSION -ARG PACKAGER -ARG DFSET - -LABEL name="pgo-backrest-restore" \ - summary="Crunchy PostgreSQL Operator - pgBackRest Restore" \ - description="Performs a restore operation for a PostgreSQL database using pgBackRest." - -RUN ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - crunchy-backrest-"${BACKREST_VERSION}" \ - openssh-clients \ - openssh-server \ - postgresql${PGVERSION}-server \ - procps-ng \ - psmisc \ - && ${PACKAGER} -y clean all - -RUN mkdir -p /opt/cpm/bin /pgdata /tablespaces && \ - chown -R 26:26 /opt/cpm /tablespaces - -ADD bin/pgo-backrest-restore/ /opt/cpm/bin -ADD bin/uid_postgres.sh /opt/cpm/bin - -RUN chmod g=u /etc/passwd && \ - chmod g=u /etc/group - -RUN mkdir /.ssh && chown 26:0 /.ssh && chmod g+rwx /.ssh - -VOLUME ["/sshd", "/pgdata"] - -ENTRYPOINT ["/opt/cpm/bin/uid_postgres.sh"] - -USER 26 - -CMD ["/opt/cpm/bin/pgo-backrest-restore.sh"] diff --git a/build/pgo-backrest/Dockerfile b/build/pgo-backrest/Dockerfile deleted file mode 100644 index 25adb20ee3..0000000000 --- a/build/pgo-backrest/Dockerfile +++ /dev/null @@ -1,31 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG PGVERSION -ARG BACKREST_VERSION -ARG PACKAGER -ARG DFSET - -LABEL name="pgo-backrest" \ - summary="Crunchy PostgreSQL Operator - pgBackRest" \ - description="pgBackRest image that is integrated for use with Crunchy Data's PostgreSQL Operator." 
- -RUN ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - postgresql${PGVERSION}-server \ - crunchy-backrest-"${BACKREST_VERSION}" \ - && ${PACKAGER} -y clean all - -RUN mkdir -p /opt/cpm/bin /pgdata /backrestrepo && chown -R 26:26 /opt/cpm -ADD bin/pgo-backrest/ /opt/cpm/bin -ADD bin/uid_postgres.sh /opt/cpm/bin - -RUN chmod g=u /etc/passwd && \ - chmod g=u /etc/group - -USER 26 -ENTRYPOINT ["/opt/cpm/bin/uid_postgres.sh"] -VOLUME ["/pgdata","/backrestrepo"] -CMD ["/opt/cpm/bin/pgo-backrest"] diff --git a/build/pgo-base/Dockerfile b/build/pgo-base/Dockerfile deleted file mode 100644 index e9a80bea5c..0000000000 --- a/build/pgo-base/Dockerfile +++ /dev/null @@ -1,65 +0,0 @@ -ARG BASEOS -ARG DOCKERBASEREGISTRY -FROM ${DOCKERBASEREGISTRY}${BASEOS} - -ARG BASEOS -ARG RELVER -ARG PGVERSION -ARG PG_FULL -ARG PACKAGER -ARG DFSET - -MAINTAINER info@crunchydata.com - -LABEL vendor="Crunchy Data" \ - url="https://crunchydata.com" \ - release="${RELVER}" \ - postgresql.version.major="${PGVERSION}" \ - postgresql.version="${PG_FULL}" \ - os.version="7.7" \ - org.opencontainers.image.vendor="Crunchy Data" \ - io.openshift.tags="postgresql,postgres,sql,nosql,crunchy" \ - io.k8s.description="Trusted open source PostgreSQL-as-a-Service" - -COPY redhat/licenses /licenses -COPY redhat/atomic/help.1 /help.1 -COPY redhat/atomic/help.md /help.md -COPY licenses /licenses - -RUN if [ "$BASEOS" = "centos7" ]; then \ - ${PACKAGER} -y update \ - && ${PACKAGER} -y clean all ; \ -fi - -RUN if [ "$BASEOS" = "centos8" ]; then \ - ${PACKAGER} -y update \ - && ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - glibc-langpack-en \ - && ${PACKAGER} -y clean all \ - && ${PACKAGER} -qy module disable postgresql ; \ -fi - -RUN if [ "$BASEOS" = "rhel7" ] ; then \ - ${PACKAGER} -y --enablerepo=rhel-7-server-ose-3.11-rpms update \ - && ${PACKAGER} -y --enablerepo=rhel-7-server-ose-3.11-rpms clean all ; \ -fi - -RUN if [ "$BASEOS" = "ubi7" ] ; then \ - ${PACKAGER} -y --enablerepo=rhel-7-server-ose-3.11-rpms update \ - && ${PACKAGER} -y --enablerepo=rhel-7-server-ose-3.11-rpms clean all ; \ -fi - -RUN if [ "$BASEOS" = "ubi8" ]; then \ - ${PACKAGER} -y update \ - && ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - glibc-langpack-en \ - && ${PACKAGER} -y clean all \ - && ${PACKAGER} -qy module disable postgresql ; \ -fi - -# Crunchy PostgreSQL repository -ADD conf/RPM-GPG-KEY-crunchydata* / -ADD conf/crunchypg${PGVERSION}.repo /etc/yum.repos.d/ -RUN rpm --import RPM-GPG-KEY-crunchydata* diff --git a/build/pgo-client/Dockerfile b/build/pgo-client/Dockerfile deleted file mode 100644 index 9a5a97f8f2..0000000000 --- a/build/pgo-client/Dockerfile +++ /dev/null @@ -1,28 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG PGVERSION -ARG BACKREST_VERSION -ARG PACKAGER -ARG DFSET - -LABEL name="pgo-client" \ - summary="Crunchy PostgreSQL Operator - pgo client" \ - description="Crunchy PostgreSQL Operator - pgo client" - -ADD bin/pgo /usr/local/bin - -ENV PGO_APISERVER_URL=${PGO_APISERVER_URL} -ENV PGOUSERNAME=${PGOUSERNAME} -ENV PGOUSERPASS=${PGOUSERPASS} -ENV PGO_CA_CERT=${PGO_CA_CERT} -ENV PGO_CLIENT_CERT=${PGO_CLIENT_CERT} -ENV PGO_CLIENT_KEY=${PGO_CLIENT_KEY} - -RUN chmod +x /usr/local/bin/pgo - -USER 2 - -CMD tail -f /dev/null diff --git a/build/pgo-deployer/Dockerfile b/build/pgo-deployer/Dockerfile deleted file mode 100644 index c2000eaa87..0000000000 --- a/build/pgo-deployer/Dockerfile +++ /dev/null @@ -1,86 
+0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG BASEOS -ARG ANSIBLE_VERSION -ARG PACKAGER -ARG DFSET - -LABEL name="pgo-deployer" \ - summary="Crunchy PostgreSQL Operator - Installer" \ - description="Crunchy PostgreSQL Operator - Installer" - -COPY installers/image/conf/kubernetes.repo /etc/yum.repos.d/kubernetes.repo - -RUN if [ "$DFSET" = "centos" ] ; then \ - ${PACKAGER} install -y epel-release \ - && ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - kubectl \ - ansible-${ANSIBLE_VERSION} \ - which \ - gettext \ - openssl \ - && ${PACKAGER} -y clean all ; \ -fi - -RUN if [ "$BASEOS" = "rhel7" ] ; then \ - rm /etc/yum.repos.d/kubernetes.repo \ - && ${PACKAGER} install -y https://download.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \ - && ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - --enablerepo='rhel-7-server-ose-4.4-rpms' \ - openshift-clients \ - ansible-${ANSIBLE_VERSION} \ - which \ - gettext \ - openssl \ - && ${PACKAGER} -y clean all --enablerepo='rhel-7-server-ose-4.4-rpms' ; \ -fi - -RUN if [ "$BASEOS" = "ubi7" ] ; then \ - rm /etc/yum.repos.d/kubernetes.repo \ - && ${PACKAGER} install -y https://download.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \ - && ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - --enablerepo='rhel-7-server-ose-4.4-rpms' \ - openshift-clients \ - ansible-${ANSIBLE_VERSION} \ - which \ - gettext \ - openssl \ - && ${PACKAGER} -y clean all --enablerepo='rhel-7-server-ose-4.4-rpms' ; \ -fi - -RUN if [ "$BASEOS" = "ubi8" ] ; then \ - rm /etc/yum.repos.d/kubernetes.repo \ - && ${PACKAGER} install -y https://download.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm \ - && ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - --enablerepo='rhocp-4.5-for-rhel-8-x86_64-rpms' \ - openshift-clients \ - ansible-${ANSIBLE_VERSION} \ - which \ - gettext \ - openssl \ - && ${PACKAGER} -y clean all --enablerepo='rhocp-4.5-for-rhel-8-x86_64-rpms' ; \ -fi - -COPY installers/ansible /ansible/postgres-operator -COPY installers/metrics/ansible /ansible/metrics -COPY installers/image/bin/pgo-deploy.sh /pgo-deploy.sh -COPY bin/uid_daemon.sh /uid_daemon.sh - -ENV ANSIBLE_CONFIG="/ansible/postgres-operator/ansible.cfg" -ENV HOME="/tmp" - -RUN chmod g=u /etc/passwd -RUN chmod g=u /uid_daemon.sh - -ENTRYPOINT ["/uid_daemon.sh"] - -USER 2 - -CMD ["/pgo-deploy.sh"] diff --git a/build/pgo-event/Dockerfile b/build/pgo-event/Dockerfile deleted file mode 100644 index bf728ba8b4..0000000000 --- a/build/pgo-event/Dockerfile +++ /dev/null @@ -1,19 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG PGVERSION -ARG BACKREST_VERSION -ARG PACKAGER -ARG DFSET - -LABEL name="pgo-event" \ - summary="Crunchy PostgreSQL Operator - pgo-event" \ - description="Crunchy PostgreSQL Operator - pgo-event" - -ADD bin/pgo-event /usr/local/bin - -USER 2 - -ENTRYPOINT ["/usr/local/bin/pgo-event.sh"] diff --git a/build/pgo-rmdata/Dockerfile b/build/pgo-rmdata/Dockerfile deleted file mode 100644 index 6ff3956879..0000000000 --- a/build/pgo-rmdata/Dockerfile +++ /dev/null @@ -1,19 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG PGVERSION -ARG BACKREST_VERSION -ARG PACKAGER -ARG DFSET - -LABEL name="pgo-rmdata" \ - summary="Crunchy PostgreSQL Operator - Remove Data" \ - description="Crunchy PostgreSQL Operator - Remove Data" 
- -ADD bin/pgo-rmdata/ /usr/local/bin - -USER 2 - -CMD ["/usr/local/bin/start.sh"] diff --git a/build/pgo-scheduler/Dockerfile b/build/pgo-scheduler/Dockerfile deleted file mode 100644 index 49e3700e4e..0000000000 --- a/build/pgo-scheduler/Dockerfile +++ /dev/null @@ -1,39 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG PGVERSION -ARG BACKREST_VERSION -ARG PACKAGER -ARG DFSET - -LABEL name="pgo-scheduler" \ - summary="Crunchy PostgreSQL Operator - Scheduler" \ - description="Crunchy PostgreSQL Operator - Scheduler" - -RUN if [ "$DFSET" = "centos" ] ; then \ - mkdir -p /opt/cpm/bin /opt/cpm/conf /configs \ - && chown -R 2:2 /opt/cpm /configs \ - && ${PACKAGER} -y install epel-release \ - && ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - gettext \ - hostname \ - nss_wrapper \ - procps-ng \ - && ${PACKAGER} -y clean all ; \ -fi - -RUN if [ "$DFSET" = "rhel" ] ; then \ - mkdir -p /opt/cpm/bin /opt/cpm/conf /pgo-config \ - && chown -R 2:2 /opt/cpm /pgo-config ; \ -fi - -ADD bin/pgo-scheduler /opt/cpm/bin -ADD installers/ansible/roles/pgo-operator/files/pgo-configs /default-pgo-config -ADD conf/postgres-operator/pgo.yaml /default-pgo-config/pgo.yaml - -USER 2 - -CMD ["/opt/cpm/bin/start.sh"] diff --git a/build/pgo-sqlrunner/Dockerfile b/build/pgo-sqlrunner/Dockerfile deleted file mode 100644 index 5b5dd2c45f..0000000000 --- a/build/pgo-sqlrunner/Dockerfile +++ /dev/null @@ -1,45 +0,0 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} - -ARG PGVERSION -ARG BACKREST_VERSION -ARG PACKAGER -ARG DFSET - -LABEL name="pgo-sqlrunner" \ - summary="Crunchy PostgreSQL Operator - SQL Runner" \ - description="Crunchy PostgreSQL Operator - SQL Runner" - -ENV PGROOT="/usr/pgsql-${PGVERSION}" - -RUN if [ "$DFSET" = "centos" ] ; then \ - ${PACKAGER} -y install epel-release \ - && ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - gettext \ - hostname \ - nss_wrapper \ - procps-ng \ - postgresql${PGVERSION} \ - && ${PACKAGER} -y clean all ; \ -fi - -RUN if [ "$DFSET" = "rhel" ] ; then \ - ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - postgresql${PGVERSION} \ - && ${PACKAGER} -y clean all ; \ -fi - -RUN mkdir -p /opt/cpm/bin /opt/cpm/conf /pgconf \ - && chown -R 26:26 /opt/cpm /pgconf - -ADD bin/pgo-sqlrunner /opt/cpm/bin - -VOLUME ["/pgconf"] - -USER 26 - -CMD ["/opt/cpm/bin/start.sh"] diff --git a/build/postgres-operator/Dockerfile b/build/postgres-operator/Dockerfile index d88621d73f..69c5953761 100644 --- a/build/postgres-operator/Dockerfile +++ b/build/postgres-operator/Dockerfile @@ -1,36 +1,15 @@ -ARG BASEOS -ARG BASEVER -ARG PREFIX -FROM ${PREFIX}/pgo-base:${BASEOS}-${BASEVER} +FROM registry.access.redhat.com/ubi8/ubi-minimal -ARG PGVERSION -ARG BACKREST_VERSION -ARG PACKAGER -ARG DFSET +COPY licenses /licenses -LABEL name="postgres-operator" \ - summary="Crunchy PostgreSQL Operator" \ - description="Crunchy PostgreSQL Operator" +COPY bin/postgres-operator /usr/local/bin -RUN if [ "$DFSET" = "centos" ] ; then \ - ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - hostname \ - postgresql${PGVERSION} \ - && ${PACKAGER} -y clean all ; \ -fi +RUN mkdir -p /opt/crunchy/conf -RUN if [ "$DFSET" = "rhel" ] ; then \ - ${PACKAGER} -y install \ - --setopt=skip_missing_names_on_install=False \ - postgresql${PGVERSION} \ - && ${PACKAGER} -y clean all ; \ -fi +COPY hack/tools/queries /opt/crunchy/conf -ADD bin/postgres-operator 
/usr/local/bin -ADD installers/ansible/roles/pgo-operator/files/pgo-configs /default-pgo-config -ADD conf/postgres-operator/pgo.yaml /default-pgo-config/pgo.yaml +RUN chgrp -R 0 /opt/crunchy/conf && chmod -R g=u opt/crunchy/conf USER 2 -ENTRYPOINT ["postgres-operator"] +CMD ["postgres-operator"] diff --git a/cmd/postgres-operator/main.go b/cmd/postgres-operator/main.go new file mode 100644 index 0000000000..b2f8ae49b6 --- /dev/null +++ b/cmd/postgres-operator/main.go @@ -0,0 +1,302 @@ +// Copyright 2017 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package main + +import ( + "context" + "fmt" + "net/http" + "os" + "strconv" + "strings" + "time" + "unicode" + + "go.opentelemetry.io/otel" + "k8s.io/apimachinery/pkg/util/validation" + "k8s.io/client-go/discovery" + "k8s.io/client-go/rest" + "sigs.k8s.io/controller-runtime/pkg/healthz" + + "github.com/crunchydata/postgres-operator/internal/bridge" + "github.com/crunchydata/postgres-operator/internal/bridge/crunchybridgecluster" + "github.com/crunchydata/postgres-operator/internal/controller/pgupgrade" + "github.com/crunchydata/postgres-operator/internal/controller/postgrescluster" + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/controller/standalone_pgadmin" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/registration" + "github.com/crunchydata/postgres-operator/internal/upgradecheck" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +var versionString string + +// assertNoError panics when err is not nil. +func assertNoError(err error) { + if err != nil { + panic(err) + } +} + +func initLogging() { + // Configure a singleton that treats logging.Logger.V(1) as logrus.DebugLevel. + var verbosity int + if strings.EqualFold(os.Getenv("CRUNCHY_DEBUG"), "true") { + verbosity = 1 + } + logging.SetLogSink(logging.Logrus(os.Stdout, versionString, 1, verbosity)) + + global := logging.FromContext(context.Background()) + runtime.SetLogger(global) +} + +//+kubebuilder:rbac:groups="coordination.k8s.io",resources="leases",verbs={get,create,update,watch} + +func initManager() (runtime.Options, error) { + log := logging.FromContext(context.Background()) + + options := runtime.Options{} + options.Cache.SyncPeriod = initialize.Pointer(time.Hour) + + options.HealthProbeBindAddress = ":8081" + + // Enable leader elections when configured with a valid Lease.coordination.k8s.io name. 
+ // - https://docs.k8s.io/concepts/architecture/leases + // - https://releases.k8s.io/v1.30.0/pkg/apis/coordination/validation/validation.go#L26 + if lease := os.Getenv("PGO_CONTROLLER_LEASE_NAME"); len(lease) > 0 { + if errs := validation.IsDNS1123Subdomain(lease); len(errs) > 0 { + return options, fmt.Errorf("value for PGO_CONTROLLER_LEASE_NAME is invalid: %v", errs) + } + + options.LeaderElection = true + options.LeaderElectionID = lease + options.LeaderElectionNamespace = os.Getenv("PGO_NAMESPACE") + } + + // Check PGO_TARGET_NAMESPACE for backwards compatibility with + // "singlenamespace" installations + singlenamespace := strings.TrimSpace(os.Getenv("PGO_TARGET_NAMESPACE")) + + // Check PGO_TARGET_NAMESPACES for non-cluster-wide, multi-namespace + // installations + multinamespace := strings.TrimSpace(os.Getenv("PGO_TARGET_NAMESPACES")) + + // Initialize DefaultNamespaces if any target namespaces are set + if len(singlenamespace) > 0 || len(multinamespace) > 0 { + options.Cache.DefaultNamespaces = map[string]runtime.CacheConfig{} + } + + if len(singlenamespace) > 0 { + options.Cache.DefaultNamespaces[singlenamespace] = runtime.CacheConfig{} + } + + if len(multinamespace) > 0 { + for _, namespace := range strings.FieldsFunc(multinamespace, func(c rune) bool { + return c != '-' && !unicode.IsLetter(c) && !unicode.IsNumber(c) + }) { + options.Cache.DefaultNamespaces[namespace] = runtime.CacheConfig{} + } + } + + options.Controller.GroupKindConcurrency = map[string]int{ + "PostgresCluster." + v1beta1.GroupVersion.Group: 2, + } + + if s := os.Getenv("PGO_WORKERS"); s != "" { + if i, err := strconv.Atoi(s); err == nil && i > 0 { + options.Controller.GroupKindConcurrency["PostgresCluster."+v1beta1.GroupVersion.Group] = i + } else { + log.Error(err, "PGO_WORKERS must be a positive number") + } + } + + return options, nil +} + +func main() { + // This context is canceled by SIGINT, SIGTERM, or by calling shutdown. + ctx, shutdown := context.WithCancel(runtime.SignalHandler()) + + otelFlush, err := initOpenTelemetry() + assertNoError(err) + defer otelFlush() + + initLogging() + + log := logging.FromContext(ctx) + log.V(1).Info("debug flag set to true") + + features := feature.NewGate() + assertNoError(features.Set(os.Getenv("PGO_FEATURE_GATES"))) + log.Info("feature gates enabled", "PGO_FEATURE_GATES", features.String()) + + cfg, err := runtime.GetConfig() + assertNoError(err) + + cfg.Wrap(otelTransportWrapper()) + + // Configure client-go to suppress warnings when warning headers are encountered. This prevents + // warnings from being logged over and over again during reconciliation (e.g. this will suppress + // deprecation warnings when using an older version of a resource for backwards compatibility). + rest.SetDefaultWarningHandler(rest.NoWarnings{}) + + options, err := initManager() + assertNoError(err) + + // Add to the Context that Manager passes to Reconciler.Start, Runnable.Start, + // and eventually Reconciler.Reconcile. 
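+	// Anything stored in this base context (currently the feature gate parsed
+	// above from PGO_FEATURE_GATES) can be read back from the context that
+	// each reconciler receives.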
+ options.BaseContext = func() context.Context { + ctx := context.Background() + ctx = feature.NewContext(ctx, features) + return ctx + } + + mgr, err := runtime.NewManager(cfg, options) + assertNoError(err) + + openshift := isOpenshift(cfg) + if openshift { + log.Info("detected OpenShift environment") + } + + registrar, err := registration.NewRunner(os.Getenv("RSA_KEY"), os.Getenv("TOKEN_PATH"), shutdown) + assertNoError(err) + assertNoError(mgr.Add(registrar)) + token, _ := registrar.CheckToken() + + // add all PostgreSQL Operator controllers to the runtime manager + addControllersToManager(mgr, openshift, log, registrar) + + if features.Enabled(feature.BridgeIdentifiers) { + constructor := func() *bridge.Client { + client := bridge.NewClient(os.Getenv("PGO_BRIDGE_URL"), versionString) + client.Transport = otelTransportWrapper()(http.DefaultTransport) + return client + } + + assertNoError(bridge.ManagedInstallationReconciler(mgr, constructor)) + } + + // Enable upgrade checking + upgradeCheckingDisabled := strings.EqualFold(os.Getenv("CHECK_FOR_UPGRADES"), "false") + if !upgradeCheckingDisabled { + log.Info("upgrade checking enabled") + // get the URL for the check for upgrades endpoint if set in the env + assertNoError( + upgradecheck.ManagedScheduler( + mgr, + openshift, + os.Getenv("CHECK_FOR_UPGRADES_URL"), + versionString, + token, + )) + } else { + log.Info("upgrade checking disabled") + } + + // Enable health probes + assertNoError(mgr.AddHealthzCheck("health", healthz.Ping)) + assertNoError(mgr.AddReadyzCheck("check", healthz.Ping)) + + log.Info("starting controller runtime manager and will wait for signal to exit") + + assertNoError(mgr.Start(ctx)) + log.Info("signal received, exiting") +} + +// addControllersToManager adds all PostgreSQL Operator controllers to the provided controller +// runtime manager. +func addControllersToManager(mgr runtime.Manager, openshift bool, log logging.Logger, reg registration.Registration) { + pgReconciler := &postgrescluster.Reconciler{ + Client: mgr.GetClient(), + IsOpenShift: openshift, + Owner: postgrescluster.ControllerName, + Recorder: mgr.GetEventRecorderFor(postgrescluster.ControllerName), + Registration: reg, + Tracer: otel.Tracer(postgrescluster.ControllerName), + } + + if err := pgReconciler.SetupWithManager(mgr); err != nil { + log.Error(err, "unable to create PostgresCluster controller") + os.Exit(1) + } + + upgradeReconciler := &pgupgrade.PGUpgradeReconciler{ + Client: mgr.GetClient(), + Owner: "pgupgrade-controller", + Recorder: mgr.GetEventRecorderFor("pgupgrade-controller"), + Registration: reg, + } + + if err := upgradeReconciler.SetupWithManager(mgr); err != nil { + log.Error(err, "unable to create PGUpgrade controller") + os.Exit(1) + } + + pgAdminReconciler := &standalone_pgadmin.PGAdminReconciler{ + Client: mgr.GetClient(), + Owner: "pgadmin-controller", + Recorder: mgr.GetEventRecorderFor(naming.ControllerPGAdmin), + IsOpenShift: openshift, + } + + if err := pgAdminReconciler.SetupWithManager(mgr); err != nil { + log.Error(err, "unable to create PGAdmin controller") + os.Exit(1) + } + + constructor := func() bridge.ClientInterface { + client := bridge.NewClient(os.Getenv("PGO_BRIDGE_URL"), versionString) + client.Transport = otelTransportWrapper()(http.DefaultTransport) + return client + } + + crunchyBridgeClusterReconciler := &crunchybridgecluster.CrunchyBridgeClusterReconciler{ + Client: mgr.GetClient(), + Owner: "crunchybridgecluster-controller", + // TODO(crunchybridgecluster): recorder? 
+ // Recorder: mgr.GetEventRecorderFor(naming...), + NewClient: constructor, + } + + if err := crunchyBridgeClusterReconciler.SetupWithManager(mgr); err != nil { + log.Error(err, "unable to create CrunchyBridgeCluster controller") + os.Exit(1) + } +} + +func isOpenshift(cfg *rest.Config) bool { + const sccGroupName, sccKind = "security.openshift.io", "SecurityContextConstraints" + + client, err := discovery.NewDiscoveryClientForConfig(cfg) + assertNoError(err) + + groups, err := client.ServerGroups() + if err != nil { + assertNoError(err) + } + for _, g := range groups.Groups { + if g.Name != sccGroupName { + continue + } + for _, v := range g.Versions { + resourceList, err := client.ServerResourcesForGroupVersion(v.GroupVersion) + if err != nil { + assertNoError(err) + } + for _, r := range resourceList.APIResources { + if r.Kind == sccKind { + return true + } + } + } + } + + return false +} diff --git a/cmd/postgres-operator/main_test.go b/cmd/postgres-operator/main_test.go new file mode 100644 index 0000000000..f369ce6bd3 --- /dev/null +++ b/cmd/postgres-operator/main_test.go @@ -0,0 +1,118 @@ +// Copyright 2017 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package main + +import ( + "reflect" + "testing" + "time" + + "gotest.tools/v3/assert" + "gotest.tools/v3/assert/cmp" +) + +func TestInitManager(t *testing.T) { + t.Run("Defaults", func(t *testing.T) { + options, err := initManager() + assert.NilError(t, err) + + if assert.Check(t, options.Cache.SyncPeriod != nil) { + assert.Equal(t, *options.Cache.SyncPeriod, time.Hour) + } + + assert.Assert(t, options.HealthProbeBindAddress == ":8081") + + assert.DeepEqual(t, options.Controller.GroupKindConcurrency, + map[string]int{ + "PostgresCluster.postgres-operator.crunchydata.com": 2, + }) + + assert.Assert(t, options.Cache.DefaultNamespaces == nil) + assert.Assert(t, options.LeaderElection == false) + + { + options.Cache.SyncPeriod = nil + options.Controller.GroupKindConcurrency = nil + options.HealthProbeBindAddress = "" + + assert.Assert(t, reflect.ValueOf(options).IsZero(), + "expected remaining fields to be unset:\n%+v", options) + } + }) + + t.Run("PGO_CONTROLLER_LEASE_NAME", func(t *testing.T) { + t.Setenv("PGO_NAMESPACE", "test-namespace") + + t.Run("Invalid", func(t *testing.T) { + t.Setenv("PGO_CONTROLLER_LEASE_NAME", "INVALID_NAME") + + options, err := initManager() + assert.ErrorContains(t, err, "PGO_CONTROLLER_LEASE_NAME") + assert.ErrorContains(t, err, "invalid") + + assert.Assert(t, options.LeaderElection == false) + assert.Equal(t, options.LeaderElectionNamespace, "") + }) + + t.Run("Valid", func(t *testing.T) { + t.Setenv("PGO_CONTROLLER_LEASE_NAME", "valid-name") + + options, err := initManager() + assert.NilError(t, err) + assert.Assert(t, options.LeaderElection == true) + assert.Equal(t, options.LeaderElectionNamespace, "test-namespace") + assert.Equal(t, options.LeaderElectionID, "valid-name") + }) + }) + + t.Run("PGO_TARGET_NAMESPACE", func(t *testing.T) { + t.Setenv("PGO_TARGET_NAMESPACE", "some-such") + + options, err := initManager() + assert.NilError(t, err) + assert.Assert(t, cmp.Len(options.Cache.DefaultNamespaces, 1), + "expected only one configured namespace") + + assert.Assert(t, cmp.Contains(options.Cache.DefaultNamespaces, "some-such")) + }) + + t.Run("PGO_TARGET_NAMESPACES", func(t *testing.T) { + t.Setenv("PGO_TARGET_NAMESPACES", "some-such,another-one") + + options, err := initManager() + assert.NilError(t, err) + assert.Assert(t, cmp.Len(options.Cache.DefaultNamespaces, 
2), + "expect two configured namespaces") + + assert.Assert(t, cmp.Contains(options.Cache.DefaultNamespaces, "some-such")) + assert.Assert(t, cmp.Contains(options.Cache.DefaultNamespaces, "another-one")) + }) + + t.Run("PGO_WORKERS", func(t *testing.T) { + t.Run("Invalid", func(t *testing.T) { + for _, v := range []string{"-3", "0", "3.14"} { + t.Setenv("PGO_WORKERS", v) + + options, err := initManager() + assert.NilError(t, err) + assert.DeepEqual(t, options.Controller.GroupKindConcurrency, + map[string]int{ + "PostgresCluster.postgres-operator.crunchydata.com": 2, + }) + } + }) + + t.Run("Valid", func(t *testing.T) { + t.Setenv("PGO_WORKERS", "19") + + options, err := initManager() + assert.NilError(t, err) + assert.DeepEqual(t, options.Controller.GroupKindConcurrency, + map[string]int{ + "PostgresCluster.postgres-operator.crunchydata.com": 19, + }) + }) + }) +} diff --git a/cmd/postgres-operator/open_telemetry.go b/cmd/postgres-operator/open_telemetry.go new file mode 100644 index 0000000000..2c9eedc135 --- /dev/null +++ b/cmd/postgres-operator/open_telemetry.go @@ -0,0 +1,91 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package main + +import ( + "context" + "fmt" + "io" + "net/http" + "os" + + "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp" + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/exporters/otlp/otlptrace" + "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp" + "go.opentelemetry.io/otel/exporters/stdout/stdouttrace" + "go.opentelemetry.io/otel/sdk/trace" +) + +func initOpenTelemetry() (func(), error) { + // At the time of this writing, the SDK (go.opentelemetry.io/otel@v1.2.0) + // does not automatically initialize any exporter. We import the OTLP and + // stdout exporters and configure them below. Much of the OTLP exporter can + // be configured through environment variables. + // + // - https://github.com/open-telemetry/opentelemetry-go/issues/2310 + // - https://github.com/open-telemetry/opentelemetry-specification/blob/v1.8.0/specification/sdk-environment-variables.md + + switch os.Getenv("OTEL_TRACES_EXPORTER") { + case "json": + var closer io.Closer + filename := os.Getenv("OTEL_JSON_FILE") + options := []stdouttrace.Option{} + + if filename != "" { + file, err := os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644) + if err != nil { + return nil, fmt.Errorf("unable to open exporter file: %w", err) + } + closer = file + options = append(options, stdouttrace.WithWriter(file)) + } + + exporter, err := stdouttrace.New(options...) + if err != nil { + return nil, fmt.Errorf("unable to initialize stdout exporter: %w", err) + } + + provider := trace.NewTracerProvider(trace.WithBatcher(exporter)) + flush := func() { + _ = provider.Shutdown(context.TODO()) + if closer != nil { + _ = closer.Close() + } + } + + otel.SetTracerProvider(provider) + return flush, nil + + case "otlp": + client := otlptracehttp.NewClient() + exporter, err := otlptrace.New(context.TODO(), client) + if err != nil { + return nil, fmt.Errorf("unable to initialize OTLP exporter: %w", err) + } + + provider := trace.NewTracerProvider(trace.WithBatcher(exporter)) + flush := func() { + _ = provider.Shutdown(context.TODO()) + } + + otel.SetTracerProvider(provider) + return flush, nil + } + + // $OTEL_TRACES_EXPORTER is unset or unknown, so no TracerProvider has been assigned. + // The default at this time is a single "no-op" tracer. 
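+	//
+	// To enable an exporter, set OTEL_TRACES_EXPORTER=json (optionally with a
+	// file such as OTEL_JSON_FILE=/tmp/pgo-trace.json, an assumed example path,
+	// to append batched spans there) or OTEL_TRACES_EXPORTER=otlp, which honors
+	// the standard OTEL_EXPORTER_OTLP_* variables from the specification above.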
+ + return func() {}, nil +} + +// otelTransportWrapper creates a function that wraps the provided net/http.RoundTripper +// with one that starts a span for each request, injects context into that request, +// and ends the span when that request's response body is closed. +func otelTransportWrapper(options ...otelhttp.Option) func(http.RoundTripper) http.RoundTripper { + return func(rt http.RoundTripper) http.RoundTripper { + return otelhttp.NewTransport(rt, options...) + } +} diff --git a/conf/.gitignore b/conf/.gitignore index 2212b52b1d..8925435045 100644 --- a/conf/.gitignore +++ b/conf/.gitignore @@ -1,4 +1,4 @@ *.repo *.public *.private -RPM-GPG-KEY-* +*KEY* diff --git a/conf/pgo-backrest-repo/.gitignore b/conf/pgo-backrest-repo/.gitignore deleted file mode 100644 index 706159b833..0000000000 --- a/conf/pgo-backrest-repo/.gitignore +++ /dev/null @@ -1,9 +0,0 @@ -authorized_keys -id_rsa -id_rsa.pub -ssh_host_ecdsa_key -ssh_host_ecdsa_key.pub -ssh_host_ed25519_key -ssh_host_ed25519_key.pub -ssh_host_rsa_key -ssh_host_rsa_key.pub diff --git a/conf/pgo-backrest-repo/aws-s3-credentials.yaml b/conf/pgo-backrest-repo/aws-s3-credentials.yaml deleted file mode 100644 index 2be2173e04..0000000000 --- a/conf/pgo-backrest-repo/aws-s3-credentials.yaml +++ /dev/null @@ -1,3 +0,0 @@ ---- -aws-s3-key: -aws-s3-key-secret: diff --git a/conf/pgo-load/passwd.template b/conf/pgo-load/passwd.template deleted file mode 100644 index ceb4519425..0000000000 --- a/conf/pgo-load/passwd.template +++ /dev/null @@ -1,14 +0,0 @@ -root:x:0:0:root:/root:/bin/bash -bin:x:1:1:bin:/bin:/sbin/nologin -daemon:x:2:2:daemon:/sbin:/sbin/nologin -adm:x:3:4:adm:/var/adm:/sbin/nologin -lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin -sync:x:5:0:sync:/sbin:/bin/sync -shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown -halt:x:7:0:halt:/sbin:/sbin/halt -mail:x:8:12:mail:/var/spool/mail:/sbin/nologin -operator:x:11:0:operator:/root:/sbin/nologin -games:x:12:100:games:/usr/games:/sbin/nologin -ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin -nobody:x:99:99:Nobody:/:/sbin/nologin -postgres:x:${USER_ID}:${GROUP_ID}:PostgreSQL Server:${HOME}:/bin/bash diff --git a/conf/postgres-operator/.gitignore b/conf/postgres-operator/.gitignore deleted file mode 100644 index 10cdeb24a0..0000000000 --- a/conf/postgres-operator/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -server.crt -server.key diff --git a/conf/postgres-operator/pgo.yaml b/conf/postgres-operator/pgo.yaml deleted file mode 100644 index ff5c97ec7f..0000000000 --- a/conf/postgres-operator/pgo.yaml +++ /dev/null @@ -1,85 +0,0 @@ -Cluster: - CCPImagePrefix: registry.developers.crunchydata.com/crunchydata - Metrics: false - Badger: false - CCPImageTag: centos7-12.4-4.5.0 - Port: 5432 - PGBadgerPort: 10000 - ExporterPort: 9187 - User: testuser - Database: "" - PasswordAgeDays: 0 - PasswordLength: 24 - Replicas: 0 - ArchiveMode: false - ServiceType: ClusterIP - BackrestPort: 2022 - BackrestS3Bucket: - BackrestS3Endpoint: - BackrestS3Region: - BackrestS3URIStyle: "" - BackrestS3VerifyTLS: true - DisableAutofail: false - PodAntiAffinity: preferred - PodAntiAffinityPgBackRest: "" - PodAntiAffinityPgBouncer: "" - SyncReplication: false - DefaultInstanceMemory: "128Mi" - DefaultBackrestMemory: - DefaultPgBouncerMemory: - DefaultExporterMemory: - DisableFSGroup: false -PrimaryStorage: default -WALStorage: -BackupStorage: default -ReplicaStorage: default -BackrestStorage: default -Storage: - default: - AccessMode: ReadWriteOnce - Size: 1G - StorageType: dynamic - hostpathstorage: - AccessMode: ReadWriteMany - Size: 
1G - StorageType: create - nfsstorage: - AccessMode: ReadWriteMany - Size: 1G - StorageType: create - SupplementalGroups: 65534 - nfsstoragered: - AccessMode: ReadWriteMany - Size: 1G - MatchLabels: crunchyzone=red - StorageType: create - SupplementalGroups: 65534 - storageos: - AccessMode: ReadWriteOnce - Size: 5Gi - StorageType: dynamic - StorageClass: fast - primarysite: - AccessMode: ReadWriteOnce - Size: 4G - StorageType: dynamic - StorageClass: primarysite - alternatesite: - AccessMode: ReadWriteOnce - Size: 4G - StorageType: dynamic - StorageClass: alternatesite - gce: - AccessMode: ReadWriteOnce - Size: 300M - StorageType: dynamic - StorageClass: standard - rook: - AccessMode: ReadWriteOnce - Size: 1G - StorageType: dynamic - StorageClass: rook-ceph-block -Pgo: - Audit: false - PGOImagePrefix: registry.developers.crunchydata.com/crunchydata - PGOImageTag: centos7-4.5.0 diff --git a/config/README.md b/config/README.md new file mode 100644 index 0000000000..73d2e59e6f --- /dev/null +++ b/config/README.md @@ -0,0 +1,28 @@ + + + +## Targets + +- The `default` target installs the operator in the `postgres-operator` + namespace and configures it to manage resources in all namespaces. + + + + +## Bases + +- The `crd` base creates `CustomResourceDefinition`s that are managed by the + operator. + +- The `manager` base creates the `Deployment` that runs the operator. Do not + run this as a target. + +- The `rbac` base creates a `ClusterRole` that allows the operator to + manage resources in all current and future namespaces. diff --git a/config/crd/bases/postgres-operator.crunchydata.com_crunchybridgeclusters.yaml b/config/crd/bases/postgres-operator.crunchydata.com_crunchybridgeclusters.yaml new file mode 100644 index 0000000000..82db84b466 --- /dev/null +++ b/config/crd/bases/postgres-operator.crunchydata.com_crunchybridgeclusters.yaml @@ -0,0 +1,290 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.4 + name: crunchybridgeclusters.postgres-operator.crunchydata.com +spec: + group: postgres-operator.crunchydata.com + names: + kind: CrunchyBridgeCluster + listKind: CrunchyBridgeClusterList + plural: crunchybridgeclusters + singular: crunchybridgecluster + scope: Namespaced + versions: + - name: v1beta1 + schema: + openAPIV3Schema: + description: CrunchyBridgeCluster is the Schema for the crunchybridgeclusters + API + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: |- + CrunchyBridgeClusterSpec defines the desired state of CrunchyBridgeCluster + to be managed by Crunchy Data Bridge + properties: + clusterName: + description: The name of the cluster + maxLength: 50 + minLength: 5 + pattern: ^[A-Za-z][A-Za-z0-9\-_ ]*[A-Za-z0-9]$ + type: string + isHa: + description: |- + Whether the cluster is high availability, + meaning that it has a secondary it can fail over to quickly + in case the primary becomes unavailable. + type: boolean + isProtected: + description: |- + Whether the cluster is protected. Protected clusters can't be destroyed until + their protected flag is removed + type: boolean + majorVersion: + description: |- + The ID of the cluster's major Postgres version. + Currently Bridge offers 13-17 + maximum: 17 + minimum: 13 + type: integer + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + plan: + description: The ID of the cluster's plan. Determines instance, CPU, + and memory. + type: string + provider: + description: |- + The cloud provider where the cluster is located. + Currently Bridge offers aws, azure, and gcp only + enum: + - aws + - azure + - gcp + type: string + x-kubernetes-validations: + - message: immutable + rule: self == oldSelf + region: + description: The provider region where the cluster is located. + type: string + x-kubernetes-validations: + - message: immutable + rule: self == oldSelf + roles: + description: |- + Roles for which to create Secrets that contain their credentials which + are retrieved from the Bridge API. An empty list creates no role secrets. + Removing a role from this list does NOT drop the role nor revoke their + access, but it will delete that role's secret from the kube cluster. + items: + properties: + name: + description: |- + Name of the role within Crunchy Bridge. + More info: https://docs.crunchybridge.com/concepts/users + type: string + secretName: + description: The name of the Secret that will hold the role + credentials. + maxLength: 253 + pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$ + type: string + required: + - name + - secretName + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + secret: + description: The name of the secret containing the API key and team + id + type: string + storage: + anyOf: + - type: integer + - type: string + description: |- + The amount of storage available to the cluster in gigabytes. + The amount must be an integer, followed by Gi (gibibytes) or G (gigabytes) to match Kubernetes conventions. + If the amount is given in Gi, we round to the nearest G value. + The minimum value allowed by Bridge is 10 GB. + The maximum value allowed by Bridge is 65535 GB. 
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + required: + - clusterName + - isHa + - majorVersion + - plan + - provider + - region + - secret + - storage + type: object + status: + description: CrunchyBridgeClusterStatus defines the observed state of + CrunchyBridgeCluster + properties: + conditions: + description: conditions represent the observations of postgres cluster's + current state. + items: + description: Condition contains details for one aspect of the current + state of this API Resource. + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + host: + description: The Hostname of the postgres cluster in Bridge, provided + by Bridge API and null until then. + type: string + id: + description: The ID of the postgres cluster in Bridge, provided by + Bridge API and null until then. + type: string + isHa: + description: |- + Whether the cluster is high availability, meaning that it has a secondary it can fail + over to quickly in case the primary becomes unavailable. + type: boolean + isProtected: + description: |- + Whether the cluster is protected. Protected clusters can't be destroyed until + their protected flag is removed + type: boolean + majorVersion: + description: The cluster's major Postgres version. + type: integer + name: + description: The name of the cluster in Bridge. + type: string + observedGeneration: + description: observedGeneration represents the .metadata.generation + on which the status was based. 
+ format: int64 + minimum: 0 + type: integer + ongoingUpgrade: + description: The cluster upgrade as represented by Bridge + items: + properties: + flavor: + type: string + starting_from: + type: string + state: + type: string + required: + - flavor + - starting_from + - state + type: object + type: array + plan: + description: The ID of the cluster's plan. Determines instance, CPU, + and memory. + type: string + responses: + description: Most recent, raw responses from Bridge API + type: object + x-kubernetes-preserve-unknown-fields: true + state: + description: State of cluster in Bridge. + type: string + storage: + anyOf: + - type: integer + - type: string + description: The amount of storage available to the cluster. + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + type: object + served: true + storage: true + subresources: + status: {} diff --git a/config/crd/bases/postgres-operator.crunchydata.com_pgadmins.yaml b/config/crd/bases/postgres-operator.crunchydata.com_pgadmins.yaml new file mode 100644 index 0000000000..da729cfaf2 --- /dev/null +++ b/config/crd/bases/postgres-operator.crunchydata.com_pgadmins.yaml @@ -0,0 +1,1924 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.4 + name: pgadmins.postgres-operator.crunchydata.com +spec: + group: postgres-operator.crunchydata.com + names: + kind: PGAdmin + listKind: PGAdminList + plural: pgadmins + singular: pgadmin + scope: Namespaced + versions: + - name: v1beta1 + schema: + openAPIV3Schema: + description: PGAdmin is the Schema for the PGAdmin API + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: PGAdminSpec defines the desired state of PGAdmin + properties: + affinity: + description: |- + Scheduling constraints of the PGAdmin pod. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + properties: + nodeAffinity: + description: Describes node affinity scheduling rules for the + pod. + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node matches the corresponding matchExpressions; the + node(s) with the highest sum are the most preferred. 
+ items: + description: |- + An empty preferred scheduling term matches all objects with implicit weight 0 + (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + properties: + preference: + description: A node selector term, associated with the + corresponding weight. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the selector + applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the selector + applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + weight: + description: Weight associated with matching the corresponding + nodeSelectorTerm, in the range 1-100. + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to an update), the system + may or may not try to eventually evict the pod from its node. + properties: + nodeSelectorTerms: + description: Required. A list of node selector terms. + The terms are ORed. + items: + description: |- + A null or empty node selector term matches no objects. The requirements of + them are ANDed. + The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. 
+ items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the selector + applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the selector + applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + type: array + x-kubernetes-list-type: atomic + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + description: Describes pod affinity scheduling rules (e.g. co-locate + this pod in the same node, zone, etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated + with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. 
+ properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. 
+ items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. 
+ properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. 
+ items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + podAntiAffinity: + description: Describes pod anti-affinity scheduling rules (e.g. + avoid putting this pod in the same node, zone, etc. as some + other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the anti-affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling anti-affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated + with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. 
+ properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. 
+ items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the anti-affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the anti-affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. 
+ properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. 
+ items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + config: + description: |- + Configuration settings for the pgAdmin process. Changes to any of these + values will be loaded without validation. Be careful, as + you may put pgAdmin into an unusable state. + properties: + configDatabaseURI: + description: |- + A Secret containing the value for the CONFIG_DATABASE_URI setting. + More info: https://www.pgadmin.org/docs/pgadmin4/latest/external_database.html + properties: + key: + description: The key of the secret to select from. Must be + a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or its key must be + defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + files: + description: |- + Files allows the user to mount projected volumes into the pgAdmin + container so that files can be referenced by pgAdmin as needed. 
+ items: + description: Projection that may be projected along with other + supported volume types + properties: + clusterTrustBundle: + description: |- + ClusterTrustBundle allows a pod to access the `.spec.trustBundle` field + of ClusterTrustBundle objects in an auto-updating file. + + Alpha, gated by the ClusterTrustBundleProjection feature gate. + + ClusterTrustBundle objects can either be selected by name, or by the + combination of signer name and a label selector. + + Kubelet performs aggressive normalization of the PEM contents written + into the pod filesystem. Esoteric PEM features such as inter-block + comments and block headers are stripped. Certificates are deduplicated. + The ordering of certificates within the file is arbitrary, and Kubelet + may change the order over time. + properties: + labelSelector: + description: |- + Select all ClusterTrustBundles that match this label selector. Only has + effect if signerName is set. Mutually-exclusive with name. If unset, + interpreted as "match nothing". If set but empty, interpreted as "match + everything". + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + name: + description: |- + Select a single ClusterTrustBundle by object name. Mutually-exclusive + with signerName and labelSelector. + type: string + optional: + description: |- + If true, don't block pod startup if the referenced ClusterTrustBundle(s) + aren't available. If using name, then the named ClusterTrustBundle is + allowed not to exist. If using signerName, then the combination of + signerName and labelSelector is allowed to match zero + ClusterTrustBundles. + type: boolean + path: + description: Relative path from the volume root to write + the bundle. + type: string + signerName: + description: |- + Select all ClusterTrustBundles that match this signer name. + Mutually-exclusive with name. The contents of all selected + ClusterTrustBundles will be unified and deduplicated. 
+ type: string + required: + - path + type: object + configMap: + description: configMap information about the configMap data + to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + ConfigMap will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the ConfigMap, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within a + volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional specify whether the ConfigMap + or its keys must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + description: downwardAPI information about the downwardAPI + data to project + properties: + items: + description: Items is a list of DownwardAPIVolume file + items: + description: DownwardAPIVolumeFile represents information + to create the file containing the pod field + properties: + fieldRef: + description: 'Required: Selects a field of the + pod: only annotations, labels, name, namespace + and uid are supported.' + properties: + apiVersion: + description: Version of the schema the FieldPath + is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in + the specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + description: |- + Optional: mode bits used to set permissions on this file, must be an octal value + between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: 'Required: Path is the relative + path name of the file to be created. Must not + be absolute or contain the ''..'' path. Must + be utf-8 encoded. 
The first item of the relative + path must not start with ''..''' + type: string + resourceFieldRef: + description: |- + Selects a resource of the container: only resources limits and requests + (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. + properties: + containerName: + description: 'Container name: required for + volumes, optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format of + the exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + required: + - path + type: object + type: array + x-kubernetes-list-type: atomic + type: object + secret: + description: secret information about the secret data to + project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within a + volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether the Secret + or its key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + description: serviceAccountToken is information about the + serviceAccountToken data to project + properties: + audience: + description: |- + audience is the intended audience of the token. A recipient of a token + must identify itself with an identifier specified in the audience of the + token, and otherwise should reject the token. The audience defaults to the + identifier of the apiserver. 
+ type: string + expirationSeconds: + description: |- + expirationSeconds is the requested duration of validity of the service + account token. As the token approaches expiration, the kubelet volume + plugin will proactively rotate the service account token. The kubelet will + start trying to rotate the token if the token is older than 80 percent of + its time to live or if the token is older than 24 hours.Defaults to 1 hour + and must be at least 10 minutes. + format: int64 + type: integer + path: + description: |- + path is the path relative to the mount point of the file to project the + token into. + type: string + required: + - path + type: object + type: object + type: array + gunicorn: + description: |- + Settings for the gunicorn server. + More info: https://docs.gunicorn.org/en/latest/settings.html + type: object + x-kubernetes-preserve-unknown-fields: true + ldapBindPassword: + description: |- + A Secret containing the value for the LDAP_BIND_PASSWORD setting. + More info: https://www.pgadmin.org/docs/pgadmin4/latest/ldap.html + properties: + key: + description: The key of the secret to select from. Must be + a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or its key must be + defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + settings: + description: |- + Settings for the pgAdmin server process. Keys should be uppercase and + values must be constants. + More info: https://www.pgadmin.org/docs/pgadmin4/latest/config_py.html + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + dataVolumeClaimSpec: + description: |- + Defines a PersistentVolumeClaim for pgAdmin data. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes + properties: + accessModes: + description: |- + accessModes contains the desired access modes the volume should have. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + description: |- + dataSource field can be used to specify either: + * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) + * An existing PVC (PersistentVolumeClaim) + If the provisioner or an external controller can support the specified data source, + it will create a new volume based on the contents of the specified data source. + When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, + and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. + If the namespace is specified, then dataSourceRef will not be copied to dataSource. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. 
+ type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + description: |- + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty + volume is desired. This may be any object from a non-empty API group (non + core object) or a PersistentVolumeClaim object. + When this field is specified, volume binding will only succeed if the type of + the specified object matches some installed volume populator or dynamic + provisioner. + This field will replace the functionality of the dataSource field and as such + if both fields are non-empty, they must have the same value. For backwards + compatibility, when namespace isn't specified in dataSourceRef, + both fields (dataSource and dataSourceRef) will be set to the same + value automatically if one of them is empty and the other is non-empty. + When namespace is specified in dataSourceRef, + dataSource isn't set to the same value and must be empty. + There are three important differences between dataSource and dataSourceRef: + * While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. + * While dataSource ignores disallowed values (dropping them), dataSourceRef + preserves all values, and generates an error if a disallowed value is + specified. + * While dataSource only allows local objects, dataSourceRef allows objects + in any namespaces. + (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. + (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + namespace: + description: |- + Namespace is the namespace of resource being referenced + Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. + (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + type: string + required: + - kind + - name + type: object + resources: + description: |- + resources represents the minimum resources the volume should have. + If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements + that are lower than previous value but must still be higher than capacity recorded in the + status field of the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. 
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + selector: + description: selector is a label query over volumes to consider + for binding. + properties: + matchExpressions: + description: matchExpressions is a list of label selector + requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the selector + applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + description: |- + storageClassName is the name of the StorageClass required by the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 + type: string + volumeAttributesClassName: + description: |- + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. + If specified, the CSI driver will create or update the volume with the attributes defined + in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, + it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass + will be applied to the claim but it's not allowed to reset this field to empty string once it is set. + If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass + will be set by the persistentvolume controller if it exists. + If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be + set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource + exists. + More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ + (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. 
+ type: string + volumeMode: + description: |- + volumeMode defines what type of volume is required by the claim. + Value of Filesystem is implied when not included in claim spec. + type: string + volumeName: + description: volumeName is the binding reference to the PersistentVolume + backing this claim. + type: string + type: object + image: + description: The image name to use for pgAdmin instance. + type: string + imagePullPolicy: + description: |- + ImagePullPolicy is used to determine when Kubernetes will attempt to + pull (download) container images. + More info: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy + enum: + - Always + - Never + - IfNotPresent + type: string + imagePullSecrets: + description: |- + The image pull secrets used to pull from a private registry. + Changing this value causes all running PGAdmin pods to restart. + https://k8s.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + items: + description: |- + LocalObjectReference contains enough information to let you locate the + referenced object inside the same namespace. + properties: + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + type: object + x-kubernetes-map-type: atomic + type: array + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + priorityClassName: + description: |- + Priority class name for the PGAdmin pod. Changing this + value causes PGAdmin pod to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + type: string + resources: + description: Resource requirements for the PGAdmin container. + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. 
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + serverGroups: + description: |- + ServerGroups for importing PostgresClusters to pgAdmin. + To create a pgAdmin with no selectors, leave this field empty. + A pgAdmin created with no `ServerGroups` will not automatically + add any servers through discovery. PostgresClusters can still be + added manually. + items: + properties: + name: + description: |- + The name for the ServerGroup in pgAdmin. + Must be unique in the pgAdmin's ServerGroups since it becomes the ServerGroup name in pgAdmin. + type: string + postgresClusterName: + description: PostgresClusterName selects one cluster to add + to pgAdmin by name. + type: string + postgresClusterSelector: + description: |- + PostgresClusterSelector selects clusters to dynamically add to pgAdmin by matching labels. + An empty selector like `{}` will select ALL clusters in the namespace. + properties: + matchExpressions: + description: matchExpressions is a list of label selector + requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the selector + applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + required: + - name + type: object + x-kubernetes-validations: + - message: exactly one of "postgresClusterName" or "postgresClusterSelector" + is required + rule: '[has(self.postgresClusterName),has(self.postgresClusterSelector)].exists_one(x,x)' + type: array + serviceName: + description: |- + ServiceName will be used as the name of a ClusterIP service pointing + to the pgAdmin pod and port. If the service already exists, PGO will + update the service. For more information about services reference + the Kubernetes and CrunchyData documentation. 
+ https://kubernetes.io/docs/concepts/services-networking/service/ + type: string + tolerations: + description: |- + Tolerations of the PGAdmin pod. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple using the matching operator . + properties: + effect: + description: |- + Effect indicates the taint effect to match. Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists and Equal. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + users: + description: |- + pgAdmin users that are managed via the PGAdmin spec. Users can still + be added via the pgAdmin GUI, but those users will not show up here. + items: + properties: + passwordRef: + description: A reference to the secret that holds the user's + password. + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or its key must + be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + role: + description: |- + Role determines whether the user has admin privileges or not. + Defaults to User. Valid options are Administrator and User. + enum: + - Administrator + - User + type: string + username: + description: |- + The username for User in pgAdmin. + Must be unique in the pgAdmin's users list. + type: string + required: + - passwordRef + - username + type: object + type: array + x-kubernetes-list-map-keys: + - username + x-kubernetes-list-type: map + required: + - dataVolumeClaimSpec + type: object + status: + description: PGAdminStatus defines the observed state of PGAdmin + properties: + conditions: + description: |- + conditions represent the observations of pgAdmin's current state. + Known .status.conditions.type is: "PersistentVolumeResizing" + items: + description: Condition contains details for one aspect of the current + state of this API Resource. 
+ properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + imageSHA: + description: ImageSHA represents the image SHA for the container running + pgAdmin. + type: string + majorVersion: + description: MajorVersion represents the major version of the running + pgAdmin. + type: integer + observedGeneration: + description: observedGeneration represents the .metadata.generation + on which the status was based. + format: int64 + minimum: 0 + type: integer + type: object + type: object + served: true + storage: true + subresources: + status: {} diff --git a/config/crd/bases/postgres-operator.crunchydata.com_pgupgrades.yaml b/config/crd/bases/postgres-operator.crunchydata.com_pgupgrades.yaml new file mode 100644 index 0000000000..4ae831cfc7 --- /dev/null +++ b/config/crd/bases/postgres-operator.crunchydata.com_pgupgrades.yaml @@ -0,0 +1,1210 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.4 + name: pgupgrades.postgres-operator.crunchydata.com +spec: + group: postgres-operator.crunchydata.com + names: + kind: PGUpgrade + listKind: PGUpgradeList + plural: pgupgrades + singular: pgupgrade + scope: Namespaced + versions: + - name: v1beta1 + schema: + openAPIV3Schema: + description: PGUpgrade is the Schema for the pgupgrades API + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: PGUpgradeSpec defines the desired state of PGUpgrade + properties: + affinity: + description: |- + Scheduling constraints of the PGUpgrade pod. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + properties: + nodeAffinity: + description: Describes node affinity scheduling rules for the + pod. + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node matches the corresponding matchExpressions; the + node(s) with the highest sum are the most preferred. + items: + description: |- + An empty preferred scheduling term matches all objects with implicit weight 0 + (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + properties: + preference: + description: A node selector term, associated with the + corresponding weight. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the selector + applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the selector + applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. 
If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + weight: + description: Weight associated with matching the corresponding + nodeSelectorTerm, in the range 1-100. + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to an update), the system + may or may not try to eventually evict the pod from its node. + properties: + nodeSelectorTerms: + description: Required. A list of node selector terms. + The terms are ORed. + items: + description: |- + A null or empty node selector term matches no objects. The requirements of + them are ANDed. + The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the selector + applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the selector + applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + type: array + x-kubernetes-list-type: atomic + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + description: Describes pod affinity scheduling rules (e.g. co-locate + this pod in the same node, zone, etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated + with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. 
The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + podAntiAffinity: + description: Describes pod anti-affinity scheduling rules (e.g. + avoid putting this pod in the same node, zone, etc. as some + other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the anti-affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling anti-affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated + with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the anti-affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the anti-affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + fromPostgresVersion: + description: The major version of PostgreSQL before the upgrade. + maximum: 17 + minimum: 11 + type: integer + image: + description: The image name to use for major PostgreSQL upgrades. + type: string + imagePullPolicy: + description: |- + ImagePullPolicy is used to determine when Kubernetes will attempt to + pull (download) container images. + More info: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy + enum: + - Always + - Never + - IfNotPresent + type: string + imagePullSecrets: + description: |- + The image pull secrets used to pull from a private registry. + Changing this value causes all running PGUpgrade pods to restart. + https://k8s.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + items: + description: |- + LocalObjectReference contains enough information to let you locate the + referenced object inside the same namespace. + properties: + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + type: object + x-kubernetes-map-type: atomic + type: array + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + postgresClusterName: + description: The name of the cluster to be updated + minLength: 1 + type: string + priorityClassName: + description: |- + Priority class name for the PGUpgrade pod. Changing this + value causes PGUpgrade pod to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + type: string + resources: + description: Resource requirements for the PGUpgrade container. + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. 
+ type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + toPostgresImage: + description: |- + The image name to use for PostgreSQL containers after upgrade. + When omitted, the value comes from an operator environment variable. + type: string + toPostgresVersion: + description: The major version of PostgreSQL to be upgraded to. + maximum: 17 + minimum: 11 + type: integer + tolerations: + description: |- + Tolerations of the PGUpgrade pod. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple using the matching operator . + properties: + effect: + description: |- + Effect indicates the taint effect to match. Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists and Equal. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + required: + - fromPostgresVersion + - postgresClusterName + - toPostgresVersion + type: object + status: + description: PGUpgradeStatus defines the observed state of PGUpgrade + properties: + conditions: + description: conditions represent the observations of PGUpgrade's + current state. + items: + description: Condition contains details for one aspect of the current + state of this API Resource. 
+ properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + observedGeneration: + description: observedGeneration represents the .metadata.generation + on which the status was based. + format: int64 + minimum: 0 + type: integer + type: object + type: object + served: true + storage: true + subresources: + status: {} diff --git a/config/crd/bases/postgres-operator.crunchydata.com_postgresclusters.yaml b/config/crd/bases/postgres-operator.crunchydata.com_postgresclusters.yaml new file mode 100644 index 0000000000..6f9dd40f02 --- /dev/null +++ b/config/crd/bases/postgres-operator.crunchydata.com_postgresclusters.yaml @@ -0,0 +1,17329 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.16.4 + name: postgresclusters.postgres-operator.crunchydata.com +spec: + group: postgres-operator.crunchydata.com + names: + kind: PostgresCluster + listKind: PostgresClusterList + plural: postgresclusters + singular: postgrescluster + scope: Namespaced + versions: + - name: v1beta1 + schema: + openAPIV3Schema: + description: PostgresCluster is the Schema for the postgresclusters API + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. 
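# --- Example (illustrative only, not part of the generated CRD) ---------------
# A minimal sketch of a PGUpgrade manifest exercising the required fields the
# schema above defines (postgresClusterName, fromPostgresVersion,
# toPostgresVersion). The resource name and cluster name are hypothetical, and
# optional fields such as image, imagePullPolicy, and tolerations are omitted.
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PGUpgrade
metadata:
  name: hippo-upgrade          # hypothetical name
spec:
  postgresClusterName: hippo   # hypothetical PostgresCluster to upgrade
  fromPostgresVersion: 15      # current major version (schema allows 11-17)
  toPostgresVersion: 17        # desired major version (schema allows 11-17)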
+ Cannot be updated. + In CamelCase. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + metadata: + type: object + spec: + description: PostgresClusterSpec defines the desired state of PostgresCluster + properties: + backups: + description: PostgreSQL backup configuration + properties: + pgbackrest: + description: pgBackRest archive configuration + properties: + configuration: + description: |- + Projected volumes containing custom pgBackRest configuration. These files are mounted + under "/etc/pgbackrest/conf.d" alongside any pgBackRest configuration generated by the + PostgreSQL Operator: + https://pgbackrest.org/configuration.html + items: + description: Projection that may be projected along with + other supported volume types + properties: + clusterTrustBundle: + description: |- + ClusterTrustBundle allows a pod to access the `.spec.trustBundle` field + of ClusterTrustBundle objects in an auto-updating file. + + Alpha, gated by the ClusterTrustBundleProjection feature gate. + + ClusterTrustBundle objects can either be selected by name, or by the + combination of signer name and a label selector. + + Kubelet performs aggressive normalization of the PEM contents written + into the pod filesystem. Esoteric PEM features such as inter-block + comments and block headers are stripped. Certificates are deduplicated. + The ordering of certificates within the file is arbitrary, and Kubelet + may change the order over time. + properties: + labelSelector: + description: |- + Select all ClusterTrustBundles that match this label selector. Only has + effect if signerName is set. Mutually-exclusive with name. If unset, + interpreted as "match nothing". If set but empty, interpreted as "match + everything". + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + name: + description: |- + Select a single ClusterTrustBundle by object name. Mutually-exclusive + with signerName and labelSelector. + type: string + optional: + description: |- + If true, don't block pod startup if the referenced ClusterTrustBundle(s) + aren't available. If using name, then the named ClusterTrustBundle is + allowed not to exist. 
If using signerName, then the combination of + signerName and labelSelector is allowed to match zero + ClusterTrustBundles. + type: boolean + path: + description: Relative path from the volume root + to write the bundle. + type: string + signerName: + description: |- + Select all ClusterTrustBundles that match this signer name. + Mutually-exclusive with name. The contents of all selected + ClusterTrustBundles will be unified and deduplicated. + type: string + required: + - path + type: object + configMap: + description: configMap information about the configMap + data to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + ConfigMap will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the ConfigMap, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional specify whether the ConfigMap + or its keys must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + description: downwardAPI information about the downwardAPI + data to project + properties: + items: + description: Items is a list of DownwardAPIVolume + file + items: + description: DownwardAPIVolumeFile represents + information to create the file containing the + pod field + properties: + fieldRef: + description: 'Required: Selects a field of + the pod: only annotations, labels, name, + namespace and uid are supported.' + properties: + apiVersion: + description: Version of the schema the + FieldPath is written in terms of, defaults + to "v1". + type: string + fieldPath: + description: Path of the field to select + in the specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + description: |- + Optional: mode bits used to set permissions on this file, must be an octal value + between 0000 and 0777 or a decimal value between 0 and 511. 
+ YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: 'Required: Path is the relative + path name of the file to be created. Must + not be absolute or contain the ''..'' path. + Must be utf-8 encoded. The first item of + the relative path must not start with ''..''' + type: string + resourceFieldRef: + description: |- + Selects a resource of the container: only resources limits and requests + (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. + properties: + containerName: + description: 'Container name: required + for volumes, optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format + of the exposed resources, defaults to + "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + required: + - path + type: object + type: array + x-kubernetes-list-type: atomic + type: object + secret: + description: secret information about the secret data + to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether the + Secret or its key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + description: serviceAccountToken is information about + the serviceAccountToken data to project + properties: + audience: + description: |- + audience is the intended audience of the token. A recipient of a token + must identify itself with an identifier specified in the audience of the + token, and otherwise should reject the token. The audience defaults to the + identifier of the apiserver. + type: string + expirationSeconds: + description: |- + expirationSeconds is the requested duration of validity of the service + account token. As the token approaches expiration, the kubelet volume + plugin will proactively rotate the service account token. The kubelet will + start trying to rotate the token if the token is older than 80 percent of + its time to live or if the token is older than 24 hours.Defaults to 1 hour + and must be at least 10 minutes. + format: int64 + type: integer + path: + description: |- + path is the path relative to the mount point of the file to project the + token into. + type: string + required: + - path + type: object + type: object + type: array + global: + additionalProperties: + type: string + description: |- + Global pgBackRest configuration settings. These settings are included in the "global" + section of the pgBackRest configuration generated by the PostgreSQL Operator, and then + mounted under "/etc/pgbackrest/conf.d": + https://pgbackrest.org/configuration.html + type: object + image: + description: |- + The image name to use for pgBackRest containers. Utilized to run + pgBackRest repository hosts and backups. The image may also be set using + the RELATED_IMAGE_PGBACKREST environment variable + type: string + jobs: + description: Jobs field allows configuration for all backup + jobs + properties: + affinity: + description: |- + Scheduling constraints of pgBackRest backup Job pods. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + properties: + nodeAffinity: + description: Describes node affinity scheduling rules + for the pod. + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node matches the corresponding matchExpressions; the + node(s) with the highest sum are the most preferred. + items: + description: |- + An empty preferred scheduling term matches all objects with implicit weight 0 + (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + properties: + preference: + description: A node selector term, associated + with the corresponding weight. + properties: + matchExpressions: + description: A list of node selector + requirements by node's labels. 
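# --- Example (illustrative only, not part of the generated CRD) ---------------
# A partial sketch of spec.backups.pgbackrest in a PostgresCluster, showing the
# "configuration" (projected ConfigMap/Secret volumes mounted under
# /etc/pgbackrest/conf.d) and "global" fields described above. The ConfigMap
# name and option value are hypothetical; backup repositories and the rest of
# the PostgresCluster spec are omitted here.
backups:
  pgbackrest:
    configuration:
      - configMap:
          name: custom-pgbackrest-conf   # hypothetical ConfigMap with extra pgBackRest settings
    global:
      repo1-retention-full: "2"          # example pgBackRest option; value is illustrative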
+ items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector + requirements by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + weight: + description: Weight associated with matching + the corresponding nodeSelectorTerm, in + the range 1-100. + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to an update), the system + may or may not try to eventually evict the pod from its node. + properties: + nodeSelectorTerms: + description: Required. A list of node selector + terms. The terms are ORed. + items: + description: |- + A null or empty node selector term matches no objects. The requirements of + them are ANDed. + The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + properties: + matchExpressions: + description: A list of node selector + requirements by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. 
+ type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector + requirements by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + type: array + x-kubernetes-list-type: atomic + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + description: Describes pod affinity scheduling rules + (e.g. co-locate this pod in the same node, zone, + etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched + WeightedPodAffinityTerm fields are added per-node + to find the most preferred node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. 
+ type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. 
+ Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. 
+ type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. 
+ Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + podAntiAffinity: + description: Describes pod anti-affinity scheduling + rules (e.g. avoid putting this pod in the same node, + zone, etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the anti-affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling anti-affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched + WeightedPodAffinityTerm fields are added per-node + to find the most preferred node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. 
+ type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. 
+ Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the anti-affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the anti-affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. 
+ type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. 
+ Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + priorityClassName: + description: |- + Priority class name for the pgBackRest backup Job pods. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + type: string + resources: + description: |- + Resource limits for backup jobs. Includes manual, scheduled and replica + create backups + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry + in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. 
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + tolerations: + description: |- + Tolerations of pgBackRest backup Job pods. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple using the matching operator . + properties: + effect: + description: |- + Effect indicates the taint effect to match. Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists and Equal. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + ttlSecondsAfterFinished: + description: |- + Limit the lifetime of a Job that has finished. + More info: https://kubernetes.io/docs/concepts/workloads/controllers/job + format: int32 + minimum: 60 + type: integer + type: object + manual: + description: Defines details for manual pgBackRest backup + Jobs + properties: + options: + description: |- + Command line options to include when running the pgBackRest backup command. + https://pgbackrest.org/command.html#command-backup + items: + type: string + type: array + repoName: + description: The name of the pgBackRest repo to run the + backup command against. + pattern: ^repo[1-4] + type: string + required: + - repoName + type: object + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + repoHost: + description: |- + Defines configuration for a pgBackRest dedicated repository host. This section is only + applicable if at least one "volume" (i.e. 
PVC-based) repository is defined in the "repos" + section, therefore enabling a dedicated repository host Deployment. + properties: + affinity: + description: |- + Scheduling constraints of the Dedicated repo host pod. + Changing this value causes repo host to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + properties: + nodeAffinity: + description: Describes node affinity scheduling rules + for the pod. + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node matches the corresponding matchExpressions; the + node(s) with the highest sum are the most preferred. + items: + description: |- + An empty preferred scheduling term matches all objects with implicit weight 0 + (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + properties: + preference: + description: A node selector term, associated + with the corresponding weight. + properties: + matchExpressions: + description: A list of node selector + requirements by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector + requirements by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + weight: + description: Weight associated with matching + the corresponding nodeSelectorTerm, in + the range 1-100. + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to an update), the system + may or may not try to eventually evict the pod from its node. + properties: + nodeSelectorTerms: + description: Required. A list of node selector + terms. The terms are ORed. + items: + description: |- + A null or empty node selector term matches no objects. The requirements of + them are ANDed. + The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + properties: + matchExpressions: + description: A list of node selector + requirements by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector + requirements by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + type: array + x-kubernetes-list-type: atomic + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + description: Describes pod affinity scheduling rules + (e.g. co-locate this pod in the same node, zone, + etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched + WeightedPodAffinityTerm fields are added per-node + to find the most preferred node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. 
The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + podAntiAffinity: + description: Describes pod anti-affinity scheduling + rules (e.g. avoid putting this pod in the same node, + zone, etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the anti-affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling anti-affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched + WeightedPodAffinityTerm fields are added per-node + to find the most preferred node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the anti-affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the anti-affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + priorityClassName: + description: |- + Priority class name for the pgBackRest repo host pod. Changing this value + causes PostgreSQL to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + type: string + resources: + description: Resource requirements for a pgBackRest repository + host + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry + in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + sshConfigMap: + description: |- + ConfigMap containing custom SSH configuration. + Deprecated: Repository hosts use mTLS for encryption, authentication, and authorization. + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + ConfigMap will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the ConfigMap, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. 
+ items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional specify whether the ConfigMap + or its keys must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + sshSecret: + description: |- + Secret containing custom SSH keys. + Deprecated: Repository hosts use mTLS for encryption, authentication, and authorization. + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether the Secret + or its key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + tolerations: + description: |- + Tolerations of a PgBackRest repo host pod. Changing this value causes a restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple using the matching operator . + properties: + effect: + description: |- + Effect indicates the taint effect to match. Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists and Equal. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + topologySpreadConstraints: + description: |- + Topology spread constraints of a Dedicated repo host pod. Changing this + value causes the repo host to restart. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ + items: + description: TopologySpreadConstraint specifies how + to spread matching pods among the given topology. + properties: + labelSelector: + description: |- + LabelSelector is used to find matching pods. + Pods that match this label selector are counted to determine the number of pods + in their corresponding topology domain. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select the pods over which + spreading will be calculated. The keys are used to lookup values from the + incoming pod labels, those key-value labels are ANDed with labelSelector + to select the group of existing pods over which spreading will be calculated + for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. + MatchLabelKeys cannot be set when LabelSelector isn't set. + Keys that don't exist in the incoming pod labels will + be ignored. A null or empty list means only match against labelSelector. + + This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + description: |- + MaxSkew describes the degree to which pods may be unevenly distributed. + When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference + between the number of matching pods in the target topology and the global minimum. + The global minimum is the minimum number of matching pods in an eligible domain + or zero if the number of eligible domains is less than MinDomains. + For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same + labelSelector spread as 2/2/1: + In this case, the global minimum is 1. + | zone1 | zone2 | zone3 | + | P P | P P | P | + - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; + scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) + violate MaxSkew(1). + - if MaxSkew is 2, incoming pod can be scheduled onto any zone. + When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence + to topologies that satisfy it. + It's a required field. Default value is 1 and 0 is not allowed. + format: int32 + type: integer + minDomains: + description: |- + MinDomains indicates a minimum number of eligible domains. + When the number of eligible domains with matching topology keys is less than minDomains, + Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. + And when the number of eligible domains with matching topology keys equals or greater than minDomains, + this value has no effect on scheduling. + As a result, when the number of eligible domains is less than minDomains, + scheduler won't schedule more than maxSkew Pods to those domains. + If value is nil, the constraint behaves as if MinDomains is equal to 1. + Valid values are integers greater than 0. + When value is not nil, WhenUnsatisfiable must be DoNotSchedule. + + For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same + labelSelector spread as 2/2/2: + | zone1 | zone2 | zone3 | + | P P | P P | P P | + The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. 
+ In this situation, new pod with the same labelSelector cannot be scheduled, + because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, + it will violate MaxSkew. + format: int32 + type: integer + nodeAffinityPolicy: + description: |- + NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector + when calculating pod topology spread skew. Options are: + - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. + - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. + + If this value is nil, the behavior is equivalent to the Honor policy. + This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. + type: string + nodeTaintsPolicy: + description: |- + NodeTaintsPolicy indicates how we will treat node taints when calculating + pod topology spread skew. Options are: + - Honor: nodes without taints, along with tainted nodes for which the incoming pod + has a toleration, are included. + - Ignore: node taints are ignored. All nodes are included. + + If this value is nil, the behavior is equivalent to the Ignore policy. + This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. + type: string + topologyKey: + description: |- + TopologyKey is the key of node labels. Nodes that have a label with this key + and identical values are considered to be in the same topology. + We consider each as a "bucket", and try to put balanced number + of pods into each bucket. + We define a domain as a particular instance of a topology. + Also, we define an eligible domain as a domain whose nodes meet the requirements of + nodeAffinityPolicy and nodeTaintsPolicy. + e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. + And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. + It's a required field. + type: string + whenUnsatisfiable: + description: |- + WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy + the spread constraint. + - DoNotSchedule (default) tells the scheduler not to schedule it. + - ScheduleAnyway tells the scheduler to schedule the pod in any location, + but giving higher precedence to topologies that would help reduce the + skew. + A constraint is considered "Unsatisfiable" for an incoming pod + if and only if every possible node assignment for that pod would violate + "MaxSkew" on some topology. + For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same + labelSelector spread as 3/1/1: + | zone1 | zone2 | zone3 | + | P P P | P | P | + If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled + to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies + MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler + won't make it *more* imbalanced. + It's a required field. + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + repos: + description: Defines a pgBackRest repository + items: + description: PGBackRestRepo represents a pgBackRest repository. Only + one of its members may be specified. 
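+                  # Illustrative example only (not part of the generated schema, and the
+                  # parent path is assumed): a PostgresCluster might define a single
+                  # volume-backed repository with scheduled backups under
+                  # spec.backups.pgbackrest.repos, for instance:
+                  #   - name: repo1
+                  #     schedules:
+                  #       full: "0 1 * * 0"
+                  #       differential: "0 1 * * 1-6"
+                  #     volume:
+                  #       volumeClaimSpec:
+                  #         accessModes:
+                  #         - ReadWriteOnce
+                  #         resources:
+                  #           requests:
+                  #             storage: 1Gi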
+ properties: + azure: + description: Represents a pgBackRest repository that + is created using Azure storage + properties: + container: + description: The Azure container utilized for the + repository + type: string + required: + - container + type: object + gcs: + description: Represents a pgBackRest repository that + is created using Google Cloud Storage + properties: + bucket: + description: The GCS bucket utilized for the repository + type: string + required: + - bucket + type: object + name: + description: The name of the repository + pattern: ^repo[1-4] + type: string + s3: + description: |- + RepoS3 represents a pgBackRest repository that is created using AWS S3 (or S3-compatible) + storage + properties: + bucket: + description: The S3 bucket utilized for the repository + type: string + endpoint: + description: A valid endpoint corresponding to the + specified region + type: string + region: + description: The region corresponding to the S3 + bucket + type: string + required: + - bucket + - endpoint + - region + type: object + schedules: + description: |- + Defines the schedules for the pgBackRest backups + Full, Differential and Incremental backup types are supported: + https://pgbackrest.org/user-guide.html#concept/backup + properties: + differential: + description: |- + Defines the Cron schedule for a differential pgBackRest backup. + Follows the standard Cron schedule syntax: + https://k8s.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax + minLength: 6 + type: string + full: + description: |- + Defines the Cron schedule for a full pgBackRest backup. + Follows the standard Cron schedule syntax: + https://k8s.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax + minLength: 6 + type: string + incremental: + description: |- + Defines the Cron schedule for an incremental pgBackRest backup. + Follows the standard Cron schedule syntax: + https://k8s.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax + minLength: 6 + type: string + type: object + volume: + description: Represents a pgBackRest repository that + is created using a PersistentVolumeClaim + properties: + volumeClaimSpec: + description: Defines a PersistentVolumeClaim spec + used to create and/or bind a volume + properties: + accessModes: + description: |- + accessModes contains the desired access modes the volume should have. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + description: |- + dataSource field can be used to specify either: + * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) + * An existing PVC (PersistentVolumeClaim) + If the provisioner or an external controller can support the specified data source, + it will create a new volume based on the contents of the specified data source. + When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, + and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. + If the namespace is specified, then dataSourceRef will not be copied to dataSource. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. 
+ type: string + kind: + description: Kind is the type of resource + being referenced + type: string + name: + description: Name is the name of resource + being referenced + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + description: |- + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty + volume is desired. This may be any object from a non-empty API group (non + core object) or a PersistentVolumeClaim object. + When this field is specified, volume binding will only succeed if the type of + the specified object matches some installed volume populator or dynamic + provisioner. + This field will replace the functionality of the dataSource field and as such + if both fields are non-empty, they must have the same value. For backwards + compatibility, when namespace isn't specified in dataSourceRef, + both fields (dataSource and dataSourceRef) will be set to the same + value automatically if one of them is empty and the other is non-empty. + When namespace is specified in dataSourceRef, + dataSource isn't set to the same value and must be empty. + There are three important differences between dataSource and dataSourceRef: + * While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. + * While dataSource ignores disallowed values (dropping them), dataSourceRef + preserves all values, and generates an error if a disallowed value is + specified. + * While dataSource only allows local objects, dataSourceRef allows objects + in any namespaces. + (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. + (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource + being referenced + type: string + name: + description: Name is the name of resource + being referenced + type: string + namespace: + description: |- + Namespace is the namespace of resource being referenced + Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. + (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + type: string + required: + - kind + - name + type: object + resources: + description: |- + resources represents the minimum resources the volume should have. + If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements + that are lower than previous value but must still be higher than capacity recorded in the + status field of the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. 
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + selector: + description: selector is a label query over + volumes to consider for binding. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + description: |- + storageClassName is the name of the StorageClass required by the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 + type: string + volumeAttributesClassName: + description: |- + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. + If specified, the CSI driver will create or update the volume with the attributes defined + in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, + it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass + will be applied to the claim but it's not allowed to reset this field to empty string once it is set. + If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass + will be set by the persistentvolume controller if it exists. + If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be + set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource + exists. + More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ + (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. 
+ type: string + volumeMode: + description: |- + volumeMode defines what type of volume is required by the claim. + Value of Filesystem is implied when not included in claim spec. + type: string + volumeName: + description: volumeName is the binding reference + to the PersistentVolume backing this claim. + type: string + type: object + x-kubernetes-validations: + - message: missing accessModes + rule: has(self.accessModes) && size(self.accessModes) + > 0 + - message: missing storage request + rule: has(self.resources) && has(self.resources.requests) + && has(self.resources.requests.storage) + required: + - volumeClaimSpec + type: object + required: + - name + type: object + minItems: 1 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + restore: + description: Defines details for performing an in-place restore + using pgBackRest + properties: + affinity: + description: |- + Scheduling constraints of the pgBackRest restore Job. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + properties: + nodeAffinity: + description: Describes node affinity scheduling rules + for the pod. + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node matches the corresponding matchExpressions; the + node(s) with the highest sum are the most preferred. + items: + description: |- + An empty preferred scheduling term matches all objects with implicit weight 0 + (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + properties: + preference: + description: A node selector term, associated + with the corresponding weight. + properties: + matchExpressions: + description: A list of node selector + requirements by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector + requirements by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. 
+ properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + weight: + description: Weight associated with matching + the corresponding nodeSelectorTerm, in + the range 1-100. + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to an update), the system + may or may not try to eventually evict the pod from its node. + properties: + nodeSelectorTerms: + description: Required. A list of node selector + terms. The terms are ORed. + items: + description: |- + A null or empty node selector term matches no objects. The requirements of + them are ANDed. + The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + properties: + matchExpressions: + description: A list of node selector + requirements by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector + requirements by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that + the selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. 
If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + type: array + x-kubernetes-list-type: atomic + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + description: Describes pod affinity scheduling rules + (e.g. co-locate this pod in the same node, zone, + etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched + WeightedPodAffinityTerm fields are added per-node + to find the most preferred node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + podAntiAffinity: + description: Describes pod anti-affinity scheduling + rules (e.g. avoid putting this pod in the same node, + zone, etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the anti-affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling anti-affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched + WeightedPodAffinityTerm fields are added per-node + to find the most preferred node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is + a list of label selector requirements. + The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the anti-affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the anti-affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + clusterName: + description: |- + The name of an existing PostgresCluster to use as the data source for the new PostgresCluster. + Defaults to the name of the PostgresCluster being created if not provided. + type: string + clusterNamespace: + description: |- + The namespace of the cluster specified as the data source using the clusterName field. + Defaults to the namespace of the PostgresCluster being created if not provided. + type: string + enabled: + default: false + description: Whether or not in-place pgBackRest restores + are enabled for this PostgresCluster. + type: boolean + options: + description: |- + Command line options to include when running the pgBackRest restore command. + https://pgbackrest.org/command.html#command-restore + items: + type: string + type: array + priorityClassName: + description: |- + Priority class name for the pgBackRest restore Job pod. Changing this + value causes PostgreSQL to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + type: string + repoName: + description: |- + The name of the pgBackRest repo within the source PostgresCluster that contains the backups + that should be utilized to perform a pgBackRest restore when initializing the data source + for the new PostgresCluster. + pattern: ^repo[1-4] + type: string + resources: + description: Resource requirements for the pgBackRest + restore Job. + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry + in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. 
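# A minimal sketch of how the in-place pgBackRest restore fields described above might be
# set on a PostgresCluster; the spec.backups.pgbackrest.restore path, the repo name, and the
# restore options are illustrative assumptions, not values taken from this CRD:
backups:
  pgbackrest:
    restore:
      enabled: true
      repoName: repo1                         # must match the ^repo[1-4] pattern
      options:
      - --type=time
      - --target="2024-01-01 00:00:00+00"     # hypothetical point-in-time target
      priorityClassName: restore-priority     # hypothetical PriorityClass name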
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + tolerations: + description: |- + Tolerations of the pgBackRest restore Job. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple using the matching operator . + properties: + effect: + description: |- + Effect indicates the taint effect to match. Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists and Equal. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + required: + - enabled + - repoName + type: object + sidecars: + description: Configuration for pgBackRest sidecar containers + properties: + pgbackrest: + description: Defines the configuration for the pgBackRest + sidecar container + properties: + resources: + description: Resource requirements for a sidecar container + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry + in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. 
+ type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + type: object + pgbackrestConfig: + description: Defines the configuration for the pgBackRest + config sidecar container + properties: + resources: + description: Resource requirements for a sidecar container + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry + in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. 
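# A hedged sketch of resource requests and limits for the pgBackRest sidecar containers
# described above; the spec.backups.pgbackrest.sidecars path and all quantities are
# illustrative assumptions:
sidecars:
  pgbackrest:
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        memory: 256Mi
  pgbackrestConfig:
    resources:
      requests:
        cpu: 50m
        memory: 64Mi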
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + type: object + type: object + required: + - repos + type: object + snapshots: + description: VolumeSnapshot configuration + properties: + volumeSnapshotClassName: + description: Name of the VolumeSnapshotClass that should be + used by VolumeSnapshots + minLength: 1 + type: string + required: + - volumeSnapshotClassName + type: object + type: object + config: + properties: + files: + items: + description: Projection that may be projected along with other + supported volume types + properties: + clusterTrustBundle: + description: |- + ClusterTrustBundle allows a pod to access the `.spec.trustBundle` field + of ClusterTrustBundle objects in an auto-updating file. + + Alpha, gated by the ClusterTrustBundleProjection feature gate. + + ClusterTrustBundle objects can either be selected by name, or by the + combination of signer name and a label selector. + + Kubelet performs aggressive normalization of the PEM contents written + into the pod filesystem. Esoteric PEM features such as inter-block + comments and block headers are stripped. Certificates are deduplicated. + The ordering of certificates within the file is arbitrary, and Kubelet + may change the order over time. + properties: + labelSelector: + description: |- + Select all ClusterTrustBundles that match this label selector. Only has + effect if signerName is set. Mutually-exclusive with name. If unset, + interpreted as "match nothing". If set but empty, interpreted as "match + everything". + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + name: + description: |- + Select a single ClusterTrustBundle by object name. Mutually-exclusive + with signerName and labelSelector. + type: string + optional: + description: |- + If true, don't block pod startup if the referenced ClusterTrustBundle(s) + aren't available. If using name, then the named ClusterTrustBundle is + allowed not to exist. If using signerName, then the combination of + signerName and labelSelector is allowed to match zero + ClusterTrustBundles. 
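# The snapshots block described above only requires the name of an existing
# VolumeSnapshotClass; a sketch assuming the spec.backups.snapshots path and an
# illustrative class name:
snapshots:
  volumeSnapshotClassName: csi-snapclass      # hypothetical VolumeSnapshotClass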
+ type: boolean + path: + description: Relative path from the volume root to write + the bundle. + type: string + signerName: + description: |- + Select all ClusterTrustBundles that match this signer name. + Mutually-exclusive with name. The contents of all selected + ClusterTrustBundles will be unified and deduplicated. + type: string + required: + - path + type: object + configMap: + description: configMap information about the configMap data + to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + ConfigMap will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the ConfigMap, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within a + volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional specify whether the ConfigMap + or its keys must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + description: downwardAPI information about the downwardAPI + data to project + properties: + items: + description: Items is a list of DownwardAPIVolume file + items: + description: DownwardAPIVolumeFile represents information + to create the file containing the pod field + properties: + fieldRef: + description: 'Required: Selects a field of the + pod: only annotations, labels, name, namespace + and uid are supported.' + properties: + apiVersion: + description: Version of the schema the FieldPath + is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in + the specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + description: |- + Optional: mode bits used to set permissions on this file, must be an octal value + between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. 
+ This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: 'Required: Path is the relative + path name of the file to be created. Must not + be absolute or contain the ''..'' path. Must + be utf-8 encoded. The first item of the relative + path must not start with ''..''' + type: string + resourceFieldRef: + description: |- + Selects a resource of the container: only resources limits and requests + (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. + properties: + containerName: + description: 'Container name: required for + volumes, optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format of + the exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + required: + - path + type: object + type: array + x-kubernetes-list-type: atomic + type: object + secret: + description: secret information about the secret data to + project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within a + volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
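# The config.files entries described above are standard Kubernetes volume projections
# (configMap, secret, downwardAPI, serviceAccountToken, clusterTrustBundle). A sketch
# assuming the spec.config.files path; the ConfigMap and Secret names, keys, and paths
# are illustrative:
config:
  files:
  - configMap:
      name: extra-postgres-config             # hypothetical ConfigMap
      optional: true
      items:
      - key: extra.conf
        path: extra.conf
        mode: 0440                            # octal; decimal 288 is equivalent
  - secret:
      name: extra-postgres-secret             # hypothetical Secret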
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether the Secret + or its key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + description: serviceAccountToken is information about the + serviceAccountToken data to project + properties: + audience: + description: |- + audience is the intended audience of the token. A recipient of a token + must identify itself with an identifier specified in the audience of the + token, and otherwise should reject the token. The audience defaults to the + identifier of the apiserver. + type: string + expirationSeconds: + description: |- + expirationSeconds is the requested duration of validity of the service + account token. As the token approaches expiration, the kubelet volume + plugin will proactively rotate the service account token. The kubelet will + start trying to rotate the token if the token is older than 80 percent of + its time to live or if the token is older than 24 hours.Defaults to 1 hour + and must be at least 10 minutes. + format: int64 + type: integer + path: + description: |- + path is the path relative to the mount point of the file to project the + token into. + type: string + required: + - path + type: object + type: object + type: array + type: object + customReplicationTLSSecret: + description: |- + The secret containing the replication client certificates and keys for + secure connections to the PostgreSQL server. It will need to contain the + client TLS certificate, TLS key and the Certificate Authority certificate + with the data keys set to tls.crt, tls.key and ca.crt, respectively. + NOTE: If CustomReplicationClientTLSSecret is provided, CustomTLSSecret + MUST be provided and the ca.crt provided must be the same. + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. 
Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether the Secret or its + key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + customTLSSecret: + description: |- + The secret containing the Certificates and Keys to encrypt PostgreSQL + traffic will need to contain the server TLS certificate, TLS key and the + Certificate Authority certificate with the data keys set to tls.crt, + tls.key and ca.crt, respectively. It will then be mounted as a volume + projection to the '/pgconf/tls' directory. For more information on + Kubernetes secret projections, please see + https://k8s.io/docs/concepts/configuration/secret/#projection-of-secret-keys-to-specific-paths + NOTE: If CustomTLSSecret is provided, CustomReplicationClientTLSSecret + MUST be provided and the ca.crt provided must be the same. + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether the Secret or its + key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + dataSource: + description: Specifies a data source for bootstrapping the PostgreSQL + cluster. + properties: + pgbackrest: + description: |- + Defines a pgBackRest cloud-based data source that can be used to pre-populate the + PostgreSQL data directory for a new PostgreSQL cluster using a pgBackRest restore. 
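# Per the descriptions above, customTLSSecret and customReplicationTLSSecret must be
# provided together and must share the same ca.crt; each Secret needs tls.crt, tls.key,
# and ca.crt keys. A sketch with hypothetical Secret names:
customTLSSecret:
  name: hippo-server-cert                     # hypothetical Secret with tls.crt, tls.key, ca.crt
customReplicationTLSSecret:
  name: hippo-replication-cert                # hypothetical Secret; ca.crt must match customTLSSecret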
+ The PGBackRest field is incompatible with the PostgresCluster field: only one + data source can be used for pre-populating a new PostgreSQL cluster + properties: + affinity: + description: |- + Scheduling constraints of the pgBackRest restore Job. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + properties: + nodeAffinity: + description: Describes node affinity scheduling rules + for the pod. + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node matches the corresponding matchExpressions; the + node(s) with the highest sum are the most preferred. + items: + description: |- + An empty preferred scheduling term matches all objects with implicit weight 0 + (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + properties: + preference: + description: A node selector term, associated + with the corresponding weight. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + weight: + description: Weight associated with matching + the corresponding nodeSelectorTerm, in the + range 1-100. + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to an update), the system + may or may not try to eventually evict the pod from its node. + properties: + nodeSelectorTerms: + description: Required. A list of node selector + terms. The terms are ORed. + items: + description: |- + A null or empty node selector term matches no objects. The requirements of + them are ANDed. + The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. 
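# A sketch of a required node affinity term for the pgBackRest restore Job, following the
# nodeSelectorTerms schema described above; the spec.dataSource.pgbackrest.affinity path and
# the zone label value are illustrative assumptions:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east-1a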
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + type: array + x-kubernetes-list-type: atomic + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + description: Describes pod affinity scheduling rules (e.g. + co-locate this pod in the same node, zone, etc. as some + other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred + node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. 
The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
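# The namespaces and namespaceSelector fields described above are unioned, and leaving both
# unset limits the term to the pod's own namespace. A sketch of a weighted (preferred) pod
# affinity term that spans namespaces selected by label; the weight must fall in 1-100, and
# all label keys and values are illustrative:
podAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  - weight: 50
    podAffinityTerm:
      topologyKey: topology.kubernetes.io/zone
      namespaceSelector:
        matchLabels:
          team: data
      labelSelector:
        matchLabels:
          app: reporting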
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + podAntiAffinity: + description: Describes pod anti-affinity scheduling rules + (e.g. avoid putting this pod in the same node, zone, + etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the anti-affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling anti-affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred + node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the anti-affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the anti-affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. 
+ null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + configuration: + description: |- + Projected volumes containing custom pgBackRest configuration. These files are mounted + under "/etc/pgbackrest/conf.d" alongside any pgBackRest configuration generated by the + PostgreSQL Operator: + https://pgbackrest.org/configuration.html + items: + description: Projection that may be projected along with + other supported volume types + properties: + clusterTrustBundle: + description: |- + ClusterTrustBundle allows a pod to access the `.spec.trustBundle` field + of ClusterTrustBundle objects in an auto-updating file. + + Alpha, gated by the ClusterTrustBundleProjection feature gate. + + ClusterTrustBundle objects can either be selected by name, or by the + combination of signer name and a label selector. + + Kubelet performs aggressive normalization of the PEM contents written + into the pod filesystem. Esoteric PEM features such as inter-block + comments and block headers are stripped. Certificates are deduplicated. + The ordering of certificates within the file is arbitrary, and Kubelet + may change the order over time. + properties: + labelSelector: + description: |- + Select all ClusterTrustBundles that match this label selector. Only has + effect if signerName is set. Mutually-exclusive with name. If unset, + interpreted as "match nothing". If set but empty, interpreted as "match + everything". + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + name: + description: |- + Select a single ClusterTrustBundle by object name. 
Mutually-exclusive + with signerName and labelSelector. + type: string + optional: + description: |- + If true, don't block pod startup if the referenced ClusterTrustBundle(s) + aren't available. If using name, then the named ClusterTrustBundle is + allowed not to exist. If using signerName, then the combination of + signerName and labelSelector is allowed to match zero + ClusterTrustBundles. + type: boolean + path: + description: Relative path from the volume root + to write the bundle. + type: string + signerName: + description: |- + Select all ClusterTrustBundles that match this signer name. + Mutually-exclusive with name. The contents of all selected + ClusterTrustBundles will be unified and deduplicated. + type: string + required: + - path + type: object + configMap: + description: configMap information about the configMap + data to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + ConfigMap will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the ConfigMap, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional specify whether the ConfigMap + or its keys must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + description: downwardAPI information about the downwardAPI + data to project + properties: + items: + description: Items is a list of DownwardAPIVolume + file + items: + description: DownwardAPIVolumeFile represents + information to create the file containing the + pod field + properties: + fieldRef: + description: 'Required: Selects a field of + the pod: only annotations, labels, name, + namespace and uid are supported.' + properties: + apiVersion: + description: Version of the schema the + FieldPath is written in terms of, defaults + to "v1". + type: string + fieldPath: + description: Path of the field to select + in the specified API version. 
+ type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + description: |- + Optional: mode bits used to set permissions on this file, must be an octal value + between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: 'Required: Path is the relative + path name of the file to be created. Must + not be absolute or contain the ''..'' path. + Must be utf-8 encoded. The first item of + the relative path must not start with ''..''' + type: string + resourceFieldRef: + description: |- + Selects a resource of the container: only resources limits and requests + (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. + properties: + containerName: + description: 'Container name: required + for volumes, optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format + of the exposed resources, defaults to + "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + required: + - path + type: object + type: array + x-kubernetes-list-type: atomic + type: object + secret: + description: secret information about the secret data + to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether the + Secret or its key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + description: serviceAccountToken is information about + the serviceAccountToken data to project + properties: + audience: + description: |- + audience is the intended audience of the token. A recipient of a token + must identify itself with an identifier specified in the audience of the + token, and otherwise should reject the token. The audience defaults to the + identifier of the apiserver. + type: string + expirationSeconds: + description: |- + expirationSeconds is the requested duration of validity of the service + account token. As the token approaches expiration, the kubelet volume + plugin will proactively rotate the service account token. The kubelet will + start trying to rotate the token if the token is older than 80 percent of + its time to live or if the token is older than 24 hours.Defaults to 1 hour + and must be at least 10 minutes. + format: int64 + type: integer + path: + description: |- + path is the path relative to the mount point of the file to project the + token into. + type: string + required: + - path + type: object + type: object + type: array + global: + additionalProperties: + type: string + description: |- + Global pgBackRest configuration settings. These settings are included in the "global" + section of the pgBackRest configuration generated by the PostgreSQL Operator, and then + mounted under "/etc/pgbackrest/conf.d": + https://pgbackrest.org/configuration.html + type: object + options: + description: |- + Command line options to include when running the pgBackRest restore command. + https://pgbackrest.org/command.html#command-restore + items: + type: string + type: array + priorityClassName: + description: |- + Priority class name for the pgBackRest restore Job pod. Changing this + value causes PostgreSQL to restart. 
+ More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + type: string + repo: + description: Defines a pgBackRest repository + properties: + azure: + description: Represents a pgBackRest repository that is + created using Azure storage + properties: + container: + description: The Azure container utilized for the + repository + type: string + required: + - container + type: object + gcs: + description: Represents a pgBackRest repository that is + created using Google Cloud Storage + properties: + bucket: + description: The GCS bucket utilized for the repository + type: string + required: + - bucket + type: object + name: + description: The name of the repository + pattern: ^repo[1-4] + type: string + s3: + description: |- + RepoS3 represents a pgBackRest repository that is created using AWS S3 (or S3-compatible) + storage + properties: + bucket: + description: The S3 bucket utilized for the repository + type: string + endpoint: + description: A valid endpoint corresponding to the + specified region + type: string + region: + description: The region corresponding to the S3 bucket + type: string + required: + - bucket + - endpoint + - region + type: object + schedules: + description: |- + Defines the schedules for the pgBackRest backups + Full, Differential and Incremental backup types are supported: + https://pgbackrest.org/user-guide.html#concept/backup + properties: + differential: + description: |- + Defines the Cron schedule for a differential pgBackRest backup. + Follows the standard Cron schedule syntax: + https://k8s.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax + minLength: 6 + type: string + full: + description: |- + Defines the Cron schedule for a full pgBackRest backup. + Follows the standard Cron schedule syntax: + https://k8s.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax + minLength: 6 + type: string + incremental: + description: |- + Defines the Cron schedule for an incremental pgBackRest backup. + Follows the standard Cron schedule syntax: + https://k8s.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax + minLength: 6 + type: string + type: object + volume: + description: Represents a pgBackRest repository that is + created using a PersistentVolumeClaim + properties: + volumeClaimSpec: + description: Defines a PersistentVolumeClaim spec + used to create and/or bind a volume + properties: + accessModes: + description: |- + accessModes contains the desired access modes the volume should have. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + description: |- + dataSource field can be used to specify either: + * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) + * An existing PVC (PersistentVolumeClaim) + If the provisioner or an external controller can support the specified data source, + it will create a new volume based on the contents of the specified data source. + When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, + and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. + If the namespace is specified, then dataSourceRef will not be copied to dataSource. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. 
+ If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource + being referenced + type: string + name: + description: Name is the name of resource + being referenced + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + description: |- + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty + volume is desired. This may be any object from a non-empty API group (non + core object) or a PersistentVolumeClaim object. + When this field is specified, volume binding will only succeed if the type of + the specified object matches some installed volume populator or dynamic + provisioner. + This field will replace the functionality of the dataSource field and as such + if both fields are non-empty, they must have the same value. For backwards + compatibility, when namespace isn't specified in dataSourceRef, + both fields (dataSource and dataSourceRef) will be set to the same + value automatically if one of them is empty and the other is non-empty. + When namespace is specified in dataSourceRef, + dataSource isn't set to the same value and must be empty. + There are three important differences between dataSource and dataSourceRef: + * While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. + * While dataSource ignores disallowed values (dropping them), dataSourceRef + preserves all values, and generates an error if a disallowed value is + specified. + * While dataSource only allows local objects, dataSourceRef allows objects + in any namespaces. + (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. + (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource + being referenced + type: string + name: + description: Name is the name of resource + being referenced + type: string + namespace: + description: |- + Namespace is the namespace of resource being referenced + Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. + (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + type: string + required: + - kind + - name + type: object + resources: + description: |- + resources represents the minimum resources the volume should have. + If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements + that are lower than previous value but must still be higher than capacity recorded in the + status field of the claim. 
+ More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + selector: + description: selector is a label query over volumes + to consider for binding. + properties: + matchExpressions: + description: matchExpressions is a list of + label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + description: |- + storageClassName is the name of the StorageClass required by the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 + type: string + volumeAttributesClassName: + description: |- + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. + If specified, the CSI driver will create or update the volume with the attributes defined + in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, + it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass + will be applied to the claim but it's not allowed to reset this field to empty string once it is set. + If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass + will be set by the persistentvolume controller if it exists. 
+ If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be + set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource + exists. + More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ + (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + type: string + volumeMode: + description: |- + volumeMode defines what type of volume is required by the claim. + Value of Filesystem is implied when not included in claim spec. + type: string + volumeName: + description: volumeName is the binding reference + to the PersistentVolume backing this claim. + type: string + type: object + x-kubernetes-validations: + - message: missing accessModes + rule: has(self.accessModes) && size(self.accessModes) + > 0 + - message: missing storage request + rule: has(self.resources) && has(self.resources.requests) + && has(self.resources.requests.storage) + required: + - volumeClaimSpec + type: object + required: + - name + type: object + resources: + description: Resource requirements for the pgBackRest restore + Job. + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + stanza: + default: db + description: |- + The name of an existing pgBackRest stanza to use as the data source for the new PostgresCluster. + Defaults to `db` if not provided. + type: string + tolerations: + description: |- + Tolerations of the pgBackRest restore Job. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple using the matching operator . + properties: + effect: + description: |- + Effect indicates the taint effect to match. 
Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists and Equal. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + required: + - repo + - stanza + type: object + postgresCluster: + description: |- + Defines a pgBackRest data source that can be used to pre-populate the PostgreSQL data + directory for a new PostgreSQL cluster using a pgBackRest restore. + The PGBackRest field is incompatible with the PostgresCluster field: only one + data source can be used for pre-populating a new PostgreSQL cluster + properties: + affinity: + description: |- + Scheduling constraints of the pgBackRest restore Job. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + properties: + nodeAffinity: + description: Describes node affinity scheduling rules + for the pod. + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node matches the corresponding matchExpressions; the + node(s) with the highest sum are the most preferred. + items: + description: |- + An empty preferred scheduling term matches all objects with implicit weight 0 + (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + properties: + preference: + description: A node selector term, associated + with the corresponding weight. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. 
If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + weight: + description: Weight associated with matching + the corresponding nodeSelectorTerm, in the + range 1-100. + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to an update), the system + may or may not try to eventually evict the pod from its node. + properties: + nodeSelectorTerms: + description: Required. A list of node selector + terms. The terms are ORed. + items: + description: |- + A null or empty node selector term matches no objects. The requirements of + them are ANDed. + The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + type: array + x-kubernetes-list-type: atomic + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + description: Describes pod affinity scheduling rules (e.g. + co-locate this pod in the same node, zone, etc. as some + other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred + node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. 
This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + podAntiAffinity: + description: Describes pod anti-affinity scheduling rules + (e.g. avoid putting this pod in the same node, zone, + etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the anti-affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling anti-affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred + node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. 
If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the anti-affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the anti-affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. 
This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + clusterName: + description: |- + The name of an existing PostgresCluster to use as the data source for the new PostgresCluster. + Defaults to the name of the PostgresCluster being created if not provided. + type: string + clusterNamespace: + description: |- + The namespace of the cluster specified as the data source using the clusterName field. + Defaults to the namespace of the PostgresCluster being created if not provided. + type: string + options: + description: |- + Command line options to include when running the pgBackRest restore command. + https://pgbackrest.org/command.html#command-restore + items: + type: string + type: array + priorityClassName: + description: |- + Priority class name for the pgBackRest restore Job pod. Changing this + value causes PostgreSQL to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + type: string + repoName: + description: |- + The name of the pgBackRest repo within the source PostgresCluster that contains the backups + that should be utilized to perform a pgBackRest restore when initializing the data source + for the new PostgresCluster. + pattern: ^repo[1-4] + type: string + resources: + description: Resource requirements for the pgBackRest restore + Job. + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. 
+ type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + tolerations: + description: |- + Tolerations of the pgBackRest restore Job. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple using the matching operator . + properties: + effect: + description: |- + Effect indicates the taint effect to match. Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists and Equal. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + required: + - repoName + type: object + volumes: + description: Defines any existing volumes to reuse for this PostgresCluster. + properties: + pgBackRestVolume: + description: |- + Defines the existing pgBackRest repo volume and directory to use in the + current PostgresCluster. + properties: + directory: + description: |- + The existing directory. When not set, a move Job is not created for the + associated volume. + type: string + pvcName: + description: The existing PVC name. + type: string + required: + - pvcName + type: object + pgDataVolume: + description: |- + Defines the existing pgData volume and directory to use in the current + PostgresCluster. 
+ properties: + directory: + description: |- + The existing directory. When not set, a move Job is not created for the + associated volume. + type: string + pvcName: + description: The existing PVC name. + type: string + required: + - pvcName + type: object + pgWALVolume: + description: |- + Defines the existing pg_wal volume and directory to use in the current + PostgresCluster. Note that a defined pg_wal volume MUST be accompanied by + a pgData volume. + properties: + directory: + description: |- + The existing directory. When not set, a move Job is not created for the + associated volume. + type: string + pvcName: + description: The existing PVC name. + type: string + required: + - pvcName + type: object + type: object + type: object + databaseInitSQL: + description: |- + DatabaseInitSQL defines a ConfigMap containing custom SQL that will + be run after the cluster is initialized. This ConfigMap must be in the same + namespace as the cluster. + properties: + key: + description: Key is the ConfigMap data key that points to a SQL + string + type: string + name: + description: Name is the name of a ConfigMap + type: string + required: + - key + - name + type: object + disableDefaultPodScheduling: + description: |- + Whether or not the PostgreSQL cluster should use the defined default + scheduling constraints. If the field is unset or false, the default + scheduling constraints will be used in addition to any custom constraints + provided. + type: boolean + image: + description: |- + The image name to use for PostgreSQL containers. When omitted, the value + comes from an operator environment variable. For standard PostgreSQL images, + the format is RELATED_IMAGE_POSTGRES_{postgresVersion}, + e.g. RELATED_IMAGE_POSTGRES_13. For PostGIS enabled PostgreSQL images, + the format is RELATED_IMAGE_POSTGRES_{postgresVersion}_GIS_{postGISVersion}, + e.g. RELATED_IMAGE_POSTGRES_13_GIS_3.1. + type: string + imagePullPolicy: + description: |- + ImagePullPolicy is used to determine when Kubernetes will attempt to + pull (download) container images. + More info: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy + enum: + - Always + - Never + - IfNotPresent + type: string + imagePullSecrets: + description: |- + The image pull secrets used to pull from a private registry + Changing this value causes all running pods to restart. + https://k8s.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + items: + description: |- + LocalObjectReference contains enough information to let you locate the + referenced object inside the same namespace. + properties: + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + type: object + x-kubernetes-map-type: atomic + type: array + instances: + description: |- + Specifies one or more sets of PostgreSQL pods that replicate data for + this cluster. + items: + properties: + affinity: + description: |- + Scheduling constraints of a PostgreSQL pod. Changing this value causes + PostgreSQL to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + properties: + nodeAffinity: + description: Describes node affinity scheduling rules for + the pod. 
+ properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node matches the corresponding matchExpressions; the + node(s) with the highest sum are the most preferred. + items: + description: |- + An empty preferred scheduling term matches all objects with implicit weight 0 + (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + properties: + preference: + description: A node selector term, associated + with the corresponding weight. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + weight: + description: Weight associated with matching the + corresponding nodeSelectorTerm, in the range + 1-100. 
+ format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to an update), the system + may or may not try to eventually evict the pod from its node. + properties: + nodeSelectorTerms: + description: Required. A list of node selector terms. + The terms are ORed. + items: + description: |- + A null or empty node selector term matches no objects. The requirements of + them are ANDed. + The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + type: array + x-kubernetes-list-type: atomic + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + description: Describes pod affinity scheduling rules (e.g. + co-locate this pod in the same node, zone, etc. as some + other pod(s)). 
+ properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred + node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated + with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. 
+ format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list of + label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list of + label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. 
+ type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + podAntiAffinity: + description: Describes pod anti-affinity scheduling rules + (e.g. avoid putting this pod in the same node, zone, etc. + as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the anti-affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling anti-affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred + node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated + with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. 
+ This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. 
+ type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the anti-affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the anti-affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list of + label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. 
+ This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list of + label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that + the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. 
+ type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + containers: + description: |- + Custom sidecars for PostgreSQL instance pods. Changing this value causes + PostgreSQL to restart. + items: + description: A single application container that you want + to run within a pod. + properties: + args: + description: |- + Arguments to the entrypoint. + The container image's CMD is used if this is not provided. + Variable references $(VAR_NAME) are expanded using the container's environment. If a variable + cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced + to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will + produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless + of whether the variable exists or not. Cannot be updated. + More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell + items: + type: string + type: array + x-kubernetes-list-type: atomic + command: + description: |- + Entrypoint array. Not executed within a shell. + The container image's ENTRYPOINT is used if this is not provided. + Variable references $(VAR_NAME) are expanded using the container's environment. If a variable + cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced + to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will + produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless + of whether the variable exists or not. Cannot be updated. + More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell + items: + type: string + type: array + x-kubernetes-list-type: atomic + env: + description: |- + List of environment variables to set in the container. + Cannot be updated. + items: + description: EnvVar represents an environment variable + present in a Container. + properties: + name: + description: Name of the environment variable. Must + be a C_IDENTIFIER. + type: string + value: + description: |- + Variable references $(VAR_NAME) are expanded + using the previously defined environment variables in the container and + any service environment variables. If a variable cannot be resolved, + the reference in the input string will be unchanged. Double $$ are reduced + to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. + "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". + Escaped references will never be expanded, regardless of whether the variable + exists or not. + Defaults to "". + type: string + valueFrom: + description: Source for the environment variable's + value. Cannot be used if value is not empty. + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + fieldRef: + description: |- + Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, + spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. + properties: + apiVersion: + description: Version of the schema the FieldPath + is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select + in the specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + resourceFieldRef: + description: |- + Selects a resource of the container: only resources limits and requests + (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. + properties: + containerName: + description: 'Container name: required for + volumes, optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format + of the exposed resources, defaults to + "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + secretKeyRef: + description: Selects a key of a secret in the + pod's namespace + properties: + key: + description: The key of the secret to select + from. Must be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + envFrom: + description: |- + List of sources to populate environment variables in the container. + The keys defined within a source must be a C_IDENTIFIER. All invalid keys + will be reported as an event when the container is starting. When a key exists in multiple + sources, the value associated with the last source will take precedence. + Values defined by an Env with a duplicate key will take precedence. + Cannot be updated. + items: + description: EnvFromSource represents the source of + a set of ConfigMaps + properties: + configMapRef: + description: The ConfigMap to select from + properties: + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap must + be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + prefix: + description: An optional identifier to prepend to + each key in the ConfigMap. Must be a C_IDENTIFIER. + type: string + secretRef: + description: The Secret to select from + properties: + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret must + be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + type: object + type: array + x-kubernetes-list-type: atomic + image: + description: |- + Container image name. + More info: https://kubernetes.io/docs/concepts/containers/images + This field is optional to allow higher level config management to default or override + container images in workload controllers like Deployments and StatefulSets. + type: string + imagePullPolicy: + description: |- + Image pull policy. + One of Always, Never, IfNotPresent. + Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. + Cannot be updated. + More info: https://kubernetes.io/docs/concepts/containers/images#updating-images + type: string + lifecycle: + description: |- + Actions that the management system should take in response to container lifecycle events. + Cannot be updated. + properties: + postStart: + description: |- + PostStart is called immediately after a container is created. If the handler fails, + the container is terminated and restarted according to its restart policy. + Other management of the container blocks until the hook completes. + More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks + properties: + exec: + description: Exec specifies the action to take. + properties: + command: + description: |- + Command is the command line to execute inside the container, the working directory for the + command is root ('/') in the container's filesystem. The command is simply exec'd, it is + not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use + a shell, you need to explicitly call out to that shell. + Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + items: + type: string + type: array + x-kubernetes-list-type: atomic + type: object + httpGet: + description: HTTPGet specifies the http request + to perform. + properties: + host: + description: |- + Host name to connect to, defaults to the pod IP. You probably want to set + "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the + request. HTTP allows repeated headers. + items: + description: HTTPHeader describes a custom + header to be used in HTTP probes + properties: + name: + description: |- + The header field name. + This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + required: + - name + - value + type: object + type: array + x-kubernetes-list-type: atomic + path: + description: Path to access on the HTTP server. 
+ type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Name or number of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + scheme: + description: |- + Scheme to use for connecting to the host. + Defaults to HTTP. + type: string + required: + - port + type: object + sleep: + description: Sleep represents the duration that + the container should sleep before being terminated. + properties: + seconds: + description: Seconds is the number of seconds + to sleep. + format: int64 + type: integer + required: + - seconds + type: object + tcpSocket: + description: |- + Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept + for the backward compatibility. There are no validation of this field and + lifecycle hooks will fail in runtime when tcp handler is specified. + properties: + host: + description: 'Optional: Host name to connect + to, defaults to the pod IP.' + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Number or name of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + required: + - port + type: object + type: object + preStop: + description: |- + PreStop is called immediately before a container is terminated due to an + API request or management event such as liveness/startup probe failure, + preemption, resource contention, etc. The handler is not called if the + container crashes or exits. The Pod's termination grace period countdown begins before the + PreStop hook is executed. Regardless of the outcome of the handler, the + container will eventually terminate within the Pod's termination grace + period (unless delayed by finalizers). Other management of the container blocks until the hook completes + or until the termination grace period is reached. + More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks + properties: + exec: + description: Exec specifies the action to take. + properties: + command: + description: |- + Command is the command line to execute inside the container, the working directory for the + command is root ('/') in the container's filesystem. The command is simply exec'd, it is + not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use + a shell, you need to explicitly call out to that shell. + Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + items: + type: string + type: array + x-kubernetes-list-type: atomic + type: object + httpGet: + description: HTTPGet specifies the http request + to perform. + properties: + host: + description: |- + Host name to connect to, defaults to the pod IP. You probably want to set + "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the + request. HTTP allows repeated headers. + items: + description: HTTPHeader describes a custom + header to be used in HTTP probes + properties: + name: + description: |- + The header field name. + This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + required: + - name + - value + type: object + type: array + x-kubernetes-list-type: atomic + path: + description: Path to access on the HTTP server. 
+ type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Name or number of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + scheme: + description: |- + Scheme to use for connecting to the host. + Defaults to HTTP. + type: string + required: + - port + type: object + sleep: + description: Sleep represents the duration that + the container should sleep before being terminated. + properties: + seconds: + description: Seconds is the number of seconds + to sleep. + format: int64 + type: integer + required: + - seconds + type: object + tcpSocket: + description: |- + Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept + for the backward compatibility. There are no validation of this field and + lifecycle hooks will fail in runtime when tcp handler is specified. + properties: + host: + description: 'Optional: Host name to connect + to, defaults to the pod IP.' + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Number or name of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + required: + - port + type: object + type: object + type: object + livenessProbe: + description: |- + Periodic probe of container liveness. + Container will be restarted if the probe fails. + Cannot be updated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + properties: + exec: + description: Exec specifies the action to take. + properties: + command: + description: |- + Command is the command line to execute inside the container, the working directory for the + command is root ('/') in the container's filesystem. The command is simply exec'd, it is + not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use + a shell, you need to explicitly call out to that shell. + Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + items: + type: string + type: array + x-kubernetes-list-type: atomic + type: object + failureThreshold: + description: |- + Minimum consecutive failures for the probe to be considered failed after having succeeded. + Defaults to 3. Minimum value is 1. + format: int32 + type: integer + grpc: + description: GRPC specifies an action involving a + GRPC port. + properties: + port: + description: Port number of the gRPC service. + Number must be in the range 1 to 65535. + format: int32 + type: integer + service: + default: "" + description: |- + Service is the name of the service to place in the gRPC HealthCheckRequest + (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). + + If this is not specified, the default behavior is defined by gRPC. + type: string + required: + - port + type: object + httpGet: + description: HTTPGet specifies the http request to + perform. + properties: + host: + description: |- + Host name to connect to, defaults to the pod IP. You probably want to set + "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. + HTTP allows repeated headers. + items: + description: HTTPHeader describes a custom header + to be used in HTTP probes + properties: + name: + description: |- + The header field name. + This will be canonicalized upon output, so case-variant names will be understood as the same header. 
+ type: string + value: + description: The header field value + type: string + required: + - name + - value + type: object + type: array + x-kubernetes-list-type: atomic + path: + description: Path to access on the HTTP server. + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Name or number of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + scheme: + description: |- + Scheme to use for connecting to the host. + Defaults to HTTP. + type: string + required: + - port + type: object + initialDelaySeconds: + description: |- + Number of seconds after the container has started before liveness probes are initiated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + periodSeconds: + description: |- + How often (in seconds) to perform the probe. + Default to 10 seconds. Minimum value is 1. + format: int32 + type: integer + successThreshold: + description: |- + Minimum consecutive successes for the probe to be considered successful after having failed. + Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + format: int32 + type: integer + tcpSocket: + description: TCPSocket specifies an action involving + a TCP port. + properties: + host: + description: 'Optional: Host name to connect to, + defaults to the pod IP.' + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Number or name of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + description: |- + Optional duration in seconds the pod needs to terminate gracefully upon probe failure. + The grace period is the duration in seconds after the processes running in the pod are sent + a termination signal and the time when the processes are forcibly halted with a kill signal. + Set this value longer than the expected cleanup time for your process. + If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this + value overrides the value provided by the pod spec. + Value must be non-negative integer. The value zero indicates stop immediately via + the kill signal (no opportunity to shut down). + This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. + Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + format: int64 + type: integer + timeoutSeconds: + description: |- + Number of seconds after which the probe times out. + Defaults to 1 second. Minimum value is 1. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + type: object + name: + description: |- + Name of the container specified as a DNS_LABEL. + Each container in a pod must have a unique name (DNS_LABEL). + Cannot be updated. + type: string + ports: + description: |- + List of ports to expose from the container. Not specifying a port here + DOES NOT prevent that port from being exposed. Any port which is + listening on the default "0.0.0.0" address inside a container will be + accessible from the network. + Modifying this array with strategic merge patch may corrupt the data. + For more information See https://github.com/kubernetes/kubernetes/issues/108255. + Cannot be updated. 
+ items: + description: ContainerPort represents a network port + in a single container. + properties: + containerPort: + description: |- + Number of port to expose on the pod's IP address. + This must be a valid port number, 0 < x < 65536. + format: int32 + type: integer + hostIP: + description: What host IP to bind the external port + to. + type: string + hostPort: + description: |- + Number of port to expose on the host. + If specified, this must be a valid port number, 0 < x < 65536. + If HostNetwork is specified, this must match ContainerPort. + Most containers do not need this. + format: int32 + type: integer + name: + description: |- + If specified, this must be an IANA_SVC_NAME and unique within the pod. Each + named port in a pod must have a unique name. Name for the port that can be + referred to by services. + type: string + protocol: + default: TCP + description: |- + Protocol for port. Must be UDP, TCP, or SCTP. + Defaults to "TCP". + type: string + required: + - containerPort + type: object + type: array + x-kubernetes-list-map-keys: + - containerPort + - protocol + x-kubernetes-list-type: map + readinessProbe: + description: |- + Periodic probe of container service readiness. + Container will be removed from service endpoints if the probe fails. + Cannot be updated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + properties: + exec: + description: Exec specifies the action to take. + properties: + command: + description: |- + Command is the command line to execute inside the container, the working directory for the + command is root ('/') in the container's filesystem. The command is simply exec'd, it is + not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use + a shell, you need to explicitly call out to that shell. + Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + items: + type: string + type: array + x-kubernetes-list-type: atomic + type: object + failureThreshold: + description: |- + Minimum consecutive failures for the probe to be considered failed after having succeeded. + Defaults to 3. Minimum value is 1. + format: int32 + type: integer + grpc: + description: GRPC specifies an action involving a + GRPC port. + properties: + port: + description: Port number of the gRPC service. + Number must be in the range 1 to 65535. + format: int32 + type: integer + service: + default: "" + description: |- + Service is the name of the service to place in the gRPC HealthCheckRequest + (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). + + If this is not specified, the default behavior is defined by gRPC. + type: string + required: + - port + type: object + httpGet: + description: HTTPGet specifies the http request to + perform. + properties: + host: + description: |- + Host name to connect to, defaults to the pod IP. You probably want to set + "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. + HTTP allows repeated headers. + items: + description: HTTPHeader describes a custom header + to be used in HTTP probes + properties: + name: + description: |- + The header field name. + This will be canonicalized upon output, so case-variant names will be understood as the same header. 
+ type: string + value: + description: The header field value + type: string + required: + - name + - value + type: object + type: array + x-kubernetes-list-type: atomic + path: + description: Path to access on the HTTP server. + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Name or number of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + scheme: + description: |- + Scheme to use for connecting to the host. + Defaults to HTTP. + type: string + required: + - port + type: object + initialDelaySeconds: + description: |- + Number of seconds after the container has started before liveness probes are initiated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + periodSeconds: + description: |- + How often (in seconds) to perform the probe. + Default to 10 seconds. Minimum value is 1. + format: int32 + type: integer + successThreshold: + description: |- + Minimum consecutive successes for the probe to be considered successful after having failed. + Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + format: int32 + type: integer + tcpSocket: + description: TCPSocket specifies an action involving + a TCP port. + properties: + host: + description: 'Optional: Host name to connect to, + defaults to the pod IP.' + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Number or name of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + description: |- + Optional duration in seconds the pod needs to terminate gracefully upon probe failure. + The grace period is the duration in seconds after the processes running in the pod are sent + a termination signal and the time when the processes are forcibly halted with a kill signal. + Set this value longer than the expected cleanup time for your process. + If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this + value overrides the value provided by the pod spec. + Value must be non-negative integer. The value zero indicates stop immediately via + the kill signal (no opportunity to shut down). + This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. + Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + format: int64 + type: integer + timeoutSeconds: + description: |- + Number of seconds after which the probe times out. + Defaults to 1 second. Minimum value is 1. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + type: object + resizePolicy: + description: Resources resize policy for the container. + items: + description: ContainerResizePolicy represents resource + resize policy for the container. + properties: + resourceName: + description: |- + Name of the resource to which this resource resize policy applies. + Supported values: cpu, memory. + type: string + restartPolicy: + description: |- + Restart policy to apply when specified resource is resized. + If not specified, it defaults to NotRequired. 
+ type: string + required: + - resourceName + - restartPolicy + type: object + type: array + x-kubernetes-list-type: atomic + resources: + description: |- + Compute Resources required by this container. + Cannot be updated. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry + in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + restartPolicy: + description: |- + RestartPolicy defines the restart behavior of individual containers in a pod. + This field may only be set for init containers, and the only allowed value is "Always". + For non-init containers or when this field is not specified, + the restart behavior is defined by the Pod's restart policy and the container type. + Setting the RestartPolicy as "Always" for the init container will have the following effect: + this init container will be continually restarted on + exit until all regular containers have terminated. Once all regular + containers have completed, all init containers with restartPolicy "Always" + will be shut down. This lifecycle differs from normal init containers and + is often referred to as a "sidecar" container. Although this init + container still starts in the init container sequence, it does not wait + for the container to complete before proceeding to the next init + container. Instead, the next init container starts immediately after this + init container is started, or after any startupProbe has successfully + completed. + type: string + securityContext: + description: |- + SecurityContext defines the security options the container should be run with. + If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. 
+ More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + properties: + allowPrivilegeEscalation: + description: |- + AllowPrivilegeEscalation controls whether a process can gain more + privileges than its parent process. This bool directly controls if + the no_new_privs flag will be set on the container process. + AllowPrivilegeEscalation is true always when the container is: + 1) run as Privileged + 2) has CAP_SYS_ADMIN + Note that this field cannot be set when spec.os.name is windows. + type: boolean + appArmorProfile: + description: |- + appArmorProfile is the AppArmor options to use by this container. If set, this profile + overrides the pod's appArmorProfile. + Note that this field cannot be set when spec.os.name is windows. + properties: + localhostProfile: + description: |- + localhostProfile indicates a profile loaded on the node that should be used. + The profile must be preconfigured on the node to work. + Must match the loaded name of the profile. + Must be set if and only if type is "Localhost". + type: string + type: + description: |- + type indicates which kind of AppArmor profile will be applied. + Valid options are: + Localhost - a profile pre-loaded on the node. + RuntimeDefault - the container runtime's default profile. + Unconfined - no AppArmor enforcement. + type: string + required: + - type + type: object + capabilities: + description: |- + The capabilities to add/drop when running containers. + Defaults to the default set of capabilities granted by the container runtime. + Note that this field cannot be set when spec.os.name is windows. + properties: + add: + description: Added capabilities + items: + description: Capability represent POSIX capabilities + type + type: string + type: array + x-kubernetes-list-type: atomic + drop: + description: Removed capabilities + items: + description: Capability represent POSIX capabilities + type + type: string + type: array + x-kubernetes-list-type: atomic + type: object + privileged: + description: |- + Run container in privileged mode. + Processes in privileged containers are essentially equivalent to root on the host. + Defaults to false. + Note that this field cannot be set when spec.os.name is windows. + type: boolean + procMount: + description: |- + procMount denotes the type of proc mount to use for the containers. + The default is DefaultProcMount which uses the container runtime defaults for + readonly paths and masked paths. + This requires the ProcMountType feature flag to be enabled. + Note that this field cannot be set when spec.os.name is windows. + type: string + readOnlyRootFilesystem: + description: |- + Whether this container has a read-only root filesystem. + Default is false. + Note that this field cannot be set when spec.os.name is windows. + type: boolean + runAsGroup: + description: |- + The GID to run the entrypoint of the container process. + Uses runtime default if unset. + May also be set in PodSecurityContext. If set in both SecurityContext and + PodSecurityContext, the value specified in SecurityContext takes precedence. + Note that this field cannot be set when spec.os.name is windows. + format: int64 + type: integer + runAsNonRoot: + description: |- + Indicates that the container must run as a non-root user. + If true, the Kubelet will validate the image at runtime to ensure that it + does not run as UID 0 (root) and fail to start the container if it does. + If unset or false, no such validation will be performed. + May also be set in PodSecurityContext. 
If set in both SecurityContext and + PodSecurityContext, the value specified in SecurityContext takes precedence. + type: boolean + runAsUser: + description: |- + The UID to run the entrypoint of the container process. + Defaults to user specified in image metadata if unspecified. + May also be set in PodSecurityContext. If set in both SecurityContext and + PodSecurityContext, the value specified in SecurityContext takes precedence. + Note that this field cannot be set when spec.os.name is windows. + format: int64 + type: integer + seLinuxOptions: + description: |- + The SELinux context to be applied to the container. + If unspecified, the container runtime will allocate a random SELinux context for each + container. May also be set in PodSecurityContext. If set in both SecurityContext and + PodSecurityContext, the value specified in SecurityContext takes precedence. + Note that this field cannot be set when spec.os.name is windows. + properties: + level: + description: Level is SELinux level label that + applies to the container. + type: string + role: + description: Role is a SELinux role label that + applies to the container. + type: string + type: + description: Type is a SELinux type label that + applies to the container. + type: string + user: + description: User is a SELinux user label that + applies to the container. + type: string + type: object + seccompProfile: + description: |- + The seccomp options to use by this container. If seccomp options are + provided at both the pod & container level, the container options + override the pod options. + Note that this field cannot be set when spec.os.name is windows. + properties: + localhostProfile: + description: |- + localhostProfile indicates a profile defined in a file on the node should be used. + The profile must be preconfigured on the node to work. + Must be a descending path, relative to the kubelet's configured seccomp profile location. + Must be set if type is "Localhost". Must NOT be set for any other type. + type: string + type: + description: |- + type indicates which kind of seccomp profile will be applied. + Valid options are: + + Localhost - a profile defined in a file on the node should be used. + RuntimeDefault - the container runtime default profile should be used. + Unconfined - no profile should be applied. + type: string + required: + - type + type: object + windowsOptions: + description: |- + The Windows specific settings applied to all containers. + If unspecified, the options from the PodSecurityContext will be used. + If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. + Note that this field cannot be set when spec.os.name is linux. + properties: + gmsaCredentialSpec: + description: |- + GMSACredentialSpec is where the GMSA admission webhook + (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the + GMSA credential spec named by the GMSACredentialSpecName field. + type: string + gmsaCredentialSpecName: + description: GMSACredentialSpecName is the name + of the GMSA credential spec to use. + type: string + hostProcess: + description: |- + HostProcess determines if a container should be run as a 'Host Process' container. + All of a Pod's containers must have the same effective HostProcess value + (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). + In addition, if HostProcess is true then HostNetwork must also be set to true. 
+ type: boolean + runAsUserName: + description: |- + The UserName in Windows to run the entrypoint of the container process. + Defaults to the user specified in image metadata if unspecified. + May also be set in PodSecurityContext. If set in both SecurityContext and + PodSecurityContext, the value specified in SecurityContext takes precedence. + type: string + type: object + type: object + startupProbe: + description: |- + StartupProbe indicates that the Pod has successfully initialized. + If specified, no other probes are executed until this completes successfully. + If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. + This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, + when it might take a long time to load data or warm a cache, than during steady-state operation. + This cannot be updated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + properties: + exec: + description: Exec specifies the action to take. + properties: + command: + description: |- + Command is the command line to execute inside the container, the working directory for the + command is root ('/') in the container's filesystem. The command is simply exec'd, it is + not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use + a shell, you need to explicitly call out to that shell. + Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + items: + type: string + type: array + x-kubernetes-list-type: atomic + type: object + failureThreshold: + description: |- + Minimum consecutive failures for the probe to be considered failed after having succeeded. + Defaults to 3. Minimum value is 1. + format: int32 + type: integer + grpc: + description: GRPC specifies an action involving a + GRPC port. + properties: + port: + description: Port number of the gRPC service. + Number must be in the range 1 to 65535. + format: int32 + type: integer + service: + default: "" + description: |- + Service is the name of the service to place in the gRPC HealthCheckRequest + (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). + + If this is not specified, the default behavior is defined by gRPC. + type: string + required: + - port + type: object + httpGet: + description: HTTPGet specifies the http request to + perform. + properties: + host: + description: |- + Host name to connect to, defaults to the pod IP. You probably want to set + "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. + HTTP allows repeated headers. + items: + description: HTTPHeader describes a custom header + to be used in HTTP probes + properties: + name: + description: |- + The header field name. + This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + required: + - name + - value + type: object + type: array + x-kubernetes-list-type: atomic + path: + description: Path to access on the HTTP server. + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Name or number of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + scheme: + description: |- + Scheme to use for connecting to the host. + Defaults to HTTP. 
+ type: string + required: + - port + type: object + initialDelaySeconds: + description: |- + Number of seconds after the container has started before liveness probes are initiated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + periodSeconds: + description: |- + How often (in seconds) to perform the probe. + Default to 10 seconds. Minimum value is 1. + format: int32 + type: integer + successThreshold: + description: |- + Minimum consecutive successes for the probe to be considered successful after having failed. + Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + format: int32 + type: integer + tcpSocket: + description: TCPSocket specifies an action involving + a TCP port. + properties: + host: + description: 'Optional: Host name to connect to, + defaults to the pod IP.' + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Number or name of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + description: |- + Optional duration in seconds the pod needs to terminate gracefully upon probe failure. + The grace period is the duration in seconds after the processes running in the pod are sent + a termination signal and the time when the processes are forcibly halted with a kill signal. + Set this value longer than the expected cleanup time for your process. + If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this + value overrides the value provided by the pod spec. + Value must be non-negative integer. The value zero indicates stop immediately via + the kill signal (no opportunity to shut down). + This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. + Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + format: int64 + type: integer + timeoutSeconds: + description: |- + Number of seconds after which the probe times out. + Defaults to 1 second. Minimum value is 1. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + type: object + stdin: + description: |- + Whether this container should allocate a buffer for stdin in the container runtime. If this + is not set, reads from stdin in the container will always result in EOF. + Default is false. + type: boolean + stdinOnce: + description: |- + Whether the container runtime should close the stdin channel after it has been opened by + a single attach. When stdin is true the stdin stream will remain open across multiple attach + sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the + first client attaches to stdin, and then remains open and accepts data until the client disconnects, + at which time stdin is closed and remains closed until the container is restarted. If this + flag is false, a container processes that reads from stdin will never receive an EOF. + Default is false + type: boolean + terminationMessagePath: + description: |- + Optional: Path at which the file to which the container's termination message + will be written is mounted into the container's filesystem. + Message written is intended to be brief final status, such as an assertion failure message. + Will be truncated by the node if greater than 4096 bytes. 
The total message length across + all containers will be limited to 12kb. + Defaults to /dev/termination-log. + Cannot be updated. + type: string + terminationMessagePolicy: + description: |- + Indicate how the termination message should be populated. File will use the contents of + terminationMessagePath to populate the container status message on both success and failure. + FallbackToLogsOnError will use the last chunk of container log output if the termination + message file is empty and the container exited with an error. + The log output is limited to 2048 bytes or 80 lines, whichever is smaller. + Defaults to File. + Cannot be updated. + type: string + tty: + description: |- + Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. + Default is false. + type: boolean + volumeDevices: + description: volumeDevices is the list of block devices + to be used by the container. + items: + description: volumeDevice describes a mapping of a raw + block device within a container. + properties: + devicePath: + description: devicePath is the path inside of the + container that the device will be mapped to. + type: string + name: + description: name must match the name of a persistentVolumeClaim + in the pod + type: string + required: + - devicePath + - name + type: object + type: array + x-kubernetes-list-map-keys: + - devicePath + x-kubernetes-list-type: map + volumeMounts: + description: |- + Pod volumes to mount into the container's filesystem. + Cannot be updated. + items: + description: VolumeMount describes a mounting of a Volume + within a container. + properties: + mountPath: + description: |- + Path within the container at which the volume should be mounted. Must + not contain ':'. + type: string + mountPropagation: + description: |- + mountPropagation determines how mounts are propagated from the host + to container and the other way around. + When not set, MountPropagationNone is used. + This field is beta in 1.10. + When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified + (which defaults to None). + type: string + name: + description: This must match the Name of a Volume. + type: string + readOnly: + description: |- + Mounted read-only if true, read-write otherwise (false or unspecified). + Defaults to false. + type: boolean + recursiveReadOnly: + description: |- + RecursiveReadOnly specifies whether read-only mounts should be handled + recursively. + + If ReadOnly is false, this field has no meaning and must be unspecified. + + If ReadOnly is true, and this field is set to Disabled, the mount is not made + recursively read-only. If this field is set to IfPossible, the mount is made + recursively read-only, if it is supported by the container runtime. If this + field is set to Enabled, the mount is made recursively read-only if it is + supported by the container runtime, otherwise the pod will not be started and + an error will be generated to indicate the reason. + + If this field is set to IfPossible or Enabled, MountPropagation must be set to + None (or be unspecified, which defaults to None). + + If this field is not specified, it is treated as an equivalent of Disabled. + type: string + subPath: + description: |- + Path within the volume from which the container's volume should be mounted. + Defaults to "" (volume's root). + type: string + subPathExpr: + description: |- + Expanded path within the volume from which the container's volume should be mounted. 
+ Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. + Defaults to "" (volume's root). + SubPathExpr and SubPath are mutually exclusive. + type: string + required: + - mountPath + - name + type: object + type: array + x-kubernetes-list-map-keys: + - mountPath + x-kubernetes-list-type: map + workingDir: + description: |- + Container's working directory. + If not specified, the container runtime's default will be used, which + might be configured in the container image. + Cannot be updated. + type: string + required: + - name + type: object + type: array + dataVolumeClaimSpec: + description: |- + Defines a PersistentVolumeClaim for PostgreSQL data. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes + properties: + accessModes: + description: |- + accessModes contains the desired access modes the volume should have. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + description: |- + dataSource field can be used to specify either: + * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) + * An existing PVC (PersistentVolumeClaim) + If the provisioner or an external controller can support the specified data source, + it will create a new volume based on the contents of the specified data source. + When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, + and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. + If the namespace is specified, then dataSourceRef will not be copied to dataSource. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + description: |- + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty + volume is desired. This may be any object from a non-empty API group (non + core object) or a PersistentVolumeClaim object. + When this field is specified, volume binding will only succeed if the type of + the specified object matches some installed volume populator or dynamic + provisioner. + This field will replace the functionality of the dataSource field and as such + if both fields are non-empty, they must have the same value. For backwards + compatibility, when namespace isn't specified in dataSourceRef, + both fields (dataSource and dataSourceRef) will be set to the same + value automatically if one of them is empty and the other is non-empty. + When namespace is specified in dataSourceRef, + dataSource isn't set to the same value and must be empty. + There are three important differences between dataSource and dataSourceRef: + * While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. + * While dataSource ignores disallowed values (dropping them), dataSourceRef + preserves all values, and generates an error if a disallowed value is + specified. 
+ * While dataSource only allows local objects, dataSourceRef allows objects + in any namespaces. + (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. + (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + namespace: + description: |- + Namespace is the namespace of resource being referenced + Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. + (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + type: string + required: + - kind + - name + type: object + resources: + description: |- + resources represents the minimum resources the volume should have. + If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements + that are lower than previous value but must still be higher than capacity recorded in the + status field of the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + selector: + description: selector is a label query over volumes to consider + for binding. + properties: + matchExpressions: + description: matchExpressions is a list of label selector + requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the selector + applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. 
If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + description: |- + storageClassName is the name of the StorageClass required by the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 + type: string + volumeAttributesClassName: + description: |- + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. + If specified, the CSI driver will create or update the volume with the attributes defined + in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, + it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass + will be applied to the claim but it's not allowed to reset this field to empty string once it is set. + If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass + will be set by the persistentvolume controller if it exists. + If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be + set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource + exists. + More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ + (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + type: string + volumeMode: + description: |- + volumeMode defines what type of volume is required by the claim. + Value of Filesystem is implied when not included in claim spec. + type: string + volumeName: + description: volumeName is the binding reference to the + PersistentVolume backing this claim. + type: string + type: object + x-kubernetes-validations: + - message: missing accessModes + rule: has(self.accessModes) && size(self.accessModes) > 0 + - message: missing storage request + rule: has(self.resources) && has(self.resources.requests) + && has(self.resources.requests.storage) + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + minAvailable: + anyOf: + - type: integer + - type: string + description: |- + Minimum number of pods that should be available at a time. + Defaults to one when the replicas field is greater than one. + x-kubernetes-int-or-string: true + name: + default: "" + description: |- + Name that associates this set of PostgreSQL pods. This field is optional + when only one instance set is defined. Each instance set in a cluster + must have a unique name. The combined length of this and the cluster name + must be 46 characters or less. + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?)?$ + type: string + priorityClassName: + description: |- + Priority class name for the PostgreSQL pod. Changing this value causes + PostgreSQL to restart. 
+ More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + type: string + replicas: + default: 1 + description: Number of desired PostgreSQL pods. + format: int32 + minimum: 1 + type: integer + resources: + description: Compute resources of a PostgreSQL container. + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + sidecars: + description: Configuration for instance sidecar containers + properties: + replicaCertCopy: + description: Defines the configuration for the replica cert + copy sidecar container + properties: + resources: + description: Resource requirements for a sidecar container + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry + in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. 
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + type: object + type: object + tablespaceVolumes: + description: |- + The list of tablespaces volumes to mount for this postgrescluster + This field requires enabling TablespaceVolumes feature gate + items: + properties: + dataVolumeClaimSpec: + description: |- + Defines a PersistentVolumeClaim for a tablespace. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes + properties: + accessModes: + description: |- + accessModes contains the desired access modes the volume should have. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + description: |- + dataSource field can be used to specify either: + * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) + * An existing PVC (PersistentVolumeClaim) + If the provisioner or an external controller can support the specified data source, + it will create a new volume based on the contents of the specified data source. + When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, + and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. + If the namespace is specified, then dataSourceRef will not be copied to dataSource. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being + referenced + type: string + name: + description: Name is the name of resource being + referenced + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + description: |- + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty + volume is desired. This may be any object from a non-empty API group (non + core object) or a PersistentVolumeClaim object. + When this field is specified, volume binding will only succeed if the type of + the specified object matches some installed volume populator or dynamic + provisioner. + This field will replace the functionality of the dataSource field and as such + if both fields are non-empty, they must have the same value. For backwards + compatibility, when namespace isn't specified in dataSourceRef, + both fields (dataSource and dataSourceRef) will be set to the same + value automatically if one of them is empty and the other is non-empty. + When namespace is specified in dataSourceRef, + dataSource isn't set to the same value and must be empty. 
+ There are three important differences between dataSource and dataSourceRef: + * While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. + * While dataSource ignores disallowed values (dropping them), dataSourceRef + preserves all values, and generates an error if a disallowed value is + specified. + * While dataSource only allows local objects, dataSourceRef allows objects + in any namespaces. + (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. + (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being + referenced + type: string + name: + description: Name is the name of resource being + referenced + type: string + namespace: + description: |- + Namespace is the namespace of resource being referenced + Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. + (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + type: string + required: + - kind + - name + type: object + resources: + description: |- + resources represents the minimum resources the volume should have. + If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements + that are lower than previous value but must still be higher than capacity recorded in the + status field of the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + selector: + description: selector is a label query over volumes + to consider for binding. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are + ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. 
+ properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + description: |- + storageClassName is the name of the StorageClass required by the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 + type: string + volumeAttributesClassName: + description: |- + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. + If specified, the CSI driver will create or update the volume with the attributes defined + in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, + it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass + will be applied to the claim but it's not allowed to reset this field to empty string once it is set. + If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass + will be set by the persistentvolume controller if it exists. + If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be + set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource + exists. + More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ + (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + type: string + volumeMode: + description: |- + volumeMode defines what type of volume is required by the claim. + Value of Filesystem is implied when not included in claim spec. + type: string + volumeName: + description: volumeName is the binding reference to + the PersistentVolume backing this claim. + type: string + type: object + x-kubernetes-validations: + - message: missing accessModes + rule: has(self.accessModes) && size(self.accessModes) + > 0 + - message: missing storage request + rule: has(self.resources) && has(self.resources.requests) + && has(self.resources.requests.storage) + name: + description: |- + The name for the tablespace, used as the path name for the volume. + Must be unique in the instance set since they become the directory names. + minLength: 1 + pattern: ^[a-z][a-z0-9]*$ + type: string + required: + - dataVolumeClaimSpec + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + tolerations: + description: |- + Tolerations of a PostgreSQL pod. Changing this value causes PostgreSQL to restart. 
+ More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple <key,value,effect> using the matching operator <operator>. + properties: + effect: + description: |- + Effect indicates the taint effect to match. Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists and Equal. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + topologySpreadConstraints: + description: |- + Topology spread constraints of a PostgreSQL pod. Changing this value causes + PostgreSQL to restart. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ + items: + description: TopologySpreadConstraint specifies how to spread + matching pods among the given topology. + properties: + labelSelector: + description: |- + LabelSelector is used to find matching pods. + Pods that match this label selector are counted to determine the number of pods + in their corresponding topology domain. + properties: + matchExpressions: + description: matchExpressions is a list of label selector + requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the selector + applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed.
+ type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select the pods over which + spreading will be calculated. The keys are used to lookup values from the + incoming pod labels, those key-value labels are ANDed with labelSelector + to select the group of existing pods over which spreading will be calculated + for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. + MatchLabelKeys cannot be set when LabelSelector isn't set. + Keys that don't exist in the incoming pod labels will + be ignored. A null or empty list means only match against labelSelector. + + This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + description: |- + MaxSkew describes the degree to which pods may be unevenly distributed. + When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference + between the number of matching pods in the target topology and the global minimum. + The global minimum is the minimum number of matching pods in an eligible domain + or zero if the number of eligible domains is less than MinDomains. + For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same + labelSelector spread as 2/2/1: + In this case, the global minimum is 1. + | zone1 | zone2 | zone3 | + | P P | P P | P | + - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; + scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) + violate MaxSkew(1). + - if MaxSkew is 2, incoming pod can be scheduled onto any zone. + When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence + to topologies that satisfy it. + It's a required field. Default value is 1 and 0 is not allowed. + format: int32 + type: integer + minDomains: + description: |- + MinDomains indicates a minimum number of eligible domains. + When the number of eligible domains with matching topology keys is less than minDomains, + Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. + And when the number of eligible domains with matching topology keys equals or greater than minDomains, + this value has no effect on scheduling. + As a result, when the number of eligible domains is less than minDomains, + scheduler won't schedule more than maxSkew Pods to those domains. + If value is nil, the constraint behaves as if MinDomains is equal to 1. + Valid values are integers greater than 0. + When value is not nil, WhenUnsatisfiable must be DoNotSchedule. + + For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same + labelSelector spread as 2/2/2: + | zone1 | zone2 | zone3 | + | P P | P P | P P | + The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. + In this situation, new pod with the same labelSelector cannot be scheduled, + because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, + it will violate MaxSkew. + format: int32 + type: integer + nodeAffinityPolicy: + description: |- + NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector + when calculating pod topology spread skew. Options are: + - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. 
+ - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. + + If this value is nil, the behavior is equivalent to the Honor policy. + This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. + type: string + nodeTaintsPolicy: + description: |- + NodeTaintsPolicy indicates how we will treat node taints when calculating + pod topology spread skew. Options are: + - Honor: nodes without taints, along with tainted nodes for which the incoming pod + has a toleration, are included. + - Ignore: node taints are ignored. All nodes are included. + + If this value is nil, the behavior is equivalent to the Ignore policy. + This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. + type: string + topologyKey: + description: |- + TopologyKey is the key of node labels. Nodes that have a label with this key + and identical values are considered to be in the same topology. + We consider each as a "bucket", and try to put balanced number + of pods into each bucket. + We define a domain as a particular instance of a topology. + Also, we define an eligible domain as a domain whose nodes meet the requirements of + nodeAffinityPolicy and nodeTaintsPolicy. + e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. + And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. + It's a required field. + type: string + whenUnsatisfiable: + description: |- + WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy + the spread constraint. + - DoNotSchedule (default) tells the scheduler not to schedule it. + - ScheduleAnyway tells the scheduler to schedule the pod in any location, + but giving higher precedence to topologies that would help reduce the + skew. + A constraint is considered "Unsatisfiable" for an incoming pod + if and only if every possible node assignment for that pod would violate + "MaxSkew" on some topology. + For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same + labelSelector spread as 3/1/1: + | zone1 | zone2 | zone3 | + | P P P | P | P | + If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled + to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies + MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler + won't make it *more* imbalanced. + It's a required field. + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + walVolumeClaimSpec: + description: |- + Defines a separate PersistentVolumeClaim for PostgreSQL's write-ahead log. + More info: https://www.postgresql.org/docs/current/wal.html + properties: + accessModes: + description: |- + accessModes contains the desired access modes the volume should have. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + description: |- + dataSource field can be used to specify either: + * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) + * An existing PVC (PersistentVolumeClaim) + If the provisioner or an external controller can support the specified data source, + it will create a new volume based on the contents of the specified data source. 
+ When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, + and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. + If the namespace is specified, then dataSourceRef will not be copied to dataSource. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + description: |- + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty + volume is desired. This may be any object from a non-empty API group (non + core object) or a PersistentVolumeClaim object. + When this field is specified, volume binding will only succeed if the type of + the specified object matches some installed volume populator or dynamic + provisioner. + This field will replace the functionality of the dataSource field and as such + if both fields are non-empty, they must have the same value. For backwards + compatibility, when namespace isn't specified in dataSourceRef, + both fields (dataSource and dataSourceRef) will be set to the same + value automatically if one of them is empty and the other is non-empty. + When namespace is specified in dataSourceRef, + dataSource isn't set to the same value and must be empty. + There are three important differences between dataSource and dataSourceRef: + * While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. + * While dataSource ignores disallowed values (dropping them), dataSourceRef + preserves all values, and generates an error if a disallowed value is + specified. + * While dataSource only allows local objects, dataSourceRef allows objects + in any namespaces. + (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. + (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + namespace: + description: |- + Namespace is the namespace of resource being referenced + Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. + (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + type: string + required: + - kind + - name + type: object + resources: + description: |- + resources represents the minimum resources the volume should have. 
+ If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements + that are lower than previous value but must still be higher than capacity recorded in the + status field of the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + selector: + description: selector is a label query over volumes to consider + for binding. + properties: + matchExpressions: + description: matchExpressions is a list of label selector + requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the selector + applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + description: |- + storageClassName is the name of the StorageClass required by the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 + type: string + volumeAttributesClassName: + description: |- + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. + If specified, the CSI driver will create or update the volume with the attributes defined + in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, + it can be changed after the claim is created. 
An empty string value means that no VolumeAttributesClass + will be applied to the claim but it's not allowed to reset this field to empty string once it is set. + If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass + will be set by the persistentvolume controller if it exists. + If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be + set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource + exists. + More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ + (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + type: string + volumeMode: + description: |- + volumeMode defines what type of volume is required by the claim. + Value of Filesystem is implied when not included in claim spec. + type: string + volumeName: + description: volumeName is the binding reference to the + PersistentVolume backing this claim. + type: string + type: object + x-kubernetes-validations: + - message: missing accessModes + rule: has(self.accessModes) && size(self.accessModes) > 0 + - message: missing storage request + rule: has(self.resources) && has(self.resources.requests) + && has(self.resources.requests.storage) + required: + - dataVolumeClaimSpec + type: object + minItems: 1 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + monitoring: + description: The specification of monitoring tools that connect to + PostgreSQL + properties: + pgmonitor: + description: PGMonitorSpec defines the desired state of the pgMonitor + tool suite + properties: + exporter: + properties: + configuration: + description: |- + Projected volumes containing custom PostgreSQL Exporter configuration. Currently supports + the customization of PostgreSQL Exporter queries. If a "queries.yml" file is detected in + any volume projected using this field, it will be loaded using the "extend.query-path" flag: + https://github.com/prometheus-community/postgres_exporter#flags + Changing the values of field causes PostgreSQL and the exporter to restart. + items: + description: Projection that may be projected along + with other supported volume types + properties: + clusterTrustBundle: + description: |- + ClusterTrustBundle allows a pod to access the `.spec.trustBundle` field + of ClusterTrustBundle objects in an auto-updating file. + + Alpha, gated by the ClusterTrustBundleProjection feature gate. + + ClusterTrustBundle objects can either be selected by name, or by the + combination of signer name and a label selector. + + Kubelet performs aggressive normalization of the PEM contents written + into the pod filesystem. Esoteric PEM features such as inter-block + comments and block headers are stripped. Certificates are deduplicated. + The ordering of certificates within the file is arbitrary, and Kubelet + may change the order over time. + properties: + labelSelector: + description: |- + Select all ClusterTrustBundles that match this label selector. Only has + effect if signerName is set. Mutually-exclusive with name. If unset, + interpreted as "match nothing". If set but empty, interpreted as "match + everything". 
+ properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + name: + description: |- + Select a single ClusterTrustBundle by object name. Mutually-exclusive + with signerName and labelSelector. + type: string + optional: + description: |- + If true, don't block pod startup if the referenced ClusterTrustBundle(s) + aren't available. If using name, then the named ClusterTrustBundle is + allowed not to exist. If using signerName, then the combination of + signerName and labelSelector is allowed to match zero + ClusterTrustBundles. + type: boolean + path: + description: Relative path from the volume root + to write the bundle. + type: string + signerName: + description: |- + Select all ClusterTrustBundles that match this signer name. + Mutually-exclusive with name. The contents of all selected + ClusterTrustBundles will be unified and deduplicated. + type: string + required: + - path + type: object + configMap: + description: configMap information about the configMap + data to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + ConfigMap will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the ConfigMap, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. 
+ format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional specify whether the ConfigMap + or its keys must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + description: downwardAPI information about the downwardAPI + data to project + properties: + items: + description: Items is a list of DownwardAPIVolume + file + items: + description: DownwardAPIVolumeFile represents + information to create the file containing + the pod field + properties: + fieldRef: + description: 'Required: Selects a field + of the pod: only annotations, labels, + name, namespace and uid are supported.' + properties: + apiVersion: + description: Version of the schema + the FieldPath is written in terms + of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to + select in the specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + description: |- + Optional: mode bits used to set permissions on this file, must be an octal value + between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: 'Required: Path is the relative + path name of the file to be created. + Must not be absolute or contain the + ''..'' path. Must be utf-8 encoded. + The first item of the relative path + must not start with ''..''' + type: string + resourceFieldRef: + description: |- + Selects a resource of the container: only resources limits and requests + (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. + properties: + containerName: + description: 'Container name: required + for volumes, optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output + format of the exposed resources, + defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to + select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + required: + - path + type: object + type: array + x-kubernetes-list-type: atomic + type: object + secret: + description: secret information about the secret + data to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. 
If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether + the Secret or its key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + description: serviceAccountToken is information + about the serviceAccountToken data to project + properties: + audience: + description: |- + audience is the intended audience of the token. A recipient of a token + must identify itself with an identifier specified in the audience of the + token, and otherwise should reject the token. The audience defaults to the + identifier of the apiserver. + type: string + expirationSeconds: + description: |- + expirationSeconds is the requested duration of validity of the service + account token. As the token approaches expiration, the kubelet volume + plugin will proactively rotate the service account token. The kubelet will + start trying to rotate the token if the token is older than 80 percent of + its time to live or if the token is older than 24 hours.Defaults to 1 hour + and must be at least 10 minutes. + format: int64 + type: integer + path: + description: |- + path is the path relative to the mount point of the file to project the + token into. + type: string + required: + - path + type: object + type: object + type: array + customTLSSecret: + description: |- + Projected secret containing custom TLS certificates to encrypt output from the exporter + web server + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. 
+ items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether the Secret + or its key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + image: + description: |- + The image name to use for crunchy-postgres-exporter containers. The image may + also be set using the RELATED_IMAGE_PGEXPORTER environment variable. + type: string + resources: + description: |- + Changing this value causes PostgreSQL and the exporter to restart. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry + in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. 
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + type: object + type: object + type: object + openshift: + description: |- + Whether or not the PostgreSQL cluster is being deployed to an OpenShift + environment. If the field is unset, the operator will automatically + detect the environment. + type: boolean + patroni: + properties: + dynamicConfiguration: + description: |- + Patroni dynamic configuration settings. Changes to this value will be + automatically reloaded without validation. Changes to certain PostgreSQL + parameters cause PostgreSQL to restart. + More info: https://patroni.readthedocs.io/en/latest/dynamic_configuration.html + type: object + x-kubernetes-preserve-unknown-fields: true + leaderLeaseDurationSeconds: + default: 30 + description: |- + TTL of the cluster leader lock. "Think of it as the + length of time before initiation of the automatic failover process." + Changing this value causes PostgreSQL to restart. + format: int32 + minimum: 3 + type: integer + port: + default: 8008 + description: |- + The port on which Patroni should listen. + Changing this value causes PostgreSQL to restart. + format: int32 + minimum: 1024 + type: integer + switchover: + description: Switchover gives options to perform ad hoc switchovers + in a PostgresCluster. + properties: + enabled: + description: Whether or not the operator should allow switchovers + in a PostgresCluster + type: boolean + targetInstance: + description: |- + The instance that should become primary during a switchover. This field is + optional when Type is "Switchover" and required when Type is "Failover". + When it is not specified, a healthy replica is automatically selected. + type: string + type: + default: Switchover + description: |- + Type of switchover to perform. Valid options are Switchover and Failover. + "Switchover" changes the primary instance of a healthy PostgresCluster. + "Failover" forces a particular instance to be primary, regardless of other + factors. A TargetInstance must be specified to failover. + NOTE: The Failover type is reserved as the "last resort" case. + enum: + - Switchover + - Failover + type: string + required: + - enabled + type: object + syncPeriodSeconds: + default: 10 + description: |- + The interval for refreshing the leader lock and applying + dynamicConfiguration. Must be less than leaderLeaseDurationSeconds. + Changing this value causes PostgreSQL to restart. + format: int32 + minimum: 1 + type: integer + type: object + paused: + description: |- + Suspends the rollout and reconciliation of changes made to the + PostgresCluster spec. + type: boolean + port: + default: 5432 + description: The port on which PostgreSQL should listen. + format: int32 + minimum: 1024 + type: integer + postGISVersion: + description: |- + The PostGIS extension version installed in the PostgreSQL image. + When image is not set, indicates a PostGIS enabled image will be used. + type: string + postgresVersion: + description: The major version of PostgreSQL installed in the PostgreSQL + image + maximum: 17 + minimum: 11 + type: integer + proxy: + description: The specification of a proxy that connects to PostgreSQL. + properties: + pgBouncer: + description: Defines a PgBouncer proxy and connection pooler. + properties: + affinity: + description: |- + Scheduling constraints of a PgBouncer pod. Changing this value causes + PgBouncer to restart. 
+ More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + properties: + nodeAffinity: + description: Describes node affinity scheduling rules + for the pod. + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node matches the corresponding matchExpressions; the + node(s) with the highest sum are the most preferred. + items: + description: |- + An empty preferred scheduling term matches all objects with implicit weight 0 + (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + properties: + preference: + description: A node selector term, associated + with the corresponding weight. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + weight: + description: Weight associated with matching + the corresponding nodeSelectorTerm, in the + range 1-100. 
+ format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to an update), the system + may or may not try to eventually evict the pod from its node. + properties: + nodeSelectorTerms: + description: Required. A list of node selector + terms. The terms are ORed. + items: + description: |- + A null or empty node selector term matches no objects. The requirements of + them are ANDed. + The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + type: array + x-kubernetes-list-type: atomic + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + description: Describes pod affinity scheduling rules (e.g. + co-locate this pod in the same node, zone, etc. as some + other pod(s)). 
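A hedged sketch of the node affinity schema above, applied to PgBouncer pods under spec.proxy.pgBouncer.affinity; the label keys and values are placeholders:

spec:
  proxy:
    pgBouncer:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch          # well-known node label
                operator: In
                values: [amd64]
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 50
            preference:
              matchExpressions:
              - key: example.com/dedicated       # placeholder label key
                operator: Exists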
+ properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred + node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. 
The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. 
+ format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. 
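The pod affinity term fields above (labelSelector, namespaces, topologyKey) might be used as follows; this is a sketch only, and the label key, value, and namespace are placeholders:

spec:
  proxy:
    pgBouncer:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: topology.kubernetes.io/zone
            namespaces: [example-namespace]          # placeholder namespace
            labelSelector:
              matchExpressions:
              - key: app.example.com/component       # placeholder label key
                operator: In
                values: [cache]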
+ type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + podAntiAffinity: + description: Describes pod anti-affinity scheduling rules + (e.g. avoid putting this pod in the same node, zone, + etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the anti-affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling anti-affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred + node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. 
+ This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. 
+ type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the anti-affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the anti-affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. 
+ This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. 
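For the pod anti-affinity schema described here, a minimal hedged sketch that asks the scheduler to keep PgBouncer pods on different nodes; the selector label is a placeholder:

spec:
  proxy:
    pgBouncer:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app.example.com/component: pgbouncer   # placeholder label on the pods to spread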
+ type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + config: + description: |- + Configuration settings for the PgBouncer process. Changes to any of these + values will be automatically reloaded without validation. Be careful, as + you may put PgBouncer into an unusable state. + More info: https://www.pgbouncer.org/usage.html#reload + properties: + databases: + additionalProperties: + type: string + description: |- + PgBouncer database definitions. The key is the database requested by a + client while the value is a libpq-styled connection string. The special + key "*" acts as a fallback. When this field is empty, PgBouncer is + configured with a single "*" entry that connects to the primary + PostgreSQL instance. + More info: https://www.pgbouncer.org/config.html#section-databases + type: object + files: + description: |- + Files to mount under "/etc/pgbouncer". When specified, settings in the + "pgbouncer.ini" file are loaded before all others. From there, other + files may be included by absolute path. Changing these references causes + PgBouncer to restart, but changes to the file contents are automatically + reloaded. + More info: https://www.pgbouncer.org/config.html#include-directive + items: + description: Projection that may be projected along + with other supported volume types + properties: + clusterTrustBundle: + description: |- + ClusterTrustBundle allows a pod to access the `.spec.trustBundle` field + of ClusterTrustBundle objects in an auto-updating file. + + Alpha, gated by the ClusterTrustBundleProjection feature gate. + + ClusterTrustBundle objects can either be selected by name, or by the + combination of signer name and a label selector. + + Kubelet performs aggressive normalization of the PEM contents written + into the pod filesystem. Esoteric PEM features such as inter-block + comments and block headers are stripped. Certificates are deduplicated. + The ordering of certificates within the file is arbitrary, and Kubelet + may change the order over time. + properties: + labelSelector: + description: |- + Select all ClusterTrustBundles that match this label selector. Only has + effect if signerName is set. Mutually-exclusive with name. If unset, + interpreted as "match nothing". If set but empty, interpreted as "match + everything". + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. 
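A hedged sketch of the config.databases mapping described above; "*" is the documented fallback key, and the connection target is a placeholder:

spec:
  proxy:
    pgBouncer:
      config:
        databases:
          "*": "host=example-primary port=5432"   # libpq-style connection string; placeholder host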
A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + name: + description: |- + Select a single ClusterTrustBundle by object name. Mutually-exclusive + with signerName and labelSelector. + type: string + optional: + description: |- + If true, don't block pod startup if the referenced ClusterTrustBundle(s) + aren't available. If using name, then the named ClusterTrustBundle is + allowed not to exist. If using signerName, then the combination of + signerName and labelSelector is allowed to match zero + ClusterTrustBundles. + type: boolean + path: + description: Relative path from the volume root + to write the bundle. + type: string + signerName: + description: |- + Select all ClusterTrustBundles that match this signer name. + Mutually-exclusive with name. The contents of all selected + ClusterTrustBundles will be unified and deduplicated. + type: string + required: + - path + type: object + configMap: + description: configMap information about the configMap + data to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + ConfigMap will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the ConfigMap, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional specify whether the ConfigMap + or its keys must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + description: downwardAPI information about the downwardAPI + data to project + properties: + items: + description: Items is a list of DownwardAPIVolume + file + items: + description: DownwardAPIVolumeFile represents + information to create the file containing + the pod field + properties: + fieldRef: + description: 'Required: Selects a field + of the pod: only annotations, labels, + name, namespace and uid are supported.' + properties: + apiVersion: + description: Version of the schema + the FieldPath is written in terms + of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to + select in the specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + description: |- + Optional: mode bits used to set permissions on this file, must be an octal value + between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: 'Required: Path is the relative + path name of the file to be created. + Must not be absolute or contain the + ''..'' path. Must be utf-8 encoded. + The first item of the relative path + must not start with ''..''' + type: string + resourceFieldRef: + description: |- + Selects a resource of the container: only resources limits and requests + (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. + properties: + containerName: + description: 'Container name: required + for volumes, optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output + format of the exposed resources, + defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to + select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + required: + - path + type: object + type: array + x-kubernetes-list-type: atomic + type: object + secret: + description: secret information about the secret + data to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. 
+ Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether + the Secret or its key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + description: serviceAccountToken is information + about the serviceAccountToken data to project + properties: + audience: + description: |- + audience is the intended audience of the token. A recipient of a token + must identify itself with an identifier specified in the audience of the + token, and otherwise should reject the token. The audience defaults to the + identifier of the apiserver. + type: string + expirationSeconds: + description: |- + expirationSeconds is the requested duration of validity of the service + account token. As the token approaches expiration, the kubelet volume + plugin will proactively rotate the service account token. The kubelet will + start trying to rotate the token if the token is older than 80 percent of + its time to live or if the token is older than 24 hours.Defaults to 1 hour + and must be at least 10 minutes. + format: int64 + type: integer + path: + description: |- + path is the path relative to the mount point of the file to project the + token into. + type: string + required: + - path + type: object + type: object + type: array + global: + additionalProperties: + type: string + description: |- + Settings that apply to the entire PgBouncer process. + More info: https://www.pgbouncer.org/config.html + type: object + users: + additionalProperties: + type: string + description: |- + Connection settings specific to particular users. + More info: https://www.pgbouncer.org/config.html#section-users + type: object + type: object + containers: + description: |- + Custom sidecars for a PgBouncer pod. Changing this value causes + PgBouncer to restart. + items: + description: A single application container that you want + to run within a pod. + properties: + args: + description: |- + Arguments to the entrypoint. + The container image's CMD is used if this is not provided. + Variable references $(VAR_NAME) are expanded using the container's environment. If a variable + cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced + to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will + produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless + of whether the variable exists or not. Cannot be updated. 
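The config.global, config.users, and config.files fields described above could be combined as in this sketch; the Secret name and the PgBouncer settings are placeholders, not recommendations:

spec:
  proxy:
    pgBouncer:
      config:
        global:
          pool_mode: transaction              # illustrative pgbouncer.ini setting
        users:
          app: "pool_mode=session"            # illustrative per-user override
        files:
        - secret:
            name: pgbouncer-extra-config      # placeholder Secret mounted under /etc/pgbouncer
            optional: true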
+ More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell + items: + type: string + type: array + x-kubernetes-list-type: atomic + command: + description: |- + Entrypoint array. Not executed within a shell. + The container image's ENTRYPOINT is used if this is not provided. + Variable references $(VAR_NAME) are expanded using the container's environment. If a variable + cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced + to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will + produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless + of whether the variable exists or not. Cannot be updated. + More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell + items: + type: string + type: array + x-kubernetes-list-type: atomic + env: + description: |- + List of environment variables to set in the container. + Cannot be updated. + items: + description: EnvVar represents an environment variable + present in a Container. + properties: + name: + description: Name of the environment variable. + Must be a C_IDENTIFIER. + type: string + value: + description: |- + Variable references $(VAR_NAME) are expanded + using the previously defined environment variables in the container and + any service environment variables. If a variable cannot be resolved, + the reference in the input string will be unchanged. Double $$ are reduced + to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. + "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". + Escaped references will never be expanded, regardless of whether the variable + exists or not. + Defaults to "". + type: string + valueFrom: + description: Source for the environment variable's + value. Cannot be used if value is not empty. + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + fieldRef: + description: |- + Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['']`, `metadata.annotations['']`, + spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. + properties: + apiVersion: + description: Version of the schema the + FieldPath is written in terms of, defaults + to "v1". + type: string + fieldPath: + description: Path of the field to select + in the specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + resourceFieldRef: + description: |- + Selects a resource of the container: only resources limits and requests + (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. 
+ properties: + containerName: + description: 'Container name: required + for volumes, optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format + of the exposed resources, defaults to + "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + secretKeyRef: + description: Selects a key of a secret in + the pod's namespace + properties: + key: + description: The key of the secret to + select from. Must be a valid secret + key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret + or its key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + envFrom: + description: |- + List of sources to populate environment variables in the container. + The keys defined within a source must be a C_IDENTIFIER. All invalid keys + will be reported as an event when the container is starting. When a key exists in multiple + sources, the value associated with the last source will take precedence. + Values defined by an Env with a duplicate key will take precedence. + Cannot be updated. + items: + description: EnvFromSource represents the source of + a set of ConfigMaps + properties: + configMapRef: + description: The ConfigMap to select from + properties: + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the ConfigMap + must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + prefix: + description: An optional identifier to prepend + to each key in the ConfigMap. Must be a C_IDENTIFIER. + type: string + secretRef: + description: The Secret to select from + properties: + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret must + be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + type: object + type: array + x-kubernetes-list-type: atomic + image: + description: |- + Container image name. + More info: https://kubernetes.io/docs/concepts/containers/images + This field is optional to allow higher level config management to default or override + container images in workload controllers like Deployments and StatefulSets. 
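A hedged sketch of a custom PgBouncer sidecar using the container fields above (command, args, env with a Secret-backed variable); the name, image, and Secret are placeholders:

spec:
  proxy:
    pgBouncer:
      containers:
      - name: audit-logger                               # placeholder sidecar name
        image: registry.example.com/audit-logger:1.0     # placeholder image
        command: ["/usr/local/bin/audit-logger"]
        args: ["--verbose"]
        env:
        - name: LOG_LEVEL
          value: info
        - name: API_TOKEN
          valueFrom:
            secretKeyRef:
              name: audit-credentials                    # placeholder Secret
              key: token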
+ type: string + imagePullPolicy: + description: |- + Image pull policy. + One of Always, Never, IfNotPresent. + Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. + Cannot be updated. + More info: https://kubernetes.io/docs/concepts/containers/images#updating-images + type: string + lifecycle: + description: |- + Actions that the management system should take in response to container lifecycle events. + Cannot be updated. + properties: + postStart: + description: |- + PostStart is called immediately after a container is created. If the handler fails, + the container is terminated and restarted according to its restart policy. + Other management of the container blocks until the hook completes. + More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks + properties: + exec: + description: Exec specifies the action to take. + properties: + command: + description: |- + Command is the command line to execute inside the container, the working directory for the + command is root ('/') in the container's filesystem. The command is simply exec'd, it is + not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use + a shell, you need to explicitly call out to that shell. + Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + items: + type: string + type: array + x-kubernetes-list-type: atomic + type: object + httpGet: + description: HTTPGet specifies the http request + to perform. + properties: + host: + description: |- + Host name to connect to, defaults to the pod IP. You probably want to set + "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the + request. HTTP allows repeated headers. + items: + description: HTTPHeader describes a custom + header to be used in HTTP probes + properties: + name: + description: |- + The header field name. + This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + required: + - name + - value + type: object + type: array + x-kubernetes-list-type: atomic + path: + description: Path to access on the HTTP + server. + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Name or number of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + scheme: + description: |- + Scheme to use for connecting to the host. + Defaults to HTTP. + type: string + required: + - port + type: object + sleep: + description: Sleep represents the duration that + the container should sleep before being terminated. + properties: + seconds: + description: Seconds is the number of seconds + to sleep. + format: int64 + type: integer + required: + - seconds + type: object + tcpSocket: + description: |- + Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept + for the backward compatibility. There are no validation of this field and + lifecycle hooks will fail in runtime when tcp handler is specified. + properties: + host: + description: 'Optional: Host name to connect + to, defaults to the pod IP.' + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Number or name of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. 
+ x-kubernetes-int-or-string: true + required: + - port + type: object + type: object + preStop: + description: |- + PreStop is called immediately before a container is terminated due to an + API request or management event such as liveness/startup probe failure, + preemption, resource contention, etc. The handler is not called if the + container crashes or exits. The Pod's termination grace period countdown begins before the + PreStop hook is executed. Regardless of the outcome of the handler, the + container will eventually terminate within the Pod's termination grace + period (unless delayed by finalizers). Other management of the container blocks until the hook completes + or until the termination grace period is reached. + More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks + properties: + exec: + description: Exec specifies the action to take. + properties: + command: + description: |- + Command is the command line to execute inside the container, the working directory for the + command is root ('/') in the container's filesystem. The command is simply exec'd, it is + not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use + a shell, you need to explicitly call out to that shell. + Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + items: + type: string + type: array + x-kubernetes-list-type: atomic + type: object + httpGet: + description: HTTPGet specifies the http request + to perform. + properties: + host: + description: |- + Host name to connect to, defaults to the pod IP. You probably want to set + "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the + request. HTTP allows repeated headers. + items: + description: HTTPHeader describes a custom + header to be used in HTTP probes + properties: + name: + description: |- + The header field name. + This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + required: + - name + - value + type: object + type: array + x-kubernetes-list-type: atomic + path: + description: Path to access on the HTTP + server. + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Name or number of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + scheme: + description: |- + Scheme to use for connecting to the host. + Defaults to HTTP. + type: string + required: + - port + type: object + sleep: + description: Sleep represents the duration that + the container should sleep before being terminated. + properties: + seconds: + description: Seconds is the number of seconds + to sleep. + format: int64 + type: integer + required: + - seconds + type: object + tcpSocket: + description: |- + Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept + for the backward compatibility. There are no validation of this field and + lifecycle hooks will fail in runtime when tcp handler is specified. + properties: + host: + description: 'Optional: Host name to connect + to, defaults to the pod IP.' + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Number or name of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. 
+ x-kubernetes-int-or-string: true + required: + - port + type: object + type: object + type: object + livenessProbe: + description: |- + Periodic probe of container liveness. + Container will be restarted if the probe fails. + Cannot be updated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + properties: + exec: + description: Exec specifies the action to take. + properties: + command: + description: |- + Command is the command line to execute inside the container, the working directory for the + command is root ('/') in the container's filesystem. The command is simply exec'd, it is + not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use + a shell, you need to explicitly call out to that shell. + Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + items: + type: string + type: array + x-kubernetes-list-type: atomic + type: object + failureThreshold: + description: |- + Minimum consecutive failures for the probe to be considered failed after having succeeded. + Defaults to 3. Minimum value is 1. + format: int32 + type: integer + grpc: + description: GRPC specifies an action involving + a GRPC port. + properties: + port: + description: Port number of the gRPC service. + Number must be in the range 1 to 65535. + format: int32 + type: integer + service: + default: "" + description: |- + Service is the name of the service to place in the gRPC HealthCheckRequest + (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). + + If this is not specified, the default behavior is defined by gRPC. + type: string + required: + - port + type: object + httpGet: + description: HTTPGet specifies the http request + to perform. + properties: + host: + description: |- + Host name to connect to, defaults to the pod IP. You probably want to set + "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. + HTTP allows repeated headers. + items: + description: HTTPHeader describes a custom + header to be used in HTTP probes + properties: + name: + description: |- + The header field name. + This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + required: + - name + - value + type: object + type: array + x-kubernetes-list-type: atomic + path: + description: Path to access on the HTTP server. + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Name or number of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + scheme: + description: |- + Scheme to use for connecting to the host. + Defaults to HTTP. + type: string + required: + - port + type: object + initialDelaySeconds: + description: |- + Number of seconds after the container has started before liveness probes are initiated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + periodSeconds: + description: |- + How often (in seconds) to perform the probe. + Default to 10 seconds. Minimum value is 1. + format: int32 + type: integer + successThreshold: + description: |- + Minimum consecutive successes for the probe to be considered successful after having failed. + Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. 
+ format: int32 + type: integer + tcpSocket: + description: TCPSocket specifies an action involving + a TCP port. + properties: + host: + description: 'Optional: Host name to connect + to, defaults to the pod IP.' + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Number or name of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + description: |- + Optional duration in seconds the pod needs to terminate gracefully upon probe failure. + The grace period is the duration in seconds after the processes running in the pod are sent + a termination signal and the time when the processes are forcibly halted with a kill signal. + Set this value longer than the expected cleanup time for your process. + If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this + value overrides the value provided by the pod spec. + Value must be non-negative integer. The value zero indicates stop immediately via + the kill signal (no opportunity to shut down). + This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. + Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + format: int64 + type: integer + timeoutSeconds: + description: |- + Number of seconds after which the probe times out. + Defaults to 1 second. Minimum value is 1. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + type: object + name: + description: |- + Name of the container specified as a DNS_LABEL. + Each container in a pod must have a unique name (DNS_LABEL). + Cannot be updated. + type: string + ports: + description: |- + List of ports to expose from the container. Not specifying a port here + DOES NOT prevent that port from being exposed. Any port which is + listening on the default "0.0.0.0" address inside a container will be + accessible from the network. + Modifying this array with strategic merge patch may corrupt the data. + For more information See https://github.com/kubernetes/kubernetes/issues/108255. + Cannot be updated. + items: + description: ContainerPort represents a network port + in a single container. + properties: + containerPort: + description: |- + Number of port to expose on the pod's IP address. + This must be a valid port number, 0 < x < 65536. + format: int32 + type: integer + hostIP: + description: What host IP to bind the external + port to. + type: string + hostPort: + description: |- + Number of port to expose on the host. + If specified, this must be a valid port number, 0 < x < 65536. + If HostNetwork is specified, this must match ContainerPort. + Most containers do not need this. + format: int32 + type: integer + name: + description: |- + If specified, this must be an IANA_SVC_NAME and unique within the pod. Each + named port in a pod must have a unique name. Name for the port that can be + referred to by services. + type: string + protocol: + default: TCP + description: |- + Protocol for port. Must be UDP, TCP, or SCTP. + Defaults to "TCP". + type: string + required: + - containerPort + type: object + type: array + x-kubernetes-list-map-keys: + - containerPort + - protocol + x-kubernetes-list-type: map + readinessProbe: + description: |- + Periodic probe of container service readiness. 
+ Container will be removed from service endpoints if the probe fails. + Cannot be updated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + properties: + exec: + description: Exec specifies the action to take. + properties: + command: + description: |- + Command is the command line to execute inside the container, the working directory for the + command is root ('/') in the container's filesystem. The command is simply exec'd, it is + not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use + a shell, you need to explicitly call out to that shell. + Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + items: + type: string + type: array + x-kubernetes-list-type: atomic + type: object + failureThreshold: + description: |- + Minimum consecutive failures for the probe to be considered failed after having succeeded. + Defaults to 3. Minimum value is 1. + format: int32 + type: integer + grpc: + description: GRPC specifies an action involving + a GRPC port. + properties: + port: + description: Port number of the gRPC service. + Number must be in the range 1 to 65535. + format: int32 + type: integer + service: + default: "" + description: |- + Service is the name of the service to place in the gRPC HealthCheckRequest + (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). + + If this is not specified, the default behavior is defined by gRPC. + type: string + required: + - port + type: object + httpGet: + description: HTTPGet specifies the http request + to perform. + properties: + host: + description: |- + Host name to connect to, defaults to the pod IP. You probably want to set + "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. + HTTP allows repeated headers. + items: + description: HTTPHeader describes a custom + header to be used in HTTP probes + properties: + name: + description: |- + The header field name. + This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + required: + - name + - value + type: object + type: array + x-kubernetes-list-type: atomic + path: + description: Path to access on the HTTP server. + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Name or number of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + scheme: + description: |- + Scheme to use for connecting to the host. + Defaults to HTTP. + type: string + required: + - port + type: object + initialDelaySeconds: + description: |- + Number of seconds after the container has started before liveness probes are initiated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + periodSeconds: + description: |- + How often (in seconds) to perform the probe. + Default to 10 seconds. Minimum value is 1. + format: int32 + type: integer + successThreshold: + description: |- + Minimum consecutive successes for the probe to be considered successful after having failed. + Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + format: int32 + type: integer + tcpSocket: + description: TCPSocket specifies an action involving + a TCP port. 
+ properties: + host: + description: 'Optional: Host name to connect + to, defaults to the pod IP.' + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Number or name of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + description: |- + Optional duration in seconds the pod needs to terminate gracefully upon probe failure. + The grace period is the duration in seconds after the processes running in the pod are sent + a termination signal and the time when the processes are forcibly halted with a kill signal. + Set this value longer than the expected cleanup time for your process. + If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this + value overrides the value provided by the pod spec. + Value must be non-negative integer. The value zero indicates stop immediately via + the kill signal (no opportunity to shut down). + This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. + Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + format: int64 + type: integer + timeoutSeconds: + description: |- + Number of seconds after which the probe times out. + Defaults to 1 second. Minimum value is 1. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + type: object + resizePolicy: + description: Resources resize policy for the container. + items: + description: ContainerResizePolicy represents resource + resize policy for the container. + properties: + resourceName: + description: |- + Name of the resource to which this resource resize policy applies. + Supported values: cpu, memory. + type: string + restartPolicy: + description: |- + Restart policy to apply when specified resource is resized. + If not specified, it defaults to NotRequired. + type: string + required: + - resourceName + - restartPolicy + type: object + type: array + x-kubernetes-list-type: atomic + resources: + description: |- + Compute Resources required by this container. + Cannot be updated. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry + in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. 
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + restartPolicy: + description: |- + RestartPolicy defines the restart behavior of individual containers in a pod. + This field may only be set for init containers, and the only allowed value is "Always". + For non-init containers or when this field is not specified, + the restart behavior is defined by the Pod's restart policy and the container type. + Setting the RestartPolicy as "Always" for the init container will have the following effect: + this init container will be continually restarted on + exit until all regular containers have terminated. Once all regular + containers have completed, all init containers with restartPolicy "Always" + will be shut down. This lifecycle differs from normal init containers and + is often referred to as a "sidecar" container. Although this init + container still starts in the init container sequence, it does not wait + for the container to complete before proceeding to the next init + container. Instead, the next init container starts immediately after this + init container is started, or after any startupProbe has successfully + completed. + type: string + securityContext: + description: |- + SecurityContext defines the security options the container should be run with. + If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. + More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + properties: + allowPrivilegeEscalation: + description: |- + AllowPrivilegeEscalation controls whether a process can gain more + privileges than its parent process. This bool directly controls if + the no_new_privs flag will be set on the container process. + AllowPrivilegeEscalation is true always when the container is: + 1) run as Privileged + 2) has CAP_SYS_ADMIN + Note that this field cannot be set when spec.os.name is windows. + type: boolean + appArmorProfile: + description: |- + appArmorProfile is the AppArmor options to use by this container. If set, this profile + overrides the pod's appArmorProfile. + Note that this field cannot be set when spec.os.name is windows. + properties: + localhostProfile: + description: |- + localhostProfile indicates a profile loaded on the node that should be used. + The profile must be preconfigured on the node to work. + Must match the loaded name of the profile. + Must be set if and only if type is "Localhost". + type: string + type: + description: |- + type indicates which kind of AppArmor profile will be applied. + Valid options are: + Localhost - a profile pre-loaded on the node. + RuntimeDefault - the container runtime's default profile. + Unconfined - no AppArmor enforcement. + type: string + required: + - type + type: object + capabilities: + description: |- + The capabilities to add/drop when running containers. 
+ Defaults to the default set of capabilities granted by the container runtime. + Note that this field cannot be set when spec.os.name is windows. + properties: + add: + description: Added capabilities + items: + description: Capability represent POSIX capabilities + type + type: string + type: array + x-kubernetes-list-type: atomic + drop: + description: Removed capabilities + items: + description: Capability represent POSIX capabilities + type + type: string + type: array + x-kubernetes-list-type: atomic + type: object + privileged: + description: |- + Run container in privileged mode. + Processes in privileged containers are essentially equivalent to root on the host. + Defaults to false. + Note that this field cannot be set when spec.os.name is windows. + type: boolean + procMount: + description: |- + procMount denotes the type of proc mount to use for the containers. + The default is DefaultProcMount which uses the container runtime defaults for + readonly paths and masked paths. + This requires the ProcMountType feature flag to be enabled. + Note that this field cannot be set when spec.os.name is windows. + type: string + readOnlyRootFilesystem: + description: |- + Whether this container has a read-only root filesystem. + Default is false. + Note that this field cannot be set when spec.os.name is windows. + type: boolean + runAsGroup: + description: |- + The GID to run the entrypoint of the container process. + Uses runtime default if unset. + May also be set in PodSecurityContext. If set in both SecurityContext and + PodSecurityContext, the value specified in SecurityContext takes precedence. + Note that this field cannot be set when spec.os.name is windows. + format: int64 + type: integer + runAsNonRoot: + description: |- + Indicates that the container must run as a non-root user. + If true, the Kubelet will validate the image at runtime to ensure that it + does not run as UID 0 (root) and fail to start the container if it does. + If unset or false, no such validation will be performed. + May also be set in PodSecurityContext. If set in both SecurityContext and + PodSecurityContext, the value specified in SecurityContext takes precedence. + type: boolean + runAsUser: + description: |- + The UID to run the entrypoint of the container process. + Defaults to user specified in image metadata if unspecified. + May also be set in PodSecurityContext. If set in both SecurityContext and + PodSecurityContext, the value specified in SecurityContext takes precedence. + Note that this field cannot be set when spec.os.name is windows. + format: int64 + type: integer + seLinuxOptions: + description: |- + The SELinux context to be applied to the container. + If unspecified, the container runtime will allocate a random SELinux context for each + container. May also be set in PodSecurityContext. If set in both SecurityContext and + PodSecurityContext, the value specified in SecurityContext takes precedence. + Note that this field cannot be set when spec.os.name is windows. + properties: + level: + description: Level is SELinux level label that + applies to the container. + type: string + role: + description: Role is a SELinux role label that + applies to the container. + type: string + type: + description: Type is a SELinux type label that + applies to the container. + type: string + user: + description: User is a SELinux user label that + applies to the container. + type: string + type: object + seccompProfile: + description: |- + The seccomp options to use by this container. 
If seccomp options are + provided at both the pod & container level, the container options + override the pod options. + Note that this field cannot be set when spec.os.name is windows. + properties: + localhostProfile: + description: |- + localhostProfile indicates a profile defined in a file on the node should be used. + The profile must be preconfigured on the node to work. + Must be a descending path, relative to the kubelet's configured seccomp profile location. + Must be set if type is "Localhost". Must NOT be set for any other type. + type: string + type: + description: |- + type indicates which kind of seccomp profile will be applied. + Valid options are: + + Localhost - a profile defined in a file on the node should be used. + RuntimeDefault - the container runtime default profile should be used. + Unconfined - no profile should be applied. + type: string + required: + - type + type: object + windowsOptions: + description: |- + The Windows specific settings applied to all containers. + If unspecified, the options from the PodSecurityContext will be used. + If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. + Note that this field cannot be set when spec.os.name is linux. + properties: + gmsaCredentialSpec: + description: |- + GMSACredentialSpec is where the GMSA admission webhook + (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the + GMSA credential spec named by the GMSACredentialSpecName field. + type: string + gmsaCredentialSpecName: + description: GMSACredentialSpecName is the name + of the GMSA credential spec to use. + type: string + hostProcess: + description: |- + HostProcess determines if a container should be run as a 'Host Process' container. + All of a Pod's containers must have the same effective HostProcess value + (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). + In addition, if HostProcess is true then HostNetwork must also be set to true. + type: boolean + runAsUserName: + description: |- + The UserName in Windows to run the entrypoint of the container process. + Defaults to the user specified in image metadata if unspecified. + May also be set in PodSecurityContext. If set in both SecurityContext and + PodSecurityContext, the value specified in SecurityContext takes precedence. + type: string + type: object + type: object + startupProbe: + description: |- + StartupProbe indicates that the Pod has successfully initialized. + If specified, no other probes are executed until this completes successfully. + If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. + This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, + when it might take a long time to load data or warm a cache, than during steady-state operation. + This cannot be updated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + properties: + exec: + description: Exec specifies the action to take. + properties: + command: + description: |- + Command is the command line to execute inside the container, the working directory for the + command is root ('/') in the container's filesystem. The command is simply exec'd, it is + not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use + a shell, you need to explicitly call out to that shell. + Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + type: object + failureThreshold: + description: |- + Minimum consecutive failures for the probe to be considered failed after having succeeded. + Defaults to 3. Minimum value is 1. + format: int32 + type: integer + grpc: + description: GRPC specifies an action involving + a GRPC port. + properties: + port: + description: Port number of the gRPC service. + Number must be in the range 1 to 65535. + format: int32 + type: integer + service: + default: "" + description: |- + Service is the name of the service to place in the gRPC HealthCheckRequest + (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). + + If this is not specified, the default behavior is defined by gRPC. + type: string + required: + - port + type: object + httpGet: + description: HTTPGet specifies the http request + to perform. + properties: + host: + description: |- + Host name to connect to, defaults to the pod IP. You probably want to set + "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. + HTTP allows repeated headers. + items: + description: HTTPHeader describes a custom + header to be used in HTTP probes + properties: + name: + description: |- + The header field name. + This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + required: + - name + - value + type: object + type: array + x-kubernetes-list-type: atomic + path: + description: Path to access on the HTTP server. + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Name or number of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + scheme: + description: |- + Scheme to use for connecting to the host. + Defaults to HTTP. + type: string + required: + - port + type: object + initialDelaySeconds: + description: |- + Number of seconds after the container has started before liveness probes are initiated. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + periodSeconds: + description: |- + How often (in seconds) to perform the probe. + Default to 10 seconds. Minimum value is 1. + format: int32 + type: integer + successThreshold: + description: |- + Minimum consecutive successes for the probe to be considered successful after having failed. + Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + format: int32 + type: integer + tcpSocket: + description: TCPSocket specifies an action involving + a TCP port. + properties: + host: + description: 'Optional: Host name to connect + to, defaults to the pod IP.' + type: string + port: + anyOf: + - type: integer + - type: string + description: |- + Number or name of the port to access on the container. + Number must be in the range 1 to 65535. + Name must be an IANA_SVC_NAME. + x-kubernetes-int-or-string: true + required: + - port + type: object + terminationGracePeriodSeconds: + description: |- + Optional duration in seconds the pod needs to terminate gracefully upon probe failure. + The grace period is the duration in seconds after the processes running in the pod are sent + a termination signal and the time when the processes are forcibly halted with a kill signal. 
+ Set this value longer than the expected cleanup time for your process. + If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this + value overrides the value provided by the pod spec. + Value must be non-negative integer. The value zero indicates stop immediately via + the kill signal (no opportunity to shut down). + This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. + Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + format: int64 + type: integer + timeoutSeconds: + description: |- + Number of seconds after which the probe times out. + Defaults to 1 second. Minimum value is 1. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes + format: int32 + type: integer + type: object + stdin: + description: |- + Whether this container should allocate a buffer for stdin in the container runtime. If this + is not set, reads from stdin in the container will always result in EOF. + Default is false. + type: boolean + stdinOnce: + description: |- + Whether the container runtime should close the stdin channel after it has been opened by + a single attach. When stdin is true the stdin stream will remain open across multiple attach + sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the + first client attaches to stdin, and then remains open and accepts data until the client disconnects, + at which time stdin is closed and remains closed until the container is restarted. If this + flag is false, a container processes that reads from stdin will never receive an EOF. + Default is false + type: boolean + terminationMessagePath: + description: |- + Optional: Path at which the file to which the container's termination message + will be written is mounted into the container's filesystem. + Message written is intended to be brief final status, such as an assertion failure message. + Will be truncated by the node if greater than 4096 bytes. The total message length across + all containers will be limited to 12kb. + Defaults to /dev/termination-log. + Cannot be updated. + type: string + terminationMessagePolicy: + description: |- + Indicate how the termination message should be populated. File will use the contents of + terminationMessagePath to populate the container status message on both success and failure. + FallbackToLogsOnError will use the last chunk of container log output if the termination + message file is empty and the container exited with an error. + The log output is limited to 2048 bytes or 80 lines, whichever is smaller. + Defaults to File. + Cannot be updated. + type: string + tty: + description: |- + Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. + Default is false. + type: boolean + volumeDevices: + description: volumeDevices is the list of block devices + to be used by the container. + items: + description: volumeDevice describes a mapping of a + raw block device within a container. + properties: + devicePath: + description: devicePath is the path inside of + the container that the device will be mapped + to. + type: string + name: + description: name must match the name of a persistentVolumeClaim + in the pod + type: string + required: + - devicePath + - name + type: object + type: array + x-kubernetes-list-map-keys: + - devicePath + x-kubernetes-list-type: map + volumeMounts: + description: |- + Pod volumes to mount into the container's filesystem. + Cannot be updated. 
+ items: + description: VolumeMount describes a mounting of a + Volume within a container. + properties: + mountPath: + description: |- + Path within the container at which the volume should be mounted. Must + not contain ':'. + type: string + mountPropagation: + description: |- + mountPropagation determines how mounts are propagated from the host + to container and the other way around. + When not set, MountPropagationNone is used. + This field is beta in 1.10. + When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified + (which defaults to None). + type: string + name: + description: This must match the Name of a Volume. + type: string + readOnly: + description: |- + Mounted read-only if true, read-write otherwise (false or unspecified). + Defaults to false. + type: boolean + recursiveReadOnly: + description: |- + RecursiveReadOnly specifies whether read-only mounts should be handled + recursively. + + If ReadOnly is false, this field has no meaning and must be unspecified. + + If ReadOnly is true, and this field is set to Disabled, the mount is not made + recursively read-only. If this field is set to IfPossible, the mount is made + recursively read-only, if it is supported by the container runtime. If this + field is set to Enabled, the mount is made recursively read-only if it is + supported by the container runtime, otherwise the pod will not be started and + an error will be generated to indicate the reason. + + If this field is set to IfPossible or Enabled, MountPropagation must be set to + None (or be unspecified, which defaults to None). + + If this field is not specified, it is treated as an equivalent of Disabled. + type: string + subPath: + description: |- + Path within the volume from which the container's volume should be mounted. + Defaults to "" (volume's root). + type: string + subPathExpr: + description: |- + Expanded path within the volume from which the container's volume should be mounted. + Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. + Defaults to "" (volume's root). + SubPathExpr and SubPath are mutually exclusive. + type: string + required: + - mountPath + - name + type: object + type: array + x-kubernetes-list-map-keys: + - mountPath + x-kubernetes-list-type: map + workingDir: + description: |- + Container's working directory. + If not specified, the container runtime's default will be used, which + might be configured in the container image. + Cannot be updated. + type: string + required: + - name + type: object + type: array + customTLSSecret: + description: |- + A secret projection containing a certificate and key with which to encrypt + connections to PgBouncer. The "tls.crt", "tls.key", and "ca.crt" paths must + be PEM-encoded certificates and keys. Changing this value causes PgBouncer + to restart. + More info: https://kubernetes.io/docs/concepts/configuration/secret/#projection-of-secret-keys-to-specific-paths + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' 
path or start with '..'. + items: + description: Maps a string key to a path within a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether the Secret + or its key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + image: + description: |- + Name of a container image that can run PgBouncer 1.15 or newer. Changing + this value causes PgBouncer to restart. The image may also be set using + the RELATED_IMAGE_PGBOUNCER environment variable. + More info: https://kubernetes.io/docs/concepts/containers/images + type: string + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + minAvailable: + anyOf: + - type: integer + - type: string + description: |- + Minimum number of pods that should be available at a time. + Defaults to one when the replicas field is greater than one. + x-kubernetes-int-or-string: true + port: + default: 5432 + description: |- + Port on which PgBouncer should listen for client connections. Changing + this value causes PgBouncer to restart. + format: int32 + minimum: 1024 + type: integer + priorityClassName: + description: |- + Priority class name for the pgBouncer pod. Changing this value causes + PostgreSQL to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + type: string + replicas: + default: 1 + description: Number of desired PgBouncer pods. + format: int32 + minimum: 0 + type: integer + resources: + description: |- + Compute resources of a PgBouncer container. Changing this value causes + PgBouncer to restart. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. 
+ properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + service: + description: Specification of the service that exposes PgBouncer. + properties: + externalTrafficPolicy: + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-policies' + enum: + - Cluster + - Local + type: string + internalTrafficPolicy: + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-policies' + enum: + - Cluster + - Local + type: string + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + nodePort: + description: |- + The port on which this service is exposed when type is NodePort or + LoadBalancer. Value must be in-range and not in use or the operation will + fail. If unspecified, a port will be allocated if this Service requires one. + - https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + format: int32 + type: integer + type: + default: ClusterIP + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types' + enum: + - ClusterIP + - NodePort + - LoadBalancer + type: string + type: object + sidecars: + description: Configuration for pgBouncer sidecar containers + properties: + pgbouncerConfig: + description: Defines the configuration for the pgBouncer + config sidecar container + properties: + resources: + description: Resource requirements for a sidecar container + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry + in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. 
It makes that resource available + inside a container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + type: object + type: object + tolerations: + description: |- + Tolerations of a PgBouncer pod. Changing this value causes PgBouncer to + restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple using the matching operator . + properties: + effect: + description: |- + Effect indicates the taint effect to match. Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists and Equal. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + topologySpreadConstraints: + description: |- + Topology spread constraints of a PgBouncer pod. Changing this value causes + PgBouncer to restart. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ + items: + description: TopologySpreadConstraint specifies how to spread + matching pods among the given topology. + properties: + labelSelector: + description: |- + LabelSelector is used to find matching pods. + Pods that match this label selector are counted to determine the number of pods + in their corresponding topology domain. 
+ properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select the pods over which + spreading will be calculated. The keys are used to lookup values from the + incoming pod labels, those key-value labels are ANDed with labelSelector + to select the group of existing pods over which spreading will be calculated + for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. + MatchLabelKeys cannot be set when LabelSelector isn't set. + Keys that don't exist in the incoming pod labels will + be ignored. A null or empty list means only match against labelSelector. + + This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + description: |- + MaxSkew describes the degree to which pods may be unevenly distributed. + When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference + between the number of matching pods in the target topology and the global minimum. + The global minimum is the minimum number of matching pods in an eligible domain + or zero if the number of eligible domains is less than MinDomains. + For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same + labelSelector spread as 2/2/1: + In this case, the global minimum is 1. + | zone1 | zone2 | zone3 | + | P P | P P | P | + - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; + scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) + violate MaxSkew(1). + - if MaxSkew is 2, incoming pod can be scheduled onto any zone. + When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence + to topologies that satisfy it. + It's a required field. Default value is 1 and 0 is not allowed. + format: int32 + type: integer + minDomains: + description: |- + MinDomains indicates a minimum number of eligible domains. 
+ When the number of eligible domains with matching topology keys is less than minDomains, + Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. + And when the number of eligible domains with matching topology keys equals or greater than minDomains, + this value has no effect on scheduling. + As a result, when the number of eligible domains is less than minDomains, + scheduler won't schedule more than maxSkew Pods to those domains. + If value is nil, the constraint behaves as if MinDomains is equal to 1. + Valid values are integers greater than 0. + When value is not nil, WhenUnsatisfiable must be DoNotSchedule. + + For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same + labelSelector spread as 2/2/2: + | zone1 | zone2 | zone3 | + | P P | P P | P P | + The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. + In this situation, new pod with the same labelSelector cannot be scheduled, + because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, + it will violate MaxSkew. + format: int32 + type: integer + nodeAffinityPolicy: + description: |- + NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector + when calculating pod topology spread skew. Options are: + - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. + - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. + + If this value is nil, the behavior is equivalent to the Honor policy. + This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. + type: string + nodeTaintsPolicy: + description: |- + NodeTaintsPolicy indicates how we will treat node taints when calculating + pod topology spread skew. Options are: + - Honor: nodes without taints, along with tainted nodes for which the incoming pod + has a toleration, are included. + - Ignore: node taints are ignored. All nodes are included. + + If this value is nil, the behavior is equivalent to the Ignore policy. + This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. + type: string + topologyKey: + description: |- + TopologyKey is the key of node labels. Nodes that have a label with this key + and identical values are considered to be in the same topology. + We consider each as a "bucket", and try to put balanced number + of pods into each bucket. + We define a domain as a particular instance of a topology. + Also, we define an eligible domain as a domain whose nodes meet the requirements of + nodeAffinityPolicy and nodeTaintsPolicy. + e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. + And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. + It's a required field. + type: string + whenUnsatisfiable: + description: |- + WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy + the spread constraint. + - DoNotSchedule (default) tells the scheduler not to schedule it. + - ScheduleAnyway tells the scheduler to schedule the pod in any location, + but giving higher precedence to topologies that would help reduce the + skew. + A constraint is considered "Unsatisfiable" for an incoming pod + if and only if every possible node assignment for that pod would violate + "MaxSkew" on some topology. 
+ For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same + labelSelector spread as 3/1/1: + | zone1 | zone2 | zone3 | + | P P P | P | P | + If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled + to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies + MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler + won't make it *more* imbalanced. + It's a required field. + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + type: object + required: + - pgBouncer + type: object + replicaService: + description: Specification of the service that exposes PostgreSQL + replica instances + properties: + externalTrafficPolicy: + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-policies' + enum: + - Cluster + - Local + type: string + internalTrafficPolicy: + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-policies' + enum: + - Cluster + - Local + type: string + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + nodePort: + description: |- + The port on which this service is exposed when type is NodePort or + LoadBalancer. Value must be in-range and not in use or the operation will + fail. If unspecified, a port will be allocated if this Service requires one. + - https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + format: int32 + type: integer + type: + default: ClusterIP + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types' + enum: + - ClusterIP + - NodePort + - LoadBalancer + type: string + type: object + service: + description: Specification of the service that exposes the PostgreSQL + primary instance. + properties: + externalTrafficPolicy: + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-policies' + enum: + - Cluster + - Local + type: string + internalTrafficPolicy: + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-policies' + enum: + - Cluster + - Local + type: string + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + nodePort: + description: |- + The port on which this service is exposed when type is NodePort or + LoadBalancer. Value must be in-range and not in use or the operation will + fail. If unspecified, a port will be allocated if this Service requires one. + - https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + format: int32 + type: integer + type: + default: ClusterIP + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types' + enum: + - ClusterIP + - NodePort + - LoadBalancer + type: string + type: object + shutdown: + description: |- + Whether or not the PostgreSQL cluster should be stopped. + When this is true, workloads are scaled to zero and CronJobs + are suspended. + Other resources, such as Services and Volumes, remain in place. 
+ type: boolean + standby: + description: Run this cluster as a read-only copy of an existing cluster + or archive. + properties: + enabled: + default: true + description: |- + Whether or not the PostgreSQL cluster should be read-only. When this is + true, WAL files are applied from a pgBackRest repository or another + PostgreSQL server. + type: boolean + host: + description: Network address of the PostgreSQL server to follow + via streaming replication. + type: string + port: + description: Network port of the PostgreSQL server to follow via + streaming replication. + format: int32 + minimum: 1024 + type: integer + repoName: + description: The name of the pgBackRest repository to follow for + WAL files. + pattern: ^repo[1-4] + type: string + type: object + supplementalGroups: + description: |- + A list of group IDs applied to the process of a container. These can be + useful when accessing shared file systems with constrained permissions. + More info: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context + items: + format: int64 + maximum: 2147483647 + minimum: 1 + type: integer + type: array + userInterface: + description: The specification of a user interface that connects to + PostgreSQL. + properties: + pgAdmin: + description: Defines a pgAdmin user interface. + properties: + affinity: + description: |- + Scheduling constraints of a pgAdmin pod. Changing this value causes + pgAdmin to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + properties: + nodeAffinity: + description: Describes node affinity scheduling rules + for the pod. + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node matches the corresponding matchExpressions; the + node(s) with the highest sum are the most preferred. + items: + description: |- + An empty preferred scheduling term matches all objects with implicit weight 0 + (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + properties: + preference: + description: A node selector term, associated + with the corresponding weight. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. 
+ This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + weight: + description: Weight associated with matching + the corresponding nodeSelectorTerm, in the + range 1-100. + format: int32 + type: integer + required: + - preference + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to an update), the system + may or may not try to eventually evict the pod from its node. + properties: + nodeSelectorTerms: + description: Required. A list of node selector + terms. The terms are ORed. + items: + description: |- + A null or empty node selector term matches no objects. The requirements of + them are ANDed. + The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + properties: + matchExpressions: + description: A list of node selector requirements + by node's labels. + items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchFields: + description: A list of node selector requirements + by node's fields. 
+ items: + description: |- + A node selector requirement is a selector that contains values, a key, and an operator + that relates the key and values. + properties: + key: + description: The label key that the + selector applies to. + type: string + operator: + description: |- + Represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: |- + An array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. If the operator is Gt or Lt, the values + array must have a single element, which will be interpreted as an integer. + This array is replaced during a strategic merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + type: object + x-kubernetes-map-type: atomic + type: array + x-kubernetes-list-type: atomic + required: + - nodeSelectorTerms + type: object + x-kubernetes-map-type: atomic + type: object + podAffinity: + description: Describes pod affinity scheduling rules (e.g. + co-locate this pod in the same node, zone, etc. as some + other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred + node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. 
This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + podAntiAffinity: + description: Describes pod anti-affinity scheduling rules + (e.g. avoid putting this pod in the same node, zone, + etc. as some other pod(s)). + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: |- + The scheduler will prefer to schedule pods to nodes that satisfy + the anti-affinity expressions specified by this field, but it may choose + a node that violates one or more of the expressions. The node that is + most preferred is the one with the greatest sum of weights, i.e. + for each node that meets all of the scheduling requirements (resource + request, requiredDuringScheduling anti-affinity expressions, etc.), + compute a sum by iterating through the elements of this field and adding + "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the + node(s) with the highest sum are the most preferred. + items: + description: The weights of all of the matched WeightedPodAffinityTerm + fields are added per-node to find the most preferred + node(s) + properties: + podAffinityTerm: + description: Required. A pod affinity term, + associated with the corresponding weight. + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. 
If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The + requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label + key that the selector applies + to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + weight: + description: |- + weight associated with matching the corresponding podAffinityTerm, + in the range 1-100. + format: int32 + type: integer + required: + - podAffinityTerm + - weight + type: object + type: array + x-kubernetes-list-type: atomic + requiredDuringSchedulingIgnoredDuringExecution: + description: |- + If the anti-affinity requirements specified by this field are not met at + scheduling time, the pod will not be scheduled onto the node. + If the anti-affinity requirements specified by this field cease to be met + at some point during pod execution (e.g. due to a pod label update), the + system may or may not try to eventually evict the pod from its node. + When there are multiple elements, the lists of nodes corresponding to each + podAffinityTerm are intersected, i.e. all terms must be satisfied. + items: + description: |- + Defines a set of pods (namely those matching the labelSelector + relative to the given namespace(s)) that this pod should be + co-located (affinity) or not co-located (anti-affinity) with, + where co-located is defined as running on a node whose value of + the label with key matches that of any node on which + a pod of the set of pods is running + properties: + labelSelector: + description: |- + A label query over a set of resources, in this case pods. + If it's null, this PodAffinityTerm matches with no Pods. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. 
This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key in (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both matchLabelKeys and labelSelector. + Also, matchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + mismatchLabelKeys: + description: |- + MismatchLabelKeys is a set of pod label keys to select which pods will + be taken into consideration. The keys are used to lookup values from the + incoming pod labels, those key-value labels are merged with `labelSelector` as `key notin (value)` + to select the group of existing pods which pods will be taken into consideration + for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming + pod labels will be ignored. The default value is empty. + The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. + Also, mismatchLabelKeys cannot be set when labelSelector isn't set. + This is an alpha field and requires enabling MatchLabelKeysInPodAffinity feature gate. + items: + type: string + type: array + x-kubernetes-list-type: atomic + namespaceSelector: + description: |- + A label query over the set of namespaces that the term applies to. + The term is applied to the union of the namespaces selected by this field + and the ones listed in the namespaces field. + null selector and null or empty namespaces list means "this pod's namespace". + An empty selector ({}) matches all namespaces. + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. 
+ items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + namespaces: + description: |- + namespaces specifies a static list of namespace names that the term applies to. + The term is applied to the union of the namespaces listed in this field + and the ones selected by namespaceSelector. + null or empty namespaces list and null namespaceSelector means "this pod's namespace". + items: + type: string + type: array + x-kubernetes-list-type: atomic + topologyKey: + description: |- + This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching + the labelSelector in the specified namespaces, where co-located is defined as running on a node + whose value of the label with key topologyKey matches that of any node on which any of the + selected pods is running. + Empty topologyKey is not allowed. + type: string + required: + - topologyKey + type: object + type: array + x-kubernetes-list-type: atomic + type: object + type: object + config: + description: |- + Configuration settings for the pgAdmin process. Changes to any of these + values will be loaded without validation. Be careful, as + you may put pgAdmin into an unusable state. + properties: + files: + description: |- + Files allows the user to mount projected volumes into the pgAdmin + container so that files can be referenced by pgAdmin as needed. + items: + description: Projection that may be projected along + with other supported volume types + properties: + clusterTrustBundle: + description: |- + ClusterTrustBundle allows a pod to access the `.spec.trustBundle` field + of ClusterTrustBundle objects in an auto-updating file. + + Alpha, gated by the ClusterTrustBundleProjection feature gate. + + ClusterTrustBundle objects can either be selected by name, or by the + combination of signer name and a label selector. + + Kubelet performs aggressive normalization of the PEM contents written + into the pod filesystem. Esoteric PEM features such as inter-block + comments and block headers are stripped. Certificates are deduplicated. + The ordering of certificates within the file is arbitrary, and Kubelet + may change the order over time. + properties: + labelSelector: + description: |- + Select all ClusterTrustBundles that match this label selector. Only has + effect if signerName is set. Mutually-exclusive with name. If unset, + interpreted as "match nothing". If set but empty, interpreted as "match + everything". + properties: + matchExpressions: + description: matchExpressions is a list + of label selector requirements. The requirements + are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key + that the selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. 
+ type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + name: + description: |- + Select a single ClusterTrustBundle by object name. Mutually-exclusive + with signerName and labelSelector. + type: string + optional: + description: |- + If true, don't block pod startup if the referenced ClusterTrustBundle(s) + aren't available. If using name, then the named ClusterTrustBundle is + allowed not to exist. If using signerName, then the combination of + signerName and labelSelector is allowed to match zero + ClusterTrustBundles. + type: boolean + path: + description: Relative path from the volume root + to write the bundle. + type: string + signerName: + description: |- + Select all ClusterTrustBundles that match this signer name. + Mutually-exclusive with name. The contents of all selected + ClusterTrustBundles will be unified and deduplicated. + type: string + required: + - path + type: object + configMap: + description: configMap information about the configMap + data to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + ConfigMap will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the ConfigMap, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. + Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. 
+ More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional specify whether the ConfigMap + or its keys must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + downwardAPI: + description: downwardAPI information about the downwardAPI + data to project + properties: + items: + description: Items is a list of DownwardAPIVolume + file + items: + description: DownwardAPIVolumeFile represents + information to create the file containing + the pod field + properties: + fieldRef: + description: 'Required: Selects a field + of the pod: only annotations, labels, + name, namespace and uid are supported.' + properties: + apiVersion: + description: Version of the schema + the FieldPath is written in terms + of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to + select in the specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + mode: + description: |- + Optional: mode bits used to set permissions on this file, must be an octal value + between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: 'Required: Path is the relative + path name of the file to be created. + Must not be absolute or contain the + ''..'' path. Must be utf-8 encoded. + The first item of the relative path + must not start with ''..''' + type: string + resourceFieldRef: + description: |- + Selects a resource of the container: only resources limits and requests + (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. + properties: + containerName: + description: 'Container name: required + for volumes, optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output + format of the exposed resources, + defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to + select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + required: + - path + type: object + type: array + x-kubernetes-list-type: atomic + type: object + secret: + description: secret information about the secret + data to project + properties: + items: + description: |- + items if unspecified, each key-value pair in the Data field of the referenced + Secret will be projected into the volume as a file whose name is the + key and content is the value. If specified, the listed keys will be + projected into the specified paths, and unlisted keys will not be + present. If a key is specified which is not present in the Secret, + the volume setup will error unless it is marked optional. Paths must be + relative and may not contain the '..' path or start with '..'. + items: + description: Maps a string key to a path within + a volume. + properties: + key: + description: key is the key to project. + type: string + mode: + description: |- + mode is Optional: mode bits used to set permissions on this file. 
+ Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. + YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. + If not specified, the volume defaultMode will be used. + This might be in conflict with other options that affect the file + mode, like fsGroup, and the result can be other mode bits set. + format: int32 + type: integer + path: + description: |- + path is the relative path of the file to map the key to. + May not be an absolute path. + May not contain the path element '..'. + May not start with the string '..'. + type: string + required: + - key + - path + type: object + type: array + x-kubernetes-list-type: atomic + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: optional field specify whether + the Secret or its key must be defined + type: boolean + type: object + x-kubernetes-map-type: atomic + serviceAccountToken: + description: serviceAccountToken is information + about the serviceAccountToken data to project + properties: + audience: + description: |- + audience is the intended audience of the token. A recipient of a token + must identify itself with an identifier specified in the audience of the + token, and otherwise should reject the token. The audience defaults to the + identifier of the apiserver. + type: string + expirationSeconds: + description: |- + expirationSeconds is the requested duration of validity of the service + account token. As the token approaches expiration, the kubelet volume + plugin will proactively rotate the service account token. The kubelet will + start trying to rotate the token if the token is older than 80 percent of + its time to live or if the token is older than 24 hours.Defaults to 1 hour + and must be at least 10 minutes. + format: int64 + type: integer + path: + description: |- + path is the path relative to the mount point of the file to project the + token into. + type: string + required: + - path + type: object + type: object + type: array + ldapBindPassword: + description: |- + A Secret containing the value for the LDAP_BIND_PASSWORD setting. + More info: https://www.pgadmin.org/docs/pgadmin4/latest/ldap.html + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + default: "" + description: |- + Name of the referent. + This field is effectively required, but due to backwards compatibility is + allowed to be empty. Instances of this type with an empty value here are + almost certainly wrong. + More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + type: string + optional: + description: Specify whether the Secret or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + settings: + description: |- + Settings for the pgAdmin server process. Keys should be uppercase and + values must be constants. + More info: https://www.pgadmin.org/docs/pgadmin4/latest/config_py.html + type: object + x-kubernetes-preserve-unknown-fields: true + type: object + dataVolumeClaimSpec: + description: |- + Defines a PersistentVolumeClaim for pgAdmin data. 
+ More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes + properties: + accessModes: + description: |- + accessModes contains the desired access modes the volume should have. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 + items: + type: string + type: array + x-kubernetes-list-type: atomic + dataSource: + description: |- + dataSource field can be used to specify either: + * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) + * An existing PVC (PersistentVolumeClaim) + If the provisioner or an external controller can support the specified data source, + it will create a new volume based on the contents of the specified data source. + When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, + and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. + If the namespace is specified, then dataSourceRef will not be copied to dataSource. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + required: + - kind + - name + type: object + x-kubernetes-map-type: atomic + dataSourceRef: + description: |- + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty + volume is desired. This may be any object from a non-empty API group (non + core object) or a PersistentVolumeClaim object. + When this field is specified, volume binding will only succeed if the type of + the specified object matches some installed volume populator or dynamic + provisioner. + This field will replace the functionality of the dataSource field and as such + if both fields are non-empty, they must have the same value. For backwards + compatibility, when namespace isn't specified in dataSourceRef, + both fields (dataSource and dataSourceRef) will be set to the same + value automatically if one of them is empty and the other is non-empty. + When namespace is specified in dataSourceRef, + dataSource isn't set to the same value and must be empty. + There are three important differences between dataSource and dataSourceRef: + * While dataSource only allows two specific types of objects, dataSourceRef + allows any non-core object, as well as PersistentVolumeClaim objects. + * While dataSource ignores disallowed values (dropping them), dataSourceRef + preserves all values, and generates an error if a disallowed value is + specified. + * While dataSource only allows local objects, dataSourceRef allows objects + in any namespaces. + (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. + (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + properties: + apiGroup: + description: |- + APIGroup is the group for the resource being referenced. + If APIGroup is not specified, the specified Kind must be in the core API group. + For any other third-party types, APIGroup is required. 
+ type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + namespace: + description: |- + Namespace is the namespace of resource being referenced + Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. + (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + type: string + required: + - kind + - name + type: object + resources: + description: |- + resources represents the minimum resources the volume should have. + If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements + that are lower than previous value but must still be higher than capacity recorded in the + status field of the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources + properties: + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + selector: + description: selector is a label query over volumes to + consider for binding. + properties: + matchExpressions: + description: matchExpressions is a list of label selector + requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the selector + applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". 
The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + storageClassName: + description: |- + storageClassName is the name of the StorageClass required by the claim. + More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 + type: string + volumeAttributesClassName: + description: |- + volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. + If specified, the CSI driver will create or update the volume with the attributes defined + in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, + it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass + will be applied to the claim but it's not allowed to reset this field to empty string once it is set. + If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass + will be set by the persistentvolume controller if it exists. + If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be + set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource + exists. + More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ + (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + type: string + volumeMode: + description: |- + volumeMode defines what type of volume is required by the claim. + Value of Filesystem is implied when not included in claim spec. + type: string + volumeName: + description: volumeName is the binding reference to the + PersistentVolume backing this claim. + type: string + type: object + image: + description: |- + Name of a container image that can run pgAdmin 4. Changing this value causes + pgAdmin to restart. The image may also be set using the RELATED_IMAGE_PGADMIN + environment variable. + More info: https://kubernetes.io/docs/concepts/containers/images + type: string + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + priorityClassName: + description: |- + Priority class name for the pgAdmin pod. Changing this value causes pgAdmin + to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + type: string + replicas: + default: 1 + description: Number of desired pgAdmin pods. + format: int32 + maximum: 1 + minimum: 0 + type: integer + resources: + description: |- + Compute resources of a pgAdmin container. Changing this value causes + pgAdmin to restart. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers + properties: + claims: + description: |- + Claims lists the names of resources, defined in spec.resourceClaims, + that are used by this container. + + This is an alpha field and requires enabling the + DynamicResourceAllocation feature gate. + + This field is immutable. It can only be set for containers. + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: |- + Name must match the name of one entry in pod.spec.resourceClaims of + the Pod where this field is used. It makes that resource available + inside a container. 
+ type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Limits describes the maximum amount of compute resources allowed. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: |- + Requests describes the minimum amount of compute resources required. + If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, + otherwise to an implementation-defined value. Requests cannot exceed Limits. + More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ + type: object + type: object + service: + description: Specification of the service that exposes pgAdmin. + properties: + externalTrafficPolicy: + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-policies' + enum: + - Cluster + - Local + type: string + internalTrafficPolicy: + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-policies' + enum: + - Cluster + - Local + type: string + metadata: + description: Metadata contains metadata for custom resources + properties: + annotations: + additionalProperties: + type: string + type: object + labels: + additionalProperties: + type: string + type: object + type: object + nodePort: + description: |- + The port on which this service is exposed when type is NodePort or + LoadBalancer. Value must be in-range and not in use or the operation will + fail. If unspecified, a port will be allocated if this Service requires one. + - https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + format: int32 + type: integer + type: + default: ClusterIP + description: 'More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types' + enum: + - ClusterIP + - NodePort + - LoadBalancer + type: string + type: object + tolerations: + description: |- + Tolerations of a pgAdmin pod. Changing this value causes pgAdmin to restart. + More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + items: + description: |- + The pod this Toleration is attached to tolerates any taint that matches + the triple using the matching operator . + properties: + effect: + description: |- + Effect indicates the taint effect to match. Empty means match all taint effects. + When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: |- + Key is the taint key that the toleration applies to. Empty means match all taint keys. + If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: |- + Operator represents a key's relationship to the value. + Valid operators are Exists and Equal. Defaults to Equal. + Exists is equivalent to wildcard for value, so that a pod can + tolerate all taints of a particular category. 
+ type: string + tolerationSeconds: + description: |- + TolerationSeconds represents the period of time the toleration (which must be + of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, + it is not set, which means tolerate the taint forever (do not evict). Zero and + negative values will be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: |- + Value is the taint value the toleration matches to. + If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + type: object + type: array + topologySpreadConstraints: + description: |- + Topology spread constraints of a pgAdmin pod. Changing this value causes + pgAdmin to restart. + More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ + items: + description: TopologySpreadConstraint specifies how to spread + matching pods among the given topology. + properties: + labelSelector: + description: |- + LabelSelector is used to find matching pods. + Pods that match this label selector are counted to determine the number of pods + in their corresponding topology domain. + properties: + matchExpressions: + description: matchExpressions is a list of label + selector requirements. The requirements are ANDed. + items: + description: |- + A label selector requirement is a selector that contains values, a key, and an operator that + relates the key and values. + properties: + key: + description: key is the label key that the + selector applies to. + type: string + operator: + description: |- + operator represents a key's relationship to a set of values. + Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: |- + values is an array of string values. If the operator is In or NotIn, + the values array must be non-empty. If the operator is Exists or DoesNotExist, + the values array must be empty. This array is replaced during a strategic + merge patch. + items: + type: string + type: array + x-kubernetes-list-type: atomic + required: + - key + - operator + type: object + type: array + x-kubernetes-list-type: atomic + matchLabels: + additionalProperties: + type: string + description: |- + matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels + map is equivalent to an element of matchExpressions, whose key field is "key", the + operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + type: object + x-kubernetes-map-type: atomic + matchLabelKeys: + description: |- + MatchLabelKeys is a set of pod label keys to select the pods over which + spreading will be calculated. The keys are used to lookup values from the + incoming pod labels, those key-value labels are ANDed with labelSelector + to select the group of existing pods over which spreading will be calculated + for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. + MatchLabelKeys cannot be set when LabelSelector isn't set. + Keys that don't exist in the incoming pod labels will + be ignored. A null or empty list means only match against labelSelector. + + This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). + items: + type: string + type: array + x-kubernetes-list-type: atomic + maxSkew: + description: |- + MaxSkew describes the degree to which pods may be unevenly distributed. 
+ When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference + between the number of matching pods in the target topology and the global minimum. + The global minimum is the minimum number of matching pods in an eligible domain + or zero if the number of eligible domains is less than MinDomains. + For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same + labelSelector spread as 2/2/1: + In this case, the global minimum is 1. + | zone1 | zone2 | zone3 | + | P P | P P | P | + - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; + scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) + violate MaxSkew(1). + - if MaxSkew is 2, incoming pod can be scheduled onto any zone. + When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence + to topologies that satisfy it. + It's a required field. Default value is 1 and 0 is not allowed. + format: int32 + type: integer + minDomains: + description: |- + MinDomains indicates a minimum number of eligible domains. + When the number of eligible domains with matching topology keys is less than minDomains, + Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. + And when the number of eligible domains with matching topology keys equals or greater than minDomains, + this value has no effect on scheduling. + As a result, when the number of eligible domains is less than minDomains, + scheduler won't schedule more than maxSkew Pods to those domains. + If value is nil, the constraint behaves as if MinDomains is equal to 1. + Valid values are integers greater than 0. + When value is not nil, WhenUnsatisfiable must be DoNotSchedule. + + For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same + labelSelector spread as 2/2/2: + | zone1 | zone2 | zone3 | + | P P | P P | P P | + The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. + In this situation, new pod with the same labelSelector cannot be scheduled, + because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, + it will violate MaxSkew. + format: int32 + type: integer + nodeAffinityPolicy: + description: |- + NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector + when calculating pod topology spread skew. Options are: + - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. + - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. + + If this value is nil, the behavior is equivalent to the Honor policy. + This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. + type: string + nodeTaintsPolicy: + description: |- + NodeTaintsPolicy indicates how we will treat node taints when calculating + pod topology spread skew. Options are: + - Honor: nodes without taints, along with tainted nodes for which the incoming pod + has a toleration, are included. + - Ignore: node taints are ignored. All nodes are included. + + If this value is nil, the behavior is equivalent to the Ignore policy. + This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. + type: string + topologyKey: + description: |- + TopologyKey is the key of node labels. Nodes that have a label with this key + and identical values are considered to be in the same topology. 
+ We consider each as a "bucket", and try to put balanced number + of pods into each bucket. + We define a domain as a particular instance of a topology. + Also, we define an eligible domain as a domain whose nodes meet the requirements of + nodeAffinityPolicy and nodeTaintsPolicy. + e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. + And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. + It's a required field. + type: string + whenUnsatisfiable: + description: |- + WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy + the spread constraint. + - DoNotSchedule (default) tells the scheduler not to schedule it. + - ScheduleAnyway tells the scheduler to schedule the pod in any location, + but giving higher precedence to topologies that would help reduce the + skew. + A constraint is considered "Unsatisfiable" for an incoming pod + if and only if every possible node assignment for that pod would violate + "MaxSkew" on some topology. + For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same + labelSelector spread as 3/1/1: + | zone1 | zone2 | zone3 | + | P P P | P | P | + If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled + to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies + MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler + won't make it *more* imbalanced. + It's a required field. + type: string + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + type: object + type: array + required: + - dataVolumeClaimSpec + type: object + required: + - pgAdmin + type: object + users: + description: |- + Users to create inside PostgreSQL and the databases they should access. + The default creates one user that can access one database matching the + PostgresCluster name. An empty list creates no users. Removing a user + from this list does NOT drop the user nor revoke their access. + items: + properties: + databases: + description: |- + Databases to which this user can connect and create objects. Removing a + database from this list does NOT revoke access. This field is ignored for + the "postgres" user. + items: + description: |- + PostgreSQL identifiers are limited in length but may contain any character. + More info: https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS + maxLength: 63 + minLength: 1 + type: string + type: array + x-kubernetes-list-type: set + name: + description: |- + The name of this PostgreSQL user. The value may contain only lowercase + letters, numbers, and hyphen so that it fits into Kubernetes metadata. + maxLength: 63 + minLength: 1 + pattern: ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$ + type: string + options: + description: |- + ALTER ROLE options except for PASSWORD. This field is ignored for the + "postgres" user. + More info: https://www.postgresql.org/docs/current/role-attributes.html + maxLength: 200 + pattern: ^[^;]*$ + type: string + x-kubernetes-validations: + - message: cannot assign password + rule: '!self.matches("(?i:PASSWORD)")' + - message: cannot contain comments + rule: '!self.matches("(?:--|/[*]|[*]/)")' + password: + description: Properties of the password generated for this user. + properties: + type: + default: ASCII + description: |- + Type of password to generate. Defaults to ASCII. Valid options are ASCII + and AlphaNumeric. 
+ "ASCII" passwords contain letters, numbers, and symbols from the US-ASCII character set. + "AlphaNumeric" passwords contain letters and numbers from the US-ASCII character set. + enum: + - ASCII + - AlphaNumeric + type: string + required: + - type + type: object + required: + - name + type: object + maxItems: 64 + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + required: + - instances + - postgresVersion + type: object + status: + description: PostgresClusterStatus defines the observed state of PostgresCluster + properties: + conditions: + description: |- + conditions represent the observations of postgrescluster's current state. + Known .status.conditions.type are: "PersistentVolumeResizing", + "Progressing", "ProxyAvailable" + items: + description: Condition contains details for one aspect of the current + state of this API Resource. + properties: + lastTransitionTime: + description: |- + lastTransitionTime is the last time the condition transitioned from one status to another. + This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + format: date-time + type: string + message: + description: |- + message is a human readable message indicating details about the transition. + This may be an empty string. + maxLength: 32768 + type: string + observedGeneration: + description: |- + observedGeneration represents the .metadata.generation that the condition was set based upon. + For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date + with respect to the current state of the instance. + format: int64 + minimum: 0 + type: integer + reason: + description: |- + reason contains a programmatic identifier indicating the reason for the condition's last transition. + Producers of specific condition types may define expected values and meanings for this field, + and whether the values are considered a guaranteed API. + The value should be a CamelCase string. + This field may not be empty. + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + type: string + status: + description: status of the condition, one of True, False, Unknown. + enum: + - "True" + - "False" + - Unknown + type: string + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + type: string + required: + - lastTransitionTime + - message + - reason + - status + - type + type: object + type: array + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + databaseInitSQL: + description: DatabaseInitSQL state of custom database initialization + in the cluster + type: string + databaseRevision: + description: Identifies the databases that have been installed into + PostgreSQL. + type: string + instances: + description: Current state of PostgreSQL instances. + items: + properties: + desiredPGDataVolume: + additionalProperties: + type: string + description: Desired Size of the pgData volume + type: object + name: + type: string + readyReplicas: + description: Total number of ready pods. + format: int32 + type: integer + replicas: + description: Total number of pods. + format: int32 + type: integer + updatedReplicas: + description: Total number of pods that have the desired specification. 
+ format: int32 + type: integer + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + monitoring: + description: Current state of PostgreSQL cluster monitoring tool configuration + properties: + exporterConfiguration: + type: string + type: object + observedGeneration: + description: observedGeneration represents the .metadata.generation + on which the status was based. + format: int64 + minimum: 0 + type: integer + patroni: + properties: + switchover: + description: Tracks the execution of the switchover requests. + type: string + switchoverTimeline: + description: Tracks the current timeline during switchovers + format: int64 + type: integer + systemIdentifier: + description: The PostgreSQL system identifier reported by Patroni. + type: string + type: object + pgbackrest: + description: Status information for pgBackRest + properties: + manualBackup: + description: Status information for manual backups + properties: + active: + description: The number of actively running manual backup + Pods. + format: int32 + type: integer + completionTime: + description: |- + Represents the time the manual backup Job was determined by the Job controller + to be completed. This field is only set if the backup completed successfully. + Additionally, it is represented in RFC3339 form and is in UTC. + format: date-time + type: string + failed: + description: The number of Pods for the manual backup Job + that reached the "Failed" phase. + format: int32 + type: integer + finished: + description: |- + Specifies whether or not the Job is finished executing (does not indicate success or + failure). + type: boolean + id: + description: |- + A unique identifier for the manual backup as provided using the "pgbackrest-backup" + annotation when initiating a backup. + type: string + startTime: + description: |- + Represents the time the manual backup Job was acknowledged by the Job controller. + It is represented in RFC3339 form and is in UTC. + format: date-time + type: string + succeeded: + description: The number of Pods for the manual backup Job + that reached the "Succeeded" phase. + format: int32 + type: integer + required: + - finished + - id + type: object + repoHost: + description: Status information for the pgBackRest dedicated repository + host + properties: + apiVersion: + description: |- + APIVersion defines the versioned schema of this representation of an object. + Servers should convert recognized schemas to the latest internal value, and + may reject unrecognized values. + More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + type: string + kind: + description: |- + Kind is a string value representing the REST resource this object represents. + Servers may infer this from the endpoint the client submits requests to. + Cannot be updated. + In CamelCase. 
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + type: string + ready: + description: Whether or not the pgBackRest repository host + is ready for use + type: boolean + type: object + repos: + description: Status information for pgBackRest repositories + items: + description: RepoStatus the status of a pgBackRest repository + properties: + bound: + description: Whether or not the pgBackRest repository PersistentVolumeClaim + is bound to a volume + type: boolean + name: + description: The name of the pgBackRest repository + type: string + replicaCreateBackupComplete: + description: |- + ReplicaCreateBackupReady indicates whether a backup exists in the repository as needed + to bootstrap replicas. + type: boolean + repoOptionsHash: + description: |- + A hash of the required fields in the spec for defining an Azure, GCS or S3 repository, + Utilized to detect changes to these fields and then execute pgBackRest stanza-create + commands accordingly. + type: string + stanzaCreated: + description: Specifies whether or not a stanza has been + successfully created for the repository + type: boolean + volume: + description: The name of the volume the containing the pgBackRest + repository + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + restore: + description: Status information for in-place restores + properties: + active: + description: The number of actively running manual backup + Pods. + format: int32 + type: integer + completionTime: + description: |- + Represents the time the manual backup Job was determined by the Job controller + to be completed. This field is only set if the backup completed successfully. + Additionally, it is represented in RFC3339 form and is in UTC. + format: date-time + type: string + failed: + description: The number of Pods for the manual backup Job + that reached the "Failed" phase. + format: int32 + type: integer + finished: + description: |- + Specifies whether or not the Job is finished executing (does not indicate success or + failure). + type: boolean + id: + description: |- + A unique identifier for the manual backup as provided using the "pgbackrest-backup" + annotation when initiating a backup. + type: string + startTime: + description: |- + Represents the time the manual backup Job was acknowledged by the Job controller. + It is represented in RFC3339 form and is in UTC. + format: date-time + type: string + succeeded: + description: The number of Pods for the manual backup Job + that reached the "Succeeded" phase. + format: int32 + type: integer + required: + - finished + - id + type: object + scheduledBackups: + description: Status information for scheduled backups + items: + properties: + active: + description: The number of actively running manual backup + Pods. + format: int32 + type: integer + completionTime: + description: |- + Represents the time the manual backup Job was determined by the Job controller + to be completed. This field is only set if the backup completed successfully. + Additionally, it is represented in RFC3339 form and is in UTC. + format: date-time + type: string + cronJobName: + description: The name of the associated pgBackRest scheduled + backup CronJob + type: string + failed: + description: The number of Pods for the manual backup Job + that reached the "Failed" phase. 
+ format: int32 + type: integer + repo: + description: The name of the associated pgBackRest repository + type: string + startTime: + description: |- + Represents the time the manual backup Job was acknowledged by the Job controller. + It is represented in RFC3339 form and is in UTC. + format: date-time + type: string + succeeded: + description: The number of Pods for the manual backup Job + that reached the "Succeeded" phase. + format: int32 + type: integer + type: + description: The pgBackRest backup type for this Job + type: string + type: object + type: array + type: object + postgresVersion: + description: |- + Stores the current PostgreSQL major version following a successful + major PostgreSQL upgrade. + type: integer + proxy: + description: Current state of the PostgreSQL proxy. + properties: + pgBouncer: + properties: + postgresRevision: + description: |- + Identifies the revision of PgBouncer assets that have been installed into + PostgreSQL. + type: string + readyReplicas: + description: Total number of ready pods. + format: int32 + type: integer + replicas: + description: Total number of non-terminated pods. + format: int32 + type: integer + type: object + type: object + registrationRequired: + properties: + pgoVersion: + type: string + type: object + startupInstance: + description: |- + The instance that should be started first when bootstrapping and/or starting a + PostgresCluster. + type: string + startupInstanceSet: + description: The instance set associated with the startupInstance + type: string + tokenRequired: + type: string + userInterface: + description: Current state of the PostgreSQL user interface. + properties: + pgAdmin: + description: The state of the pgAdmin user interface. + properties: + usersRevision: + description: Hash that indicates which users have been installed + into pgAdmin. + type: string + type: object + type: object + usersRevision: + description: Identifies the users that have been installed into PostgreSQL. 
+ type: string + type: object + type: object + served: true + storage: true + subresources: + status: {} diff --git a/config/crd/kustomization.yaml b/config/crd/kustomization.yaml new file mode 100644 index 0000000000..85b7cbdf29 --- /dev/null +++ b/config/crd/kustomization.yaml @@ -0,0 +1,17 @@ +kind: Kustomization + +resources: +- bases/postgres-operator.crunchydata.com_crunchybridgeclusters.yaml +- bases/postgres-operator.crunchydata.com_postgresclusters.yaml +- bases/postgres-operator.crunchydata.com_pgupgrades.yaml +- bases/postgres-operator.crunchydata.com_pgadmins.yaml + +patches: +- target: + kind: CustomResourceDefinition + patch: |- + - op: add + path: /metadata/labels + value: + app.kubernetes.io/name: pgo + app.kubernetes.io/version: latest diff --git a/config/default/kustomization.yaml b/config/default/kustomization.yaml new file mode 100644 index 0000000000..7001380693 --- /dev/null +++ b/config/default/kustomization.yaml @@ -0,0 +1,20 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namespace: postgres-operator + +labels: +- includeSelectors: true + pairs: + # Note: this label differs from the label set in postgres-operator-examples + postgres-operator.crunchydata.com/control-plane: postgres-operator + +resources: +- ../crd +- ../rbac +- ../manager + +images: +- name: postgres-operator + newName: registry.developers.crunchydata.com/crunchydata/postgres-operator + newTag: latest diff --git a/config/dev/kustomization.yaml b/config/dev/kustomization.yaml new file mode 100644 index 0000000000..2794e5fb69 --- /dev/null +++ b/config/dev/kustomization.yaml @@ -0,0 +1,8 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: +- ../default + +patches: +- path: manager-dev.yaml diff --git a/config/dev/manager-dev.yaml b/config/dev/manager-dev.yaml new file mode 100644 index 0000000000..538a34cf42 --- /dev/null +++ b/config/dev/manager-dev.yaml @@ -0,0 +1,6 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: pgo +spec: + replicas: 0 diff --git a/config/manager/kustomization.yaml b/config/manager/kustomization.yaml new file mode 100644 index 0000000000..dfce22e6c5 --- /dev/null +++ b/config/manager/kustomization.yaml @@ -0,0 +1,5 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: +- manager.yaml diff --git a/config/manager/manager.yaml b/config/manager/manager.yaml new file mode 100644 index 0000000000..2eb849e138 --- /dev/null +++ b/config/manager/manager.yaml @@ -0,0 +1,52 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: pgo +spec: + replicas: 1 + strategy: { type: Recreate } + template: + spec: + containers: + - name: operator + image: postgres-operator + env: + - name: PGO_INSTALLER + value: kustomize + - name: PGO_INSTALLER_ORIGIN + value: postgres-operator-repo + - name: PGO_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: CRUNCHY_DEBUG + value: "true" + - name: RELATED_IMAGE_POSTGRES_16 + value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-16.4-2" + - name: RELATED_IMAGE_POSTGRES_16_GIS_3.3 + value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-16.4-3.3-2" + - name: RELATED_IMAGE_POSTGRES_16_GIS_3.4 + value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-16.4-3.4-2" + - name: RELATED_IMAGE_POSTGRES_17 + value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-17.0-0" + - name: RELATED_IMAGE_POSTGRES_17_GIS_3.4 + value: 
"registry.developers.crunchydata.com/crunchydata/crunchy-postgres-gis:ubi8-17.0-3.4-0" + - name: RELATED_IMAGE_PGADMIN + value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-4.30-31" + - name: RELATED_IMAGE_PGBACKREST + value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.53.1-0" + - name: RELATED_IMAGE_PGBOUNCER + value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.23-0" + - name: RELATED_IMAGE_PGEXPORTER + value: "registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:latest" + - name: RELATED_IMAGE_PGUPGRADE + value: "registry.developers.crunchydata.com/crunchydata/crunchy-upgrade:latest" + - name: RELATED_IMAGE_STANDALONE_PGADMIN + value: "registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-8.12-0" + securityContext: + allowPrivilegeEscalation: false + capabilities: { drop: [ALL] } + readOnlyRootFilesystem: true + runAsNonRoot: true + serviceAccountName: pgo diff --git a/config/namespace/kustomization.yaml b/config/namespace/kustomization.yaml new file mode 100644 index 0000000000..e06cce134a --- /dev/null +++ b/config/namespace/kustomization.yaml @@ -0,0 +1,5 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: +- namespace.yaml diff --git a/config/namespace/namespace.yaml b/config/namespace/namespace.yaml new file mode 100644 index 0000000000..bfebd8ac2f --- /dev/null +++ b/config/namespace/namespace.yaml @@ -0,0 +1,4 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: postgres-operator diff --git a/config/rbac/kustomization.yaml b/config/rbac/kustomization.yaml new file mode 100644 index 0000000000..82cfb0841b --- /dev/null +++ b/config/rbac/kustomization.yaml @@ -0,0 +1,7 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +resources: +- service_account.yaml +- role.yaml +- role_binding.yaml diff --git a/config/rbac/role.yaml b/config/rbac/role.yaml new file mode 100644 index 0000000000..d5783d00b1 --- /dev/null +++ b/config/rbac/role.yaml @@ -0,0 +1,176 @@ +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: postgres-operator +rules: +- apiGroups: + - "" + resources: + - configmaps + - persistentvolumeclaims + - secrets + - serviceaccounts + - services + verbs: + - create + - delete + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - endpoints + verbs: + - create + - delete + - deletecollection + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - endpoints/restricted + - pods/exec + verbs: + - create +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - pods + verbs: + - delete + - get + - list + - patch + - watch +- apiGroups: + - apps + resources: + - deployments + - statefulsets + verbs: + - create + - delete + - get + - list + - patch + - watch +- apiGroups: + - batch + resources: + - cronjobs + - jobs + verbs: + - create + - delete + - get + - list + - patch + - watch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - get + - update + - watch +- apiGroups: + - policy + resources: + - poddisruptionbudgets + verbs: + - create + - delete + - get + - list + - patch + - watch +- apiGroups: + - postgres-operator.crunchydata.com + resources: + - crunchybridgeclusters + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - postgres-operator.crunchydata.com + resources: + - crunchybridgeclusters/finalizers + - 
crunchybridgeclusters/status + verbs: + - patch + - update +- apiGroups: + - postgres-operator.crunchydata.com + resources: + - pgadmins + - pgupgrades + verbs: + - get + - list + - watch +- apiGroups: + - postgres-operator.crunchydata.com + resources: + - pgadmins/finalizers + - pgupgrades/finalizers + - postgresclusters/finalizers + verbs: + - update +- apiGroups: + - postgres-operator.crunchydata.com + resources: + - pgadmins/status + - pgupgrades/status + - postgresclusters/status + verbs: + - patch +- apiGroups: + - postgres-operator.crunchydata.com + resources: + - postgresclusters + verbs: + - get + - list + - patch + - watch +- apiGroups: + - rbac.authorization.k8s.io + resources: + - rolebindings + - roles + verbs: + - create + - delete + - get + - list + - patch + - watch +- apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshots + verbs: + - create + - delete + - get + - list + - patch + - watch diff --git a/config/rbac/role_binding.yaml b/config/rbac/role_binding.yaml new file mode 100644 index 0000000000..584ec1668c --- /dev/null +++ b/config/rbac/role_binding.yaml @@ -0,0 +1,12 @@ +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: postgres-operator +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: postgres-operator +subjects: +- kind: ServiceAccount + name: pgo diff --git a/config/rbac/service_account.yaml b/config/rbac/service_account.yaml new file mode 100644 index 0000000000..364f797171 --- /dev/null +++ b/config/rbac/service_account.yaml @@ -0,0 +1,5 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: pgo diff --git a/crunchy_logo.png b/crunchy_logo.png deleted file mode 100644 index 2fbf3352c1..0000000000 Binary files a/crunchy_logo.png and /dev/null differ diff --git a/deploy/.gitignore b/deploy/.gitignore deleted file mode 100644 index e45e01ad97..0000000000 --- a/deploy/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -username.txt -password.txt diff --git a/deploy/add-targeted-namespace-reconcile-rbac.sh b/deploy/add-targeted-namespace-reconcile-rbac.sh deleted file mode 100755 index 8438c10912..0000000000 --- a/deploy/add-targeted-namespace-reconcile-rbac.sh +++ /dev/null @@ -1,48 +0,0 @@ -#!/bin/bash - -# Copyright 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -# Enforce required environment variables -test="${PGO_CMD:?Need to set PGO_CMD env variable}" -test="${PGOROOT:?Need to set PGOROOT env variable}" -test="${PGO_OPERATOR_NAMESPACE:?Need to set PGO_OPERATOR_NAMESPACE env variable}" -test="${PGO_INSTALLATION_NAME:?Need to set PGO_INSTALLATION_NAME env variable}" -test="${PGO_CONF_DIR:?Need to set PGO_CONF_DIR env variable}" - -if [[ -z "$1" ]]; then - echo "usage: add-targeted-namespace-reconcile-rbac.sh mynewnamespace" - exit -fi - -# create the namespace if necessary -$PGO_CMD get ns $1 > /dev/null -if [ $? 
-eq 0 ]; then - echo "namespace" $1 "already exists, adding labels" - # set the labels so that existing namespace is owned by this installation - $PGO_CMD label namespace/$1 pgo-created-by=add-script - $PGO_CMD label namespace/$1 vendor=crunchydata - $PGO_CMD label namespace/$1 pgo-installation-name=$PGO_INSTALLATION_NAME -else - echo "namespace" $1 "is new" - cat $DIR/target-namespace.yaml | sed -e 's/$TARGET_NAMESPACE/'"$1"'/' -e 's/$PGO_INSTALLATION_NAME/'"$PGO_INSTALLATION_NAME"'/' | $PGO_CMD create -f - -fi - -$PGO_CMD -n $1 delete --ignore-not-found rolebinding pgo-target-role-binding pgo-local-ns -$PGO_CMD -n $1 delete --ignore-not-found role pgo-target-role pgo-local-ns - -cat $PGO_CONF_DIR/pgo-configs/pgo-target-role.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | $PGO_CMD -n $1 create -f - -cat $DIR/local-namespace-rbac.yaml | envsubst | $PGO_CMD -n $1 create -f - diff --git a/deploy/add-targeted-namespace.sh b/deploy/add-targeted-namespace.sh deleted file mode 100755 index af088314d9..0000000000 --- a/deploy/add-targeted-namespace.sh +++ /dev/null @@ -1,87 +0,0 @@ -#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -# the name of the service account utilized by the PG pods -PG_SA="pgo-pg" - -# Enforce required environment variables -test="${PGO_CMD:?Need to set PGO_CMD env variable}" -test="${PGOROOT:?Need to set PGOROOT env variable}" -test="${PGO_OPERATOR_NAMESPACE:?Need to set PGO_OPERATOR_NAMESPACE env variable}" -test="${PGO_INSTALLATION_NAME:?Need to set PGO_INSTALLATION_NAME env variable}" -test="${PGO_CONF_DIR:?Need to set PGO_CONF_DIR env variable}" - -if [[ -z "$1" ]]; then - echo "usage: add-targeted-namespace.sh mynewnamespace" - exit -fi - -# create the namespace if necessary -$PGO_CMD get ns $1 > /dev/null -if [ $? -eq 0 ]; then - echo "namespace" $1 "already exists, adding labels" - # set the labels so that existing namespace is owned by this installation - $PGO_CMD label namespace/$1 pgo-created-by=add-script - $PGO_CMD label namespace/$1 vendor=crunchydata - $PGO_CMD label namespace/$1 pgo-installation-name=$PGO_INSTALLATION_NAME -else - echo "namespace" $1 "is new" - cat $DIR/target-namespace.yaml | sed -e 's/$TARGET_NAMESPACE/'"$1"'/' -e 's/$PGO_INSTALLATION_NAME/'"$PGO_INSTALLATION_NAME"'/' | $PGO_CMD create -f - -fi - -# determine if an existing pod is using the 'pgo-pg' service account. if so, do not delete -# and recreate the SA or its associated role and role binding. this is to avoid any undesired -# behavior with existing PG clusters that are actively utilizing the SA. -$PGO_CMD -n $1 get pods -o yaml | grep "serviceAccount: ${PG_SA}" > /dev/null -if [ $? 
-ne 0 ]; then - $PGO_CMD -n $1 delete --ignore-not-found sa pgo-pg - $PGO_CMD -n $1 delete --ignore-not-found role pgo-pg-role - $PGO_CMD -n $1 delete --ignore-not-found rolebinding pgo-pg-role-binding - - cat $PGO_CONF_DIR/pgo-configs/pgo-pg-sa.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | $PGO_CMD -n $1 create -f - - cat $PGO_CONF_DIR/pgo-configs/pgo-pg-role.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | $PGO_CMD -n $1 create -f - - cat $PGO_CONF_DIR/pgo-configs/pgo-pg-role-binding.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | $PGO_CMD -n $1 create -f - -else - echo "Running pods found using SA '${PG_SA}' in namespace $1, will not recreate" -fi - -# create RBAC -$PGO_CMD -n $1 delete --ignore-not-found sa pgo-backrest pgo-default pgo-target -$PGO_CMD -n $1 delete --ignore-not-found role pgo-backrest-role pgo-target-role -$PGO_CMD -n $1 delete --ignore-not-found rolebinding pgo-backrest-role-binding pgo-target-role-binding - -cat $PGO_CONF_DIR/pgo-configs/pgo-default-sa.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | $PGO_CMD -n $1 create -f - -cat $PGO_CONF_DIR/pgo-configs/pgo-target-sa.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | $PGO_CMD -n $1 create -f - -cat $PGO_CONF_DIR/pgo-configs/pgo-target-role.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | $PGO_CMD -n $1 create -f - -cat $PGO_CONF_DIR/pgo-configs/pgo-target-role-binding.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | sed 's/{{.OperatorNamespace}}/'"$PGO_OPERATOR_NAMESPACE"'/' | $PGO_CMD -n $1 create -f - -cat $PGO_CONF_DIR/pgo-configs/pgo-backrest-sa.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | $PGO_CMD -n $1 create -f - -cat $PGO_CONF_DIR/pgo-configs/pgo-backrest-role.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | $PGO_CMD -n $1 create -f - -cat $PGO_CONF_DIR/pgo-configs/pgo-backrest-role-binding.json | sed 's/{{.TargetNamespace}}/'"$1"'/' | $PGO_CMD -n $1 create -f - - -if [ -r "$PGO_IMAGE_PULL_SECRET_MANIFEST" ]; then - $PGO_CMD -n $1 create -f "$PGO_IMAGE_PULL_SECRET_MANIFEST" -fi - -if [ -n "$PGO_IMAGE_PULL_SECRET" ]; then - patch='{"imagePullSecrets": [{ "name": "'"$PGO_IMAGE_PULL_SECRET"'" }]}' - - $PGO_CMD -n $1 patch --type=strategic --patch="$patch" serviceaccount/pgo-backrest - $PGO_CMD -n $1 patch --type=strategic --patch="$patch" serviceaccount/pgo-default - $PGO_CMD -n $1 patch --type=strategic --patch="$patch" serviceaccount/pgo-pg - $PGO_CMD -n $1 patch --type=strategic --patch="$patch" serviceaccount/pgo-target -fi diff --git a/deploy/cleannamespaces.sh b/deploy/cleannamespaces.sh deleted file mode 100755 index 66cd693863..0000000000 --- a/deploy/cleannamespaces.sh +++ /dev/null @@ -1,36 +0,0 @@ -#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -if [ -z $PGO_OPERATOR_NAMESPACE ]; -then - echo "error: \$PGO_OPERATOR_NAMESPACE must be set" - exit 1 -fi - -if [ -z $PGO_INSTALLATION_NAME ]; -then - echo "error: \$PGO_INSTALLATION_NAME must be set" - exit 1 -fi - -echo "deleting the namespaces the operator is deployed into ($PGO_OPERATOR_NAMESPACE)..." -$PGO_CMD delete namespace $PGO_OPERATOR_NAMESPACE > /dev/null 2> /dev/null -echo "namespace $PGO_OPERATOR_NAMESPACE deleted" - -echo "" -echo "deleting the watched namespaces..." -$PGO_CMD delete namespace --selector="vendor=crunchydata,pgo-installation-name=$PGO_INSTALLATION_NAME" diff --git a/deploy/cleanup-rbac.sh b/deploy/cleanup-rbac.sh deleted file mode 100755 index 50f52bbc5f..0000000000 --- a/deploy/cleanup-rbac.sh +++ /dev/null @@ -1,56 +0,0 @@ -#!/bin/bash - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -# delete existing PGO SCC (SCC commands require 'oc' in place of 'kubectl' -oc get scc pgo > /dev/null 2> /dev/null -if [ $? -eq 0 ] -then - oc delete scc pgo -fi - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get serviceaccount postgres-operator > /dev/null 2> /dev/null -if [ $? -eq 0 ] -then - $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete serviceaccount postgres-operator -fi - -$PGO_CMD get clusterrole pgo-cluster-role > /dev/null 2> /dev/null -if [ $? -eq 0 ] -then - $PGO_CMD delete clusterrole pgo-cluster-role -fi - -$PGO_CMD get clusterrolebinding pgo-cluster-role > /dev/null 2> /dev/null -if [ $? -eq 0 ] -then - $PGO_CMD delete clusterrolebinding pgo-cluster-role > /dev/null 2> /dev/null -fi - -$PGO_CMD -n $PGO_OPERATOR_NAMESPACE get role pgo-role > /dev/null 2> /dev/null -if [ $? -eq 0 ] -then - $PGO_CMD -n $PGO_OPERATOR_NAMESPACE delete role pgo-role -fi - -$PGO_CMD -n $PGO_OPERATOR_NAMESPACE get rolebinding pgo-role > /dev/null 2> /dev/null -if [ $? -eq 0 ] -then - $PGO_CMD -n $PGO_OPERATOR_NAMESPACE delete rolebinding pgo-role > /dev/null -fi - - -sleep 5 diff --git a/deploy/cleanup.sh b/deploy/cleanup.sh deleted file mode 100755 index afe13f98c7..0000000000 --- a/deploy/cleanup.sh +++ /dev/null @@ -1,62 +0,0 @@ -#!/bin/bash -# Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get secret/pgo-backrest-repo-config 2> /dev/null - -if [ $? 
-eq 0 ] -then - $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete secret/pgo-backrest-repo-config -fi - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get secret pgo.tls 2> /dev/null -if [ $? -eq 0 ] -then - $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete secret pgo.tls -fi - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get configmap/pgo-config 2> /dev/null > /dev/null -if [ $? -eq 0 ] -then - $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete configmap/pgo-config -fi - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get service/postgres-operator 2> /dev/null -if [ $? -eq 0 ] -then - $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete service/postgres-operator -fi - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get deployment/postgres-operator 2> /dev/null -if [ $? -eq 0 ] -then - $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete deployment/postgres-operator - for (( ; ; )) - do - echo "checking for postgres-operator pod..." - lines=`$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get pod --selector=name=postgres-operator --ignore-not-found=true --no-headers | wc -l` - - if [ $lines -eq 0 ] - then - echo postgres-operator pod is gone - break - elif [ $lines -eq 1 ] - then - echo postgres-operator is out there - fi - sleep 3 - done -fi - diff --git a/deploy/cluster-role-bindings.yaml b/deploy/cluster-role-bindings.yaml deleted file mode 100644 index be7d75bb2f..0000000000 --- a/deploy/cluster-role-bindings.yaml +++ /dev/null @@ -1,13 +0,0 @@ ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: pgo-cluster-role -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: pgo-cluster-role -subjects: -- kind: ServiceAccount - name: postgres-operator - namespace: "$PGO_OPERATOR_NAMESPACE" diff --git a/deploy/cluster-roles-readonly.yaml b/deploy/cluster-roles-readonly.yaml deleted file mode 100644 index 773e6cd07e..0000000000 --- a/deploy/cluster-roles-readonly.yaml +++ /dev/null @@ -1,13 +0,0 @@ -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-cluster-role -rules: - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - watch diff --git a/deploy/cluster-roles.yaml b/deploy/cluster-roles.yaml deleted file mode 100644 index cb0bb85b41..0000000000 --- a/deploy/cluster-roles.yaml +++ /dev/null @@ -1,99 +0,0 @@ ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-cluster-role -rules: - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - watch - - create - - update - - delete - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - update - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - roles - - rolebindings - verbs: - - get - - create - - update - - delete - - apiGroups: - - '' - resources: - - configmaps - - endpoints - - pods - - pods/exec - - pods/log - - replicasets - - secrets - - services - - persistentvolumeclaims - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - apps - resources: - - deployments - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - batch - resources: - - jobs - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - crunchydata.com - resources: - - pgclusters - - pgpolicies - - pgreplicas - - pgtasks - verbs: - - get - - list - - watch - - create - - 
patch - - update - - delete - - deletecollection diff --git a/deploy/deploy.sh b/deploy/deploy.sh deleted file mode 100755 index 823671c7d9..0000000000 --- a/deploy/deploy.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash -# Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -test="${PGO_CONF_DIR:?Need to set PGO_CONF_DIR env variable}" - -# awsKeySecret is borrowed from the legacy way to pull out the AWS s3 -# credentials in an environmental variable. This is only here while we -# transition away from whatever this was -awsKeySecret() { - val=$(grep "$1" -m 1 "${PGOROOT}/conf/pgo-backrest-repo/aws-s3-credentials.yaml" | sed "s/^.*:\s*//") - # remove leading and trailing whitespace - val=$(echo -e "${val}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//') - if [[ "$val" != "" ]] - then - echo "${val}" - fi -} - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -$DIR/cleanup.sh - -if [[ "${PGO_NAMESPACE_MODE:-dynamic}" != "disabled" ]] -then - $PGO_CMD get clusterrole pgo-cluster-role 2> /dev/null > /dev/null - if [ $? -ne 0 ] - then - echo ERROR: pgo-cluster-role was not found - echo Verify you ran install-rbac.sh - exit - fi -fi - -# credentials for pgbackrest sshd -pgbackrest_aws_s3_key=$(awsKeySecret "aws-s3-key") -pgbackrest_aws_s3_key_secret=$(awsKeySecret "aws-s3-key-secret") - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE create secret generic pgo-backrest-repo-config \ - --from-file=config=${PGO_CONF_DIR}/pgo-backrest-repo/config \ - --from-file=sshd_config=${PGO_CONF_DIR}/pgo-backrest-repo/sshd_config \ - --from-file=aws-s3-ca.crt=${PGO_CONF_DIR}/pgo-backrest-repo/aws-s3-ca.crt \ - --from-literal=aws-s3-key="${pgbackrest_aws_s3_key}" \ - --from-literal=aws-s3-key-secret="${pgbackrest_aws_s3_key_secret}" - -# -# credentials for pgo-apiserver TLS REST API -# -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get secret pgo.tls > /dev/null 2> /dev/null -if [ $? -eq 0 ] -then - $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete secret pgo.tls -fi - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE create secret tls pgo.tls --key=${PGOROOT}/conf/postgres-operator/server.key --cert=${PGOROOT}/conf/postgres-operator/server.crt - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE create configmap pgo-config \ - --from-file=${PGOROOT}/conf/postgres-operator/pgo.yaml \ - --from-file=${PGO_CONF_DIR}/pgo-configs - - -# -# check if custom port value is set, otherwise set default values -# - -if [[ -z ${PGO_APISERVER_PORT} ]] -then - echo "PGO_APISERVER_PORT is not set. Setting to default port value of 8443." - export PGO_APISERVER_PORT=8443 -fi - -export PGO_APISERVER_SCHEME="HTTPS" - -# check if TLS is disabled. 
If it is, both ensure that the probes occur over -# HTTP, and then also set TLS_NO_VERIFY to true as well, given TLS is disabled -if [[ "${DISABLE_TLS}" == "true" ]] -then - export PGO_APISERVER_SCHEME="HTTP" - export TLS_NO_VERIFY="true" -fi - -# -# create the postgres-operator Deployment and Service -# -envsubst < $DIR/deployment.json | $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE create -f - -envsubst < $DIR/service.json | $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE create -f - diff --git a/deploy/deployment.json b/deploy/deployment.json deleted file mode 100644 index e4da5c318b..0000000000 --- a/deploy/deployment.json +++ /dev/null @@ -1,234 +0,0 @@ -{ - "apiVersion": "apps/v1", - "kind": "Deployment", - "metadata": { - "name": "postgres-operator", - "labels": { - "vendor": "crunchydata" - } - }, - "spec": { - "replicas": 1, - "selector": { - "matchLabels": { - "name": "postgres-operator", - "vendor": "crunchydata" - } - }, - "template": { - "metadata": { - "labels": { - "name": "postgres-operator", - "vendor": "crunchydata" - } - }, - "spec": { - "serviceAccountName": "postgres-operator", - "containers": [ - { - "name": "apiserver", - "image": "$PGO_IMAGE_PREFIX/pgo-apiserver:$PGO_IMAGE_TAG", - "imagePullPolicy": "IfNotPresent", - "ports": [ - { "containerPort": $PGO_APISERVER_PORT } - ], - "readinessProbe": { - "httpGet": { - "path": "/healthz", - "port": $PGO_APISERVER_PORT, - "scheme": "${PGO_APISERVER_SCHEME}" - }, - "initialDelaySeconds": 15, - "periodSeconds": 5 - }, - "livenessProbe": { - "httpGet": { - "path": "/healthz", - "port": $PGO_APISERVER_PORT, - "scheme": "${PGO_APISERVER_SCHEME}" - }, - "initialDelaySeconds": 15, - "periodSeconds": 5 - }, - "env": [ - { - "name": "CRUNCHY_DEBUG", - "value": "true" - }, - { - "name": "PORT", - "value": "$PGO_APISERVER_PORT" - }, - { - "name": "NAMESPACE", - "value": "$NAMESPACE" - }, - { - "name": "PGO_INSTALLATION_NAME", - "value": "$PGO_INSTALLATION_NAME" - }, - { - "name": "PGO_OPERATOR_NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }, - { - "name": "TLS_CA_TRUST", - "value": "$TLS_CA_TRUST" - }, - { - "name": "TLS_NO_VERIFY", - "value": "${TLS_NO_VERIFY}" - }, - { - "name": "DISABLE_TLS", - "value": "${DISABLE_TLS}" - }, - { - "name": "NOAUTH_ROUTES", - "value": "$NOAUTH_ROUTES" - }, - { - "name": "ADD_OS_TRUSTSTORE", - "value": "$ADD_OS_TRUSTSTORE" - }, - { - "name": "DISABLE_EVENTING", - "value": "$DISABLE_EVENTING" - }, - { - "name": "EVENT_ADDR", - "value": "localhost:4150" - } - ], - "volumeMounts": [] - }, { - "name": "operator", - "image": "$PGO_IMAGE_PREFIX/postgres-operator:$PGO_IMAGE_TAG", - "imagePullPolicy": "IfNotPresent", - "readinessProbe": { - "exec": { - "command": [ - "ls", - "/tmp" - ] - }, - "initialDelaySeconds": 4, - "periodSeconds": 5 - }, - "env": [ - { - "name": "CRUNCHY_DEBUG", - "value": "true" - }, - { - "name": "NAMESPACE", - "value": "$NAMESPACE" - }, - { - "name": "PGO_INSTALLATION_NAME", - "value": "$PGO_INSTALLATION_NAME" - }, - { - "name": "PGO_OPERATOR_NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }, - { - "name": "MY_POD_NAME", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.name" - } - } - }, - { - "name": "DISABLE_EVENTING", - "value": "$DISABLE_EVENTING" - }, - { - "name": "EVENT_ADDR", - "value": "localhost:4150" - } - ], - "volumeMounts": [] - }, { - "name": "scheduler", - "image": "$PGO_IMAGE_PREFIX/pgo-scheduler:$PGO_IMAGE_TAG", - "livenessProbe": { - "exec": { - "command": [ - "bash", - 
"-c", - "test -n \"$(find /tmp/scheduler.hb -newermt '61 sec ago')\"" - ] - }, - "failureThreshold": 2, - "initialDelaySeconds": 60, - "periodSeconds": 60 - }, - "env": [ - { - "name": "CRUNCHY_DEBUG", - "value": "true" - }, - { - "name": "PGO_OPERATOR_NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }, - { - "name": "NAMESPACE", - "value": "$NAMESPACE" - }, - { - "name": "PGO_INSTALLATION_NAME", - "value": "$PGO_INSTALLATION_NAME" - }, - { - "name": "TIMEOUT", - "value": "3600" - }, - { - "name": "EVENT_ADDR", - "value": "localhost:4150" - } - ], - "volumeMounts": [], - "imagePullPolicy": "IfNotPresent" - }, - { - "name": "event", - "image": "$PGO_IMAGE_PREFIX/pgo-event:$PGO_IMAGE_TAG", - "livenessProbe": { - "httpGet": { - "path": "/ping", - "port": 4151 - }, - "initialDelaySeconds": 15, - "periodSeconds": 5 - }, - "env": [ - { - "name": "TIMEOUT", - "value": "3600" - } - ], - "volumeMounts": [], - "imagePullPolicy": "IfNotPresent" - } - ], - "volumes": [] - } - } - } -} diff --git a/deploy/gen-api-keys.sh b/deploy/gen-api-keys.sh deleted file mode 100755 index 8aece10000..0000000000 --- a/deploy/gen-api-keys.sh +++ /dev/null @@ -1,40 +0,0 @@ -#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# -# generate self signed cert for apiserver REST service -# - -openssl req \ --x509 \ --nodes \ --newkey rsa:2048 \ --keyout $PGOROOT/conf/postgres-operator/server.key \ --out $PGOROOT/conf/postgres-operator/server.crt \ --days 3650 \ --subj "/C=US/ST=Texas/L=Austin/O=TestOrg/OU=TestDepartment/CN=*" - -# generate CA -#openssl genrsa -out $PGOROOT/conf/apiserver/rootCA.key 4096 -#openssl req -x509 -new -key $PGOROOT/conf/apiserver/rootCA.key -days 3650 -out $PGOROOT/conf/apiserver/rootCA.crt - -# generate cert for secure.domain.com signed with the created CA -#openssl genrsa -out $PGOROOT/conf/apiserver/secure.domain.com.key 2048 -#openssl req -new -key $PGOROOT/conf/apiserver/secure.domain.com.key -out $PGOROOT/conf/apiserver/secure.domain.com.csr -#In answer to question `Common Name (e.g. 
server FQDN or YOUR name) []:` you should set `secure.domain.com` (your real domain name) -#openssl x509 -req -in $PGOROOT/conf/apiserver/secure.domain.com.csr -CA $PGOROOT/conf/apiserver/rootCA.crt -CAkey $PGOROOT/conf/apiserver/rootCA.key -CAcreateserial -days 365 -out $PGOROOT/conf/apiserver/secure.domain.com.crt - -#openssl genrsa 2048 > $PGOROOT/conf/apiserver/key.pem -#openssl req -new -x509 -key $PGOROOT/conf/apiserver/key.pem -out $PGOROOT/conf/apiserver/cert.pem -days 1095 diff --git a/deploy/ingress.yml b/deploy/ingress.yml deleted file mode 100644 index d1c6e59aea..0000000000 --- a/deploy/ingress.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: postgres-operator - namespace: demo - annotations: - ingress.kubernetes.io/ssl-passthrough: "true" - nginx.ingress.kubernetes.io/secure-backends: "true" -spec: - backend: - serviceName: postgres-operator - servicePort: 8443 diff --git a/deploy/install-bootstrap-creds.sh b/deploy/install-bootstrap-creds.sh deleted file mode 100755 index 1b446824d3..0000000000 --- a/deploy/install-bootstrap-creds.sh +++ /dev/null @@ -1,39 +0,0 @@ -#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -set -eu - -# fill out these variables if you want to change the -# default pgo bootstrap user and role -PGOADMIN_USERNAME=admin -PGOADMIN_PASSWORD=examplepassword -PGOADMIN_ROLENAME=pgoadmin -PGOADMIN_PERMS="*" - - -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" delete secret "pgorole-$PGOADMIN_ROLENAME" --ignore-not-found -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" create secret generic "pgorole-$PGOADMIN_ROLENAME" \ - --from-literal="rolename=$PGOADMIN_ROLENAME" \ - --from-literal="permissions=$PGOADMIN_PERMS" -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" label secret "pgorole-$PGOADMIN_ROLENAME" \ - 'vendor=crunchydata' 'pgo-pgorole=true' "rolename=$PGOADMIN_ROLENAME" - -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" delete secret "pgouser-$PGOADMIN_USERNAME" --ignore-not-found -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" create secret generic "pgouser-$PGOADMIN_USERNAME" \ - --from-literal="username=$PGOADMIN_USERNAME" \ - --from-literal="password=$PGOADMIN_PASSWORD" \ - --from-literal="roles=$PGOADMIN_ROLENAME" -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" label secret "pgouser-$PGOADMIN_USERNAME" \ - 'vendor=crunchydata' 'pgo-pgouser=true' "username=$PGOADMIN_USERNAME" diff --git a/deploy/install-rbac.sh b/deploy/install-rbac.sh deleted file mode 100755 index d96532d9f1..0000000000 --- a/deploy/install-rbac.sh +++ /dev/null @@ -1,98 +0,0 @@ -#!/bin/bash - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -$DIR/cleanup-rbac.sh -test="${PGO_CONF_DIR:?Need to set PGO_CONF_DIR env variable}" - -# see if CRDs need to be created -$PGO_CMD get crd pgclusters.crunchydata.com > /dev/null -if [ $? -eq 1 ]; then - $PGO_CMD create \ - -f ${PGO_CONF_DIR}/crds/pgclusters-crd.yaml \ - -f ${PGO_CONF_DIR}/crds/pgpolicies-crd.yaml \ - -f ${PGO_CONF_DIR}/crds/pgreplicas-crd.yaml \ - -f ${PGO_CONF_DIR}/crds/pgtasks-crd.yaml -fi - -# create the initial pgo admin credential -$DIR/install-bootstrap-creds.sh - -# create the Operator service accounts -envsubst < $DIR/service-accounts.yaml | $PGO_CMD create -f - - -if [ -r "$PGO_IMAGE_PULL_SECRET_MANIFEST" ]; then - $PGO_CMD -n $PGO_OPERATOR_NAMESPACE create -f "$PGO_IMAGE_PULL_SECRET_MANIFEST" -fi - -if [ -n "$PGO_IMAGE_PULL_SECRET" ]; then - patch='{"imagePullSecrets": [{ "name": "'"$PGO_IMAGE_PULL_SECRET"'" }]}' - - $PGO_CMD -n $PGO_OPERATOR_NAMESPACE patch --type=strategic --patch="$patch" serviceaccount/postgres-operator -fi - -# Create the proper cluster roles corresponding to the namespace mode configured for the -# current Operator install. The namespace mode selected will determine which cluster roles are -# created for the Operator Service Account, with those cluster roles (or the absence thereof) -# providing the various describe across the various modes below: -# -# A value of "dynamic" enables full dynamic namespace capabilities, in which the Operator can -# create, delete and update any namespaces within the Kubernetes cluster. Additionally, while in -# this mode the Operator can listen for namespace events (e.g. namespace additions, updates and -# deletions), and then create or remove controllers for various namespaces as those namespaces are -# added or removed from the Kubernetes cluster and/or Operator install. -# -# If a value of "readonly" is provided, the Operator is still able to listen for namespace events -# within the Kubernetetes cluster, and then create and run and/or remove controllers as namespaces -# are added and deleted. However, the Operator is unable to create, delete and/or update -# namespaces. -# -# And finally, if "disabled" is selected, then namespace capabilities will be disabled altogether -# In this mode the Operator will simply attempt to work with the target namespaces specified during -# installation. If no target namespaces are specified, then it will be configured to work within the -# namespace in which it is deployed. 
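As a hedged illustration of the namespace modes described in the comment block above (and not part of the deleted script itself): an installer would typically select a mode by exporting the variables the script reads before invoking it. The mode value and the `PGO_CONF_DIR` path below are placeholders only.

```bash
# Illustrative sketch only: choose a namespace mode before running deploy/install-rbac.sh.
# The script defaults PGO_NAMESPACE_MODE to "dynamic" and PGO_RECONCILE_RBAC to "true".
export PGO_CMD=kubectl                       # or "oc" on OpenShift
export PGO_CONF_DIR=/path/to/pgo-configs     # placeholder; the script requires this to be set
export PGO_OPERATOR_NAMESPACE=pgo
export PGO_NAMESPACE_MODE=readonly           # one of: dynamic, readonly, disabled
export PGO_RECONCILE_RBAC=true
$PGOROOT/deploy/install-rbac.sh
```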
-if [[ "${PGO_NAMESPACE_MODE:-dynamic}" == "dynamic" ]]; then - - if [[ "${PGO_RECONCILE_RBAC:-true}" == "true" ]] - then - # create the full ClusterRole with namespace and RBAC permissions if RBAC reconciliation - # is enabled - $PGO_CMD create -f $DIR/cluster-roles.yaml - else - # create a ClusterRole with only namespace permissions if RBAC reconciliation is disabled - sed '/- delete/q' $DIR/cluster-roles.yaml | $PGO_CMD create -f - - fi - - # create the cluster role binding for the Operator Service Account - envsubst < $DIR/cluster-role-bindings.yaml | $PGO_CMD create -f - - echo "Cluster roles installed to enable dynamic namespace capabilities" -elif [[ "${PGO_NAMESPACE_MODE}" == "readonly" ]]; then - # create the read-only cluster roles for the Operator - envsubst < $DIR/cluster-roles-readonly.yaml | $PGO_CMD create -f - - # create the cluster role binding for the Operator Service Account - envsubst < $DIR/cluster-role-bindings.yaml | $PGO_CMD create -f - - echo "Cluster roles installed to enable read-only namespace capabilities" -elif [[ "${PGO_NAMESPACE_MODE}" == "disabled" ]]; then - echo "Cluster roles not installed, namespace capabilites will be disabled" -fi - -# Create the roles the Operator requires within its own namespace -envsubst < $DIR/roles.yaml | $PGO_CMD create -f - -envsubst < $DIR/role-bindings.yaml | $PGO_CMD create -f - - -# create the keys used for pgo API -source $DIR/gen-api-keys.sh diff --git a/deploy/local-namespace-rbac.yaml b/deploy/local-namespace-rbac.yaml deleted file mode 100644 index d74f947653..0000000000 --- a/deploy/local-namespace-rbac.yaml +++ /dev/null @@ -1,51 +0,0 @@ ---- -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-local-ns -rules: - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - update - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - roles - - rolebindings - verbs: - - get - - create - - update - - delete ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: pgo-local-ns -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: pgo-local-ns -subjects: -- kind: ServiceAccount - name: postgres-operator - namespace: $PGO_OPERATOR_NAMESPACE ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: pgo-target-role-binding -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: pgo-target-role -subjects: -- kind: ServiceAccount - name: postgres-operator - namespace: $PGO_OPERATOR_NAMESPACE diff --git a/deploy/pgorole.yaml b/deploy/pgorole.yaml deleted file mode 100644 index 6dd0de29f0..0000000000 --- a/deploy/pgorole.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1 -stringData: - permissions: $PGO_PERMS - rolename: $PGO_ROLENAME -kind: Secret -metadata: - labels: - pgo-created-by: upgrade - pgo-pgorole: "true" - rolename: $PGO_ROLENAME - vendor: crunchydata - name: pgorole-$PGO_ROLENAME - namespace: $PGO_OPERATOR_NAMESPACE -type: Opaque - diff --git a/deploy/pgouser.yaml b/deploy/pgouser.yaml deleted file mode 100644 index d94b51bfc6..0000000000 --- a/deploy/pgouser.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -stringData: - password: $PGO_PASSWORD - username: $PGO_USERNAME - roles: $PGO_ROLENAME -kind: Secret -metadata: - labels: - pgo-created-by: upgrade - pgo-pgouser: "true" - username: $PGO_USERNAME - vendor: crunchydata - name: pgouser-$PGO_USERNAME - namespace: $PGO_OPERATOR_NAMESPACE -type: Opaque - diff --git a/deploy/remove-crd.sh b/deploy/remove-crd.sh 
deleted file mode 100755 index 764645264f..0000000000 --- a/deploy/remove-crd.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/bin/bash -# Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete pgreplicas --all -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete pgclusters --all -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete pgpolicies --all -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete pgtasks --all - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete crd \ - pgreplicas.crunchydata.com \ - pgclusters.crunchydata.com \ - pgpolicies.crunchydata.com \ - pgtasks.crunchydata.com - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete jobs --selector=pgrmdata=true diff --git a/deploy/role-bindings.yaml b/deploy/role-bindings.yaml deleted file mode 100644 index b8f21c2391..0000000000 --- a/deploy/role-bindings.yaml +++ /dev/null @@ -1,14 +0,0 @@ ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: pgo-role - namespace: "$PGO_OPERATOR_NAMESPACE" -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: pgo-role -subjects: -- kind: ServiceAccount - name: postgres-operator - namespace: "$PGO_OPERATOR_NAMESPACE" diff --git a/deploy/roles.yaml b/deploy/roles.yaml deleted file mode 100644 index 899551f6a1..0000000000 --- a/deploy/roles.yaml +++ /dev/null @@ -1,24 +0,0 @@ ---- -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-role - namespace: "$PGO_OPERATOR_NAMESPACE" -rules: - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - apiGroups: - - '' - resources: - - configmaps - - secrets - verbs: - - get - - list - - create - - update - - delete diff --git a/deploy/service-accounts.yaml b/deploy/service-accounts.yaml deleted file mode 100644 index f631c8e06b..0000000000 --- a/deploy/service-accounts.yaml +++ /dev/null @@ -1,6 +0,0 @@ ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: postgres-operator - namespace: $PGO_OPERATOR_NAMESPACE diff --git a/deploy/service.json b/deploy/service.json deleted file mode 100644 index f026f5d7d5..0000000000 --- a/deploy/service.json +++ /dev/null @@ -1,37 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "postgres-operator", - "labels": { - "name": "postgres-operator" - } - }, - "spec": { - "ports": [ - { - "name": "apiserver", - "protocol": "TCP", - "port": $PGO_APISERVER_PORT, - "targetPort": $PGO_APISERVER_PORT - }, - { - "name": "nsqadmin", - "protocol": "TCP", - "port": 4171, - "targetPort": 4171 - }, - { - "name": "nsqd", - "protocol": "TCP", - "port": 4150, - "targetPort": 4150 - } - ], - "selector": { - "name": "postgres-operator" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - } -} diff --git a/deploy/setupnamespaces.sh b/deploy/setupnamespaces.sh deleted file mode 100755 index 9d2188a56f..0000000000 --- a/deploy/setupnamespaces.sh +++ /dev/null @@ -1,80 +0,0 @@ 
-#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -if [ -z $PGO_OPERATOR_NAMESPACE ]; -then - echo "error: \$PGO_OPERATOR_NAME must be set" - exit 1 -fi - -if [ -z $PGO_INSTALLATION_NAME ]; -then - echo "error: \$PGO_INSTALLATION_NAME must be set" - exit 1 -fi - -echo "creating "$PGO_OPERATOR_NAMESPACE" namespace to deploy the Operator into..." -$PGO_CMD get namespace $PGO_OPERATOR_NAMESPACE > /dev/null 2> /dev/null -if [ $? -eq 0 ] -then - echo namespace $PGO_OPERATOR_NAMESPACE is already created -else - $PGO_CMD create namespace $PGO_OPERATOR_NAMESPACE > /dev/null - echo namespace $PGO_OPERATOR_NAMESPACE created -fi - -echo "" -echo "creating namespaces for the Operator to watch and create PG clusters into..." - -IFS=', ' read -r -a array <<< "$NAMESPACE" - -if [ ${#array[@]} -eq 0 ] -then - echo "NAMESPACE is empty, updating Operator namespace ${PGO_OPERATOR_NAMESPACE}" - array=("${PGO_OPERATOR_NAMESPACE}") -fi - -# determine which "add namespace" script to run based on namespace mode and whether or not RBAC -# reconciliation is enabled (when using a 'dynamic' namespace mode with RBAC reconciliation -# enabled, no script is run since the PostgreSQL Operator is assigned the permissions to reconcile -# RBAC in any namespace a ClusterRole, and will also handle namespace creation itself). -if [[ "${PGO_RECONCILE_RBAC:-true}" == "true" ]] && - [[ "${PGO_NAMESPACE_MODE:-dynamic}" != "dynamic" ]] -then - add_ns_script=add-targeted-namespace-reconcile-rbac.sh -elif [[ "${PGO_RECONCILE_RBAC}" == "false" ]] -then - add_ns_script=add-targeted-namespace.sh -fi - -# now run the proper "add namespace" script for any namespaces if needed -if [[ "${add_ns_script}" != "" ]] -then - for ns in "${array[@]}" - do - $PGO_CMD get namespace $ns > /dev/null 2> /dev/null - - if [ $? -eq 0 ] - then - echo namespace $ns already exists, updating... - $PGOROOT/deploy/$add_ns_script $ns > /dev/null - else - echo namespace $ns creating... - $PGOROOT/deploy/$add_ns_script $ns > /dev/null - fi - done -fi \ No newline at end of file diff --git a/deploy/show-crd.sh b/deploy/show-crd.sh deleted file mode 100755 index 7f40285c5d..0000000000 --- a/deploy/show-crd.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash -# Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get pgclusters -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get pgreplicas -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get pgpolicies -$PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE get pgpolicylogs diff --git a/deploy/target-namespace.yaml b/deploy/target-namespace.yaml deleted file mode 100644 index d9a63a8dbb..0000000000 --- a/deploy/target-namespace.yaml +++ /dev/null @@ -1,8 +0,0 @@ -apiVersion: v1 -kind: Namespace -metadata: - labels: - pgo-created-by: add-script - pgo-installation-name: $PGO_INSTALLATION_NAME - vendor: crunchydata - name: $TARGET_NAMESPACE diff --git a/deploy/upgrade-creds.sh b/deploy/upgrade-creds.sh deleted file mode 100755 index ddc0953df7..0000000000 --- a/deploy/upgrade-creds.sh +++ /dev/null @@ -1,80 +0,0 @@ -#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -if [ $# -eq 2 ] && [ -f $1 ] && [ -f $2 ]; then - ROLE_INPUT=${1} - USER_INPUT=${2} - echo "Using command line input" -fi - -fail=false -if [ ! -f $ROLE_INPUT ]; then - echo "File not found in $ROLE_INPUT" - fail=true -fi -if [ ! -f $USER_INPUT ]; then - echo "File not found in $USER_INPUT" - fail=true -fi - -if $fail; then - echo "Please provide the path for your pgouser and pgorole files" - echo "upgrade-certs.sh /path/to/pgorole /path/to/pgouser" - exit 1 -fi - -while read -r line -do - IFS=':' read -r role perms <<< $line - if [ -z "$role" ] || [ -z "$perms" ]; then - echo "Role input file invalid. Expected format \"rolename:perm1,perm2,perm3\"" - exit 1 - fi - - export PGO_ROLENAME=$role - export PGO_PERMS=$perms - - # see if the bootstrap pgorole Secret exists or not, deleting it if found - $PGO_CMD get secret pgorole-$PGO_ROLENAME -n $PGO_OPERATOR_NAMESPACE 2> /dev/null - if [ $? -eq 0 ]; then - $PGO_CMD delete secret pgorole-$PGO_ROLENAME -n $PGO_OPERATOR_NAMESPACE - fi - - cat $DIR/pgorole.yaml | envsubst | $PGO_CMD create -f - -done < "$ROLE_INPUT" - - -while read -r line -do - IFS=':' read -r user pass role <<< $line - if [ -z "$user" ] || [ -z "$pass" ] || [ -z "$role" ]; then - echo "User input file invalid. Expected format \"username:password:rolename\"" - exit 1 - fi - - export PGO_USERNAME=$user - export PGO_PASSWORD=$pass - export PGO_ROLENAME=$role - - # see if the bootstrap pgouser Secret exists or not, deleting it if found - $PGO_CMD get secret pgouser-$PGO_USERNAME -n $PGO_OPERATOR_NAMESPACE 2> /dev/null - if [ $? -eq 0 ]; then - $PGO_CMD delete secret pgouser-$PGO_USERNAME -n $PGO_OPERATOR_NAMESPACE - fi - cat $DIR/pgouser.yaml | envsubst | $PGO_CMD create -f - -done < "$USER_INPUT" diff --git a/deploy/upgrade-pgo.sh b/deploy/upgrade-pgo.sh deleted file mode 100755 index 66f61639eb..0000000000 --- a/deploy/upgrade-pgo.sh +++ /dev/null @@ -1,70 +0,0 @@ -#!/bin/bash - -# Copyright 2020 Crunchy Data Solutions, Inc. 
-# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Get current working directory -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -echo "" -echo "Before running the Postgres Operator upgrade script, please ensure you have already updated and" -echo "sourced your user's .bashrc file, as well as your \$PGOROOT\\postgres-operator\\pgo.yaml configuration file." -echo "" -echo "More information can be found in the \"Default Installation - Configure Environment\" section" -echo "of the Postgres Operator Bash installation instructions, located here:" -echo "" -echo "https://crunchydata.github.io/postgres-operator/stable/installation/operator-install/" -echo "" - -read -n1 -rsp $'Press any key to continue the upgrade or Ctrl+C to exit...\n' - -# Remove the current Operator -$DIR/cleanup.sh - -# Set up the defined namespaces for use with the new Operator version -$DIR/setupnamespaces.sh - -# Install the correct RBAC -$DIR/install-rbac.sh - -# Deploy the new Operator -$DIR/deploy.sh - -# Run 'dep ensure' to update needed libraries -dep ensure - -# Store the current location of the PGO client -MYPGO=`which pgo` -# Store the expected location of the PGO client -BASHPGO="${GOBIN}/pgo" - -if [ "$MYPGO" != "$BASHPGO" ]; then - - echo "Current location\(${MYPG}O\) does not match the expected location \(${BASHPGO}\). You will need to manually install the updated Posgres Operator client in your preferred location." - -else - # install the new PGO client - go install $PGOROOT/pgo/pgo.go - cp $GOBIN/pgo $PGOROOT/bin/pgo -fi - -# Final instructions -NEWLINE=$'\n' -echo "" -echo "" -echo "Postgres Operator upgrade has completed!" -echo "" -echo "Please note it may take a few minutes for the deployment to complete," -echo "" -echo "and you will need to use the setip function to update your Apiserver URL once the Operator is ready." 
-echo "" diff --git a/docs/archetypes/default.md b/docs/archetypes/default.md deleted file mode 100644 index 00e77bd79b..0000000000 --- a/docs/archetypes/default.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: "{{ replace .Name "-" " " | title }}" -date: {{ .Date }} -draft: true ---- - diff --git a/docs/config.toml b/docs/config.toml deleted file mode 100644 index 48ef2d760d..0000000000 --- a/docs/config.toml +++ /dev/null @@ -1,71 +0,0 @@ -baseURL= "" - -languageCode = "en-us" -DefaultContentLanguage = "en" -title = "Crunchy PostgreSQL Operator Documentation" -theme = "crunchy-hugo-theme" -pygmentsCodeFences = true -pygmentsStyle = "monokailight" -publishDir = "" -canonifyurls = true - -defaultContentLanguage = "en" -defaultContentLanguageInSubdir= false -enableMissingTranslationPlaceholders = false - -[params] -editURL = "https://github.com/CrunchyData/postgres-operator/edit/master/docs/content/" -showVisitedLinks = false # default is false -themeStyle = "flex" # "original" or "flex" # default "flex" -themeVariant = "" # choose theme variant "green", "gold" , "gray", "blue" (default) -ordersectionsby = "weight" # ordersectionsby = "title" -disableHomeIcon = true # default is false -disableSearch = false # default is false -disableNavChevron = false # set true to hide next/prev chevron, default is false -highlightClientSide = false # set true to use highlight.pack.js instead of the default hugo chroma highlighter -menushortcutsnewtab = true # set true to open shortcuts links to a new tab/window -enableGitInfo = true -operatorVersion = "4.5.0" -postgresVersion = "12.4" -postgresVersion13 = "13.0" -postgresVersion12 = "12.4" -postgresVersion11 = "11.9" -postgresVersion10 = "10.14" -postgresVersion96 = "9.6.19" -postgresVersion95 = "9.5.23" -postgisVersion = "3.0" -centosBase = "centos7" - -[outputs] -home = [ "HTML", "RSS", "JSON"] - -[[menu.shortcuts]] -name = "" -url = "/" -weight = 1 - -[[menu.shortcuts]] -name = " " -url = "https://github.com/CrunchyData/postgres-operator" -weight = 10 - -[[menu.shortcuts]] -name = " " -identifier = "kubedoc" -url = "https://kubernetes.io/docs/" -weight = 20 - -[[menu.shortcuts]] -name = " " -url = "https://github.com/CrunchyData/postgres-operator/blob/master/LICENSE.md" -weight = 30 - -[[menu.downloads]] -name = " " -url = "/pdf/postgres_operator.pdf" -weight = 20 - -[[menu.downloads]] -name = " " -url = "/epub/postgres_operator.epub" -weight = 30 diff --git a/docs/content/Configuration/_index.md b/docs/content/Configuration/_index.md deleted file mode 100644 index 05d1b615b4..0000000000 --- a/docs/content/Configuration/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: "Configuration" -date: -draft: false -weight: 30 ---- diff --git a/docs/content/Configuration/compatibility.md b/docs/content/Configuration/compatibility.md deleted file mode 100644 index b805ef9c08..0000000000 --- a/docs/content/Configuration/compatibility.md +++ /dev/null @@ -1,133 +0,0 @@ - ---- -title: "Compatibility Requirements" -draft: false -weight: 1 ---- - -## Container Dependencies - -The Operator depends on the Crunchy Containers and there are -version dependencies between the two projects. Below are the operator releases and their dependent container release. For reference, the Postgres and PgBackrest versions for each container release are also listed. 
- -| Operator Release | Container Release | Postgres | PgBackrest Version -|:----------|:-------------|:------------|:-------------- -| 4.5.0 | 4.5.0 | 12.4 | 2.29 | -|||11.9|2.29| -|||10.14|2.29| -|||9.6.19|2.29| -|||9.5.23|2.29| -|||| -| 4.4.1 | 4.4.1 | 12.4 | 2.27 | -|||11.9|2.27| -|||10.14|2.27| -|||9.6.19|2.27| -|||9.5.23|2.27| -|||| -| 4.4.0 | 4.4.0 | 12.3 | 2.27 | -|||11.8|2.27| -|||10.13|2.27| -|||9.6.18|2.27| -|||9.5.22|2.27| -|||| -| 4.3.2 | 4.3.2 | 12.3 | 2.25 | -|||11.8|2.25| -|||10.13|2.25| -|||9.6.18|2.25| -|||9.5.22|2.25| -|||| -| 4.3.1 | 4.3.1 | 12.3 | 2.25 | -|||11.8|2.25| -|||10.13|2.25| -|||9.6.18|2.25| -|||9.5.22|2.25| -|||| -| 4.3.0 | 4.3.0 | 12.2 | 2.25 | -|||11.7|2.25| -|||10.12|2.25| -|||9.6.17|2.25| -|||9.5.21|2.25| -|||| -| 4.2.1 | 4.3.0 | 12.1 | 2.20 | -|||11.6|2.20| -|||10.11|2.20| -|||9.6.16|2.20| -|||9.5.20|2.20| -|||| -| 4.2.0 | 4.3.0 | 12.1 | 2.20 | -|||11.6|2.20| -|||10.11|2.20| -|||9.6.16|2.20| -|||9.5.20|2.20| -|||| -| 4.1.1 | 4.1.1 | 12.1 | 2.18 | -|||11.6|2.18| -|||10.11|2.18| -|||9.6.16|2.18| -|||9.5.20|2.18| -|||| -| 4.1.0 | 2.4.2 | 11.5 | 2.17 | -|||10.10| 2.17| -|||9.6.15|2.17| -|||9.5.19|2.17| -|||| -| 4.0.1 | 2.4.1 | 11.4 | 2.13 | -|||10.9| 2.13| -|||9.6.14|2.13| -|||9.5.18|2.13| -|||| -| 4.0.0 | 2.4.0 | 11.3 | 2.13 | -|||10.8| 2.13| -|||9.6.13|2.13| -|||9.5.17|2.13| -|||| -| 3.5.4 | 2.3.3 | 11.4| 2.13 | -|||10.9| 2.13| -|||9.6.14|2.13| -|||9.5.18|2.13| -|||| -| 3.5.3 | 2.3.2 | 11.3| 2.13 | -|||10.8| 2.13| -|||9.6.13|2.13| -|||9.5.17|2.13| -|||| -| 3.5.2 | 2.3.1 | 11.2| 2.10 | -|||10.7| 2.10| -|||9.6.12|2.10| -|||9.5.16|2.10| - -Features sometimes are added into the underlying Crunchy Containers -to support upstream features in the Operator thus dictating a -dependency between the two projects at a specific version level. - -## Operating Systems - -The PostgreSQL Operator is developed on both CentOS 7 and RHEL 7 operating -systems. The underlying containers are designed to use either CentOS 7 or -Red Hat UBI 7 as the base container image. - -Other Linux variants are possible but are not supported at this time. - -Also, please note that as of version 4.2.2 of the PostgreSQL Operator, -[Red Hat Universal Base Image (UBI)](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image) 7 -has replaced RHEL 7 as the base container image for the various PostgreSQL -Operator containers. You can find out more information about Red Hat UBI from -the following article: - -https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image - -## Kubernetes Distributions - -The Operator is designed and tested on Kubernetes and OpenShift Container Platform. - -## Storage - -The Operator is designed to support HostPath, NFS, and Storage Classes for -persistence. The Operator does not currently include code specific to -a particular storage vendor. - -## Releases - -The Operator is released on a quarterly basis often to coincide with Postgres releases. - -There are pre-release and or minor bug fix releases created on an as-needed basis. diff --git a/docs/content/Configuration/configuration.md b/docs/content/Configuration/configuration.md deleted file mode 100644 index e85823a865..0000000000 --- a/docs/content/Configuration/configuration.md +++ /dev/null @@ -1,103 +0,0 @@ ---- -title: "Configuration Resources" -draft: false -weight: 2 ---- - -The operator is template-driven; this makes it simple to configure both the client and the operator. - -## conf Directory - -The Operator is configured with a collection of files found in the *conf* directory. 
These configuration files are deployed to your Kubernetes cluster when the Operator is deployed. Changes made to any of these configuration files currently require a redeployment of the Operator on the Kubernetes cluster. - -The server components of the Operator include Role Based Access Control resources which need to be created a single time by a privileged Kubernetes user. See the Installation section for details on installing a Postgres Operator server. - -The configuration files used by the Operator are found in 2 places: - * the pgo-config ConfigMap in the namespace the Operator is running in - * or, a copy of the configuration files are also included by default into the Operator container images themselves to support a very simplistic deployment of the Operator - -If the pgo-config ConfigMap is not found by the Operator, it will use -the configuration files that are included in the Operator container -images. - -## conf/postgres-operator/pgo.yaml -The *pgo.yaml* file sets many different Operator configuration settings and is described in the [pgo.yaml configuration]({{< ref "pgo-yaml-configuration.md" >}}) documentation section. - - -The *pgo.yaml* file is deployed along with the other Operator configuration files when you run: - - make deployoperator - -## Config Directory - -Files within the [*PGO_CONF_DIR*](/developer-setup/) directory contain various templates that are used by the Operator when creating Kubernetes resources. In an advanced Operator deployment, administrators can modify these templates to add their own custom meta-data or make other changes to influence the Resources that get created on your Kubernetes cluster by the Operator. - -Files within this directory are used specifically when creating PostgreSQL Cluster resources. Sidecar components such as pgBouncer templates are also located within this directory. - -As with the other Operator templates, administrators can make custom changes to this set of templates to add custom features or metadata into the Resources created by the Operator. - -## Operator API Server - -The Operator's API server can be configured to allow access to select URL routes -without requiring TLS authentication from the client and without -the HTTP Basic authentication used for role-based-access. - -This configuration is performed by defining the `NOAUTH_ROUTES` environment -variable for the apiserver container within the Operator pod. - -Typically, this configuration is made within the `deploy/deployment.json` -file for bash-based installations and -`ansible/roles/pgo-operator/templates/deployment.json.j2` for ansible installations. - -For example: -``` -... - containers: [ - { - "name": "apiserver" - "env": [ - { - "name": "NOAUTH_ROUTES", - "value": "/health" - } - ] - ... - } - ... - ] -... -``` - -The `NOAUTH_ROUTES` variable must be set to a comma-separated list of -URL routes. For example: `/health,/version,/example3` would opt to **disable** -authentication for `$APISERVER_URL/health`, `$APISERVER_URL/version`, and -`$APISERVER_URL/example3` respectively. - -Currently, only the following routes may have authentication disabled using -this setting: - -``` -/health -``` - -The `/healthz` route is used by kubernetes probes and has its authentication -disabed without requiring NOAUTH_ROUTES. - - -## Security - -Setting up pgo users and general security configuration is described in the [Security](/security) section of this documentation. 
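As a minimal sketch of the `NOAUTH_ROUTES` behaviour described earlier in this document — assuming `/health` has been added to the list and `PGO_APISERVER_URL` points at the apiserver — an unauthenticated probe could look like the following; the `curl` invocation is illustrative only and not part of the original docs.

```bash
# Illustrative only: with NOAUTH_ROUTES set to "/health", this route should answer
# without the HTTP Basic credentials that other apiserver routes require.
curl --insecure "${PGO_APISERVER_URL}/health"
```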
- -## Local pgo CLI Configuration - -You can specify the default namespace you want to use by -setting the PGO_NAMESPACE environment variable locally -on the host the pgo CLI command is running. - - export PGO_NAMESPACE=pgouser1 - -When that variable is set, each command you issue with *pgo* will -use that namespace unless you over-ride it using the *--namespace* -command line flag. - - pgo show cluster foo --namespace=pgouser2 diff --git a/docs/content/Configuration/pgo-yaml-configuration.md b/docs/content/Configuration/pgo-yaml-configuration.md deleted file mode 100644 index c1b6a894e1..0000000000 --- a/docs/content/Configuration/pgo-yaml-configuration.md +++ /dev/null @@ -1,187 +0,0 @@ - ---- -title: "PGO YAML" - -draft: false -weight: 3 ---- - -# pgo.yaml Configuration -The *pgo.yaml* file contains many different configuration settings as described in this section of the documentation. - -The *pgo.yaml* file is broken into major sections as described below: -## Cluster - -| Setting |Definition | -|---|---| -|BasicAuth | If set to `"true"` will enable Basic Authentication. If set to `"false"`, will allow a valid Operator user to successfully authenticate regardless of the value of the password provided for Basic Authentication. Defaults to `"true".` -|CCPImagePrefix |newly created containers will be based on this image prefix (e.g. crunchydata), update this if you require a custom image prefix -|CCPImageTag |newly created containers will be based on this image version (e.g. {{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}), unless you override it using the --ccp-image-tag command line flag -|Port | the PostgreSQL port to use for new containers (e.g. 5432) -|PGBadgerPort | the port used to connect to pgbadger (e.g. 10000) -|ExporterPort | the port used to connect to postgres exporter (e.g. 
9187) -|User | the PostgreSQL normal user name -|Database | the PostgreSQL normal user database -|Replicas | the number of cluster replicas to create for newly created clusters, typically users will scale up replicas on the pgo CLI command line but this global value can be set as well -|PgmonitorPassword | the password to use for pgmonitor metrics collection if you specify --metrics when creating a PG cluster -|Metrics | boolean, if set to true will cause each new cluster to include crunchy-postgres-exporter as a sidecar container for metrics collection, if set to false (default), users can still add metrics on a cluster-by-cluster basis using the pgo command flag --metrics -|Badger | boolean, if set to true will cause each new cluster to include crunchy-pgbadger as a sidecar container for static log analysis, if set to false (default), users can still add pgbadger on a cluster-by-cluster basis using the pgo create cluster command flag --pgbadger -|Policies | optional, list of policies to apply to a newly created cluster, comma separated, must be valid policies in the catalog -|PasswordAgeDays | optional, if set, will set the VALID UNTIL date on passwords to this many days in the future when creating users or setting passwords, defaults to 60 days -|PasswordLength | optional, if set, will determine the password length used when creating passwords, defaults to 8 -|ServiceType | optional, if set, will determine the service type used when creating primary or replica services, defaults to ClusterIP if not set, can be overridden by the user on the command line as well -|Backrest | optional, if set, will cause clusters to have the pgbackrest volume PVC provisioned during cluster creation -|BackrestPort | currently required to be port 2022 -|DisableAutofail | optional, if set, will disable autofail capabilities by default in any newly created cluster -|DisableReplicaStartFailReinit | if set to `true` will disable the detection of a "start failed" states in PG replicas, which results in the re-initialization of the replica in an attempt to bring it back online -|PodAntiAffinity | either `preferred`, `required` or `disabled` to either specify the type of affinity that should be utilized for the default pod anti-affinity applied to PG clusters, or to disable default pod anti-affinity all together (default `preferred`) -|SyncReplication | boolean, if set to `true` will automatically enable synchronous replication in new PostgreSQL clusters (default `false`) -|DefaultInstanceMemory | string, matches a Kubernetes resource value. If set, it is used as the default value of the memory request for each instance in a PostgreSQL cluster. The example configuration uses `128Mi` which is very low for a PostgreSQL cluster, as the default amount of shared memory PostgreSQL requests is `128Mi`. However, for test clusters, this value is acceptable as the shared memory buffers won't be stressed, but you should absolutely consider raising this in production. If the value is unset, it defaults to `512Mi`, which is a much more appropriate minimum. -|DefaultBackrestMemory | string, matches a Kubernetes resource value. If set, it is used as the default value of the memory request for the pgBackRest repository (default `48Mi`) -|DefaultPgBouncerMemory | string, matches a Kubernetes resource value. 
If set, it is used as the default value of the memory request for pgBouncer instances (default `24Mi`) -|DisableFSGroup | If set to `true`, this will disable the use of the fsGroup for the containers related to PostgreSQL, which is normally set to 26. This is geared towards deployments that use Security Context Constraints in the mode of restricted (default `false`) | - -## Storage -| Setting|Definition | -|---|---| -|PrimaryStorage |required, the value of the storage configuration to use for the primary PostgreSQL deployment -|BackupStorage |required, the value of the storage configuration to use for backups, including the storage for pgbackrest repo volumes -|ReplicaStorage |required, the value of the storage configuration to use for the replica PostgreSQL deployments -|BackrestStorage |required, the value of the storage configuration to use for the pgbackrest shared repository deployment created when a user specifies pgbackrest to be enabled on a cluster -|WALStorage | optional, the value of the storage configuration to use for PostgreSQL Write Ahead Log -|StorageClass | optional, for a dynamic storage type, you can specify the storage class used for storage provisioning (e.g. standard, gold, fast) -|AccessMode |the access mode for new PVCs (e.g. ReadWriteMany, ReadWriteOnce, ReadOnlyMany). See below for descriptions of these. -|Size |the size to use when creating new PVCs (e.g. 100M, 1Gi) -|Storage.storage1.StorageType |supported values are either *dynamic*, *create*, if not supplied, *create* is used -|SupplementalGroups | optional, if set, will cause a SecurityContext to be added to generated Pod and Deployment definitions -|MatchLabels | optional, if set, will cause the PVC to add a *matchlabels* selector in order to match a PV, only useful when the StorageType is *create*, when specified a label of *key=value* is added to the PVC as a match criteria - -## Storage Configuration Examples -In *pgo.yaml*, you will need to configure your storage configurations -depending on which storage you are wanting to use for -Operator provisioning of Persistent Volume Claims. The examples -below are provided as a sample. In all the examples you are -free to change the *Size* to meet your requirements of Persistent -Volume Claim size. - -### HostPath Example - -HostPath is provided for simple testing and use -cases where you only intend to run on a single -Linux host for your Kubernetes cluster. - -``` - hostpathstorage: - AccessMode: ReadWriteMany - Size: 1G - StorageType: create -``` - -### NFS Example - -In the following NFS example, notice that the -*SupplementalGroups* setting is set, this can -be whatever GID you have your NFS mount set -to, typically we set this *nfsnobody* as below. -NFS file systems offer a *ReadWriteMany* access -mode. - -``` - nfsstorage: - AccessMode: ReadWriteMany - Size: 1G - StorageType: create - SupplementalGroups: 65534 -``` - -### Storage Class Example - -Most Storage Class providers offer *ReadWriteOnce* -access modes, but refer to your provider documentation -for other access modes it might support. 
- -``` - storageos: - AccessMode: ReadWriteOnce - Size: 1G - StorageType: dynamic - StorageClass: fast -``` - -## Miscellaneous (Pgo) -| Setting |Definition | -|---|---| -|Audit |boolean, if set to true will cause each apiserver call to be logged with an *audit* marking -|ConfigMapWorkerCount | The number of workers created for the worker queue within the ConfigMap controller (defaults to 2) -|ControllerGroupRefreshInterval | The refresh interval for any per-namespace controller with a refresh interval (defaults to 60 seconds) -|DisableReconcileRBAC | Whether or not to disable RBAC reconciliation in targeted namespaces (defaults to `false`) -|NamespaceRefreshInterval | The refresh interval for the namespace controller (defaults to 60 seconds) -|NamespaceWorkerCount | The number of workers created for the worker queue within the Namespace controller (defaults to 2) -|PgclusterWorkerCount | The number of workers created for the worker queue within the PGCluster controller (defaults to 1) -|PGOImagePrefix | image tag prefix to use for the Operator containers -|PGOImageTag |image tag to use for the Operator containers -|PGReplicaWorkerCount | The number of workers created for the worker queue within the PGReplica controller (defaults to 1) -|PGTaskWorkerCount | The number of workers created for the worker queue within the PGTask controller (defaults to 1) - -## Storage Configuration Details - -You can define n-number of Storage configurations within the *pgo.yaml* file. Those Storage configurations follow these conventions - - - * they must have lowercase name (e.g. storage1) - * they must be unique names (e.g. mydrstorage, faststorage, slowstorage) - -These Storage configurations are referenced in the BackupStorage, ReplicaStorage, and PrimaryStorage configuration values. However, there are command line -options in the *pgo* client that will let a user override these default global values to offer you the user a way to specify very targeted storage configurations when needed (e.g. disaster recovery storage for certain backups). - -You can set the storage AccessMode values to the following: - -* *ReadWriteMany* - mounts the volume as read-write by many nodes -* *ReadWriteOnce* - mounts the PVC as read-write by a single node -* *ReadOnlyMany* - mounts the PVC as read-only by many nodes - -These Storage configurations are validated when the *pgo-apiserver* starts, if a -non-valid configuration is found, the apiserver will abort. These Storage values are only read at *apiserver* start time. - -The following StorageType values are possible - - - * *dynamic* - this will allow for dynamic provisioning of storage using a StorageClass. - * *create* - This setting allows for the creation of a new PVC for each PostgreSQL cluster using a naming convention of *clustername*. When set, the *Size*, *AccessMode* settings are used in constructing the new PVC. - -The operator will create new PVCs using this naming convention: *dbname* where *dbname* is the database name you have specified. For example, if you run: - - pgo create cluster example1 -n pgouser1 - -It will result in a PVC being created named *example1* and in the case of a backup job, the pvc is named *example1-backup* - -Note, when Storage Type is *create*, you can specify a storage configuration setting of *MatchLabels*, when set, this will cause a *selector* of *key=value* to be added into the PVC, this will let you target specific PV(s) to be matched for this cluster. 
Note, if a PV does not match the claim request, then the cluster will not start. Users -that want to use this feature have to place labels on their PV resources as part of PG cluster creation before creating the PG cluster. For example, users would add a label like this to their PV before they create the PG cluster: - - kubectl label pv somepv myzone=somezone -n pgouser1 - -If you do not specify *MatchLabels* in the storage configuration, then no match filter is added and any available PV will be used to satisfy the PVC request. This option does not apply to *dynamic* storage types. - -Example PV creation scripts are provided that add labels to a set of PVs and can be used for testing: `$COROOT/pv/create-pv-nfs-labels.sh` - in that example, a label of **crunchyzone=red** is set on a set of PVs to test with. - -The *pgo.yaml* includes a storage config named **nfsstoragered** that when used will demonstrate the label matching. This feature allows you to support -n-number of NFS storage configurations and supports spreading a PG cluster across different NFS storage configurations. - -## Overriding Storage Configuration Defaults - - pgo create cluster testcluster --storage-config=bigdisk -n pgouser1 - -That example will create a cluster and specify a storage configuration of *bigdisk* to be used for the primary database storage. The replica storage will default to the value of ReplicaStorage as specified in *pgo.yaml*. - - pgo create cluster testcluster2 --storage-config=fastdisk --replica-storage-config=slowdisk -n pgouser1 - -That example will create a cluster and specify a storage configuration of *fastdisk* to be used for the primary database storage, while the replica storage will use the storage configuration *slowdisk*. - - pgo backup testcluster --storage-config=offsitestorage -n pgouser1 - -That example will create a backup and use the *offsitestorage* storage configuration for persisting the backup. - -## Using Storage Configurations for Disaster Recovery -A simple mechanism for partial disaster recovery can be obtained by leveraging network storage, Kubernetes storage classes, and the storage configuration options within the Operator. - -For example, if you define a Kubernetes storage class that refers to a storage backend that is running within your disaster recovery site, and then use that storage class as -a storage configuration for your backups, you essentially have moved your backup files automatically to your disaster recovery site thanks to network storage. diff --git a/docs/content/Configuration/tls.md b/docs/content/Configuration/tls.md deleted file mode 100644 index 9c7d861223..0000000000 --- a/docs/content/Configuration/tls.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -title: "TLS" -date: -draft: false -weight: 6 ---- - -## TLS Configuration - -Should you desire to alter the default TLS settings for the Postgres -Operator, you can set the following variables as described below. 
- -### Server Settings - -To disable TLS and make an unsecured connection on port 8080 instead of -connecting securely over the default port, 8443, set: - -Bash environment variables - -```bash -export DISABLE_TLS=true -export PGO_APISERVER_PORT=8080 -``` - -Or inventory variables if using Ansible - -```yaml -pgo_disable_tls='true' -pgo_apiserver_port=8080 -``` - -To disable TLS verifcation, set the follwing as a Bash environment variable - -```bash -export TLS_NO_VERIFY=false -``` - -Or the following in the inventory file if using Ansible - -```yaml -pgo_tls_no_verify='false' -``` - -### TLS Trust - -#### Custom Trust Additions - -To configure the server to allow connections from any client presenting a -certificate issued by CAs within a custom, PEM-encoded certificate list, -set the following as a Bash environment variable - - -```bash -export TLS_CA_TRUST="/path/to/trust/file" -``` - -Or the following in the inventory file if using Ansible - -```yaml -pgo_tls_ca_store='/path/to/trust/file' -``` - -#### System Default Trust - -To configure the server to allow connections from any client presenting a -certificate issued by CAs within the operating system's default trust store, -set the following as a Bash environment variable - - -```bash -export ADD_OS_TRUSTSTORE=true -``` - -Or the following in the inventory file if using Ansible - -```yaml -pgo_add_os_ca_store='true' -``` - -### Connection Settings - -If TLS authentication has been disabled, or if the Operator's apiserver port -is changed, be sure to update the PGO_APISERVER_URL accordingly. - -For example with an Ansible installation, - -```bash -export PGO_APISERVER_URL='https://:8443' -``` - -would become - -```bash -export PGO_APISERVER_URL='http://:8080' -``` - -With a Bash installation, - -```bash -setip() -{ - export PGO_APISERVER_URL=https://`$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" get service postgres-operator -o=jsonpath="{.spec.clusterIP}"`:8443 -} -``` - -would become - -```bash -setip() -{ - export PGO_APISERVER_URL=http://`$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" get service postgres-operator -o=jsonpath="{.spec.clusterIP}"`:8080 -} -``` - -### Client Settings - -By default, the pgo client will trust certificates issued by one of the -Certificate Authorities listed in the operating system's default CA trust -store, if any. To exclude them, either use the environment variable - -```bash -EXCLUDE_OS_TRUST=true -``` - -or use the --exclude-os-trust flag - -```bash -pgo version --exclude-os-trust -``` - -Finally, if TLS has been disabled for the Operator's apiserver, the PGO -client connection must be set to match the given settings. - -Two options are available, either the Bash environment variable - -```bash -DISABLE_TLS=true -``` - -must be configured, or the --disable-tls flag must be included when using the client, i.e. 
- -```bash -pgo version --disable-tls -``` diff --git a/docs/content/Security/_index.md b/docs/content/Security/_index.md deleted file mode 100644 index 3a93569fca..0000000000 --- a/docs/content/Security/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: "RBAC Configuration" -date: -draft: false -weight: 60 ---- diff --git a/docs/content/Security/api-encryption-configuration.md b/docs/content/Security/api-encryption-configuration.md deleted file mode 100644 index a2b8eafa6d..0000000000 --- a/docs/content/Security/api-encryption-configuration.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: "PostgreSQL Operator API Encryption Configuration" -date: -draft: false -weight: 7 ---- - -## Configuring Encryption of PostgreSQL Operator API Connection - -The PostgreSQL Operator REST API connection is encrypted with keys stored in the *pgo.tls* Secret. - -The pgo.tls Secret can be generated prior to starting the PostgreSQL Operator or you can let the PostgreSQL Operator generate the Secret for you if the Secret -does not exist. - -Adjust the default keys to meet your security requirements using your own keys. The *pgo.tls* Secret is created when you run: - - make deployoperator - -The keys are generated when the RBAC script is executed by the cluster admin: - - make installrbac - -In some scenarios like an OLM deployment, it is preferable for the Operator to generate the Secret keys at runtime, if the pgo.tls Secret does not exit when the Operator starts, a new TLS Secret will be generated. - -In this scenario, you can extract the generated Secret TLS keys using: - - kubectl cp /:/tmp/server.key /tmp/server.key -c apiserver - kubectl cp /:/tmp/server.crt /tmp/server.crt -c apiserver - -example of the command below: - - kubectl cp pgo/postgres-operator-585584f57d-ntwr5:tmp/server.key /tmp/server.key -c apiserver - kubectl cp pgo/postgres-operator-585584f57d-ntwr5:tmp/server.crt /tmp/server.crt -c apiserver - -This server.key and server.crt can then be used to access the *pgo-apiserver* from the pgo CLI by setting the following variables in your client environment: - - export PGO_CA_CERT=/tmp/server.crt - export PGO_CLIENT_CERT=/tmp/server.crt - export PGO_CLIENT_KEY=/tmp/server.key - -You can view the TLS secret using: - - kubectl get secret pgo.tls -n pgo -or - - oc get secret pgo.tls -n pgo - -If you create the Secret outside of the Operator, for example using the default installation script, the key and cert that are generated by the default installation are found here: - - $PGOROOT/conf/postgres-operator/server.crt - $PGOROOT/conf/postgres-operator/server.key - -The key and cert are generated using the *deploy/gen-api-keys.sh* script. - -That script gets executed when running: - - make installrbac - -You can extract the server.key and server.crt from the Secret using the following: - - oc get secret pgo.tls -n $PGO_OPERATOR_NAMESPACE -o jsonpath='{.data.tls\.key}' | base64 --decode > /tmp/server.key - oc get secret pgo.tls -n $PGO_OPERATOR_NAMESPACE -o jsonpath='{.data.tls\.crt}' | base64 --decode > /tmp/server.crt - -This server.key and server.crt can then be used to access the *pgo-apiserver* REST API from the pgo CLI on your client host. 
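Pulling the steps above together into one hedged sketch: extract the TLS material from the `pgo.tls` Secret and point the `pgo` client environment at it. `kubectl` is shown here, but `oc` works the same way as noted above; the closing `pgo version` call is simply an illustrative connectivity check.

```bash
# Sketch: extract the apiserver key and certificate from the pgo.tls Secret...
kubectl get secret pgo.tls -n "$PGO_OPERATOR_NAMESPACE" \
  -o jsonpath='{.data.tls\.key}' | base64 --decode > /tmp/server.key
kubectl get secret pgo.tls -n "$PGO_OPERATOR_NAMESPACE" \
  -o jsonpath='{.data.tls\.crt}' | base64 --decode > /tmp/server.crt

# ...and point the pgo client environment at the extracted files.
export PGO_CA_CERT=/tmp/server.crt
export PGO_CLIENT_CERT=/tmp/server.crt
export PGO_CLIENT_KEY=/tmp/server.key

# Illustrative connectivity check against the pgo-apiserver.
pgo version
```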
diff --git a/docs/content/Security/configure-postgres-operator-rbac.md b/docs/content/Security/configure-postgres-operator-rbac.md deleted file mode 100644 index de70e2b480..0000000000 --- a/docs/content/Security/configure-postgres-operator-rbac.md +++ /dev/null @@ -1,108 +0,0 @@ ---- -title: "Configuration of PostgreSQL Operator RBAC" -date: -draft: false -weight: 7 ---- - - -## PostreSQL Operator RBAC - -The *conf/postgres-operator/pgorole* file is read at start up time when the operator is deployed to the Kubernetes cluster. This file defines the PostgreSQL Operator roles whereby PostgreSQL Operator API users can be authorized. - -The *conf/postgres-operator/pgouser* file is read at start up time also and contains username, password, role, and namespace information as follows: - - username:password:pgoadmin: - pgouser1:password:pgoadmin:pgouser1 - pgouser2:password:pgoadmin:pgouser2 - pgouser3:password:pgoadmin:pgouser1,pgouser2 - readonlyuser:password:pgoreader: - -The format of the pgouser server file is: - - ::: - -The namespace is a comma separated list of namespaces that user has access to. If you do not specify a namespace, then all namespaces is assumed, meaning this user can access any namespace that the Operator is watching. - -A user creates a *.pgouser* file in their $HOME directory to identify themselves to the Operator. An entry in .pgouser will need to match entries in the *conf/postgres-operator/pgouser* file. A sample *.pgouser* file contains the following: - - username:password - -The format of the .pgouser client file is: - - : - -The users pgouser file can also be located at: - -*/etc/pgo/pgouser* - -or it can be found at a path specified by the PGOUSER environment variable. - -If the user tries to access a namespace that they are not configured for within the server side *pgouser* file then they will get an error message as follows: - - Error: user [pgouser1] is not allowed access to namespace [pgouser2] - - -If you wish to add all available permissions to a *pgorole*, you can specify it by using a single `*` in your configuration. Note that if you are editing your YAML file directly, you will need to ensure to write it as `"*"` to ensure it is recognized as a string. 
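As a hedged illustration of the server-side file formats described above — the role names, user names, and passwords are placeholders, and the permission names are taken from the table that follows:

```bash
# Illustrative only: example pgorole entries (format "rolename:perm1,perm2,...").
cat > "$PGOROOT/conf/postgres-operator/pgorole" <<'EOF'
pgoadmin:*
pgoreader:ShowCluster,ShowBackup,ShowPolicy
EOF

# Illustrative only: example pgouser entries (format "username:password:role:namespace(s)").
cat > "$PGOROOT/conf/postgres-operator/pgouser" <<'EOF'
pgouser1:examplepassword:pgoadmin:pgouser1
readonlyuser:examplepassword:pgoreader:
EOF
```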
- -The following list shows the current complete list of possible pgo permissions that you can specify within the *pgorole* file when creating roles: - -|Permission|Description | -|---|---| -|ApplyPolicy | allow *pgo apply*| -|Cat | allow *pgo cat*| -|Clone | allow *pgo clone*| -|CreateBackup | allow *pgo backup*| -|CreateCluster | allow *pgo create cluster*| -|CreateDump | allow *pgo create pgdump*| -|CreateFailover | allow *pgo failover*| -|CreatePgAdmin | allow *pgo create pgadmin*| -|CreatePgbouncer | allow *pgo create pgbouncer*| -|CreatePolicy | allow *pgo create policy*| -|CreateSchedule | allow *pgo create schedule*| -|CreateUpgrade | allow *pgo upgrade*| -|CreateUser | allow *pgo create user*| -|DeleteBackup | allow *pgo delete backup*| -|DeleteCluster | allow *pgo delete cluster*| -|DeletePgAdmin | allow *pgo delete pgadmin*| -|DeletePgbouncer | allow *pgo delete pgbouncer*| -|DeletePolicy | allow *pgo delete policy*| -|DeleteSchedule | allow *pgo delete schedule*| -|DeleteUpgrade | allow *pgo delete upgrade*| -|DeleteUser | allow *pgo delete user*| -|DfCluster | allow *pgo df*| -|Label | allow *pgo label*| -|Reload | allow *pgo reload*| -|Restore | allow *pgo restore*| -|RestoreDump | allow *pgo restore* for pgdumps| -|ShowBackup | allow *pgo show backup*| -|ShowCluster | allow *pgo show cluster*| -|ShowConfig | allow *pgo show config*| -|ShowPgAdmin | allow *pgo show pgadmin*| -|ShowPgBouncer | allow *pgo show pgbouncer*| -|ShowPolicy | allow *pgo show policy*| -|ShowPVC | allow *pgo show pvc*| -|ShowSchedule | allow *pgo show schedule*| -|ShowNamespace | allow *pgo show namespace*| -|ShowSystemAccounts | allows commands with the `--show-system-accounts` flag to return system account information (e.g. the `postgres` superuser)| -|ShowUpgrade | allow *pgo show upgrade*| -|ShowWorkflow | allow *pgo show workflow*| -|Status | allow *pgo status*| -|TestCluster | allow *pgo test*| -|UpdatePgBouncer | allow *pgo update pgbouncer*| -|UpdateCluster | allow *pgo update cluster*| -|User | allow *pgo user*| -|Version | allow *pgo version*| - - -If the user is unauthorized for a pgo command, the user will get back this response: - - Error: Authentication Failed: 403 - -## Making Security Changes - -Importantly, it is necesssary to redeploy the PostgreSQL Operator prior to giving effect to the user security changes in the pgouser and pgorole files: - - make deployoperator - -Performing this command will recreate the *pgo-config* ConfigMap that stores these files and is mounted by the Operator during its initialization. 
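A short, hedged sketch of the redeploy-and-verify cycle described above; the `kubectl` check is an illustrative assumption rather than part of the original document.

```bash
# Redeploy so that edited pgouser / pgorole files take effect.
make deployoperator

# Illustrative check: the pgo-config ConfigMap should have been recreated
# in the namespace the Operator is running in.
kubectl get configmap pgo-config -n "$PGO_OPERATOR_NAMESPACE"
```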
diff --git a/docs/content/Security/install-postgres-operator-rbac.md b/docs/content/Security/install-postgres-operator-rbac.md deleted file mode 100644 index 39ab147ea6..0000000000 --- a/docs/content/Security/install-postgres-operator-rbac.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: "Installation of PostgreSQL Operator RBAC" -date: -draft: false -weight: 7 ---- - -## Installation of PostgreSQL Operator RBAC - -For a list of the RBAC required to install the PostgreSQL Operator, please view the [`postgres-operator.yml`](https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml) file: - -[https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml](https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml) - -The first step is to install the PostgreSQL Operator RBAC configuration. This can be accomplished by running: - - make installrbac - -Running this installs the PostgreSQL Operator Custom Resource Definitions (CRDs) and creates the following RBAC resources on your Kubernetes cluster: - -| Setting |Definition | -|---|---| -| Custom Resource Definitions | pgclusters| -| | pgpolicies| -| | pgreplicas| -| | pgtasks| -| | pgupgrades| -| Cluster Roles (cluster-roles.yaml) | pgopclusterrole| -| | pgopclusterrolecrd| -| Cluster Role Bindings (cluster-roles-bindings.yaml) | pgopclusterbinding| -| | pgopclusterbindingcrd| -| Service Account (service-accounts.yaml) | postgres-operator| -| | pgo-backrest| -| Roles (rbac.yaml) | pgo-role| -| | pgo-backrest-role| -|Role Bindings (rbac.yaml) | pgo-backrest-role-binding| -| | pgo-role-binding| - -Note that the cluster role bindings have a naming convention of pgopclusterbinding-$PGO_OPERATOR_NAMESPACE and pgopclusterbindingcrd-$PGO_OPERATOR_NAMESPACE. The PGO_OPERATOR_NAMESPACE environment variable is added to make each cluster role binding name unique and to support more than a single PostgreSQL Operator being deployed on the same Kubernetes cluster. - -Also, the specific Cluster Roles installed depend on the Namespace Mode enabled via the `PGO_NAMESPACE_MODE` environment variable when running `make installrbac`. Please consult the [Namespace documentation](/architecture/namespace/) for more information regarding the Namespace Modes available, including the specific `ClusterRoles` required to enable each mode. diff --git a/docs/content/Upgrade/_index.md b/docs/content/Upgrade/_index.md deleted file mode 100644 index 809272d1fb..0000000000 --- a/docs/content/Upgrade/_index.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: "Upgrade" -draft: false -weight: 80 ---- - -# Upgrading the Crunchy PostgreSQL Operator - -There are two methods for upgrading your existing deployment of the PostgreSQL Operator. - -If you are upgrading from PostgreSQL Operator 4.1.0 or later, you are encouraged to use the [Automated Upgrade Procedure](/upgrade/automatedupgrade). This method simplifies the upgrade process and keeps your existing clusters in place until they are upgraded. - -For versions before 4.1.0, please see the appropriate [manual procedure](/upgrade/manual). 
diff --git a/docs/content/Upgrade/automatedupgrade.md b/docs/content/Upgrade/automatedupgrade.md deleted file mode 100644 index 31480bc07e..0000000000 --- a/docs/content/Upgrade/automatedupgrade.md +++ /dev/null @@ -1,154 +0,0 @@ ---- -title: "Automated PostgreSQL Operator Upgrade - Operator 4.1+" -draft: false -weight: 80 ---- - -## Automated PostgreSQL Operator Upgrade Procedure - -The automated upgrade to a new release of the PostgreSQL Operator comprises two main steps: - -* Upgrading the PostgreSQL Operator itself -* Upgrading the existing PostgreSQL Clusters to the new release - -The first step will result in an upgraded PostgreSQL Operator that is able to create and manage new clusters as expected, but will be unable to manage existing clusters until they have been upgraded. The second step upgrades the clusters to the current Operator version, allowing them to once again be fully managed by the Operator. - -The automated upgrade procedure is designed to facilate the quickest and most efficient method to the current release of the PostgreSQL Operator. However, as with any upgrade, there are several considerations before beginning. - -### Considerations - -1. Versions Supported - This upgrade currently supports cluster upgrades from PostgreSQL Operator version 4.1.0 and later. - -2. PostgreSQL Major Version Requirements - The underlying PostgreSQL major version must match between the old and new clusters. For example, if you are upgrading a 4.1.0 version of the PostgreSQL Operator and the cluster is using PostgreSQL 11.5, your upgraded clusters will need to use container images with a later minor version of PostgreSQL 11. Note that this is not a requirement for new clusters, which may use any currently supported version. For more information, please see the [Compatibility Requirements]({{< relref "configuration/compatibility.md" >}}). - -3. Cluster Downtime - The re-creation of clusters will take some time, generally on the order of minutes but potentially longer depending on the operating environment. As such, the timing of the upgrade will be an important consideration. It should be noted that the upgrade of the PostgreSQL Operator itself will leave any existing cluster resources in place until individual pgcluster upgrades are performed. - -4. Destruction and Re-creation of Certain Resources - As this upgrade process does destroy and recreate most elements of the cluster, unhealthy Kubernetes or Openshift environments may have difficulty recreating the necessary elements. Node availability, necessary PVC storage allocations and processing requirements are a few of the resource considerations to make before proceeding. - -5. Compatibility with Custom Configurations - Given the nearly endless potential for custom configuration settings, it is important to consider any resource or implemenation that might be uniquely tied to the current PostgreSQL Operator version. - -6. Storage Requirements - An essential part of both the automated and manual upgrade procedures is the reuse of existing PVCs. As such, it is essential that the existing storage settings are maintained for any upgraded clusters. - -7. As opposed to the manual upgrade procedures, the automated upgrade is designed to leave existing resources (such as CRDs, config maps, secrets, etc) in place whenever possible to minimize the need for resource re-creation. - -8. 
Metrics - While the PostgreSQL Operator upgrade process will not delete an existing Metrics Stack, it does not currently support the upgrade of existing metrics infrastructure. - -##### NOTE: As with any upgrade procedure, it is strongly recommended that a full logical backup is taken before any upgrade procedure is started. Please see the [Logical Backups](/pgo-client/common-tasks#logical-backups-pg_dump--pg_dumpall) section of the Common Tasks page for more information. - -### Automated Upgrade when using the PostgreSQL Operator Installer (`pgo-deployer`), Helm or Ansible - -For all existing PostgreSQL Operator deployments that were installed using the Ansible installation method, the PostgreSQL Operator Installer or the Helm Chart Installation of the PostgreSQL Operator, the upgrade process is straightforward. - -First, you will copy your existing configuration file (whether inventory, postgres-operator.yml, values.yaml, etc, depending on method and version) as a backup for your existing settings. You will reference these settings, but you will need to use the updated version of this file for the current version of PostgreSQL Operator. - -In all three cases, you will need to use the relevant update functionality available with your chosen installation method. For all three options, please keep the above [Considerations](/upgrade/automatedupgrade#considerations) in mind, particularly with regard to the version and storage requirements listed. - -#### PostgreSQL Operator Installer - -For existing PostgreSQL Operator deployments that were installed using the PostgreSQL Operator Installer, you will check out the appropriate release tag and update your the new configuration files. After this, you will need to update your Operator installation using the `DEPLOY_ACTION` method described in the [Configuring to Update and Uninstall](/installation/postgres-operator#configuring-to-update-and-uninstall) section of the documentation. - -Please note, you will need to ensure that you have executed the [post-installation cleanup](/installation/postgres-operator#post-installation) between each `DEPLOY_ACTION` activity. - -#### Helm - -For existing PostgreSQL Operator deployments that were installed using the Helm installer, you will check out the appropriate release tag and update your the new configuration files. Then you will need to use the `helm upgrade` command as described in the [Helm Upgrade](/installation/other/helm#upgrade) section of the Helm installation documentation. - -#### Ansible - -For existing PostgreSQL Operator deployments that were installed using Ansible, you will first need to check out the appropriate release tag of the Operator. Then please follow the [Update Instructions]({{< relref "installation/other/ansible/updating-operator.md" >}}), being sure to update the new inventory file with your required settings. - -#### Wrapping Up the PostgreSQL Operator Upgrade - -Once the upgrade is complete, you should now see the PostgreSQL Operator pods are up and ready. It is strongly recommended that you create a test cluster to validate proper functionality before moving on to the [Automated Cluster Upgrade](/upgrade/automatedupgrade#postgresql-operator-automated-cluster-upgrade) section below. - -### Automated Upgrade when using a Bash installation of the PostgreSQL Operator - -Like the Ansible procedure given above, the Bash upgrade procedure for upgrading the PostgreSQL Operator will require some manual configuration steps before the upgrade can take place. 
These updates will be made to your user's environment variables and the pgo.yaml configuration file. - -#### PostgreSQL Operator Configuration Updates - -To begin, you will need to make the following updates to your existing configuration. - -##### Bashrc File Updates - -First, you will make the following updates to your $HOME/.bashrc file. - -When upgrading from version 4.1.X, in `$HOME/.bashrc` - -Add the following variables: - -``` -export TLS_CA_TRUST="" -export ADD_OS_TRUSTSTORE=false -export NOAUTH_ROUTES="" - -# Disable default inclusion of OS trust in PGO clients -export EXCLUDE_OS_TRUST=false -``` - -Then, for either 4.1.X or 4.2.X, - -Update the `PGO_VERSION` variable to `{{< param operatorVersion >}}` - -Finally, source this file with -``` -source $HOME/.bashrc -``` - -##### PostgreSQL Operator Configuration File updates - -Next, you will and save a copy of your existing pgo.yaml file (`$PGOROOT/conf/postgres-operator/pgo.yaml`) as pgo_old.yaml or similar. - -Once this is saved, you will checkout the current release of the PostgreSQL Operator and update the pgo.yaml for the current version, making sure to make updates to the CCPImageTag and storage settings in line with the [Considerations](/upgrade/automatedupgrade#considerations) given above. - -#### Upgrading the Operator - -Once the above configuration updates are completed, the PostgreSQL Operator can be upgraded. -To help ensure that needed resources are not inadvertently deleted during an upgrade of the PostgreSQL Operator, a helper script is provided. This script provides a similar function to the Ansible installation method's 'update' tag, where the Operator is undeployed, and the designated namespaces, RBAC rules, pods, etc are redeployed or recreated as appropriate, but required CRDs and other resources are left in place. - -To use the script, execute: -``` -$PGOROOT/deploy/upgrade-pgo.sh -``` -This script will undeploy the current PostgreSQL Operator, configure the desired namespaces, install the RBAC configuration, deploy the new Operator, and, attempt to install a new PGO client, assuming default location settings are being used. - -After this script completes, it is strongly recommended that you create a test cluster to validate the Operator is functioning as expected before moving on to the individual cluster upgrades. - -## PostgreSQL Operator Automated Cluster Upgrade - -Previously, the existing cluster upgrade focused on updating a cluster's underlying container images. However, due to the various changes in the PostgreSQL Operator's operation between the various versions (including numerous updates to the relevant CRDs, integration of Patroni for HA and other significant changes), updates between PostgreSQL Operator releases required the manual deletion of the existing clusters while preserving the underlying PVC storage. After installing the new PostgreSQL Operator version, the clusters could be recreated manually with the name of the new cluster matching the existing PVC's name. - -The automated upgrade process provides a mechanism where, instead of being deleted, the existing PostgreSQL clusters will be left in place during the PostgreSQL Operator upgrade. While normal Operator functionality will be restricted on these existing clusters until they are upgraded to the currently installed PostgreSQL Operator version, the pods, services, etc will still be in place and accessible via other methods (e.g. kubectl, service IP, etc). 
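Before upgrading individual clusters, it can help to confirm which pgcluster resources the newly deployed Operator can see and that the existing services are still reachable. A rough sketch of such a check, assuming the default `pg-cluster` label is present on the cluster's services:

```
# List the existing pgcluster custom resources that are waiting to be upgraded
kubectl -n <namespace> get pgclusters

# Confirm the cluster's services are still in place and reachable in the meantime
kubectl -n <namespace> get svc -l pg-cluster=mycluster
```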
- -To upgrade a PostgreSQL cluster using the standard (`crunchy-postgres-ha`) image, you can run the following command: -``` -pgo upgrade mycluster -``` - -If you are using the PostGIS-enabled image (i.e. `crunchy-postgres-gis-ha`) or any other custom images, you will need to add the `--ccp-image-tag`: -``` -pgo upgrade --ccp-image-tag={{< param centosBase >}}-{{< param postgresVersion >}}-{{< param postgisVersion >}}-{{< param operatorVersion >}} mygiscluster -``` -Where `{{< param postgresVersion >}}` is the PostgreSQL version, `{{< param postgisVersion >}}` is the PostGIS version and `{{< param operatorVersion >}}` is the PostgreSQL Operator version. -Please note, no tag validation will be performed and additional steps may be required to upgrade your PostGIS extension implementation. For more information on PostGIS upgrade considerations, please see -[PostGIS Upgrade Documentation](https://access.crunchydata.com/documentation/postgis/latest/postgis_installation.html#upgrading). - -This will follow a similar process to the documented manual process, where the pods, deployments, replicasets, pgtasks and jobs are deleted, the cluster's replicas are scaled down and replica PVCs deleted, but the primary PVC and backrest-repo PVC are left in place. Existing services for the primary, replica and backrest-shared-repo are also kept and will be updated to the requirements of the current version. Configmaps and secrets are kept except where deletion is required. For a cluster 'mycluster', the following configmaps will be deleted (if they exist) and recreated: -``` -mycluster-leader -mycluster-pgha-default-config -``` -along with the following secret: -``` -mycluster-backrest-repo-config -``` - -The pgcluster CRD will be read, updated automatically and replaced, at which point the normal cluster creation process will take over. The end result of the upgrade should be an identical numer of pods, deployments, replicas, etc with a new pgbackrest backup taken, but existing backups left in place. - -Finally, to disable PostgreSQL version checking during the upgrade, such as for when container images are re-tagged and no longer follow the standard version tagging format, use the "ignore-validation" flag: - -``` -pgo upgrade mycluster --ignore-validation -``` - -That will allow the upgrade to proceed, regardless of the tag values. Please note, the underlying image must still be chosen in accordance with the [Considerations](/upgrade/automatedupgrade#considerations) listed above. diff --git a/docs/content/Upgrade/manual/_index.md b/docs/content/Upgrade/manual/_index.md deleted file mode 100644 index 162cbf94af..0000000000 --- a/docs/content/Upgrade/manual/_index.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: "Manual Upgrades" -date: -draft: false -weight: 100 ---- - -## Manually Upgrading the Operator and PostgreSQL Clusters - -In the event that the automated upgrade cannot be used, below are manual upgrade procedures for both PostgreSQL Operator 3.5 and 4.0 releases. These procedures will require action by the Operator administrator of your organization in order to upgrade to the current release of the Operator. Some upgrade steps are still automated within the Operator, but not all are possible with this upgrade method. As such, the pages below show the specific steps required to upgrade different versions of the PostgreSQL Operator depending on your current environment. 
- -NOTE: If you are upgrading from Crunchy PostgreSQL Operator version 4.1.0 or later, the [Automated Upgrade Procedure](/upgrade/automatedupgrade) is recommended. If you are upgrading PostgreSQL 12 clusters, you MUST use the [Automated Upgrade Procedure](/upgrade/automatedupgrade). - -When performing a manual upgrade, it is recommended to upgrade to the latest PostgreSQL Operator available. - -[Manual Upgrade - PostgreSQL Operator 3.5]( {{< relref "upgrade/manual/upgrade35.md" >}}) - -[Manual Upgrade - PostgreSQL Operator 4]( {{< relref "upgrade/manual/upgrade4.md" >}}) - diff --git a/docs/content/Upgrade/manual/upgrade35.md b/docs/content/Upgrade/manual/upgrade35.md deleted file mode 100644 index cb7ec25138..0000000000 --- a/docs/content/Upgrade/manual/upgrade35.md +++ /dev/null @@ -1,244 +0,0 @@ ---- -title: "Manual Upgrade - Operator 3.5" -draft: false -weight: 8 ---- - -## Upgrading the Crunchy PostgreSQL Operator from Version 3.5 to {{< param operatorVersion >}} - -This section will outline the procedure to upgrade a given cluster created using PostgreSQL Operator 3.5.x to PostgreSQL Operator version {{< param operatorVersion >}}. This version of the PostgreSQL Operator has several fundamental changes to the existing PGCluster structure and deployment model. Most notably, all PGClusters use the new Crunchy PostgreSQL HA container in place of the previous Crunchy PostgreSQL containers. The use of this new container is a breaking change from previous versions of the Operator. - -#### Crunchy PostgreSQL High Availability Containers - -Using the PostgreSQL Operator {{< param operatorVersion >}} requires replacing your `crunchy-postgres` and `crunchy-postgres-gis` containers with the `crunchy-postgres-ha` and `crunchy-postgres-gis-ha` containers respectively. The underlying PostgreSQL installations in the container remain the same but are now optimized for Kubernetes environments to provide the new high-availability functionality. - -A major change to this container is that the PostgreSQL process is now managed by Patroni. This allows a PostgreSQL cluster that is deployed by the PostgreSQL Operator to manage its own uptime and availability, to elect a new leader in the event of a downtime scenario, and to automatically heal after a failover event. - -When creating your new clusters using version {{< param operatorVersion >}} of the PostgreSQL Operator, the `pgo create cluster` command will automatically use the new `crunchy-postgres-ha` image if the image is unspecified. If you are creating a PostGIS enabled cluster, please be sure to use the updated image name and image tag, as with the command: - -``` -pgo create cluster mygiscluster --ccp-image=crunchy-postgres-gis-ha --ccp-image-tag={{< param centosBase >}}-{{< param postgresVersion >}}-{{< param postgisVersion >}}-{{< param operatorVersion >}} -``` -Where `{{< param postgresVersion >}}` is the PostgreSQL version, `{{< param postgisVersion >}}` is the PostGIS version and `{{< param operatorVersion >}}` is the PostgreSQL Operator version. -Please note, no tag validation will be performed and additional steps may be required to upgrade your PostGIS extension implementation. For more information on PostGIS upgrade considerations, please see -[PostGIS Upgrade Documentation](https://access.crunchydata.com/documentation/postgis/latest/postgis_installation.html#upgrading). - -NOTE: As with any upgrade procedure, it is strongly recommended that a full logical backup is taken before any upgrade procedure is started. 
Please see the [Logical Backups](/pgo-client/common-tasks#logical-backups-pg_dump--pg_dumpall) section of the Common Tasks page for more information. - -##### Prerequisites. -You will need the following items to complete the upgrade: - -* The code for the latest PostgreSQL Operator available -* The latest client binary - -##### Step 1 - -Create a new Linux user with the same permissions as the existing user used to install the Crunchy PostgreSQL Operator. This is necessary to avoid any issues with environment variable differences between 3.5 and {{< param operatorVersion >}}. - -##### Step 2 - -For the cluster(s) you wish to upgrade, record the cluster details provided by - -``` -pgo show cluster -``` - -so that your new clusters can be recreated with the proper settings. - -Also, you will need to note the name of the primary PVC. If it does not exactly match the cluster name, you will need to recreate your cluster using the primary PVC name as the new cluster name. - -For example, given the following output: - -``` -$ pgo show cluster mycluster - -cluster : mycluster (crunchy-postgres:centos7-11.5-2.4.2) - pod : mycluster-7bbf54d785-pk5dq (Running) on kubernetes1 (1/1) (replica) - pvc : mycluster - pod : mycluster-ypvq-5b9b8d645-nvlb6 (Running) on kubernetes1 (1/1) (primary) - pvc : mycluster-ypvq -... -``` - -the new cluster's name will need to be "mycluster-ypvq" - -##### Step 3 - -NOTE: Skip this step if your primary PVC still matches your original cluster name, or if you do not have pgBackrestBackups you wish to preserve for use in the upgraded cluster. - -Otherwise, noting the primary PVC name mentioned in Step 2, run - -``` -kubectl exec mycluster-backrest-shared-repo- -- bash -c "mv /backrestrepo/mycluster-backrest-shared-repo /backrestrepo/mycluster-ypvq-backrest-shared-repo" -``` - -where "mycluster" is the original cluster name, "mycluster-ypvq" is the primary PVC name and "mycluster-backrest-shared-repo-" is the pgBackRest shared repo pod name. - -##### Step 4 - -For the cluster(s) you wish to upgrade, scale down any replicas, if necessary, then delete the cluster - -``` -pgo delete cluster -``` - -If there are any remaining jobs for this deleted cluster, use - -``` -kubectl -n delete job -``` - -to remove the job and any associated "Completed" pods. - -NOTE: Please record the name of each cluster, the namespace used, and be sure not to delete the associated PVCs or CRDs! - -##### Step 5 - -Delete the 3.5.x version of the operator by executing: - -``` -$COROOT/deploy/cleanup.sh -$COROOT/deploy/remove-crd.sh -``` - -##### Step 6 - -Log in as your new Linux user and install the {{< param operatorVersion >}} PostgreSQL Operator as described in the [Bash Installation Procedure]( {{< relref "installation/other/bash.md" >}}). - -Be sure to add the existing namespace to the Operator's list of watched namespaces (see the [Namespace]( {{< relref "architecture/namespace.md" >}}) section of this document for more information) and make sure to avoid overwriting any existing data storage. - -We strongly recommend that you create a test cluster before proceeding to the next step. - - -##### Step 7 - -Once the Operator is installed and functional, create a new {{< param operatorVersion >}} cluster matching the cluster details recorded in Step 1. Be sure to use the primary PVC name (also noted in Step 1) and the same major PostgreSQL version as was used previously. This will allow the new clusters to utilize the existing PVCs. 
- -NOTE: If you have existing pgBackRest backups stored that you would like to have available in the upgraded cluster, you will need to follow the [PVC Renaming Procedure]( {{< relref "Upgrade/manual/upgrade35#pgbackrest-repo-pvc-renaming" >}}). - -A simple example is given below, but more information on cluster creation can be found [here](/pgo-client/common-tasks#creating-a-postgresql-cluster) - -``` -pgo create cluster -n -``` - -##### Step 8 - -Manually update the old leftover Secrets to use the new label as defined in {{< param operatorVersion >}}: - -``` -kubectl -n label secret/-postgres-secret pg-cluster= -n -kubectl -n label secret/-primaryuser-secret pg-cluster= -n -kubectl -n label secret/-testuser-secret pg-cluster= -n -``` - -##### Step 9 - -To verify cluster status, run - -``` -pgo test -n -``` - -Output should be similar to: - -``` -cluster : mycluster - Services - primary (10.106.70.238:5432): UP - Instances - primary (mycluster-7d49d98665-7zxzd): UP -``` - -##### Step 10 - -Scale up to the required number of replicas, as needed. - -Congratulations! Your cluster is upgraded and ready to use! - - -### pgBackRest Repo PVC Renaming - -If the pgcluster you are upgrading has an existing pgBackRest repo PVC that you would like to continue to use (which is required for existing pgBackRest backups to be accessible by your new cluster), the following renaming procedure will be needed. - -##### Step 1 - -To start, if your current cluster is "mycluster", the pgBackRest PVC created by version 3.5 of the Postgres Operator will be named "mycluster-backrest-shared-repo". This will need to be renamed to "mycluster-pgbr-repo" to be used in your new cluster. - -To begin, save the output from - -``` -kubectl -n describe pvc mycluster-backrest-shared-repo -``` - -for later use when recreating this PVC with the new name. In this output, note the "Volume" name, which is the name of the underlying PV. - -##### Step 2 - -Now use - -``` -kubectl -n get pv -``` - -to check the "RECLAIM POLICY". If this is not set to "Retain", edit the "persistentVolumeReclaimPolicy" value so that it is set to "Retain" using - -``` -kubectl -n patch pv --type='json' -p='[{"op": "replace", "path": "/spec/persistentVolumeReclaimPolicy", "value":"Retain"}]' -``` - -##### Step 3 - -Now, delete the PVC: - -``` -kubectl -n delete pvc mycluster-backrest-shared-repo -``` - -##### Step 4 - -You will remove the "claimRef" section of the PV with - -``` -kubectl -n patch pv --type=json -p='[{"op": "remove", "path": "/spec/claimRef"}]' -``` - -which will make the PV "Available" so it may be reused by the new PVC. - -##### Step 5 - -Now, create a file with contents similar to the following: - -``` -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: mycluster-pgbr-repo - namespace: demo -spec: - storageClassName: "" - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi - volumeMode: Filesystem - volumeName: "crunchy-pv156" -``` - -where name matches your new cluster (Remember that this will need to match the "primary PVC" name identified in [Step 2]( {{< relref "Upgrade/manual/upgrade35#step-2" >}}) of the upgrade procedure!) and namespace, storageClassName, accessModes, storage, volumeMode and volumeName match your original PVC. - -##### Step 6 - -Now you can use the new file to recreate your PVC using - -``` -kubectl -n create -f -``` - -To check that your PVC is "Bound", run - -``` -kubectl -n get pvc mycluster-pgbr-repo -``` -Congratulations, you have renamed your PVC! 
Once the PVC Status is "Bound", your cluster can be recreated. If you altered the Reclaim Policy on your PV in Step 1, you will want to reset it now. diff --git a/docs/content/Upgrade/manual/upgrade4.md b/docs/content/Upgrade/manual/upgrade4.md deleted file mode 100644 index da11f86f15..0000000000 --- a/docs/content/Upgrade/manual/upgrade4.md +++ /dev/null @@ -1,562 +0,0 @@ ---- -title: "Manual Upgrade - Operator 4" -draft: false -weight: 8 ---- - -## Manual PostgreSQL Operator 4 Upgrade Procedure - -Below are the procedures for upgrading to version {{< param operatorVersion >}} of the Crunchy PostgreSQL Operator using the Bash or Ansible installation methods. This version of the PostgreSQL Operator has several fundamental changes to the existing PGCluster structure and deployment model. Most notably for those upgrading from 4.1 and below, all PGClusters use the new Crunchy PostgreSQL HA container in place of the previous Crunchy PostgreSQL containers. The use of this new container is a breaking change from previous versions of the Operator did not use the HA containers. - -NOTE: If you are upgrading from Crunchy PostgreSQL Operator version 4.1.0 or later, the [Automated Upgrade Procedure](/upgrade/automatedupgrade) is recommended. If you are upgrading PostgreSQL 12 clusters, you MUST use the [Automated Upgrade Procedure](/upgrade/automatedupgrade). - -#### Crunchy PostgreSQL High Availability Containers - -Using the PostgreSQL Operator {{< param operatorVersion >}} requires replacing your `crunchy-postgres` and `crunchy-postgres-gis` containers with the `crunchy-postgres-ha` and `crunchy-postgres-gis-ha` containers respectively. The underlying PostgreSQL installations in the container remain the same but are now optimized for Kubernetes environments to provide the new high-availability functionality. - -A major change to this container is that the PostgreSQL process is now managed by Patroni. This allows a PostgreSQL cluster that is deployed by the PostgreSQL Operator to manage its own uptime and availability, to elect a new leader in the event of a downtime scenario, and to automatically heal after a failover event. - -When creating your new clusters using version {{< param operatorVersion >}} of the PostgreSQL Operator, the `pgo create cluster` command will automatically use the new `crunchy-postgres-ha` image if the image is unspecified. If you are creating a PostGIS enabled cluster, please be sure to use the updated image name and image tag, as with the command: - -``` -pgo create cluster mygiscluster --ccp-image=crunchy-postgres-gis-ha --ccp-image-tag={{< param centosBase >}}-{{< param postgresVersion >}}-{{< param postgisVersion >}}-{{< param operatorVersion >}} -``` -Where `{{< param postgresVersion >}}` is the PostgreSQL version, `{{< param postgisVersion >}}` is the PostGIS version and `{{< param operatorVersion >}}` is the PostgreSQL Operator version. -Please note, no tag validation will be performed and additional steps may be required to upgrade your PostGIS extension implementation. For more information on PostGIS upgrade considerations, please see -[PostGIS Upgrade Documentation](https://access.crunchydata.com/documentation/postgis/latest/postgis_installation.html#upgrading). - -NOTE: As with any upgrade procedure, it is strongly recommended that a full logical backup is taken before any upgrade procedure is started. Please see the [Logical Backups](/pgo-client/common-tasks#logical-backups-pg_dump--pg_dumpall) section of the Common Tasks page for more information. 
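As a concrete (if minimal) way to take that recommended logical backup before starting, you can stream a full dump out of the primary pod with standard PostgreSQL tooling; the pod name below is illustrative and the `postgres` superuser is assumed to be available inside the container:

```
# Write a full logical dump of the cluster to a local file before upgrading
kubectl -n <namespace> exec -i <mycluster-primary-pod> -- pg_dumpall -U postgres > mycluster-pre-upgrade.sql
```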
- -The Ansible installation upgrade procedure is below. Please click [here](/upgrade/upgrade4#bash-installation-upgrade-procedure) for the Bash installation upgrade procedure. - -### Ansible Installation Upgrade Procedure - -Below are the procedures for upgrading the PostgreSQL Operator and PostgreSQL clusters using the Ansible installation method. - -##### Prerequisites. - -You will need the following items to complete the upgrade: - -* The latest {{< param operatorVersion >}} code for the Postgres Operator available - -These instructions assume you are executing in a terminal window and that your user has admin privileges in your Kubernetes or Openshift environment. - -##### Step 1 - -For the cluster(s) you wish to upgrade, record the cluster details provided by - -``` -pgo show cluster -``` - -so that your new clusters can be recreated with the proper settings. - -Also, you will need to note the name of the primary PVC. If it does not exactly match the cluster name, you will need to recreate your cluster using the primary PVC name as the new cluster name. - -For example, given the following output: - -``` -$ pgo show cluster mycluster - -cluster : mycluster (crunchy-postgres:centos7-11.5-2.4.2) - pod : mycluster-7bbf54d785-pk5dq (Running) on kubernetes1 (1/1) (replica) - pvc : mycluster - pod : mycluster-ypvq-5b9b8d645-nvlb6 (Running) on kubernetes1 (1/1) (primary) - pvc : mycluster-ypvq -... -``` - -the new cluster's name will need to be "mycluster-ypvq" - - -##### Step 2 - -NOTE: Skip this step if your primary PVC still matches your original cluster name, or if you do not have pgBackrestBackups you wish to preserve for use in the upgraded cluster. - -Otherwise, noting the primary PVC name mentioned in Step 2, run - -``` -kubectl exec mycluster-backrest-shared-repo- -- bash -c "mv /backrestrepo/mycluster-backrest-shared-repo /backrestrepo/mycluster-ypvq-backrest-shared-repo" -``` - -where "mycluster" is the original cluster name, "mycluster-ypvq" is the primary PVC name and "mycluster-backrest-shared-repo-" is the pgBackRest shared repo pod name. - - -##### Step 3 - -For the cluster(s) you wish to upgrade, scale down any replicas, if necessary (see `pgo scaledown --help` for more information on command usage) page for more information), then delete the cluster - -For 4.2: - -``` -pgo delete cluster --keep-backups --keep-data -``` - -For 4.0 and 4.1: - -``` -pgo delete cluster -``` - -and then, for all versions, delete the "backrest-repo-config" secret, if it exists: - -``` -kubectl delete secret -backrest-repo-config -``` - -If there are any remaining jobs for this deleted cluster, use - -``` -kubectl -n delete job -``` - -to remove the job and any associated "Completed" pods. - - -NOTE: Please note the name of each cluster, the namespace used, and be sure not to delete the associated PVCs or CRDs! - - -##### Step 4 - -Save a copy of your current inventory file with a new name (such as `inventory.backup)` and checkout the latest {{< param operatorVersion >}} tag of the Postgres Operator. - - -##### Step 5 - -Update the new inventory file with the appropriate values for your new Operator installation, as described in the [Ansible Install Prerequisites]( {{< relref "installation/other/ansible/prerequisites.md" >}}) and the [Compatibility Requirements Guide]( {{< relref "configuration/compatibility.md" >}}). 
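A quick way to make sure no settings were lost when carrying values over into the new inventory is to compare the saved copy against the freshly checked-out file (the paths shown are assumptions; adjust them to wherever your inventory files actually live):

```
# Review every setting that differs between the old and new inventory
diff -u inventory.backup ./ansible/inventory.yaml
```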
- - -##### Step 6 - -Now you can upgrade your Operator installation and configure your connection settings as described in the [Ansible Update Page]( {{< relref "installation/other/ansible/updating-operator.md" >}}). - - -##### Step 7 - -Verify the Operator is running: - -``` -kubectl get pod -n -``` - -And that it is upgraded to the appropriate version - -``` -pgo version -``` - -We strongly recommend that you create a test cluster before proceeding to the next step. - -##### Step 8 - -Once the Operator is installed and functional, create a new {{< param operatorVersion >}} cluster matching the cluster details recorded in Step 1. Be sure to use the primary PVC name (also noted in Step 1) and the same major PostgreSQL version as was used previously. This will allow the new clusters to utilize the existing PVCs. - -NOTE: If you have existing pgBackRest backups stored that you would like to have available in the upgraded cluster, you will need to follow the [PVC Renaming Procedure]( {{< relref "Upgrade/manual/upgrade4#pgbackrest-repo-pvc-renaming" >}}). - -A simple example is given below, but more information on cluster creation can be found [here](/pgo-client/common-tasks#creating-a-postgresql-cluster) - -``` -pgo create cluster -n -``` - -##### Step 9 - -To verify cluster status, run - -``` -pgo test -n -``` - -Output should be similar to: - -``` -cluster : mycluster - Services - primary (10.106.70.238:5432): UP - Instances - primary (mycluster-7d49d98665-7zxzd): UP -``` - -##### Step 10 - -Scale up to the required number of replicas, as needed. - -Congratulations! Your cluster is upgraded and ready to use! - -### Bash Installation Upgrade Procedure - -Below are the procedures for upgrading the PostgreSQL Operator and PostgreSQL clusters using the Bash installation method. - -##### Prerequisites. - -You will need the following items to complete the upgrade: - -* The code for the latest release of the PostgreSQL Operator -* The latest PGO client binary - -Finally, these instructions assume you are executing from $PGOROOT in a terminal window and that your user has admin privileges in your Kubernetes or Openshift environment. - -##### Step 1 - -You will most likely want to run: - -``` -pgo show config -n -``` - -Save this output to compare once the procedure has been completed to ensure none of the current configuration changes are missing. - -##### Step 2 - -For the cluster(s) you wish to upgrade, record the cluster details provided by - -``` -pgo show cluster -``` - -so that your new clusters can be recreated with the proper settings. - -Also, you will need to note the name of the primary PVC. If it does not exactly match the cluster name, you will need to recreate your cluster using the primary PVC name as the new cluster name. - -For example, given the following output: - -``` -$ pgo show cluster mycluster - -cluster : mycluster (crunchy-postgres:centos7-11.5-2.4.2) - pod : mycluster-7bbf54d785-pk5dq (Running) on kubernetes1 (1/1) (replica) - pvc : mycluster - pod : mycluster-ypvq-5b9b8d645-nvlb6 (Running) on kubernetes1 (1/1) (primary) - pvc : mycluster-ypvq -... -``` - -the new cluster's name will need to be "mycluster-ypvq" - - -##### Step 3 - -NOTE: Skip this step if your primary PVC still matches your original cluster name, or if you do not have pgBackrestBackups you wish to preserve for use in the upgraded cluster. 
- -Otherwise, noting the primary PVC name mentioned in Step 2, run - -``` -kubectl exec mycluster-backrest-shared-repo- -- bash -c "mv /backrestrepo/mycluster-backrest-shared-repo /backrestrepo/mycluster-ypvq-backrest-shared-repo" -``` - -where "mycluster" is the original cluster name, "mycluster-ypvq" is the primary PVC name and "mycluster-backrest-shared-repo-" is the pgBackRest shared repo pod name. - -##### Step 4 - -For the cluster(s) you wish to upgrade, scale down any replicas, if necessary (see `pgo scaledown --help` for more information on command usage) page for more information), then delete the cluster - -For 4.2: - -``` -pgo delete cluster --keep-backups --keep-data -``` - -For 4.0 and 4.1: - -``` -pgo delete cluster -``` - -and then, for all versions, delete the "backrest-repo-config" secret, if it exists: - -``` -kubectl delete secret -backrest-repo-config -``` - -NOTE: Please record the name of each cluster, the namespace used, and be sure not to delete the associated PVCs or CRDs! - - -##### Step 5 - -Delete the 4.X version of the Operator by executing: - -``` -$PGOROOT/deploy/cleanup.sh -$PGOROOT/deploy/remove-crd.sh -$PGOROOT/deploy/cleanup-rbac.sh -``` - -##### Step 6 - -For versions 4.0, 4.1 and 4.2, update environment variables in the bashrc: - -``` -export PGO_VERSION={{< param operatorVersion >}} -``` - -NOTE: This will be the only update to the bashrc file for 4.2. - -If you are pulling your images from the same registry as before this should be the only update to the existing 4.X environment variables. - -###### Operator 4.0 - -If you are upgrading from PostgreSQL Operator 4.0, you will need the following new environment variables: - -``` -# PGO_INSTALLATION_NAME is the unique name given to this Operator install -# this supports multi-deployments of the Operator on the same Kubernetes cluster -export PGO_INSTALLATION_NAME=devtest - -# for setting the pgo apiserver port, disabling TLS or not verifying TLS -# if TLS is disabled, ensure setip() function port is updated and http is used in place of https -export PGO_APISERVER_PORT=8443 # Defaults: 8443 for TLS enabled, 8080 for TLS disabled -export DISABLE_TLS=false -export TLS_NO_VERIFY=false -export TLS_CA_TRUST="" -export ADD_OS_TRUSTSTORE=false -export NOAUTH_ROUTES="" - -# for disabling the Operator eventing -export DISABLE_EVENTING=false -``` - -There is a new eventing feature, so if you want an alias to look at the eventing logs you can add the following: - -``` -elog () { -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" logs `$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" get pod --selector=name=postgres-operator -o jsonpath="{.items[0].metadata.name}"` -c event -} -``` - -###### Operator 4.1 - -If you are upgrading from PostgreSQL Operator 4.1.0 or 4.1.1, you will only need the following subset of the environment variables listed above: - -``` -export TLS_CA_TRUST="" -export ADD_OS_TRUSTSTORE=false -export NOAUTH_ROUTES="" -``` - -##### Step 7 - -Source the updated bash file: - -``` -source ~/.bashrc -``` - -##### Step 8 - -Ensure you have checked out the latest {{< param operatorVersion >}} version of the source code and update the pgo.yaml file in `$PGOROOT/conf/postgres-operator/pgo.yaml` - -You will want to use the {{< param operatorVersion >}} pgo.yaml file and update custom settings such as image locations, storage, and resource configs. - -##### Step 9 - -Create an initial Operator Admin user account. 
-You will need to edit the `$PGOROOT/deploy/install-bootstrap-creds.sh` file to configure the username and password that you want for the Admin account. The default values are: - -``` -PGOADMIN_USERNAME=admin -PGOADMIN_PASSWORD=examplepassword -``` - -You will need to update the `$HOME/.pgouser`file to match the values you set in order to use the Operator. Additional accounts can be created later following the steps described in the 'Operator Security' section of the main [Bash Installation Guide]( {{< relref "installation/other/bash.md" >}}). Once these accounts are created, you can change this file to login in via the PGO CLI as that user. - -##### Step 10 - -Install the {{< param operatorVersion >}} Operator: - -Setup the configured namespaces: - -``` -make setupnamespaces -``` - -Install the RBAC configurations: - -``` -make installrbac -``` - -Deploy the PostgreSQL Operator: - -``` -make deployoperator -``` - -Verify the Operator is running: - -``` -kubectl get pod -n -``` - -##### Step 11 - -Next, update the PGO client binary to {{< param operatorVersion >}} by replacing the existing 4.X binary with the latest {{< param operatorVersion >}} binary available. - -You can run: - -``` -which pgo -``` - -to ensure you are replacing the current binary. - - -##### Step 12 - -You will want to make sure that any and all configuration changes have been updated. You can run: - -``` -pgo show config -n -``` - -This will print out the current configuration that the Operator will be using. - -To ensure that you made any required configuration changes, you can compare with Step 0 to make sure you did not miss anything. If you happened to miss a setting, update the pgo.yaml file and rerun: - -``` -make deployoperator -``` - -##### Step 13 - -The Operator is now upgraded to {{< param operatorVersion >}} and all users and roles have been recreated. -Verify this by running: - -``` -pgo version -``` - -We strongly recommend that you create a test cluster before proceeding to the next step. - -##### Step 14 - -Once the Operator is installed and functional, create a new {{< param operatorVersion >}} cluster matching the cluster details recorded in Step 1. Be sure to use the same name and the same major PostgreSQL version as was used previously. This will allow the new clusters to utilize the existing PVCs. A simple example is given below, but more information on cluster creation can be found [here](/pgo-client/common-tasks#creating-a-postgresql-cluster) - -NOTE: If you have existing pgBackRest backups stored that you would like to have available in the upgraded cluster, you will need to follow the [PVC Renaming Procedure]( {{< relref "Upgrade/manual/upgrade4#pgbackrest-repo-pvc-renaming" >}}). - -``` -pgo create cluster -n -``` - -##### Step 15 - -To verify cluster status, run - -``` -pgo test -n -``` - -Output should be similar to: - -``` -cluster : mycluster - Services - primary (10.106.70.238:5432): UP - Instances - primary (mycluster-7d49d98665-7zxzd): UP -``` - -##### Step 16 - -Scale up to the required number of replicas, as needed. - -Congratulations! Your cluster is upgraded and ready to use! - -### pgBackRest Repo PVC Renaming - -If the pgcluster you are upgrading has an existing pgBackRest repo PVC that you would like to continue to use (which is required for existing pgBackRest backups to be accessible by your new cluster), the following renaming procedure will be needed. 
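Before starting the renaming steps, it can be useful to list the PVCs associated with the cluster so you can confirm which repository PVC you are working with. A rough sketch, filtering on your own cluster name:

```
# Show all PVCs for the cluster, including the pgBackRest repository PVC to be renamed
kubectl -n <namespace> get pvc | grep mycluster
```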
- -##### Step 1 - -To start, if your current cluster is "mycluster", the pgBackRest PVC created by version 3.5 of the Postgres Operator will be named "mycluster-backrest-shared-repo". This will need to be renamed to "mycluster-pgbr-repo" to be used in your new cluster. - -To begin, save the output description from the pgBackRest PVC: - -In 4.0: -``` -kubectl -n describe pvc mycluster-backrest-shared-repo -``` - -In 4.1 and later: -``` -kubectl -n describe pvc mycluster-pgbr-repo -``` - -for later use when recreating this PVC with the new name. In this output, note the "Volume" name, which is the name of the underlying PV. - -##### Step 2 - -Now use - -``` -kubectl -n get pv -``` - -to check the "RECLAIM POLICY". If this is not set to "Retain", edit the "persistentVolumeReclaimPolicy" value so that it is set to "Retain" using - -``` -kubectl -n patch pv --type='json' -p='[{"op": "replace", "path": "/spec/persistentVolumeReclaimPolicy", "value":"Retain"}]' -``` - -##### Step 3 - -Now, delete the PVC: - -In 4.0: -``` -kubectl -n delete pvc mycluster-backrest-shared-repo -``` - -In 4.1 and later: -``` -kubectl -n delete pvc mycluster-pgbr-repo -``` - - -##### Step 4 - -You will remove the "claimRef" section of the PV with - -``` -kubectl -n patch pv --type=json -p='[{"op": "remove", "path": "/spec/claimRef"}]' -``` - -which will make the PV "Available" so it may be reused by the new PVC. - -##### Step 5 - -Now, create a file with contents similar to the following: - -``` -kind: PersistentVolumeClaim -apiVersion: v1 -metadata: - name: mycluster-pgbr-repo - namespace: demo -spec: - storageClassName: "" - accessModes: - - ReadWriteMany - resources: - requests: - storage: 2Gi - volumeMode: Filesystem - volumeName: "crunchy-pv156" -``` - -where name matches your new cluster (Remember that this will need to match the "primary PVC" name identified in [Step 2]( {{< relref "Upgrade/manual/upgrade35#step-2" >}}) of the upgrade procedure!) and namespace, storageClassName, accessModes, storage, volumeMode and volumeName match your original PVC. - -##### Step 6 - -Now you can use the new file to recreate your PVC using - -``` -kubectl -n create -f -``` - -To check that your PVC is "Bound", run - -``` -kubectl -n get pvc mycluster-pgbr-repo -``` - -Congratulations, you have renamed your PVC! Once the PVC Status is "Bound", your cluster can be recreated. If you altered the Reclaim Policy on your PV in Step 1, you will want to reset it now. diff --git a/docs/content/Upgrade/metrics.md b/docs/content/Upgrade/metrics.md deleted file mode 100644 index bb2a792f4f..0000000000 --- a/docs/content/Upgrade/metrics.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -title: Monitoring Upgrade Guidance -date: -draft: false -weight: 100 ---- - -# Upgrade Guidance for PostgreSQL Operator Monitoring - -## Migration to Upstream Containers - -The Crunchy PostgreSQL Monitoring infrastructure now uses upstream Prometheus and Grafana -containers. By default the installers will deploy the monitoring infrastructure using -images from Docker Hub but can easily be updated to point to a Red Hat certified -container repository. 
The Red Hat certified image catalog can be found -[here](https://catalog.redhat.com/software/containers/explore) and the Docker Hub -images can be found at the following links: - -- https://hub.docker.com/r/prom/prometheus -- https://hub.docker.com/r/grafana/grafana -- https://hub.docker.com/r/prom/alertmanager - -These containers are configurable through Kubernetes ConfigMaps and the updated -PostgreSQL Operator Monitoring installers. Once deployed Prometheus and Grafana -will be populated with resource data from metrics-enabled PostgreSQL clusters. - -## New Monitoring Features - -### Alerting -The updated PostgreSQL Operator Monitoring Infrastructure supports deployment of -Prometheus Alertmanager. This deployment uses upstream Prometheus -Alertmanager images that can be installed and configured with the metrics -installers and Kubernetes ConfigMaps. - -### Updated pgMonitor -Prometheus and Grafana have been updated to include a default configuration from -[pgMonitor](https://github.com/CrunchyData/pgmonitor) that is tailored for -container-based PostgreSQL deployments. This updated configuration will show -container specific resource information from your metrics-enabled PostgreSQL -clusters. By default the metrics infrastructure will include: - -- New Grafana dashboards tailored for container-based PostgreSQL deployments -- Container specific operating system metrics -- General PostgreSQL alerting rules. - -### Updated Monitoring Installer -The installer for the PostgreSQL Operating Monitoring infrastructure has been -split out into a separate set of installers. With each installer -([Ansible]({{< relref "/installation/metrics/other/ansible" >}}), -a [Kubectl job]({{< relref "installation/metrics/postgres-operator-metrics" >}}), -or [Helm]({{< relref "/installation/metrics/other/helm-metrics" >}})) -you will be able to apply custom configurations through Kubernetes -ConfigMaps. This includes: - -- Custom Grafana dashboards and datasources -- Custom Prometheus scrape configuration -- Custom Prometheus alerting rules -- Custom Alertmanager notification configuration - -## Updating from Pre-4.5.0 Monitoring - -Ensure that you have a copy of any install or custom configurations you have -applied to your previous metrics install. - -You can upgrade the Grafana and Prometheus deployments in place by using the new -installers. After you have updated the PostgreSQL Operator and configured the -`values.yaml`, run the -[metrics update]({{< relref "/installation/metrics/other/ansible/updating-metrics" >}}). -This will replace the old deployments while keeping your pvcs in place. - -{{% notice tip %}} -To make use of the updated exporter queries you must update -the PostgreSQL Operator and -[upgrade]({{< relref "/upgrade/automatedupgrade" >}}) -your cluster. -{{% /notice %}} - - diff --git a/docs/content/_index.md b/docs/content/_index.md deleted file mode 100644 index f83a7c49e1..0000000000 --- a/docs/content/_index.md +++ /dev/null @@ -1,162 +0,0 @@ ---- -title: "Crunchy PostgreSQL Operator" -date: -draft: false ---- - -# Crunchy PostgreSQL Operator - - - -## Run your own production-grade PostgreSQL-as-a-Service on Kubernetes! 
- -Latest Release: {{< param operatorVersion >}} - -The [Crunchy PostgreSQL Operator](https://www.crunchydata.com/developers/download-postgres/containers/postgres-operator) automates and simplifies deploying and managing open source PostgreSQL clusters on Kubernetes and other Kubernetes-enabled Platforms by providing the essential features you need to keep your PostgreSQL clusters up and running, including: - -#### PostgreSQL Cluster [Provisioning]({{< relref "/architecture/provisioning.md" >}}) - -[Create, Scale, & Delete PostgreSQL clusters with ease](/architecture/provisioning/), while fully customizing your Pods and PostgreSQL configuration! - -#### [High Availability]({{< relref "/architecture/high-availability/_index.md" >}}) - -Safe, automated failover backed by a [distributed consensus based high-availability solution](/architecture/high-availability/). Uses [Pod Anti-Affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity) to help resiliency; you can configure how aggressive this can be! Failed primaries automatically heal, allowing for faster recovery time. - -Support for [standby PostgreSQL clusters]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}) that work both within an across [multiple Kubernetes clusters]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}). - -#### [Disaster Recovery]({{< relref "/architecture/disaster-recovery.md" >}}) - -Backups and restores leverage the open source [pgBackRest](https://www.pgbackrest.org) utility and [includes support for full, incremental, and differential backups as well as efficient delta restores](/architecture/disaster-recovery/). Set how long you want your backups retained for. Works great with very large databases! - -#### TLS - -Secure communication between your applications and data servers by [enabling TLS for your PostgreSQL servers](/pgo-client/common-tasks/#enable-tls), including the ability to enforce that all of your connections to use TLS. - -#### [Monitoring]({{< relref "/architecture/monitoring.md" >}}) - -[Track the health of your PostgreSQL clusters]({{< relref "/architecture/monitoring.md" >}}) -using the open source [pgMonitor](https://github.com/CrunchyData/pgmonitor) -library. - -#### PostgreSQL User Management - -Quickly add and remove users from your PostgreSQL clusters with powerful commands. Manage password expiration policies or use your preferred PostgreSQL authentication scheme. - -#### Upgrade Management - -Safely apply PostgreSQL updates with minimal availability impact to your PostgreSQL clusters. - -#### Advanced Replication Support - -Choose between [asynchronous replication](/architecture/high-availability/) and [synchronous replication](/architecture/high-availability/#synchronous-replication-guarding-against-transactions-loss) for workloads that are sensitive to losing transactions. - -#### Clone - -Create new clusters from your existing clusters or backups with [`pgo create cluster --restore-from`](/pgo-client/reference/pgo_create_cluster/). 
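For instance, creating a copy of an existing cluster from its backups might look like the following sketch (the cluster names are illustrative):

```
# Create a new cluster seeded from an existing cluster's pgBackRest backups
pgo create cluster mycluster-copy --restore-from=mycluster -n <namespace>
```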
- -#### Connection Pooling - - Use [pgBouncer](https://access.crunchydata.com/documentation/pgbouncer/) for connection pooling - -#### Node Affinity - -Have your PostgreSQL clusters deployed to [Kubernetes Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) of your preference - -#### Scheduled Backups - -Choose the type of backup (full, incremental, differential) and [how frequently you want it to occur](/architecture/disaster-recovery/#scheduling-backups) on each PostgreSQL cluster. - -#### Backup to S3 - -[Store your backups in Amazon S3](/architecture/disaster-recovery/#using-s3) or any object storage system that supports the S3 protocol. The PostgreSQL Operator can backup, restore, and create new clusters from these backups. - -#### Multi-Namespace Support - -You can control how the PostgreSQL Operator leverages [Kubernetes Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) with several different deployment models: - -- Deploy the PostgreSQL Operator and all PostgreSQL clusters to the same namespace -- Deploy the PostgreSQL Operator to one namespaces, and all PostgreSQL clusters to a different namespace -- Deploy the PostgreSQL Operator to one namespace, and have your PostgreSQL clusters managed acrossed multiple namespaces -- Dynamically add and remove namespaces managed by the PostgreSQL Operator using the `pgo create namespace` and `pgo delete namespace` commands - -#### Full Customizability - -The Crunchy PostgreSQL Operator makes it easy to get your own PostgreSQL-as-a-Service up and running on Kubernetes-enabled platforms, but we know that there are further customizations that you can make. As such, the Crunchy PostgreSQL Operator allows you to further customize your deployments, including: - -- Selecting different storage classes for your primary, replica, and backup storage -- Select your own container resources class for each PostgreSQL cluster deployment; differentiate between resources applied for primary and replica clusters! -- Use your own container image repository, including support `imagePullSecrets` and private repositories -- [Customize your PostgreSQL configuration]({{< relref "/advanced/custom-configuration.md" >}}) -- Bring your own trusted certificate authority (CA) for use with the Operator API server -- Override your PostgreSQL configuration for each cluster - -# How it Works - -![Architecture](/Operator-Architecture.png) - -The Crunchy PostgreSQL Operator extends Kubernetes to provide a higher-level abstraction for rapid creation and management of PostgreSQL clusters. The Crunchy PostgreSQL Operator leverages a Kubernetes concept referred to as "[Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)” to create several [custom resource definitions (CRDs)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) that allow for the management of PostgreSQL clusters. 
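One way to see this extension in practice is to list the custom resource definitions the Operator registers, assuming they are grouped under the `crunchydata.com` API group as the resource names in the RBAC installation table suggest:

```
# The Operator's CRDs (pgclusters, pgreplicas, pgpolicies, pgtasks, ...) appear alongside built-in types
kubectl get crd | grep crunchydata

# Once registered, clusters can be listed like any other Kubernetes resource
kubectl -n <namespace> get pgclusters
```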
- - -# Included Components - -[PostgreSQL containers](https://github.com/CrunchyData/crunchy-containers) deployed with the PostgreSQL Operator include the following components: - -- [PostgreSQL](https://www.postgresql.org) - - [PostgreSQL Contrib Modules](https://www.postgresql.org/docs/current/contrib.html) - - [PL/Python + PL/Python 3](https://www.postgresql.org/docs/current/plpython.html) - - [pgAudit](https://www.pgaudit.org/) - - [pgAudit Analyze](https://github.com/pgaudit/pgaudit_analyze) - - [pgnodemx](https://github.com/CrunchyData/pgnodemx) - - [set_user](https://github.com/pgaudit/set_user) - - [wal2json](https://github.com/eulerto/wal2json) -- [pgBackRest](https://pgbackrest.org/) -- [pgBouncer](http://pgbouncer.github.io/) -- [pgAdmin 4](https://www.pgadmin.org/) -- [pgMonitor](https://github.com/CrunchyData/pgmonitor) -- [Patroni](https://patroni.readthedocs.io/) -- [LLVM](https://llvm.org/) (for [JIT compilation](https://www.postgresql.org/docs/current/jit.html)) - -In addition to the above, the geospatially enhanced PostgreSQL + PostGIS container adds the following components: - -- [PostGIS](http://postgis.net/) -- [pgRouting](https://pgrouting.org/) -- [PL/R](https://github.com/postgres-plr/plr) - -[PostgreSQL Operator Monitoring]({{< relref "architecture/monitoring/_index.md" >}}) uses the following components: - -- [pgMonitor](https://github.com/CrunchyData/pgmonitor) -- [Prometheus](https://github.com/prometheus/prometheus) -- [Grafana](https://github.com/grafana/grafana) -- [Alertmanager](https://github.com/prometheus/alertmanager) - -Additional containers that are not directly integrated with the PostgreSQL Operator but can work alongside it include: - -- [pgPool II](https://access.crunchydata.com/documentation/crunchy-postgres-containers/latest/container-specifications/crunchy-pgpool/) -- [pg_upgrade](https://access.crunchydata.com/documentation/crunchy-postgres-containers/latest/container-specifications/crunchy-upgrade/) -- [pgBench](https://access.crunchydata.com/documentation/crunchy-postgres-containers/latest/container-specifications/crunchy-pgbench/) - -For more information about which versions of the PostgreSQL Operator include which components, please visit the [compatibility]({{< relref "configuration/compatibility.md" >}}) section of the documentation. - -# Supported Platforms - -The Crunchy PostgreSQL Operator is tested on the following Platforms: - -- Kubernetes 1.13+ -- OpenShift 3.11+ -- Google Kubernetes Engine (GKE), including Anthos -- Amazon EKS -- VMware Enterprise PKS 1.3+ - -## Storage - -The Crunchy PostgreSQL Operator is tested with a variety of different types of Kubernetes storage and Storage Classes, including: - -- Rook -- StorageOS -- Google Compute Engine persistent volumes -- NFS -- HostPath - -and more. We have had reports of people using the PostgreSQL Operator with other [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) as well. - -We know there are a variety of different types of [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) available for Kubernetes and we do our best to test each one, but due to the breadth of this area we are unable to verify PostgreSQL Operator functionality in each one. With that said, the PostgreSQL Operator is designed to be storage class agnostic and has been demonstrated to work with additional Storage Classes. 
Storage is a rapidly evolving field in Kubernetes and we will continue to adapt the PostgreSQL Operator to modern Kubernetes storage standards. diff --git a/docs/content/advanced/_index.md b/docs/content/advanced/_index.md deleted file mode 100644 index dd93f93d41..0000000000 --- a/docs/content/advanced/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: "Advanced Topics" -date: -draft: false -weight: 70 ---- diff --git a/docs/content/advanced/crunchy-postgres-exporter.md b/docs/content/advanced/crunchy-postgres-exporter.md deleted file mode 100644 index b9b2a3ba09..0000000000 --- a/docs/content/advanced/crunchy-postgres-exporter.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: "Crunchy Postgres Exporter" -date: -draft: false -weight: 3 ---- - -The crunchy-postgres-exporter container provides real time metrics about the PostgreSQL database -via an API. These metrics are scraped and stored by a [Prometheus](https://prometheus.io) -time-series database and are then graphed and visualized through the open source data -visualizer [Grafana](https://grafana.com/). - -The crunchy-postgres-exporter container uses [pgMonitor](https://github.com/CrunchyData/pgmonitor) for advanced metric collection. -It is required that the `crunchy-postgres-ha` container has the `PGMONITOR_PASSWORD` environment -variable to create the appropriate user (`ccp_monitoring`) to collect metrics. - -Custom queries to collect metrics can be specified by the user. By -mounting a **queries.yml** file to */conf* on the container, additional metrics -can be specified for the API to collect. For an example of a queries.yml file, see -[here](https://github.com/CrunchyData/pgmonitor/blob/master/exporter/postgres/queries_common.yml) - -## Packages - -The crunchy-postgres-exporter Docker image contains the following packages (versions vary depending on PostgreSQL version): - -* PostgreSQL ({{< param postgresVersion13 >}}, {{< param postgresVersion12 >}}, {{< param postgresVersion11 >}}, {{< param postgresVersion10 >}}, {{< param postgresVersion96 >}} and {{< param postgresVersion95 >}}) -* CentOS7 - publicly available -* UBI7 - customers only -* [PostgreSQL Exporter](https://github.com/wrouesnel/postgres_exporter) - -## Environment Variables - -### Required -**Name**|**Default**|**Description** -:-----|:-----|:----- -**EXPORTER_PG_PASSWORD**|none|Provides the password needed to generate the PostgreSQL URL required by the PostgreSQL Exporter to connect to a PG database. Should typically match the `PGMONITOR_PASSWORD` value set in the `crunchy-postgres` container.| - -### Optional -**Name**|**Default**|**Description** -:-----|:-----|:----- -**EXPORTER_PG_USER**|ccp_monitoring|Provides the username needed to generate the PostgreSQL URL required by the PostgreSQL Exporter to connect to a PG database. Should typically be `ccp_monitoring` per the [crunchy-postgres](/container-specifications/crunchy-postgres) container specification (see environment varaible `PGMONITOR_PASSWORD`). 
-**EXPORTER_PG_HOST**|127.0.0.1|Provides the host needed to generate the PostgreSQL URL required by the PostgreSQL Exporter to connect to a PG database| -**EXPORTER_PG_PORT**|5432|Provides the port needed to generate the PostgreSQL URL required by the PostgreSQL Exporter to connect to a PG database| -**EXPORTER_PG_DATABASE**|postgres|Provides the name of the database used to generate the PostgreSQL URL required by the PostgreSQL Exporter to connect to a PG database| -**DATA_SOURCE_NAME**|None|Explicitly defines the URL for connecting to the PostgreSQL database (must be in the form of `postgresql://`). If provided, overrides all other settings provided to generate the connection URL. -**CRUNCHY_DEBUG**|FALSE|Set this to true to enable debugging in logs. Note: this mode can reveal secrets in logs. -**POSTGRES_EXPORTER_PORT**|9187|Set the postgres-exporter port to listen on for web interface and telemetry. - -### Viewing Cluster Metrics - -To view a particular cluster's available metrics in a local browser window, port forwarding can be set up as follows. -For a pgcluster, `mycluster`, deployed in the `pgouser1` namespace, use - -``` -# If deployed to Kubernetes -kubectl port-forward -n pgouser1 svc/mycluster 9187:9187 - -# If deployed to OpenShift -oc port-forward -n pgouser1 svc/mycluster 9187:9187 -``` - -Then, in your local browser, go to `http://127.0.0.1:9187/metrics` to view the available metrics for that cluster. - - - -# Crunchy Postgres Exporter Metrics Detail - -Below are details on the various metrics available from the crunchy-postgres-exporter container. -The name, SQL query and metric details are given for each available item. - -{{< exporter_metrics >}} - -# [pgnodemx](https://github.com/CrunchyData/pgnodemx) - -In addition to the metrics above, the [pgnodemx](https://github.com/CrunchyData/pgnodemx) PostgreSQL extension provides SQL functions to allow the capture of node OS metrics via SQL queries. For more information, please see the [pgnodemx](https://github.com/CrunchyData/pgnodemx) project page: - -[https://github.com/CrunchyData/pgnodemx](https://github.com/CrunchyData/pgnodemx) - -{{< pgnodemx_metrics >}} diff --git a/docs/content/advanced/custom-configuration.md b/docs/content/advanced/custom-configuration.md deleted file mode 100644 index 4ac15d1578..0000000000 --- a/docs/content/advanced/custom-configuration.md +++ /dev/null @@ -1,326 +0,0 @@ ---- -title: "Custom Configuration" -date: -draft: false -weight: 4 ---- - -## Custom PostgreSQL Configuration - -Users and administrators can specify a -custom set of PostgreSQL configuration files to be used when creating -a new PostgreSQL cluster. The configuration files you can -change include - - - * postgres-ha.yaml - * setup.sql - -Different configurations for PostgreSQL might be defined for -the following - - - * OLTP types of databases - * OLAP types of databases - * High Memory - * Minimal Configuration for Development - * Project Specific configurations - * Special Security Requirements - -#### Global ConfigMap - -If you create a *configMap* called *pgo-custom-pg-config* with any -of the above files within it, new clusters will use those configuration -files when setting up a new database instance. You do *NOT* have to -specify all of the configuration files. It is entirely up to your use case -to determine which to use. 
- -An example set of configuration files and a script to create the -global configMap is found at -``` -$PGOROOT/examples/custom-config -``` - -If you run the *create.sh* script there, it will create the configMap -that will include the PostgreSQL configuration files within that directory. - -#### Config Files Purpose - -The *postgres-ha.yaml* file is the main configuration file that allows for the -configuration of a wide variety of tuning parameters for your PostgreSQL cluster. -This includes various PostgreSQL settings, e.g. those that should be applied to -files such as `postgresql.conf`, `pg_hba.conf` and `pg_ident.conf`, as well as -tuning parameters for the High Availability features included in each cluster. -The various configuration settings available can be -[found here](https://access.crunchydata.com/documentation/patroni/latest/settings/index.html#settings) - -The *setup.sql* file is a SQL file that is executed following the initialization -of a new PostgreSQL cluster, specifically after *initdb* is run when the database -is first created. Changes would be made to this if you wanted to define which -database objects are created by default. - -#### Granular Config Maps - -Granular config maps can be defined if it is necessary to use -a different set of configuration files for different clusters -rather than having a single configuration (e.g. Global Config Map). -A specific set of ConfigMaps with their own set of PostgreSQL -configuration files can be created. When creating new clusters, a -`--custom-config` flag can be passed along with the name of the -ConfigMap which will be used for that specific cluster or set of -clusters. - -#### Defaults - -If there is no reason to change the default PostgreSQL configuration -files that ship with the Crunchy Postgres container, there is no -requirement to. In this event, continue using the Operator as usual -and avoid defining a global configMap. - - -## Modifying PostgreSQL Cluster Configuration - -Once a PostgreSQL cluster has been initialized, its configuration settings -can be updated and modified as needed. This is done by modifying the -`-pgha-config` ConfigMap that is created for each individual -PostgreSQL cluster. - -The `-pgha-config` ConfigMap is populated following cluster -initialization, specifically using the baseline configuration settings used to -bootstrap the cluster. Therefore, any customizations applied using a custom -`postgres-ha.yaml` file as described in the **Custom PostgreSQL Configuration** -section above will also be included when the ConfigMap is populated. - -The various configuration settings available for modifying and updating -a cluster's configuration can be -[found here](https://access.crunchydata.com/documentation/patroni/latest/settings/index.html#settings). -Please proceed with caution when modifying configuration, especially those settings -applied by default by the Operator. Certain settings are required for normal operation -of the Operator and the PostgreSQL clusters it creates, and altering these -settings could result in unexpected behavior. - -### Types of Configuration - -Within the `-pgha-config` ConfigMap are two forms of configuration: - -- **Distributed Configuration Store (DCS):** Cluster-wide -configuration settings that are applied to all database servers in the PostgreSQL -cluster -- **Local Database:** Configuration settings that are applied -individually to each database server (i.e. the primary and each replica) within -the cluster.
- -The DCS configuration settings are stored within the `-pgha-config` -ConfigMap in a configuration named `-dcs-config`, while the local -database configurations are stored in one or more configurations named -`-local-config` (with one local configuration for the primary and each -replica within the cluster). Please note that -[as described here](https://access.crunchydata.com/documentation/patroni/latest/dynamic_configuration/), -certain settings can only be applied via the DCS to ensure they are uniform among -the primary and all replicas within the cluster. - -The following is an example of the both the DCS and primary configuration settings -as stored in the `-pgha-config` ConfigMap for a cluster named `mycluster`. -Please note the `mycluster-dcs-config` configuration defining the DCS configuration -for `mycluster`, along with the `mycluster-local-config` configuration defining the -local configuration for the database server named `mycluster`, which is the current -primary within the PostgreSQL cluster. - -```bash -$ kubectl describe cm mycluster-pgha-config -Name: mycluster-pgha-config -Namespace: pgouser1 -Labels: pg-cluster=mycluster - pgha-config=true - vendor=crunchydata -Annotations: - -Data -==== -mycluster-dcs-config: ----- -postgresql: - parameters: - archive_command: source /opt/cpm/bin/pgbackrest/pgbackrest-set-env.sh && pgbackrest - archive-push "%p" - archive_mode: true - archive_timeout: 60 - log_directory: pg_log - log_min_duration_statement: 60000 - log_statement: none - max_wal_senders: 6 - shared_buffers: 128MB - shared_preload_libraries: pgaudit.so,pg_stat_statements.so - temp_buffers: 8MB - unix_socket_directories: /tmp,/crunchyadm - wal_level: logical - work_mem: 4MB - recovery_conf: - restore_command: source /opt/cpm/bin/pgbackrest/pgbackrest-set-env.sh && pgbackrest - archive-get %f "%p" - use_pg_rewind: true - -mycluster-local-config: ----- -postgresql: - callbacks: - on_role_change: /opt/cpm/bin/callbacks/pgha-on-role-change.sh - create_replica_methods: - - pgbackrest - - basebackup - pg_hba: - - local all postgres peer - - local all crunchyadm peer - - host replication primaryuser 0.0.0.0/0 md5 - - host all primaryuser 0.0.0.0/0 reject - - host all all 0.0.0.0/0 md5 - pgbackrest: - command: /opt/cpm/bin/pgbackrest/pgbackrest-create-replica.sh - keep_data: true - no_params: true - pgbackrest_standby: - command: /opt/cpm/bin/pgbackrest/pgbackrest-create-replica.sh - keep_data: true - no_master: 1 - no_params: true - pgpass: /tmp/.pgpass - remove_data_directory_on_rewind_failure: true - use_unix_socket: true -``` - -### Updating Configuration Settings - -In order to update a cluster's configuration settings and then apply -those settings (e.g. to the DCS and/or any individual database servers), the -DCS and local configuration settings within the `-pgha-config` -ConfigMap can be modified. This can be done using the various commands -available using the `kubectl` client (or the `oc` client if using OpenShift) -for modifying Kubernetes resources. For instance, the following command can be -utilized to open the ConfigMap in a local text editor, and then update the -various cluster configurations as needed: - -```bash -kubectl edit configmap mycluster-pgha-config -``` - -Once the `-pgha-config` ConfigMap has been updated, any -changes made will be detected by the Operator, and then applied to the -DCS and/or any individual database servers within the cluster. 
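If you prefer not to edit the ConfigMap interactively, the same change can be made with a standard export, edit, and re-apply workflow. This is generic `kubectl` usage (not an operator-specific command), shown here for the example cluster `mycluster` in the `pgouser1` namespace:

```shell
# save the current configuration, edit it locally, then re-apply it
kubectl -n pgouser1 get configmap mycluster-pgha-config -o yaml > mycluster-pgha-config.yaml
# (edit mycluster-pgha-config.yaml with your preferred editor)
kubectl -n pgouser1 apply -f mycluster-pgha-config.yaml
```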
- -#### PostgreSQL Configuration - -In order to update the `postgresql.conf` file for one or more database servers, the -`parameters` section of either the DCS and/or a local database configuration can be -updated, e.g.: - -```yaml ----- -postgresql: - parameters: - max_wal_senders: 10 -``` - -The various key/value pairs provided within the `parameters` section result in the -configuration of the same settings within the `postgresql.conf` file. Please note that -settings applied locally to a database server take precedence over those set via the DCS (with the -exception being those that must be set via the DCS, as -[described here](https://access.crunchydata.com/documentation/patroni/latest/dynamic_configuration/)). - -Also, please note that `pg_hba` and `pg_ident` sections exist to update both the `pg_hba.conf` and -`pg_ident.conf` PostgreSQL configuration files as needed. - -#### A Note on Customizing `authentication` - -One of the blocks that can be modified in a `local` database setting is the -`authentication` block. This can be useful for setting customizations such as -TLS connection requirements (`sslmode`). However, one should take care when -modifying this block, as modifying certain parameters can interfere with the -management features that the PostgreSQL Operator provides. - -In particular, one should **not** customize the `username` or `password` -attributes within this section as that will interfere with the PostgreSQL -Operator. Additionally, if using the built-in support for certificate-based -authentication for replication users, you should not modify the `sslcert`, -`sslkey`, `sslrootcert`, and `sslcrl` entries in the `replication` block of the -`authentication` block. - -### Restarting Database Servers - -Changes to certain settings may require one or more PostgreSQL databases within the cluster to be -restarted. This can be accomplished using the `pgo restart` command included with the `pgo` client. -To detect if a restart is needed for an instance within a cluster called `mycluster` after making a -configuration change, the `--query` flag can be utilized with the `pgo restart` command as follows: - -```bash -$ pgo restart mycluster --query - -Cluster: mycluster -INSTANCE ROLE STATUS NODE REPLICATION LAG PENDING RESTART -mycluster primary running node01 0 MB false -mycluster-ambq replica running node01 0 MB true -``` - -Here we can see that the `mycluster-ambq` instance (i.e. the sole replica in cluster `mycluster`) -is pending a restart, as shown by the `PENDING RESTART` column. A restart can then be requested -as follows: - -```bash -$ pgo restart mycluster --target mycluster-ambq -WARNING: Are you sure? (yes/no): yes -Successfully restarted instance mycluster -``` - -It is also possible to target multiple instances at the same time: - -```bash -$ pgo restart mycluster --target mycluster --target mycluster-ambq -WARNING: Are you sure? (yes/no): yes -Successfully restarted instance mycluster -Successfully restarted instance mycluster-ambq -``` - -Or, if no target is specified, all instances within the cluster will be restarted: - -```bash -$ pgo restart mycluster -WARNING: Are you sure? (yes/no): yes -Successfully restarted instance mycluster -Successfully restarted instance mycluster-ambq -``` - -### Refreshing Configuration Settings - -If necessary, it is possible to refresh the configuration stored within the -`-pgha-config` ConfigMap with a fresh copy of either the DCS -configuration and/or the configuration for one or more local database servers.
-This is specifically done by fully deleting a configuration from the -`-pgha-config` ConfigMap. Once a configuration has been deleted, -the Operator will detect this and refresh the ConfigMap with a fresh copy of -that specific configuration. - -For instance, the following `kubectl patch` command can be utilized to -remove the `mycluster-dcs-config` configuration from the example above, -causing that specific configuration to be refreshed with a fresh copy of -the DCS configuration settings for `mycluster`: - -```bash -kubectl patch configmap mycluster-pgha-config \ - --type='json' -p='[{"op": "remove", "path": "/data/mycluster-dcs-config"}]' -``` - - -## Custom pgBackRest Configuration - -Users can configure pgBackRest by passing the name of an existing ConfigMap to -the `--pgbackrest-custom-config` flag when creating a PostgreSQL cluster. The -entire contents of that ConfigMap appear as files in pgBackRest's -[`config-include-path` directory](https://pgbackrest.org/user-guide.html). - -Regardless of the flags passed at creation, every PostgreSQL cluster is -automatically configured to read from a ConfigMap named -`-config-backrest` and a Secret named -`-config-backrest`. These objects can be populated either before -or _after_ a PostgreSQL cluster is created. The entire contents of each appear -as files in pgBackRest's `config-include-path` directory. - -Though the above is very flexible, not all pgBackRest settings can be managed -this way. There are a few that are always overridden by the PostgreSQL Operator -(the path to the PostgreSQL data directory, for example). diff --git a/docs/content/advanced/direct-api-calls.md b/docs/content/advanced/direct-api-calls.md deleted file mode 100644 index 662f09ac50..0000000000 --- a/docs/content/advanced/direct-api-calls.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: "Rest API" -date: -draft: false -weight: 5 ---- - -## Direct API Calls - -The API can also be accessed by interacting directly with the API server. This can be done by making curl calls to POST or GET information from the server. In order to make these calls you will need to provide certificates along with your request using the `--cacert`, `--key`, and `--cert` flags. Next you will need to provide the username and password for the RBAC along with a header that includes the content type and the `--insecure` flag. These flags will be the same for all of your interactions with the API server and can be seen in the following examples. - -The most basic example of this interaction is getting the version of the API server. You can send a GET request to `$PGO_APISERVER_URL/version` and this will send back a json response including the API server version. This is important because the server version and the client version must match. If you are using `pgo` this means you must have the correct version of the client but with a direct call you can specify the client version as part of the request. - -The API server is setup to work with the pgo command line interface so the parameters that are passed to the server can be found by looking at the related flags. - -###### Get API Server Version -``` -curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT \ --u admin:examplepassword -H "Content-Type:application/json" --insecure \ --X GET $PGO_APISERVER_URL/version -``` - -You can create a cluster by sending a POST request to `$PGO_APISERVER_URL/clusters`. 
In this example `--data` is being sent to the API URL that includes the client version that was returned from the version call, the namespace where the cluster should be created, and the name of the new cluster. - -###### Create Cluster -``` -curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT \ --u admin:examplepassword -H "Content-Type:application/json" --insecure \ --X POST --data \ - '{"ClientVersion":"{{< param operatorVersion >}}", - "Namespace":"pgouser1", - "Name":"mycluster", - "Series":1}' \ -$PGO_APISERVER_URL/clusters -``` - -The last two examples show you how to `show` and `delete` a cluster. Notice how instead of passing `"Name":"mycluster"` you pass `"Clustername":"mycluster"` to reference a cluster that has already been created. For the show cluster example you can replace `"Clustername":"mycluster"` with `"AllFlag":true` to show all of the clusters that are in the given namespace. - -###### Show Cluster -``` -curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT \ --u admin:examplepassword -H "Content-Type:application/json" --insecure \ --X POST --data \ - '{"ClientVersion":"{{< param operatorVersion >}}", - "Namespace":"pgouser1", - "Clustername":"mycluster"}' \ -$PGO_APISERVER_URL/showclusters -``` - -###### Delete Cluster -``` -curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT \ --u admin:examplepassword -H "Content-Type:application/json" --insecure \ --X POST --data \ - '{"ClientVersion":"{{< param operatorVersion >}}", - "Namespace":"pgouser1", - "Clustername":"mycluster"}' \ -$PGO_APISERVER_URL/clustersdelete -``` diff --git a/docs/content/advanced/multi-zone-design-considerations.md b/docs/content/advanced/multi-zone-design-considerations.md deleted file mode 100644 index f87b3cffa0..0000000000 --- a/docs/content/advanced/multi-zone-design-considerations.md +++ /dev/null @@ -1,138 +0,0 @@ ---- -title: "Multi-Zone Cloud Considerations" -date: -draft: false -weight: 5 ---- - -## Considerations for PostgreSQL Operator Deployments in Multi-Zone Cloud Environments - -#### Overview - -When using the PostgreSQL Operator in a Kubernetes cluster consisting of nodes that span multiple zones, special consideration -must be taken to ensure all pods and the associated volumes are scheduled and provisioned within the same zone. - -Given that a pod is unable to mount a volume that is located in another zone, any volumes that are dynamically provisioned must -be provisioned in a topology-aware manner according to the specific scheduling requirements for the pod. - -This means that when a new PostgreSQL cluster is created, it is necessary to ensure that the volume containing the database -files for the primary PostgreSQL database within the PostgreSQL cluster is provisioned in the same zone as the node containing the PostgreSQL primary pod that will be accessing the applicable volume. - -#### Dynamic Provisioning of Volumes: Default Behavior - -By default, the Kubernetes scheduler will ensure any pods created that claim a specific volume via a PVC are scheduled on a -node in the same zone as that volume. This is part of the default Kubernetes [multi-zone support](https://kubernetes.io/docs/setup/multiple-zones/). - -However, when using Kubernetes [dynamic provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/), -volumes are not provisioned in a topology-aware manner.
- -More specifically, when using dynamic provisioning, volumes will not be provisioned according to the same scheduling -requirements that will be placed on the pod that will be using it (e.g. it will not consider node selectors, resource -requirements, pod affinity/anti-affinity, and various other scheduling requirements). Rather, PVCs are immediately bound as -soon as they are requested, which means volumes are provisioned without knowledge of these scheduling requirements. - -This behavior is defined using the `volumeBindingMode` configuration applicable to the Storage Class being utilized to -dynamically provision the volume. By default, `volumeBindingMode` is set to `Immediate`. - -This default behavior for dynamic provisioning can be seen in the Storage Class definition for a Google Compute Engine Persistent Disk (GCE PD): - -```bash -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: example-sc -provisioner: kubernetes.io/gce-pd -parameters: - type: pd-standard -volumeBindingMode: Immediate -``` -As indicated, `volumeBindingMode` is set to the default value of `Immediate`. - -#### Issues with Dynamic Provisioning of Volumes in PostgreSQL Operator - -Unfortunately, the default setting for dynamic provisioning of volumes in multi-zone Kubernetes cluster environments results in undesired behavior when using the PostgreSQL Operator. - -Within the PostgreSQL Operator, a **node label** is implemented as a `preferredDuringSchedulingIgnoredDuringExecution` node -affinity rule, which is an affinity rule that Kubernetes will attempt to adhere to when scheduling any pods for the cluster, -but _will not guarantee_. More information on node affinity rules can be found [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). - -By using `Immediate` for the `volumeBindingMode` in a multi-zone cluster environment, the scheduler will ignore any requested -_(but not mandatory)_ scheduling requirements if necessary to ensure the pod can be scheduled. The scheduler will ultimately -schedule the pod on a node in the same zone as the volume, even if another node was requested for scheduling that pod. - -As it relates to the PostgreSQL Operator specifically, a node label can be specified using the `--node-label` option when creating a -cluster using the `pgo create cluster` command in order to target a specific node (or nodes) for the deployment of that cluster. - -Therefore, if the volume ends up in a zone other than the zone containing the node (or nodes) defined by the node label, the -node label will be ignored, and the pod will be scheduled according to the zone containing the volume. - -#### Configuring Volumes to be Topology Aware - -In order to overcome this default behavior, it is necessary to make the dynamically provisioned volumes topology aware. - -This is accomplished by setting the `volumeBindingMode` for the storage class to `WaitForFirstConsumer`, which delays the -dynamic provisioning of a volume until a pod using it is created. - -In other words, the PVC is no longer bound as soon as it is requested, but rather waits for a pod utilizing it to be created -prior to binding. This change ensures that the volume can take into account the scheduling requirements for the pod, which in the -case of a multi-zone cluster means ensuring the volume is provisioned in the same zone containing the node where the pod has -been scheduled.
This also means the scheduler should no longer ignore a node label in order to follow a volume to another zone -when scheduling a pod, since the volume will now follow the pod according to the pod's specific scheduling requirements. - -The following is an example of the same Storage Class defined above, only with `volumeBindingMode` now set to `WaitForFirstConsumer`: - -```bash -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: example-sc -provisioner: kubernetes.io/gce-pd -parameters: - type: pd-standard -volumeBindingMode: WaitForFirstConsumer -``` - -#### Additional Solutions - -If you are using a version of Kubernetes that does not support `WaitForFirstConsumer`, an alternate _(and now deprecated)_ -solution exists in the form of parameters that can be defined on the Storage Class definition to ensure volumes are -provisioned in a specific zone (or zones). - -For instance, when defining a Storage Class for a GCE PD for use in a Google Kubernetes Engine (GKE) cluster, the **zone** -parameter can be used to ensure any volumes dynamically provisioned using that Storage Class are located in that specific -zone. The following is an example of a Storage Class for a GKE cluster that will provision volumes in the **us-east1** zone: - -```bash -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: example-sc -provisioner: kubernetes.io/gce-pd -parameters: - type: pd-standard - replication-type: none - zone: us-east1 -``` - -Once storage classes have been defined for one or more zones, they can then be defined as one or more storage configurations -within the pgo.yaml configuration file (as described in the [PGO YAML configuration guide](/configuration/pgo-yaml-configuration)). - -From there those storage configurations can then be selected when creating a new cluster, as shown in the following example: - -```bash -pgo create cluster mycluster --storage-config=example-sc -``` - -With this approach, the pod will once again be scheduled according to the zone in which the volume was provisioned. - -However, the zone parameters defined on the Storage Class bring consistency to scheduling by guaranteeing that the volume, and -therefore also the pod using that volume, are scheduled in a specific zone as defined by the user, bringing consistency -and predictability to volume provisioning and pod scheduling in multi-zone clusters. - -For more information regarding the specific parameters available for the Storage Classes being utilized in your cloud -environment, please see the -[Kubernetes documentation for Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/). - -Lastly, while the above applies to the dynamic provisioning of volumes, it should be noted that volumes can also be manually -provisioned in desired zones in order to achieve the desired topology requirements for any pods and their volumes.
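As a sketch of that manual approach, a PersistentVolume can be pinned to a zone using node affinity. The disk name, capacity, and zone below are placeholders, and older Kubernetes versions may use the `failure-domain.beta.kubernetes.io/zone` label instead of `topology.kubernetes.io/zone`:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-zone-pinned-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: example-disk   # a disk pre-created in the desired zone
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-east1-b
```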
diff --git a/docs/content/architecture/_index.md b/docs/content/architecture/_index.md deleted file mode 100644 index 19db28c33d..0000000000 --- a/docs/content/architecture/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: "Architecture" -date: -draft: false -weight: 20 ---- diff --git a/docs/content/architecture/disaster-recovery.md b/docs/content/architecture/disaster-recovery.md deleted file mode 100644 index deee66dcc5..0000000000 --- a/docs/content/architecture/disaster-recovery.md +++ /dev/null @@ -1,307 +0,0 @@ ---- -title: "Disaster Recovery" -date: -draft: false -weight: 200 ---- - -When using the PostgreSQL Operator, the answer to the question "do you take -backups of your database" is automatically "yes!" - -The PostgreSQL Operator uses the open source -[pgBackRest](https://pgbackrest.org) backup and restore utility that is designed -for working with databases that are many terabytes in size. As described in the -[Provisioning](/architecture/provisioning/) section, pgBackRest is enabled by -default as it permits the PostgreSQL Operator to automate some advanced as well -as convenient behaviors, including: - -- Efficient provisioning of new replicas that are added to the PostgreSQL -cluster -- Preventing replicas from falling out of sync from the PostgreSQL primary by -allowing them to replay old WAL logs -- Allowing failed primaries to automatically and efficiently heal using the -"delta restore" feature -- Serving as the basis for the cluster cloning feature -- ...and of course, allowing for one to take full, differential, and incremental -backups and perform full and point-in-time restores - -![PostgreSQL Operator pgBackRest Integration](/images/postgresql-cluster-dr-base.png) - -The PostgreSQL Operator leverages a pgBackRest repository to facilitate the -usage of the pgBackRest features in a PostgreSQL cluster. When a new PostgreSQL -cluster is created, it simultaneously creates a pgBackRest repository as -described in the [Provisioning](/architecture/provisioning/) section. - -At PostgreSQL cluster creation time, you can specify a specific Storage Class -for the pgBackRest repository. Additionally, you can also specify the type of -pgBackRest repository that can be used, including: - -- `local`: Uses the storage that is provided by the Kubernetes cluster's Storage -Class that you select -- `s3`: Use Amazon S3 or an object storage system that uses the S3 protocol -- `local,s3`: Use both the storage that is provided by the Kubernetes cluster's -Storage Class that you select AND Amazon S3 (or equivalent object storage system -that uses the S3 protocol) - -The pgBackRest repository consists of the following Kubernetes objects: - -- A Deployment -- A Secret that contains information that is specific to the PostgreSQL cluster -that it is deployed with (e.g. SSH keys, AWS S3 keys, etc.) -- A Service - -The PostgreSQL primary is automatically configured to use the -`pgbackrest archive-push` and push the write-ahead log (WAL) archives to the -correct repository. 
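For example, assuming the S3 credentials and bucket settings described later in this section have been configured, a cluster that keeps its pgBackRest repository both locally and in S3 can be created along these lines:

```shell
pgo create cluster hacluster --pgbackrest-storage-type=local,s3
```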
- -## Backups - -Backups can be taken with the `pgo backup` command - -The PostgreSQL Operator supports three types of pgBackRest backups: - -- Full (`full`): A full backup of all the contents of the PostgreSQL cluster -- Differential (`diff`): A backup of only the files that have changed since the -last full backup -- Incremental (`incr`): A backup of only the files that have changed since the -last full or differential backup - -By default, `pgo backup` will attempt to take an **incremental (`incr`)** backup -unless otherwise specified. - -For example, to specify a full backup: - -```shell -pgo backup hacluster --backup-opts="--type=full" -``` - -The PostgreSQL Operator also supports setting pgBackRest retention policies as -well for backups. For example, to take a full backup and to specify to only keep -the last 7 backups: - -```shell -pgo backup hacluster --backup-opts="--type=full --repo1-retention-full=7" -``` - -## Restores - -The PostgreSQL Operator supports the ability to perform a full restore on a -PostgreSQL cluster as well as a point-in-time-recovery. There are two types of -ways to restore a cluster: - -- Restore to a new cluster using the `--restore-from` flag in the -[`pgo create cluster`]({{< relref "/pgo-client/reference/pgo_create_cluster.md" >}}) -command. -- Restore in-place using the [`pgo restore`]({{< relref "/pgo-client/reference/pgo_restore.md" >}}) -command. Note that this is **destructive**. - -**NOTE**: Ensure you are backing up your PostgreSQL cluster regularly, as this -will help expedite your restore times. The next section will cover scheduling -regular backups. - -The following explains how to perform restores based on the restoration method -you chose. - -### Restore to a New Cluster - -Restoring to a new PostgreSQL cluster allows one to take a backup and create a -new PostgreSQL cluster that can run alongside an existing PostgreSQL cluster. -There are several scenarios where using this technique is helpful: - -- Creating a copy of a PostgreSQL cluster that can be used for other purposes. -Another way of putting this is "creating a clone." -- Restore to a point-in-time and inspect the state of the data without affecting -the current cluster - -and more. - -Restoring to a new cluster can be accomplished using the [`pgo create cluster`]({{< relref "/pgo-client/reference/pgo_create_cluster.md" >}}) -command with several flags: - -- `--restore-from`: specifies the name of a PostgreSQL cluster (either one that -is active, or a former cluster whose pgBackRest repository still exists) to -restore from. -- `--restore-opts`: used to specify additional options, similar to the ones that -are passed into [`pgbackrest restore`](https://pgbackrest.org/command.html#command-restore). - -One can copy an entire PostgreSQL cluster into a new cluster with a command as -simple as the one below: - -``` -pgo create cluster newcluster --restore-from oldcluster -``` - -To perform a point-in-time-recovery, you have to pass in the pgBackRest `--type` -and `--target` options, where `--type` indicates the type of recovery to -perform, and `--target` indicates the point in time to recover to: - -``` -pgo create cluster newcluster \ - --restore-from oldcluster \ - --restore-opts "--type=time --target='2019-12-31 11:59:59.999999+00'" -``` - -Note that when using this method, the PostgreSQL Operator can only restore one -cluster from each pgBackRest repository at a time. Using the above example, one -can only perform one restore from `oldcluster` at a given time. 
- -When using the restore to a new cluster method, the PostgreSQL Operator takes -the following actions: - -- After running the normal cluster creation tasks, the PostgreSQL Operator -creates a "bootstrap" job that performs a pgBackRest restore to the newly -created PVC. -- The PostgreSQL Operator kicks off the new PostgreSQL cluster, which enters -into recovery mode until it has recovered to a specified point-in-time or -finishes replaying all available write-ahead logs. -- When this is done, the PostgreSQL cluster performs its regular operations when -starting up. - -### Restore in-place - -Restoring a PostgreSQL cluster in-place is a **destructive** action that will -perform a recovery on your existing data directory. This is accomplished using -the [`pgo restore`]({{< relref "/pgo-client/reference/pgo_restore.md" >}}) -command. - -`pgo restore` lets you specify the point at which you want to restore your -database using the `--pitr-target` flag. - -When the PostgreSQL Operator issues a restore, the following actions are taken -on the cluster: - -- The PostgreSQL Operator disables the "autofail" mechanism so that no failovers -will occur during the restore. -- Any replicas that may be associated with the PostgreSQL cluster are destroyed -- A new Persistent Volume Claim (PVC) is allocated using the specifications -provided for the primary instance. This may have been set with the -`--storage-class` flag when the cluster was originally created -- A Kubernetes Job is created that will perform a pgBackRest restore operation -to the newly allocated PVC. This is facilitated by the `pgo-backrest-restore` -container image. - -![PostgreSQL Operator Restore Step 1](/images/postgresql-cluster-restore-step-1.png) - -- When restore Job successfully completes, a new Deployment for the PostgreSQL -cluster primary instance is created. A recovery is then issued to the specified -point-in-time, or if it is a full recovery, up to the point of the latest WAL -archive in the repository. -- Once the PostgreSQL primary instance is available, the PostgreSQL Operator -will take a new, full backup of the cluster. - -![PostgreSQL Operator Restore Step 2](/images/postgresql-cluster-restore-step-2.png) - -At this point, the PostgreSQL cluster has been restored. However, you will need -to re-enable autofail if you would like your PostgreSQL cluster to be -highly-available. You can re-enable autofail with this command: - -```shell -pgo update cluster hacluster --autofail=true -``` - -## Scheduling Backups - -Any effective disaster recovery strategy includes having regularly scheduled -backups. The PostgreSQL Operator enables this through its scheduling sidecar -that is deployed alongside the Operator. - -The PostgreSQL Operator Scheduler is essentially a [cron](https://en.wikipedia.org/wiki/Cron) -server that will run jobs that it is specified. Schedule commands use the cron -syntax to set up scheduled tasks. 
- -![PostgreSQL Operator Schedule Backups](/images/postgresql-cluster-dr-schedule.png) - -For example, to schedule a full backup once a day at 1am, the following command -can be used: - -```shell -pgo create schedule hacluster --schedule="0 1 * * *" \ - --schedule-type=pgbackrest --pgbackrest-backup-type=full -``` - -To schedule an incremental backup once every 3 hours: - -```shell -pgo create schedule hacluster --schedule="0 */3 * * *" \ - --schedule-type=pgbackrest --pgbackrest-backup-type=incr -``` - -### Setting Backup Retention Policies - -Unless specified, pgBackRest will keep an unlimited number of backups. As part -of your regularly scheduled backups, it is encouraged for you to set a retention -policy. This can be accomplished using the `--repo1-retention-full` for full -backups and `--repo1-retention-diff` for differential backups via the -`--schedule-opts` parameter. - -For example, using the above example of taking a nightly full backup, you can -specify a policy of retaining 21 backups using the following command: - -```shell -pgo create schedule hacluster --schedule="0 1 * * *" \ - --schedule-type=pgbackrest --pgbackrest-backup-type=full \ - --schedule-opts="--repo1-retention-full=21" -``` - -### Schedule Expression Format - -Schedules are expressed using the following rules, which should be familiar to -users of cron: - -``` -Field name | Mandatory? | Allowed values | Allowed special characters ----------- | ---------- | -------------- | -------------------------- -Seconds | Yes | 0-59 | * / , - -Minutes | Yes | 0-59 | * / , - -Hours | Yes | 0-23 | * / , - -Day of month | Yes | 1-31 | * / , - ? -Month | Yes | 1-12 or JAN-DEC | * / , - -Day of week | Yes | 0-6 or SUN-SAT | * / , - ? -``` - -## Using S3 - -The PostgreSQL Operator integration with pgBackRest allows it to use the AWS S3 -object storage system, as well as other object storage systems that implement -the S3 protocol. - -In order to enable S3 storage, it is helpful to provide some of the S3 -information prior to deploying the PostgreSQL Operator, or updating the -`pgo-config` ConfigMap and restarting the PostgreSQL Operator pod. - -First, you will need to add the proper S3 bucket name, AWS S3 endpoint and -the AWS S3 region to the `Cluster` section of the `pgo.yaml` -[configuration file](/configuration/pgo-yaml-configuration/): - -```yaml -Cluster: - BackrestS3Bucket: my-postgresql-backups-example - BackrestS3Endpoint: s3.amazonaws.com - BackrestS3Region: us-east-1 - BackrestS3URIStyle: host - BackrestS3VerifyTLS: true -``` - -These values can also be set on a per-cluster basis with the -`pgo create cluster` command, i.e.: - - -- `--pgbackrest-s3-bucket` - specifics the AWS S3 bucket that should be utilized -- `--pgbackrest-s3-endpoint` specifies the S3 endpoint that should be utilized -- `--pgbackrest-s3-key` - specifies the AWS S3 key that should be utilized -- `--pgbackrest-s3-key-secret`- specifies the AWS S3 key secret that should be -utilized -- `--pgbackrest-s3-region` - specifies the AWS S3 region that should be utilized -- `--pgbackrest-s3-uri-style` - specifies whether "host" or "path" style URIs should be utilized -- `--pgbackrest-s3-verify-tls` - set this value to "true" to enable TLS verification - - -Sensitive information, such as the values of the AWS S3 keys and secrets, are -stored in Kubernetes Secrets and are securely mounted to the PostgreSQL -clusters. 
- -To enable a PostgreSQL cluster to use S3, the `--pgbackrest-storage-type` on the -`pgo create cluster` command needs to be set to `s3` or `local,s3`. - -Once configured, the `pgo backup` and `pgo restore` commands will work with S3 -similarly to the above! diff --git a/docs/content/architecture/eventing.md b/docs/content/architecture/eventing.md deleted file mode 100644 index 7bae789322..0000000000 --- a/docs/content/architecture/eventing.md +++ /dev/null @@ -1,73 +0,0 @@ ---- -title: "Lifecycle Events" -date: -draft: false -weight: 500 ---- - -## Operator Eventing - -The Operator creates events from the various life-cycle -events going on within the Operator logic and driven -by pgo users as they interact with the Operator and as -Postgres clusters come and go or get updated. - -## Event Watching - -There is a pgo CLI command: - - pgo watch alltopic - -This command connects to the event stream and listens -on a topic for event real-time. The command will not -complete until the pgo user enters ctrl-C. - -This command will connect to localhost:14150 (default) to reach the -event stream. If you have the correct priviledges -to connect to the Operator pod, you can port forward -as follows to form a connection to the event stream: - - kubectl port-forward svc/postgres-operator 14150:4150 -n pgo - -## Event Topics - -The following topics exist that hold the various Operator -generated events: - - alltopic - clustertopic - backuptopic - loadtopic - postgresusertopic - policytopic - pgbouncertopic - pgotopic - pgousertopic - -## Event Types - -The various event types are found in the source code at -https://github.com/CrunchyData/postgres-operator/blob/master/pkg/events/eventtype.go - - -## Event Deployment - -The Operator events are published and subscribed via the NSQ -project software (https://nsq.io/). NSQ is found in the pgo-event -container which is part of the postgres-operator deployment. - -You can see the pgo-event logs by issuing the elog bash function -found in the examples/envs.sh script. - -NSQ looks for events currently at port 4150. The Operator sends -events to the NSQ address as defined in the EVENT_ADDR environment -variable. - -If you want to disable eventing when installing with Bash, set the following -environment variable in the Operator Deployment: - "name": "DISABLE_EVENTING" - "value": "true" - -To disable eventing when installing with Ansible, add the following to -your inventory file: - pgo_disable_eventing='true' diff --git a/docs/content/architecture/high-availability/_index.md b/docs/content/architecture/high-availability/_index.md deleted file mode 100644 index c5f05eaf96..0000000000 --- a/docs/content/architecture/high-availability/_index.md +++ /dev/null @@ -1,278 +0,0 @@ ---- -title: "High-Availability" -date: -draft: false -weight: 300 ---- - -One of the great things about PostgreSQL is its reliability: it is very stable -and typically "just works." However, there are certain things that can happen in -the environment that PostgreSQL is deployed in that can affect its uptime, -including: - -- The database storage disk fails or some other hardware failure occurs -- The network on which the database resides becomes unreachable -- The host operating system becomes unstable and crashes -- A key database file becomes corrupted -- A data center is lost - -There may also be downtime events that are due to the normal case of operations, -such as performing a minor upgrade, security patching of operating system, -hardware upgrade, or other maintenance. 
- -Fortunately, the Crunchy PostgreSQL Operator is prepared for this. - -![PostgreSQL Operator High-Availability Overview](/images/postgresql-ha-overview.png) - -The Crunchy PostgreSQL Operator supports a distributed-consensus based -high-availability (HA) system that keeps its managed PostgreSQL clusters up and -running, even if the PostgreSQL Operator disappears. Additionally, it leverages -Kubernetes specific features such as -[Pod Anti-Affinity](#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity) -to limit the surface area that could lead to a PostgreSQL cluster becoming -unavailable. The PostgreSQL Operator also supports automatic healing of failed -primaries and leverages the efficient pgBackRest "delta restore" method, which -eliminates the need to fully reprovision a failed cluster! - -The Crunchy PostgreSQL Operator also maintains high-availability during a -routine task such as a PostgreSQL minor version upgrade. - -For workloads that are sensitive to transaction loss, the Crunchy PostgreSQL -Operator supports PostgreSQL synchronous replication, which can be specified -with the `--sync-replication` when using the `pgo create cluster` command. - -(HA is enabled by default in any newly created PostgreSQL cluster. You can -update this setting by either using the `--disable-autofail` flag when using -`pgo create cluster`, or modify the `pgo-config` ConfigMap [or the `pgo.yaml` -file] to set `DisableAutofail` to `"true"`. These can also be set when a -PostgreSQL cluster is running using the `pgo update cluster` command). - -One can also choose to manually failover using the `pgo failover` command as -well. - -The high-availability backing for your PostgreSQL cluster is only as good as -your high-availability backing for Kubernetes. To learn more about creating a -[high-availability Kubernetes cluster](https://kubernetes.io/docs/tasks/administer-cluster/highly-available-master/), -please review the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/highly-available-master/) -or consult your systems administrator. - -## The Crunchy PostgreSQL Operator High-Availability Algorithm - -A critical aspect of any production-grade PostgreSQL deployment is a reliable -and effective high-availability (HA) solution. Organizations want to know that -their PostgreSQL deployments can remain available despite various issues that -have the potential to disrupt operations, including hardware failures, network -outages, software errors, or even human mistakes. - -The key portion of high-availability that the PostgreSQL Operator provides is -that it delegates the management of HA to the PostgreSQL clusters themselves. -This ensures that the PostgreSQL Operator is not a single-point of failure for -the availability of any of the PostgreSQL clusters that it manages, as the -PostgreSQL Operator is only maintaining the definitions of what should be in the -cluster (e.g. how many instances in the cluster, etc.). - -Each HA PostgreSQL cluster maintains its availability using concepts that come -from the [Raft algorithm](https://raft.github.io/) to achieve distributed -consensus. The Raft algorithm ("Reliable, Replicated, Redundant, -Fault-Tolerant") was developed for systems that have one "leader" (i.e. a -primary) and one-to-many followers (i.e. replicas) to provide the same fault -tolerance and safety as the PAXOS algorithm while being easier to implement. 
- -For the PostgreSQL cluster group to achieve distributed consensus on who the -primary (or leader) is, each PostgreSQL cluster leverages the distributed etcd -key-value store that is bundled with Kubernetes. After it is elected as the -leader, a primary will place a lock in the distributed etcd cluster to indicate -that it is the leader. The "lock" serves as the method for the primary to -provide a heartbeat: the primary will periodically update the lock with the -latest time it was able to access the lock. As long as each replica sees that -the lock was updated within the allowable automated failover time, the replicas -will continue to follow the leader. - -The "log replication" portion that is defined in the Raft algorithm is handled -by PostgreSQL in two ways. First, the primary instance will replicate changes to -each replica based on the rules set up in the provisioning process. For -PostgreSQL clusters that leverage "synchronous replication," a transaction is -not considered complete until all changes from those transactions have been sent -to all replicas that are subscribed to the primary. - -In the above section, note the key word that the transaction are sent to each -replica: the replicas will acknowledge receipt of the transaction, but they may -not be immediately replayed. We will address how we handle this further down in -this section. - -During this process, each replica keeps track of how far along in the recovery -process it is using a "log sequence number" (LSN), a built-in PostgreSQL serial -representation of how many logs have been replayed on each replica. For the -purposes of HA, there are two LSNs that need to be considered: the LSN for the -last log received by the replica, and the LSN for the changes replayed for the -replica. The LSN for the latest changes received can be compared amongst the -replicas to determine which one has replayed the most changes, and an important -part of the automated failover process. - -The replicas periodically check in on the lock to see if it has been updated by -the primary within the allowable automated failover timeout. Each replica checks -in at a randomly set interval, which is a key part of Raft algorithm that helps -to ensure consensus during an election process. If a replica believes that the -primary is unavailable, it becomes a candidate and initiates an election and -votes for itself as the new primary. A candidate must receive a majority of -votes in a cluster in order to be elected as the new primary. - -There are several cases for how the election can occur. If a replica believes -that a primary is down and starts an election, but the primary is actually not -down, the replica will not receive enough votes to become a new primary and will -go back to following and replaying the changes from the primary. - -In the case where the primary is down, the first replica to notice this starts -an election. Per the Raft algorithm, each available replica compares which one -has the latest changes available, based upon the LSN of the latest logs -received. The replica with the latest LSN wins and receives the vote of the -other replica. The replica with the majority of the votes wins. In the event -that two replicas' logs have the same LSN, the tie goes to the replica that -initiated the voting request. - -Once an election is decided, the winning replica is immediately promoted to be a -primary and takes a new lock in the distributed etcd cluster. 
If the new primary -has not finished replaying all of its transactions logs, it must do so in order -to reach the desired state based on the LSN. Once the logs are finished being -replayed, the primary is able to accept new queries. - -At this point, any existing replicas are updated to follow the new primary. - -When the old primary tries to become available again, it realizes that it has -been deposed as the leader and must be healed. The old primary determines what -kind of replica it should be based upon the CRD, which allows it to set itself -up with appropriate attributes. It is then restored from the pgBackRest backup -archive using the "delta restore" feature, which heals the instance and makes it -ready to follow the new primary, which is known as "auto healing." - -## How The Crunchy PostgreSQL Operator Uses Pod Anti-Affinity - -By default, when a new PostgreSQL cluster is created using the PostgreSQL -Operator, pod anti-affinity rules will be applied to any deployments comprising -the full PG cluster (please note that default pod anti-affinity does not apply -to any Kubernetes jobs created by the PostgreSQL Operator). This includes: - -- The primary PG deployment -- The deployments for each PG replica -- The `pgBackrest` dedicated repository deployment -- The `pgBouncer` deployment (if enabled for the cluster) - -There are three types of Pod Anti-Affinity rules that the Crunchy PostgreSQL -Operator supports: - -- `preferred`: Kubernetes will try to schedule any pods within a PostgreSQL -cluster to different nodes, but in the event it must schedule two pods on the -same Node, it will. As described above, this is the default option. -- `required`: Kubernetes will schedule pods within a PostgreSQL cluster to -different Nodes, but in the event it cannot schedule a pod to a different Node, -it will not schedule the pod until a different node is available. While this -guarantees that no pod will share the same node, it can also lead to downtime -events as well. This uses the `requiredDuringSchedulingIgnoredDuringExecution` -affinity rule. -- `disabled`: Pod Anti-Affinity is not used. - -With the default `preferred` Pod Anti-Affinity rule enabled, Kubernetes will -attempt to schedule pods created by each of the separate deployments above on a -unique node, but will not guarantee that this will occur. This ensures that the -pods comprising the PostgreSQL cluster can always be scheduled, though perhaps -not always on the desired node. This is specifically done using the following: - -- The `preferredDuringSchedulingIgnoredDuringExecution` affinity type, which -defines an anti-affinity rule that Kubernetes will attempt to adhere to, but -will not guarantee will occur during Pod scheduling -- A combination of labels that uniquely identify the pods created by the various -Deployments listed above -- A topology key of `kubernetes.io/hostname`, which instructs Kubernetes to -schedule a pod on specific Node only if there is not already another pod in the -PostgreSQL cluster scheduled on that same Node - -If you want to explicitly create a PostgreSQL cluster with the `preferred` Pod -Anti-Affinity rule, you can execute the `pgo create` command using the -`--pod-anti-affinity` flag similar to this: - -```shell -pgo create cluster hacluster --replica-count=2 --pod-anti-affinity=preferred -``` - -or it can also be explicitly enabled globally for all clusters by setting -`PodAntiAffinity` to `preferred` in the `pgo.yaml` configuration file. 
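As a sketch of that global setting (the exact layout of `pgo.yaml` can vary between operator versions, so treat this as illustrative):

```yaml
Cluster:
  PodAntiAffinity: preferred
```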
- -If you want to create a PostgreSQL cluster with the `required` Pod Anti-Affinity -rule, you can execute a command similar to this: - -```shell -pgo create cluster hacluster --replica-count=2 --pod-anti-affinity=required -``` - -or set the `required` option globally for all clusters by setting -`PodAntiAffinity` to `required` in the `pgo.yaml` configuration file. - -When `required` is utilized for the default pod anti-affinity, a separate node -is required for each deployment listed above comprising the PG cluster. This -ensures that the cluster remains highly-available by ensuring that node failures -do not impact any other deployments in the cluster. However, this does mean that -the PostgreSQL primary, each PostgreSQL replica, the pgBackRest repository and, -if deployed, the pgBouncer Pods will each require a unique node, meaning -the minimum number of Nodes required for the Kubernetes cluster will increase as -more Pods are added to the PostgreSQL cluster. Further, if an insufficient -number of nodes are available to support this configuration, certain deployments -will fail, since it will not be possible for Kubernetes to successfully schedule -the pods for each deployment. - -## Synchronous Replication: Guarding Against Transactions Loss - -Clusters managed by the Crunchy PostgreSQL Operator can be deployed with -synchronous replication, which is useful for workloads that are sensitive to -losing transactions, as PostgreSQL will not consider a transaction to be -committed until it is committed to all synchronous replicas connected to a -primary. This provides a higher guarantee of data consistency and, when a -healthy synchronous replica is present, a guarantee of the most up-to-date data -during a failover event. - -This comes at a cost of performance: PostgreSQL has to wait for -a transaction to be committed on all synchronous replicas, and a connected client -will have to wait longer than if the transaction only had to be committed on the -primary (which is how asynchronous replication works). Additionally, there is a -potential impact to availability: if a synchronous replica crashes, any writes -to the primary will be blocked until a replica is promoted to become a new -synchronous replica of the primary. - -You can enable synchronous replication by using the `--sync-replication` flag -with the `pgo create` command, e.g.: - - -```shell -pgo create cluster hacluster --replica-count=2 --sync-replication -``` - -## Node Affinity - -Kubernetes [Node Affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity) -can be used to scheduled Pods to specific Nodes within a Kubernetes cluster. -This can be useful when you want your PostgreSQL instances to take advantage of -specific hardware (e.g. for geospatial applications) or if you want to have a -replica instance deployed to a specific region within your Kubernetes cluster -for high-availability purposes. - -The PostgreSQL Operator provides users with the ability to apply Node Affinity -rules using the `--node-label` flag on the `pgo create` and the `pgo scale` -commands. Node Affinity directs Kubernetes to attempt to schedule these -PostgreSQL instances to the specified Node label. - -To get a list of available Node labels: - -``` -kubectl get nodes --show-labels -``` - -You can then specify one of those Kubernetes node names (e.g. 
`region=us-east-1`) -when creating a PostgreSQL cluster; - -``` -pgo create cluster thatcluster --node-label=region=us-east-1 -``` - -The Node Affinity only uses the `preferred` scheduling strategy (similar to what -is described in the Pod Anti-Affinity section above), so if a Pod cannot be -scheduled to a particular Node matching the label, it will be scheduled to a -different Node. diff --git a/docs/content/architecture/high-availability/multi-cluster-kubernetes.md b/docs/content/architecture/high-availability/multi-cluster-kubernetes.md deleted file mode 100644 index c6043adba4..0000000000 --- a/docs/content/architecture/high-availability/multi-cluster-kubernetes.md +++ /dev/null @@ -1,323 +0,0 @@ ---- -title: "Kubernetes Multi-Cluster Deployments" -date: -draft: false -weight: 300 ---- - -![PostgreSQL Operator High-Availability Overview](/images/postgresql-ha-multi-data-center.png) - -Advanced [high-availability]({{< relref "/architecture/high-availability/_index.md" >}}) -and [disaster recovery]({{< relref "/architecture/disaster-recovery.md" >}}) -strategies involve spreading your database clusters across multiple data centers -to help maximize uptime. In Kubernetes, this technique is known as "[federation](https://en.wikipedia.org/wiki/Federation_(information_technology))". -Federated Kubernetes clusters are able to communicate with each other, -coordinate changes, and provide resiliency for applications that have high -uptime requirements. - -As of this writing, federation in Kubernetes is still in ongoing development -and is something we monitor with intense interest. As Kubernetes federation -continues to mature, we wanted to provide a way to deploy PostgreSQL clusters -managed by the [PostgreSQL Operator](https://www.crunchydata.com/developers/download-postgres/containers/postgres-operator) -that can span multiple Kubernetes clusters. This can be accomplished with a -few environmental setups: - -- Two Kubernetes clusters -- S3, or an external storage system that uses the S3 protocol - -At a high-level, the PostgreSQL Operator follows the "active-standby" data -center deployment model for managing the PostgreSQL clusters across Kuberntetes -clusters. In one Kubernetes cluster, the PostgreSQL Operator deploy PostgreSQL as an -"active" PostgreSQL cluster, which means it has one primary and one-or-more -replicas. In another Kubernetes cluster, the PostgreSQL cluster is deployed as -a "standby" cluster: every PostgreSQL instance is a replica. - -A side-effect of this is that in each of the Kubernetes clusters, the PostgreSQL -Operator can be used to deploy both active and standby PostgreSQL clusters, -allowing you to mix and match! While the mixing and matching may not ideal for -how you deploy your PostgreSQL clusters, it does allow you to perform online -moves of your PostgreSQL data to different Kubernetes clusters as well as manual -online upgrades. - -Lastly, while this feature does extend high-availability, promoting a standby -cluster to an active cluster is **not** automatic. While the PostgreSQL clusters -within a Kubernetes cluster do support self-managed high-availability, a -cross-cluster deployment requires someone to specifically promote the cluster -from standby to active. - -## Standby Cluster Overview - -Standby PostgreSQL clusters are managed just like any other PostgreSQL cluster -that is managed by the PostgreSQL Operator. 
For example, adding replicas to a -standby cluster is identical to before: you can use [`pgo scale`]({{< relref "/pgo-client/reference/pgo_scale.md" >}}). - -As the architecture diagram above shows, the main difference is that there is -no primary instance: one PostgreSQL instance is reading in the database changes -from the S3 repository, while the other replicas are replicas of that instance. -This is known as [cascading replication](https://www.postgresql.org/docs/current/warm-standby.html#CASCADING-REPLICATION). - replicas are cascading replicas, i.e. replicas replicating from a database server that itself is replicating from another database server. - -Because standby clusters are effectively read-only, certain functionality -that involves making changes to a database, e.g. PostgreSQL user changes, is -blocked while a cluster is in standby mode. Additionally, backups and restores -are blocked as well. While [pgBackRest](https://pgbackrest.org/) does support -backups from standbys, this requires direct access to the primary database, -which cannot be done until the PostgreSQL Operator supports Kubernetes -federation. If a blocked function is called on a standby cluster via the -[`pgo` client]({{< relref "/pgo-client/_index.md">}}) or a direct call to the -API server, the call will return an error. - -### Key Commands - -#### [`pgo create cluster`]({{< relref "/pgo-client/reference/pgo_create_cluster.md" >}}) - -This first step to creating a standby PostgreSQL cluster is...to create a -PostgreSQL standby cluster. We will cover how to set this up in the example -below, but wanted to provide some of the standby-specific flags that need to be -used when creating a standby cluster. These include: - -- `--standby`: Creates a cluster as a PostgreSQL standby cluster -- `--password-superuser`: The password for the `postgres` superuser account, -which performs a variety of administrative actions. -- `--password-replication`: The password for the replication account -(`primaryuser`), used to maintain high-availability. -- `--password`: The password for the standard user account created during -PostgreSQL cluster initialization. -- `--pgbackrest-repo-path`: The specific pgBackRest repository path that should -be utilized by the standby cluster. Allows a standby cluster to specify a path -that matches that of the active cluster it is replicating. -- `--pgbackrest-storage-type`: Must be set to `s3` -- `--pgbackrest-s3-key`: The S3 key to use -- `--pgbackrest-s3-key-secret`: The S3 key secret to use -- `--pgbackrest-s3-bucket`: The S3 bucket to use -- `--pgbackrest-s3-endpoint`: The S3 endpoint to use -- `--pgbackrest-s3-region`: The S3 region to use - -With respect to the credentials, it should be noted that when the standby -cluster is being created within the same Kubernetes cluster AND it has access to -the Kubernetes Secret created for the active cluster, one can use the -`--secret-from` flag to set up the credentials. - -#### [`pgo update cluster`]({{< relref "/pgo-client/reference/pgo_update_cluster.md" >}}) - -[`pgo update cluster`]({{< relref "/pgo-client/reference/pgo_update_cluster.md" >}}) -is responsible for the promotion and disabling of a standby cluster, and -contains several flags to help with this process: - -- `--enable-standby`: Enables standby mode in a cluster for a cluster. This will -bootstrap a PostgreSQL cluster to become aligned with the current active -cluster and begin to follow its changes. -- `--promote-standby`: Enables standby mode in a cluster. 
This is a destructive -action that results in the deletion of all PVCs for the cluster (data will be - retained according Storage Class and/or Persistent Volume reclaim policies). - In order to allow the proper deletion of PVCs, the cluster must also be - shutdown. -- `--shutdown`: Scales all deployments for the cluster to 0, resulting in a full -shutdown of the PG cluster. This includes the primary, any replicas, as well as -any supporting services ([pgBackRest](https://www.pgbackrest.org) and -[pgBouncer](({{< relref "/pgo-client/common-tasks.md" >}}#connection-pooling-via-pgbouncer)) -if enabled). -- `--startup`: Scales all deployments for the cluster to 1, effectively starting -a PG cluster that was previously shutdown. This includes the primary, any -replicas, as well as any supporting services (pgBackRest and pgBouncer if -enabled). The primary is brought online first in order to maintain a -consistent primary/replica architecture across startups and shutdowns. - -## Creating a Standby PostgreSQL Cluster - -Let's create a PostgreSQL deployment that has both an active and standby -cluster! You can try this example either within a single Kubernetes cluster, or -across multuple Kubernetes clusters. - -First, deploy a new active PostgreSQL cluster that is configured to use S3 with -pgBackRest. For example: - -``` -pgo create cluster hippo --pgbouncer --replica-count=2 \ - --pgbackrest-storage-type=local,s3 \ - --pgbackrest-s3-key= \ - --pgbackrest-s3-key-secret= \ - --pgbackrest-s3-bucket=watering-hole \ - --pgbackrest-s3-endpoint=s3.amazonaws.com \ - --pgbackrest-s3-region=us-east-1 \ - --password-superuser=supersecrethippo \ - --password-replication=somewhatsecrethippo \ - --password=opensourcehippo -``` - -(Replace the placeholder values with your actual values. We are explicitly -setting all of the passwords for the primary cluster to make it easier to run -the example as is). - -The above command creates an active PostgreSQL cluster with two replicas and a -pgBouncer deployment. Wait a few moments for this cluster to become live before -proceeding. - -Once the cluster has been created, you can then create the standby cluster. This -can either be in another Kubernetes cluster or within the same Kubernetes -cluster. If using a separate Kubernetes cluster, you will need to provide the -proper passwords for the superuser and replication accounts. You can also -provide a password for the regular PostgreSQL database user created during cluster -initialization to ensure the passwords and associated secrets across both -clusters are consistent. - -(If the standby cluster is being created using the same PostgreSQL Operator -deployment (and therefore the same Kubernetes cluster), the `--secret-from` flag -can also be used in lieu of these passwords. You would specify the name of the -cluster [e.g. `hippo`] as the value of the `--secret-from` variable.) 
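
Before creating the standby, it can be worth confirming that the active cluster has finished initializing and that its credential Secrets are in place, since the standby must use matching passwords. A minimal sketch of such a check, assuming the `hippo` cluster above and an Operator namespace of `pgo`:

```
# Confirm the active cluster reports as created and running
pgo show cluster hippo

# List the Secrets the PostgreSQL Operator manages and filter for the cluster;
# the standby reuses these credentials (directly or via --secret-from)
kubectl get secrets -n pgo --selector=vendor=crunchydata | grep hippo
```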
- -With this in mind, create a standby cluster similar to this below: - -``` -pgo create cluster hippo-standby --standby --pgbouncer --replica-count=2 \ - --pgbackrest-storage-type=s3 \ - --pgbackrest-s3-key= \ - --pgbackrest-s3-key-secret= \ - --pgbackrest-s3-bucket=watering-hole \ - --pgbackrest-s3-endpoint=s3.amazonaws.com \ - --pgbackrest-s3-region=us-east-1 \ - --pgbackrest-repo-path=/backrestrepo/hippo-backrest-shared-repo \ - --password-superuser=supersecrethippo \ - --password-replication=somewhatsecrethippo \ - --password=opensourcehippo -``` - -Note the use of the `--pgbackrest-repo-path` flag as it points to the name of -the pgBackRest repository that is used for the original `hippo` cluster. - -At this point, the standby cluster will bootstrap as a standby along with two -cascading replicas. pgBouncer will be deployed at this time as well, but will -remain non-functional until `hippo-standby` is promoted. To see that the Pod is -indeed a standby, you can check the logs. - -``` -kubectl logs hippo-standby-dcff544d6-s6d58 -… -Thu Mar 19 18:16:54 UTC 2020 INFO: Node standby-dcff544d6-s6d58 fully initialized for cluster standby and is ready for use -2020-03-19 18:17:03,390 INFO: Lock owner: standby-dcff544d6-s6d58; I am standby-dcff544d6-s6d58 -2020-03-19 18:17:03,454 INFO: Lock owner: standby-dcff544d6-s6d58; I am standby-dcff544d6-s6d58 -2020-03-19 18:17:03,598 INFO: no action. i am the standby leader with the lock -2020-03-19 18:17:13,389 INFO: Lock owner: standby-dcff544d6-s6d58; I am standby-dcff544d6-s6d58 -2020-03-19 18:17:13,466 INFO: no action. i am the standby leader with the lock -``` - -You can also see that this is a standby cluster from the -[`pgo show cluster`]({{< relref "/pgo-client/reference/pgo_show_cluster.md" >}}) -command. - -``` -pgo show cluster hippo - -cluster : standby (crunchy-postgres-ha:{{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}) - standby : true -``` -## Promoting a Standby Cluster - -There comes a time where a standby cluster needs to be promoted to an active -cluster. Promoting a standby cluster means that a PostgreSQL instance within -it will become a priary and start accepting both reads and writes. This has the -net effect of pushing WAL (transaction archives) to the pgBackRest repository, -so we need to take a few steps first to ensure we don't accidentally create a -split-brain scenario. - -First, if this is not a disaster scenario, you will want to "shutdown" the -active PostgreSQL cluster. This can be done with the `--shutdown` flag: - -``` -pgo update cluster hippo --shutdown -``` - -The effect of this is that all the Kubernetes Deployments for this cluster are -scaled to 0. You can verify this with the following command: - -``` -kubectl get deployments --selector pg-cluster=hippo - -NAME READY UP-TO-DATE AVAILABLE AGE -hippo 0/0 0 0 32m -hippo-backrest-shared-repo 0/0 0 0 32m -hippo-kvfo 0/0 0 0 27m -hippo-lkge 0/0 0 0 27m -hippo-pgbouncer 0/0 0 0 31m -``` - -We can then promote the standby cluster using the `--promote-standby` flag: - -``` -pgo update cluster hippo-standby --promote-standby -``` - -This command essentially removes the standby configuration from the Kubernetes -cluster’s DCS, which triggers the promotion of the current standby leader to a -primary PostgreSQL instance. 
You can view this promotion in the PostgreSQL -standby leader's (soon to be active leader's) logs: - -``` -kubectl logs hippo-standby-dcff544d6-s6d58 -… -2020-03-19 18:28:11,919 INFO: Reloading PostgreSQL configuration. -server signaled -2020-03-19 18:28:16,792 INFO: Lock owner: standby-dcff544d6-s6d58; I am standby-dcff544d6-s6d58 -2020-03-19 18:28:16,850 INFO: Reaped pid=5377, exit status=0 -2020-03-19 18:28:17,024 INFO: no action. i am the leader with the lock -2020-03-19 18:28:26,792 INFO: Lock owner: standby-dcff544d6-s6d58; I am standby-dcff544d6-s6d58 -2020-03-19 18:28:26,924 INFO: no action. i am the leader with the lock -``` - -As pgBouncer was enabled for the cluster, the `pgbouncer` user's password is -rotated, which will bring pgBouncer online with the newly promoted active -cluster. If pgBouncer is still having trouble connecting, you can explicitly -rotate the password with the following command: - -``` -pgo update pgbouncer --rotate-password hippo-standby -``` - -With the standby cluster now promoted, the cluster with the original active -PostgreSQL cluster can now be turned into a standby PostgreSQL cluster. This is -done by deleting and recreating all PVCs for the cluster and re-initializing it -as a standby using the S3 repository. Being that this is a destructive action -(i.e. data will only be retained if any Storage Classes and/or Persistent - Volumes have the appropriate reclaim policy configured) a warning is shown - when attempting to enable standby. - -``` -pgo update cluster hippo --enable-standby -Enabling standby mode will result in the deletion of all PVCs for this cluster! -Data will only be retained if the proper retention policy is configured for any associated storage classes and/or persistent volumes. -Please proceed with caution. -WARNING: Are you sure? (yes/no): yes -updated pgcluster hippo -``` - - -To verify that standby has been enabled, you can check the DCS configuration for -the cluster to verify that the proper standby settings are present. - -``` -kubectl get cm hippo-config -o yaml | grep standby - %f \"%p\""},"use_pg_rewind":true,"use_slots":false},"standby_cluster":{"create_replica_methods":["pgbackrest_standby"],"restore_command":"source -``` - -Also, the PVCs for the cluster should now only be a few seconds old, since they -were recreated. - - -``` -kubectl get pvc --selector pg-cluster=hippo -NAME STATUS VOLUME CAPACITY AGE -hippo Bound crunchy-pv251 1Gi 33s -hippo-kvfo Bound crunchy-pv174 1Gi 29s -hippo-lkge Bound crunchy-pv228 1Gi 26s -hippo-pgbr-repo Bound crunchy-pv295 1Gi 22s -``` - -And finally, the cluster can be restarted: - -``` -pgo update cluster hippo --startup -``` - -At this point, the cluster will reinitialize from scratch as a standby, just -like the original standby that was created above. Therefore any transactions -written to the original standby, should now replicate back to this cluster. 
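
To recap, the full promotion workflow from the sections above can be reduced to a handful of commands. This is only a sketch of the happy path using the example `hippo` and `hippo-standby` clusters; in a true disaster scenario the original active cluster may not be reachable for the initial shutdown step.

```
# 1. Stop the current active cluster so it no longer pushes WAL to the repository
pgo update cluster hippo --shutdown

# 2. Promote the standby cluster; its standby leader becomes a read/write primary
pgo update cluster hippo-standby --promote-standby

# 3. Turn the old active cluster into a standby (this deletes and recreates its
#    PVCs, and will ask for confirmation)
pgo update cluster hippo --enable-standby

# 4. Start the old cluster back up; it re-initializes from the repository as a standby
pgo update cluster hippo --startup
```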
diff --git a/docs/content/architecture/monitoring.md b/docs/content/architecture/monitoring.md deleted file mode 100644 index 75c9b0eb5f..0000000000 --- a/docs/content/architecture/monitoring.md +++ /dev/null @@ -1,239 +0,0 @@ ---- -title: "Monitoring" -date: -draft: false -weight: 350 ---- - -![PostgreSQL Operator Monitoring](/images/postgresql-monitoring.png) - -While having [high availability]({{< relref "architecture/high-availability/_index.md" >}}) -and [disaster recovery]({{< relref "architecture/disaster-recovery.md" >}}) -systems in place helps in the event of something going wrong with your -PostgreSQL cluster, monitoring helps you anticipate problems before they happen. -Additionally, monitoring can help you diagnose and resolve additional issues -that may not result in downtime, but cause degraded performance. - -There are many different ways to monitor systems within Kubernetes, including -tools that come with Kubernetes itself. This is by no means to be a -comprehensive on how to monitor everything in Kubernetes, but rather what the -PostgreSQL Operator provides to give you an -[out-of-the-box monitoring solution]({{< relref "installation/metrics/_index.md" >}}). - -## Getting Started - -If you want to install the metrics stack, please visit the [installation]({{< relref "installation/metrics/_index.md" >}}) -instructions for the [PostgreSQL Operator Monitoring]({{< relref "installation/metrics/_index.md" >}}) -stack. - -Once the metrics stack is set up, you will need to deploy your PostgreSQL -clusters with monitoring enabled. To do so, you will need to use the `--metrics` -flag as part of the [`pgo create cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}) -command, for example: - -``` -pgo create cluster --metrics hippo -``` - -## Components - -The [PostgreSQL Operator Monitoring]({{< relref "installation/metrics/_index.md" >}}) -stack is made up of several open source components: - -- [pgMonitor](https://github.com/CrunchyData/pgmonitor), which provides the core -of the monitoring infrastructure including the following components: - - [postgres_exporter](https://github.com/CrunchyData/pgmonitor/tree/master/exporter/postgres), - which provides queries used to collect metrics information about a PostgreSQL - instance. - - [Prometheus](https://github.com/prometheus/prometheus), a time-series - database that scrapes and stores the collected metrics so they can be consumed - by other services. - - [Grafana](https://github.com/grafana/grafana), a visualization tool that - provides charting and other capabilities for viewing the collected monitoring - data. - - [Alertmanager](https://github.com/prometheus/alertmanager), a tool that - can send alerts when metrics hit a certain threshold that require someone to - intervene. -- [pgnodemx](https://github.com/CrunchyData/pgnodemx), a PostgreSQL extension -that is able to pull container-specific metrics (e.g. CPU utilization, memory -consumption) from the container itself via SQL queries. - -## Visualizations - -Below is a brief description of all the visualizations provided by the -[PostgreSQL Operator Monitoring]({{< relref "installation/metrics/_index.md" >}}) -stack. Some of the descriptions may include some directional guidance on how to -interpret the charts, though this is only to provide a starting point: actual -causes and effects of issues can vary between systems. 
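
Before digging into the dashboards, it can help to verify that metrics collection was actually enabled for a cluster by checking for the exporter sidecar in its database Pods. The following is only a sketch: it assumes a cluster named `hippo` created with `--metrics`, relies on the `pg-cluster` Pod label, and the exact sidecar container name can vary between releases.

```
# Print each hippo Pod together with the names of its containers; a cluster
# created with --metrics should show a metrics exporter container alongside
# the database container
kubectl get pods --selector=pg-cluster=hippo \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```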
- -Many of the visualizations can be broken down based on the following groupings: - -- Cluster: which PostgreSQL cluster should be viewed -- Pod: the specific Pod or PostgreSQL instance - -### Overview - -![PostgreSQL Operator Monitoring - Overview](/images/postgresql-monitoring-overview.png) - -The overview provides an overview of all of the PostgreSQL clusters that are -being monitoring by the PostgreSQL Operator Monitoring stack. This includes the -following information: - -- The name of the PostgreSQL cluster and the namespace that it is in -- The type of PostgreSQL cluster (HA [high availability] or standalone) -- The status of the cluster, as indicate by color. Green indicates the cluster -is available, red indicates that it is not. - -Each entry is clickable to provide additional cluster details. - -### PostgreSQL Details - -![PostgreSQL Operator Monitoring - Cluster Cluster Details](/images/postgresql-monitoring.png) - -The PostgreSQL Details view provides more information about a specific -PostgreSQL cluster that is being managed and monitored by the PostgreSQL -Operator. These include many key PostgreSQL-specific metrics that help make -decisions around managing a PostgreSQL cluster. These include: - -- Backup Status: The last time a backup was taken of the cluster. Green is good. -Orange means that a backup has not been taken in more than a day and may warrant -investigation. -- Active Connections: How many clients are connected to the database. Too many -clients connected could impact performance and, for values approaching 100%, can -lead to clients being unable to connect. -- Idle in Transaction: How many clients have a connection state of "idle in -transaction". Too many clients in this state can cause performance issues and, -in certain cases, maintenance issues. -- Idle: How many clients are connected but are in an "idle" state. -- TPS: The number of "transactions per second" that are occurring. Usually needs -to be combined with another metric to help with analysis. "Higher is better" -when performing benchmarking. -- Connections: An aggregated view of active, idle, and idle in transaction -connections. -- Database Size: How large databases are within a PostgreSQL cluster. Typically -combined with another metric for analysis. Helps keep track of overall disk -usage and if any triage steps need to occur around PVC size. -- WAL Size: How much space write-ahead logs (WAL) are taking up on disk. This -can contribute to extra space being used on your data disk, or can give you an -indication of how much space is being utilized on a separate WAL PVC. If you -are using replication slots, this can help indicate if a slot is not being -acknowledged if the numbers are much larger than the `max_wal_size` setting (the -PostgreSQL Operator does not use slots by default). -- Row Activity: The number of rows that are selected, inserted, updated, and -deleted. This can help you determine what percentage of your workload is read -vs. write, and help make database tuning decisions based on that, in conjunction -with other metrics. -- Replication Status: Provides guidance information on how much replication lag -there is between primary and replica PostgreSQL instances, both in bytes and -time. This can provide an indication of how much data could be lost in the event -of a failover. 
- -![PostgreSQL Operator Monitoring - Cluster Cluster Details 2](/images/postgresql-monitoring-cluster.png) - -- Conflicts / Deadlocks: These occur when PostgreSQL is unable to complete -operations, which can result in transaction loss. The goal is for these numbers -to be `0`. If these are occurring, check your data access and writing patterns. -- Cache Hit Ratio: A measure of how much of the "working data", e.g. data that -is being accessed and manipulated, resides in memory. This is used to understand -how much PostgreSQL is having to utilize the disk. The target number of this -should be as high as possible. How to achieve this is the subject of books, but -certain takes efforts on your applications use PostgreSQL. -- Buffers: The buffer usage of various parts of the PostgreSQL system. This can -be used to help understand the overall throughput between various parts of the -system. -- Commit & Rollback: How many transactions are committed and rolled back. -- Locks: The number of locks that are present on a given system. - -### Pod Details - -![PostgreSQL Operator Monitoring - Pod Details](/images/postgresql-monitoring-pod.png) - -Pod details provide information about a given Pod or Pods that are being used -by a PostgreSQL cluster. These are similar to "operating system" or "node" -metrics, with the differences that these are looking at resource utilization by -a container, not the entire node. - -It may be helpful to view these metrics on a "pod" basis, by using the Pod -filter at the top of the dashboard. - -- Disk Usage: How much space is being consumed by a volume. -- Disk Activity: How many reads and writes are occurring on a volume. -- Memory: Various information about memory utilization, including the request -and limit as well as actually utilization. -- CPU: The amount of CPU being utilized by a Pod -- Network Traffic: The amount of networking traffic passing through each network -device. -- Container ResourceS: The CPU and memory limits and requests. - -### PostgreSQL Service Health Overview - -![PostgreSQL Operator Monitoring - Service Health Overview](/images/postgresql-monitoring-service.png) - -The Service Health Overview provides information about the Kubernetes Services -that sit in front of the PostgreSQL Pods. This provides information about the -status of the network. - -- Saturation: How much of the available network to the Service is being -consumed. High saturation may cause degraded performance to clients or create -an inability to connect to the PostgreSQL cluster. -- Traffic: Displays the number of transactions per minute that the Service is -handling. -- Errors: Displays the total number of errors occurring at a particular Service. -- Latency: What the overall network latency is when interfacing with the -Service. - -### Alerts - -![PostgreSQL Operator Monitoring - Alerts](/images/postgresql-monitoring-alerts.png) - -Alerting lets one view and receive alerts about actions that require -intervention, for example, a HA cluster that cannot self-heal. The alerting -system is powered by [Alertmanager](https://github.com/prometheus/alertmanager). - -The alerts that come installed by default include: - -- `PGExporterScrapeError`: The Crunchy PostgreSQL Exporter is having issues -scraping statistics used as part of the monitoring stack. -- `PGIsUp`: A PostgreSQL instance is down. -- `PGIdleTxn`: There are too many connections that are in the -"idle in transaction" state. -- `PGQueryTime`: A single PostgreSQL query is taking too long to run. 
Issues a -warning at 12 hours and goes critical after 24. -- `PGConnPerc`: Indicates that there are too many connection slots being used. -Issues a warning at 75% and goes critical above 90%. -- `PGDiskSize`: Indicates that a PostgreSQL database is too large and could be in -danger of running out of disk space. Issues a warning at 75% and goes critical -at 90%. -- `PGReplicationByteLag`: Indicates that a replica is too far behind a primary -instance, which coul risk data loss in a failover scenario. Issues a warning at -50MB an goes critical at 100MB. -- `PGReplicationSlotsInactive`: Indicates that a replication slot is inactive. -Not attending to this can lead to out-of-disk errors. -- `PGXIDWraparound`: Indicates that a PostgreSQL instance is nearing transaction -ID wraparound. Issues a warning at 50% and goes critical at 75%. It's important -that you [vacuum your database](https://info.crunchydata.com/blog/managing-transaction-id-wraparound-in-postgresql) -to prevent this. -- `PGEmergencyVacuum`: Indicates that autovacuum is not running or cannot keep -up with ongoing changes, i.e. it's past its "freeze" age. Issues a warning at -110% and goes critical at 125%. -- `PGArchiveCommandStatus`: Indicates that the archive command, which is used -to ship WAL archives to pgBackRest, is failing. -- `PGSequenceExhaustion`: Indicates that a sequence is over 75% used. -- `PGSettingsPendingRestart`: Indicates that there are settings changed on a -PostgreSQL instance that requires a restart. - -Optional alerts that can be enabled: - -- `PGMinimumVersion`: Indicates if PostgreSQL is below a desired version. -- `PGRecoveryStatusSwitch_Replica`: Indicates that a replica has been promoted -to a primary. -- `PGConnectionAbsent_Prod`: Indicates that metrics collection is absent from a -PostgresQL instance. -- `PGSettingsChecksum`: Indicates that PostgreSQL settings have changed from a -previous state. -- `PGDataChecksum`: Indicates that there are data checksum failures on a -PostgreSQL instance. This could be a sign of data corruption. - -You can modify these alerts as you see fit, and add your own alerts as well! -Please see the [installation instructions]({{< relref "installation/metrics/_index.md" >}}) -for general setup of the PostgreSQL Operator Monitoring stack. diff --git a/docs/content/architecture/namespace.md b/docs/content/architecture/namespace.md deleted file mode 100644 index f6b4265723..0000000000 --- a/docs/content/architecture/namespace.md +++ /dev/null @@ -1,413 +0,0 @@ ---- -title: "Namespace Management" -date: -draft: false -weight: 400 ---- - -# Kubernetes Namespaces and the PostgreSQL Operator - -The PostgreSQL Operator leverages Kubernetes Namespaces to react to actions -taken within a Namespace to keep its PostgreSQL clusters deployed as requested. -Early on, the PostgreSQL Operator was scoped to a single namespace and would -only watch PostgreSQL clusters in that Namspace, but since version 4.0, it has -been expanded to be able to manage PostgreSQL clusters across multiple -namespaces. - -The following provides more information about how the PostgreSQL Operator works -with namespaces, and presents several deployment patterns that can be used to -deploy the PostgreSQL Operator. - -## Namespace Operating Modes - -The PostgreSQL Operator can be run with various Namespace Operating Modes, with each mode -determining whether or not certain namespace capabilities are enabled for the PostgreSQL Operator -installation. 
When the PostgreSQL Operator is run, the Kubernetes environment is inspected to -determine what cluster roles are currently assigned to the `pgo-operator` `ServiceAccount` -(i.e. the `ServiceAccount` running the Pod the PostgreSQL Operator is deployed within). Based -on the `ClusterRoles` identified, one of the namespace operating modes described below will be -enabled for the [PostgreSQL Operator Installation]({{< relref "installation" >}}). Please consult -the [installation](({{< relref "installation" >}})) section for more information on the available -settings. - -### `dynamic` - -Enables full dynamic namespace capabilities, in which the Operator can create, delete and update -any namespaces within a Kubernetes cluster. With `dynamic` mode enabled, the PostgreSQL Operator -can respond to namespace events in a Kubernetes cluster, such as when a namespace is created, and -take an appropriate action, such as adding the PostgreSQL Operator controllers for the newly -created namespace. - -The following defines the namespace permissions required for the `dynamic` mode to be enabled: - -```yaml ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-cluster-role -rules: - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - watch - - create - - update - - delete -``` - -### `readonly` - -In `readonly` mode, the PostgreSQL Operator is still able to listen to namespace events within a -Kubernetes cluster, but it can no longer modify (create, update, delete) namespaces. For example, -if a Kubernetes administrator creates a namespace, the PostgreSQL Operator can respond and create -controllers for that namespace. - -The following defines the namespace permissions required for the `readonly` mode to be enabled: - -```yaml -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-cluster-role -rules: - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - watch -``` - -### `disabled` - -`disabled` mode disables namespace capabilities namespace capabilities within the PostgreSQL -Operator altogether. While in this mode the PostgreSQL Operator will simply attempt to work with -the target namespaces specified during installation. If no target namespaces are specified, then -the Operator will be configured to work within the namespace in which it is deployed. Since the -Operator is unable to dynamically respond to namespace events in the cluster, in the event that -target namespaces are deleted or new target namespaces need to be added, the PostgreSQL Operator -will need to be re-deployed. - -Please note that it is important to redeploy the PostgreSQL Operator following the deletion of a -target namespace to ensure it no longer attempts to listen for events in that namespace. - -The `disabled` mode is enabled the when the PostgreSQL Operator has not been assigned namespace -permissions. - -## RBAC Reconciliation - -By default, the PostgreSQL Operator will attempt to reconcile RBAC resources (ServiceAccounts, -Roles and RoleBindings) within each namespace configured for the PostgreSQL Operator installation. -This allows the PostgreSQL Operator to create, update and delete the various RBAC resources it -requires in order to properly create and manage PostgreSQL clusters within each targeted namespace -(this includes self-healing RBAC resources as needed if removed and/or misconfigured). 
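
Both the namespace operating mode and RBAC reconciliation ultimately depend on the permissions bound to the `pgo-operator` ServiceAccount, so a quick way to sanity check a deployment is to query those permissions directly. This is only a sketch; it assumes the Operator is deployed in the `pgo` namespace (adjust the namespaces to match your installation) and that your user is allowed to impersonate ServiceAccounts.

```
# Can the Operator ServiceAccount watch namespaces? ("yes" suggests the dynamic
# or readonly operating mode is possible; "no" implies disabled)
kubectl auth can-i watch namespaces --as=system:serviceaccount:pgo:pgo-operator

# Can it manage Roles in a target namespace? (needed for RBAC reconciliation)
kubectl auth can-i create roles -n pgo --as=system:serviceaccount:pgo:pgo-operator
```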
- -In order for RBAC reconciliation to function properly, the PostgreSQL Operator ServiceAccount must -be assigned a certain set of permissions. While the PostgreSQL Operator is not concerned with -exactly how it has been assigned the permissions required to reconcile RBAC in each target -namespace, the various [installation methods]({{< relref "installation" >}}) supported by the -PostgreSQL Operator install a recommended set permissions based on the specific Namespace Operating -Mode enabled (see section [Namespace Operating Modes]({{< relref "#namespace-operating-modes" >}}) -above for more information regarding the various Namespace Operating Modes available). - -The following section defines the recommended set of permissions that should be assigned to the -PostgreSQL Operator ServiceAccount in order to ensure proper RBAC reconciliation based on the -specific Namespace Operating Mode enabled. Please note that each PostgreSQL Operator installation -method handles the initial configuration and setup of the permissions shown below based on the -Namespace Operating Mode configured during installation. - -### `dynamic` Namespace Operating Mode - -When using the `dynamic` Namespace Operating Mode, it is recommended that the PostgreSQL Operator -ServiceAccount be granted permissions to manage RBAC inside any namespace in the Kubernetes cluster -via a ClusterRole. This allows for a fully-hands off approach to managing RBAC within each -targeted namespace space. In other words, as namespaces are added and removed post-installation of -the PostgreSQL Operator (e.g. using `pgo create namespace` or `pgo delete namespace`), the Operator -is able to automatically reconcile RBAC in those namespaces without the need for any external -administrative action and/or resource creation. - -The following defines ClusterRole permissions that are assigned to the PostgreSQL Operator -ServiceAccount via the various Operator installation methods when the `dynamic` Namespace Operating -Mode is configured: - -```yaml ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-cluster-role -rules: - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - update - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - roles - - rolebindings - verbs: - - get - - create - - update - - delete - - apiGroups: - - '' - resources: - - configmaps - - endpoints - - pods - - pods/exec - - pods/log - - replicasets - - secrets - - services - - persistentvolumeclaims - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - apps - resources: - - deployments - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - batch - resources: - - jobs - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - crunchydata.com - resources: - - pgclusters - - pgpolicies - - pgreplicas - - pgtasks - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection -``` - -### `readonly` & `disabled` Namespace Operating Modes - -When using the `readonly` or `disabled` Namespace Operating Modes, it is recommended that the -PostgreSQL Operator ServiceAccount be granted permissions to manage RBAC inside of any configured -namespaces using local Roles within each targeted namespace. 
This means that as new namespaces -are added and removed post-installation of the PostgreSQL Operator, an administrator must manually -assign the PostgreSQL Operator ServiceAccount the permissions it requires within each target -namespace in order to successfully reconcile RBAC within those namespaces. - -The following defines the permissions that are assigned to the PostgreSQL Operator ServiceAccount -in each configured namespace via the various Operator installation methods when the `readonly` or -`disabled` Namespace Operating Modes are configured: - -```yaml ---- -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-local-ns - namespace: targetnamespace -rules: - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - update - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - roles - - rolebindings - verbs: - - get - - create - - update - - delete ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: pgo-target-role - namespace: targetnamespace -rules: -- apiGroups: - - '' - resources: - - configmaps - - endpoints - - pods - - pods/exec - - pods/log - - replicasets - - secrets - - services - - persistentvolumeclaims - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection -- apiGroups: - - apps - resources: - - deployments - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection -- apiGroups: - - batch - resources: - - jobs - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection -- apiGroups: - - crunchydata.com - resources: - - pgclusters - - pgpolicies - - pgtasks - - pgreplicas - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection -``` - -### Disabling RBAC Reconciliation - -In the event that the reconciliation behavior discussed above is not desired, it can be fully -disabled by setting `DisableReconcileRBAC` to `true` in the `pgo.yaml` configuration file. When -reconciliation is disabled using this setting, the PostgreSQL Operator will not attempt to -reconcile RBAC in any configured namespace. As a result, any RBAC required by the PostreSQL -Operator a targeted namespace must be manually created by an administrator. - -Please see the the -[`pgo.yaml` configuration guide]({{< relref "configuration/pgo-yaml-configuration.md" >}}), as well -as the documentation for the various [installation methods]({{< relref "installation" >}}) -supported by the PostgreSQL Operator, for guidance on how to properly configure this setting and -therefore disable RBAC reconciliation. - -## Namespace Deployment Patterns - -There are several different ways the PostgreSQL Operator can be deployed in -Kubernetes clusters with respect to Namespaces. - -### One Namespace: PostgreSQL Operator + PostgreSQL Clusters - -![PostgreSQL Operator Own Namespace Deployment](/images/namespace-own.png) - -This patterns is great for testing out the PostgreSQL Operator in development -environments, and can also be used to keep your entire PostgreSQL workload -within a single Kubernetes Namespace. - -This can be set up with the `disabled` Namespace mode. - -### Single Tenant: PostgreSQL Operator Separate from PostgreSQL Clusters - -![PostgreSQL Operator Single Namespace Deployment](/images/namespace-single.png) - -The PostgreSQL Operator can be deployed into its own namespace and manage -PostgreSQL clusters in a separate namespace. 
- -This can be set up with either the `readonly` or `dynamic` Namespace modes. - -### Multi Tenant: PostgreSQL Operator Managing PostgreSQL Clusters in Multiple Namespaces - -![PostgreSQL Operator Multi Namespace Deployment](/images/namespace-multi.png) - -The PostgreSQL Operator can manage PostgreSQL clusters across multiple -namespaces which allows for multi-tenancy. - -This can be set up with either the `readonly` or `dynamic` Namespace modes. - -## [`pgo` client]({{< relref "/pgo-client/_index.md" >}}) and Namespaces - -The [`pgo` client]({{< relref "/pgo-client/_index.md" >}}) needs to be aware of -the Kubernetes Namespaces it is issuing commands to. This can be accomplish with -the `-n` flag that is available on most PostgreSQL Operator commands. For -example, to create a PostgreSQL cluster called `hippo` in the `pgo` namespace, -you would execute the following command: - -``` -pgo create cluster -n pgo hippo -``` - -For convenience, you can set the `PGO_NAMESPACE` environmental variable to -automatically use the desired namespace with the commands. - -For example, to create a cluster named `hippo` in the `pgo` namespace, you could -do the following - -``` -# this export only needs to be run once per session -export PGO_NAMESPACE=pgo - -pgo create cluster hippo -``` diff --git a/docs/content/architecture/overview.md b/docs/content/architecture/overview.md deleted file mode 100644 index 9365787ba8..0000000000 --- a/docs/content/architecture/overview.md +++ /dev/null @@ -1,174 +0,0 @@ ---- -title: "Overview" -date: -draft: false -weight: 100 ---- - -The goal of the Crunchy PostgreSQL Operator is to provide a means to quickly get -your applications up and running on PostgreSQL for both development and -production environments. To understand how the PostgreSQL Operator does this, we -want to give you a tour of its architecture, with explains both the architecture -of the PostgreSQL Operator itself as well as recommended deployment models for -PostgreSQL in production! - -# Crunchy PostgreSQL Operator Architecture - -![Operator Architecture with CRDs](/Operator-Architecture-wCRDs.png) - -The Crunchy PostgreSQL Operator extends Kubernetes to provide a higher-level -abstraction for rapid creation and management of PostgreSQL clusters. The -Crunchy PostgreSQL Operator leverages a Kubernetes concept referred to as -"[Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)” -to create several -[custom resource definitions (CRDs)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) -that allow for the management of PostgreSQL clusters. - -The Custom Resource Definitions include: - -- `pgclusters.crunchydata.com`: Stores information required to manage a -PostgreSQL cluster. This includes things like the cluster name, what storage and -resource classes to use, which version of PostgreSQL to run, information about -how to maintain a high-availability cluster, etc. -- `pgreplicas.crunchydata.com`: Stores information required to manage the -replicas within a PostgreSQL cluster. This includes things like the number of -replicas, what storage and resource classes to use, special affinity rules, etc. -- `pgtasks.crunchydata.com`: A general purpose CRD that accepts a type of task -that is needed to run against a cluster (e.g. take a backup) and tracks the -state of said task through its workflow. 
-- `pgpolicies.crunchydata.com`: Stores a reference to a SQL file that can be -executed against a PostgreSQL cluster. In the past, this was used to manage RLS -policies on PostgreSQL clusters. - -There are also a few legacy Custom Resource Definitions that the PostgreSQL -Operator comes with that will be removed in a future release. - -The PostgreSQL Operator runs as a deployment in a namespace and is composed of -up to four Pods, including: - -- `operator` (image: postgres-operator) - This is the heart of the PostgreSQL -Operator. It contains a series of Kubernetes -[controllers](https://kubernetes.io/docs/concepts/architecture/controller/) that -place watch events on a series of native Kubernetes resources (Jobs, Pods) as -well as the Custom Resources that come with the PostgreSQL Operator (Pgcluster, -Pgtask) -- `apiserver` (image: pgo-apiserver) - This provides an API that a PostgreSQL -Operator User (`pgouser`) can interface with via the `pgo` command-line -interface (CLI) or directly via HTTP requests. The API server can also control -what resources a user can access via a series of RBAC rules that can be defined -as part of a `pgorole`. -- `scheduler` (image: pgo-scheduler) - A container that runs `cron` and allows a -user to schedule repeatable tasks, such as backups (because it is important to - schedule backups in a production environment!) -- `event` (image: pgo-event, optional) - A container that provides an interface -to the `nsq` message queue and transmits information about lifecycle events that -occur within the PostgreSQL Operator (e.g. a cluster is created, a backup is -taken, etc.) - -The main purpose of the PostgreSQL Operator is to create and update information -around the structure of a PostgreSQL Cluster, and to relay information about the -overall status and health of a PostgreSQL cluster. The goal is to also simplify -this process as much as possible for users. For example, let's say we want to -create a high-availability PostgreSQL cluster that has a single replica, -supports having backups in both a local storage area and Amazon S3 and has -built-in metrics and connection pooling, similar to: - -![PostgreSQL HA Cluster](/images/postgresql-cluster-ha-s3.png) - -We can accomplish that with a single command: - -```shell -pgo create cluster hacluster --replica-count=1 --metrics --pgbackrest-storage-type="local,s3" --pgbouncer --pgbadger -``` - -The PostgreSQL Operator handles setting up all of the various Deployments and -sidecars to be able to accomplish this task, and puts in the various constructs -to maximize resiliency of the PostgreSQL cluster. - -You will also notice that **high-availability is enabled by default**. The -Crunchy PostgreSQL Operator uses a distributed-consensus method for PostgreSQL -cluster high-availability, and as such delegates the management of each -cluster's availability to the clusters themselves. This removes the PostgreSQL -Operator from being a single-point-of-failure, and has benefits such as faster -recovery times for each PostgreSQL cluster. For a detailed discussion on -high-availability, please see the [High-Availability](/architecture/high-availability) -section. - -Every single Kubernetes object (Deployment, Service, Pod, Secret, Namespace, -etc.) that is deployed or managed by the PostgreSQL Operator has a Label -associated with the name of `vendor` and a value of `crunchydata`. You can -use Kubernetes selectors to easily find out which objects are being watched by -the PostgreSQL Operator. 
For example, to get all of the managed Secrets in the -default namespace the PostgreSQL Operator is deployed into (`pgo`): - -```shell -kubectl get secrets -n pgo --selector=vendor=crunchydata -``` - -## Kubernetes Deployments: The Crunchy PostgreSQL Operator Deployment Model - -The Crunchy PostgreSQL Operator uses [Kubernetes Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) -for running PostgreSQL clusters instead of StatefulSets or other objects. This -is by design: Kubernetes Deployments allow for more flexibility in how you -deploy your PostgreSQL clusters. - -For example, let's look at a specific PostgreSQL cluster where we want to have -one primary instance and one replica instance. We want to ensure that our -primary instance is using our fastest disks and has more compute resources -available to it. We are fine with our replica having slower disks and less -compute resources. We can create this environment with a command similar to -below: - -```shell -pgo create cluster mixed --replica-count=1 \ - --storage-config=fast --memory=32Gi --cpu=8.0 \ - --replica-storage-config=standard -``` - -Now let's say we want to have one replica available to run read-only queries -against, but we want its hardware profile to mirror that of the primary -instance. We can run the following command: - -```shell -pgo scale mixed --replica-count=1 \ - --storage-config=fast -``` - -Kubernetes Deployments allow us to create heterogeneous clusters with ease and -let us scale them up and down as we please. Additional components in our -PostgreSQL cluster, such as the pgBackRest repository or an optional pgBouncer, -are deployed as Kubernetes Deployments as well. - -We can also leverage Kubernees Deployments to apply -[Node Affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity) -rules to individual PostgreSQL instances. For instance, we may want to force one -or more of our PostgreSQL replicas to run on Nodes in a different region than -our primary PostgreSQL instances. - -Using Kubernetes Deployments does create additional management complexity, but -the good news is: the PostgreSQL Operator manages it for you! Being aware of -this model can help you understand how the PostgreSQL Operator gives you maximum -flexibility for your PostgreSQL clusters while giving you the tools to -troubleshoot issues in production. - -The last piece of this model is the use of [Kubernetes Services](https://kubernetes.io/docs/concepts/services-networking/service/) -for accessing your PostgreSQL clusters and their various components. The -PostgreSQL Operator puts services in front of each Deployment to ensure you have -a known, consistent means of accessing your PostgreSQL components. - -Note that in some production environments, there can be delays in accessing -Services during transition events. The PostgreSQL Operator attempts to mitigate -delays during critical operations (e.g. failover, restore, etc.) by directly -accessing the Kubernetes Pods to perform given actions. - - -For a detailed analysis, please see -[Using Kubernetes Deployments for Running PostgreSQL](https://info.crunchydata.com/blog/using-kubernetes-deployments-for-running-postgresql). - -# Additional Architecture Information - -There is certainly a lot to unpack in the overall architecture of the Crunchy -PostgreSQL Operator. Understanding the architecture will help you to plan -the deployment model that is best for your environment. 
For more information on -the architectures of various components of the PostgreSQL Operator, please read -onward! diff --git a/docs/content/architecture/pgadmin4.md b/docs/content/architecture/pgadmin4.md deleted file mode 100644 index 6a09838bfd..0000000000 --- a/docs/content/architecture/pgadmin4.md +++ /dev/null @@ -1,71 +0,0 @@ ---- -title: "pgAdmin 4" -date: -draft: false -weight: 900 ---- - -![pgAdmin 4 Query](/images/pgadmin4-query.png) - -[pgAdmin 4](https://www.pgadmin.org/) is a popular graphical user interface that -makes it easy to work with PostgreSQL databases from both a desktop or web-based -client. With its ability to manage and orchestrate changes for PostgreSQL users, -the PostgreSQL Operator is a natural partner to keep a pgAdmin 4 environment -synchronized with a PostgreSQL environment. - -The PostgreSQL Operator lets you deploy a pgAdmin 4 environment alongside a -PostgreSQL cluster and keeps users' database credentials synchronized. You can -simply log into pgAdmin 4 with your PostgreSQL username and password and -immediately have access to your databases. - -## Deploying pgAdmin 4 - -For example, let's use a PostgreSQL cluster called hippo `hippo` that has a user -named `hippo` with password `datalake`: - -``` -pgo create cluster hippo --username=hippo --password=datalake -``` - -After the PostgreSQL cluster becomes ready, you can create a pgAdmin 4 -deployment with the [`pgo create pgadmin`]({{< relref "/pgo-client/reference/pgo_create_pgadmin.md" >}}) -command: - -``` -pgo create pgadmin hippo -``` - -This creates a pgAdmin 4 deployment unique to this PostgreSQL cluster and -synchronizes the PostgreSQL user information into it. To access pgAdmin 4, you -can set up a port-forward to the Service, which follows the pattern `-pgadmin`, to port `5050`: - -``` -kubectl port-forward svc/hippo-pgadmin 5050:5050 -``` - -Point your browser at `http://localhost:5050` and use your database -username (e.g. `hippo`) and password (e.g. `datalake`) to log in. Though the -prompt says "email address", using your PostgreSQL username will work. - -![pgAdmin 4 Login Page](/images/pgadmin4-login.png) - -(**Note**: if your password does not appear to work, you can retry setting up -the user with the [`pgo update user`]({{< relref "/pgo-client/reference/pgo_update_user.md" >}}) -command: `pgo update user hippo --password=datalake`) - -## User Synchronization - -The [`pgo create user`]({{< relref "/pgo-client/reference/pgo_create_user.md" >}}), -[`pgo update user`]({{< relref "/pgo-client/reference/pgo_update_user.md" >}}), -and [`pgo delete user`]({{< relref "/pgo-client/reference/pgo_delete_user.md" >}}) -commands are synchronized with the pgAdmin 4 deployment. Note that if you use -`pgo create user` without the `--managed` flag prior to deploying pgAdmin 4, -then the user's credentials will not be synchronized to the pgAdmin 4 -deployment. However, a subsequent run of `pgo update user --password` will -synchronize the credentials with pgAdmin 4. - -## Deleting pgAdmin 4 - -You can remove the pgAdmin 4 deployment with the -[`pgo delete pgadmin`]({{< relref "/pgo-client/reference/pgo_delete_pgadmin.md" >}}) -command. 
diff --git a/docs/content/architecture/postgres-operator-containers-overview.md b/docs/content/architecture/postgres-operator-containers-overview.md
deleted file mode 100644
index e9d9927259..0000000000
--- a/docs/content/architecture/postgres-operator-containers-overview.md
+++ /dev/null
@@ -1,49 +0,0 @@
---
title: "PostgreSQL Containers"
date:
draft: false
weight: 600
---

## PostgreSQL Operator Containers Overview

The PostgreSQL Operator orchestrates a series of PostgreSQL and PostgreSQL-related containers that enable rapid deployment of PostgreSQL, including administration and monitoring tools, in a Kubernetes environment. The PostgreSQL Operator supports PostgreSQL 9.5+ with multiple PostgreSQL cluster deployment strategies and a variety of PostgreSQL-related extensions and tools, enabling enterprise-grade PostgreSQL-as-a-Service. A full list of the containers supported by the PostgreSQL Operator is provided below.

### PostgreSQL Server and Extensions

* **PostgreSQL** (crunchy-postgres-ha). PostgreSQL database server. The crunchy-postgres container image is unmodified, open source PostgreSQL packaged and maintained by Crunchy Data.

* **PostGIS** (crunchy-postgres-ha-gis). PostgreSQL database server including the PostGIS extension. The crunchy-postgres-gis container image is unmodified, open source PostgreSQL packaged and maintained by Crunchy Data. This image is identical to the crunchy-postgres image except that it includes the open source geospatial extension PostGIS for PostgreSQL, in addition to the language extension PL/R, which allows for writing functions in the R statistical computing language.

### Backup and Restore

* **pgBackRest** (crunchy-backrest-restore). pgBackRest is a high-performance backup and restore utility for PostgreSQL. The crunchy-backrest-restore container executes the pgBackRest utility, allowing FULL and DELTA restore capability.

* **pgdump** (crunchy-pgdump). The crunchy-pgdump container executes either a pg_dump or pg_dumpall database backup against another PostgreSQL database.

* **crunchy-pgrestore** (restore). The restore image provides a means of performing a restore of a dump from pg_dump or pg_dumpall via psql or pg_restore to a PostgreSQL container database.

### Administration Tools

* **pgAdmin 4** (crunchy-pgadmin4). pgAdmin 4 is a graphical user interface administration tool for PostgreSQL. The crunchy-pgadmin4 container executes the pgAdmin 4 web application.

* **pgBadger** (crunchy-pgbadger). pgBadger is a PostgreSQL log analyzer with fully detailed reports and graphs. The crunchy-pgbadger container executes the pgBadger utility, which generates a PostgreSQL log analysis report using a small HTTP server running on the container.

* **pg_upgrade** (crunchy-upgrade). The crunchy-upgrade container contains the PostgreSQL 9.5, 9.6, 10, 11, and 12 packages in order to perform a pg_upgrade from 9.5 to 9.6, 9.6 to 10, 10 to 11, and 11 to 12.

* **scheduler** (crunchy-scheduler). The crunchy-scheduler container provides a cron-like microservice for automating pgBackRest backups within a single namespace.

### Metrics and Monitoring

* **Metrics Collection** (crunchy-postgres-exporter). The crunchy-postgres-exporter container provides real-time metrics about the PostgreSQL database via an API. These metrics are scraped and stored by a Prometheus time-series database and are then graphed and visualized through the open source data visualizer Grafana.

* **Grafana** (grafana).
Hosts an open source web-based graphing dashboard called Grafana. Provides visual dashboards for monitoring PostgreSQL clusters, specifically using Crunchy PostgreSQL Exporter data stored within Prometheus. - -* **Prometheus** (prometheus). Prometheus is a multi-dimensional time series data model with an elastic query language. It is used in collaboration with the Crunchy PostgreSQL Exporter and Grafana to provide and store metrics. - -* **Alertmanager** (alertmanager). Handles alerts sent by Prometheus by deduplicating, grouping, and routing them to reciever integrations. - -### Connection Pooling - -* **pgbouncer** (crunchy-pgbouncer). pgbouncer is a lightweight connection pooler for PostgreSQL. The crunchy-pgbouncer container provides a pgbouncer image. diff --git a/docs/content/architecture/provisioning.md b/docs/content/architecture/provisioning.md deleted file mode 100644 index 23734ee180..0000000000 --- a/docs/content/architecture/provisioning.md +++ /dev/null @@ -1,200 +0,0 @@ ---- -title: "Provisioning" -date: -draft: false -weight: 100 ---- - -What happens when the Crunchy PostgreSQL Operator creates a PostgreSQL cluster? - -![PostgreSQL HA Cluster](/images/postgresql-cluster-ha-s3.png) - -First, an entry needs to be added to the `Pgcluster` CRD that provides the -essential attributes for maintaining the definition of a PostgreSQL cluster. -These attributes include: - -- Cluster name -- The storage and resource definitions to use -- References to any secrets required, e.g. ones to the pgBackRest repository -- High-availability rules -- Which sidecars and ancillary services are enabled, e.g. pgBouncer, pgMonitor - -After the Pgcluster CRD entry is set up, the PostgreSQL Operator handles various -tasks to ensure that a healthy PostgreSQL cluster can be deployed. These -include: - -- Allocating the [PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)s -that are used to store the PostgreSQL data as well as the pgBackRest repository -- Setting up the Secrets specific to this PostgreSQL cluster -- Setting up the ConfigMap entries specific for this PostgreSQL cluster, -including entries that may contain custom configurations as well as ones that -are used for the PostgreSQL cluster to manage its high-availability -- Creating Deployments for the PostgreSQL primary instance and the pgBackRest -repository - -You will notice the presence of a pgBackRest repository. As of version 4.2, this -is a mandatory feature for clusters that are deployed by the PostgreSQL -Operator. In addition to providing an archive for the PostgreSQL write-ahead -logs (WAL), the pgBackRest repository serves several critical functions, -including: - -- Used to efficiently provision new replicas that are added to the PostgreSQL -cluster -- Prevent replicas from falling out of sync from the PostgreSQL primary by -allowing them to replay old WAL logs -- Allow failed primaries to automatically and efficiently heal using the -"delta restore" feature -- Serves as the basis for the cluster cloning feature -- ...and of course, allow for one to take full, differential, and incremental -backups and perform full and point-in-time restores - -The pgBackRest repository can be configured to use storage that resides within -the Kubernetes cluster (the `local` option), Amazon S3 or a storage system that -uses the S3 protocol (the `s3` option), or both (`local,s3`). 
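As a sketch of how the repository type can be chosen when the cluster is created (the `--pgbackrest-storage-type` flag name is an assumption based on 4.x releases and should be verified against `pgo create cluster --help` for your version):

```shell
# Assumed flag: keep a local pgBackRest repository and also push backups
# and WAL archives to S3.
pgo create cluster hippo --pgbackrest-storage-type=local,s3
```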
Once the PostgreSQL primary instance is ready, there are two follow-up actions that the PostgreSQL Operator takes to properly leverage the pgBackRest repository:

- A new pgBackRest stanza is created
- An initial backup is taken to facilitate the creation of any new replica

At this point, if new replicas were requested as part of the `pgo create` command, they are provisioned from the pgBackRest repository.

There is a Kubernetes Service created for the Deployment of the primary PostgreSQL instance, one for the pgBackRest repository, and one that encompasses all of the replicas. Additionally, if the connection pooler pgBouncer is deployed with this cluster, it will have a Service as well.

An optional monitoring sidecar can be deployed as well. The sidecar, called `exporter`, uses the `crunchy-postgres-exporter` container that is a part of pgMonitor and scrapes key health metrics into a Prometheus instance. See Monitoring for more information on how this works.

## Horizontal Scaling

There are many reasons why you may want to horizontally scale your PostgreSQL cluster:

- Add more redundancy by having additional replicas
- Leverage load balancing for your read-only queries
- Add in a new replica that has more storage or a different container resource profile, and then fail over to it as the new primary

and more.

The PostgreSQL Operator enables the ability to scale up and down via the `pgo scale` and `pgo scaledown` commands respectively. When you run `pgo scale`, the PostgreSQL Operator takes the following steps:

- The PostgreSQL Operator creates a new Kubernetes Deployment with the information specified from the `pgo scale` command combined with the information already stored as part of managing the existing PostgreSQL cluster
- During the provisioning of the replica, a pgBackRest restore takes place in order to bring it up to the point of the last backup. If data already exists as part of this replica, then a "delta restore" is performed. (**NOTE**: If you have not taken a backup in a while and your database is large, consider taking a backup before scaling up.)
- The new replica boots up in recovery mode and recovers to the latest point in time. This allows it to catch up to the current primary.
- Once the replica has recovered, it joins the primary as a streaming replica!

If pgMonitor is enabled, an `exporter` sidecar is also added to the replica Deployment.

Scaling down works in the opposite way:

- The PostgreSQL instance on the scaled-down replica is stopped. By default, the data is explicitly wiped out unless the `--keep-data` flag on `pgo scaledown` is specified. Once the data is removed, the PersistentVolumeClaim (PVC) is also deleted
- The Kubernetes Deployment associated with the replica is removed, as well as any other Kubernetes objects that are specifically associated with this replica

## [Custom Configuration]({{< relref "/advanced/custom-configuration.md" >}})

PostgreSQL workloads often need tuning and additional configuration in production environments, and the PostgreSQL Operator allows for this via its ability to manage [custom PostgreSQL configuration]({{< relref "/advanced/custom-configuration.md" >}}).

The custom configuration can be edited via a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) that follows the pattern of `-pgha-config`, where `` would be `hippo` in `pgo create cluster hippo`.
When the ConfigMap is edited, the changes are automatically pushed out to all of the PostgreSQL instances within a cluster.

For more information on how this works and what configuration settings are editable, please visit the "[Custom PostgreSQL configuration]({{< relref "/advanced/custom-configuration.md" >}})" section of the documentation.

## Provisioning Using a Backup from Another PostgreSQL Cluster

When provisioning a new PostgreSQL cluster, it is possible to bootstrap the cluster using an existing backup from either another PostgreSQL cluster that is currently running, or from a PostgreSQL cluster that no longer exists (specifically, a cluster that was deleted using the `keep-backups` option, as discussed in the [Deprovisioning](#deprovisioning) section below). This is accomplished by performing a `pgbackrest restore` during cluster initialization in order to populate the initial `PGDATA` directory for the new cluster using the contents of a backup from another cluster.

To leverage this capability, the name of the cluster containing the backup that should be utilized when restoring simply needs to be specified using the `restore-from` option when creating a new cluster:

```shell
pgo create cluster mycluster2 --restore-from=mycluster1
```

By default, pgBackRest will restore the latest backup available in the repository, and will replay all available WAL archives. However, additional pgBackRest options can be specified using the `restore-opts` option, which allows the restore command to be further tailored and customized. For instance, the following demonstrates how a point-in-time restore can be utilized when creating a new cluster:

```shell
pgo create cluster mycluster2 \
  --restore-from=mycluster1 \
  --restore-opts="--type=time --target='2020-07-02 20:19:36.13557+00'"
```

Additionally, if bootstrapping from a cluster that utilizes AWS S3 storage with pgBackRest (or a cluster that utilized AWS S3 storage, in the case of a former cluster), you can also specify `s3` as the repository type in order to restore from a backup stored in an S3 storage bucket:

```shell
pgo create cluster mycluster2 \
  --restore-from=mycluster1 \
  --restore-opts="--repo-type=s3"
```

When restoring from a cluster that is currently running, the new cluster will simply connect to the existing pgBackRest repository host for that cluster in order to perform the pgBackRest restore. If restoring from a former cluster that has since been deleted, a new pgBackRest repository host will be deployed for the sole purpose of bootstrapping the new cluster, and will then be destroyed once the restore is complete. Also, please note that it is only possible for one cluster to bootstrap from another cluster (whether running or not) at any given time.

## Deprovisioning

There may come a point where you need to completely deprovision, or delete, a PostgreSQL cluster. You can delete a cluster managed by the PostgreSQL Operator using the `pgo delete` command. By default, all data and backups are removed when you delete a PostgreSQL cluster, but there are some options that allow you to retain data, including:

- `--keep-backups` - this retains the pgBackRest repository. This can be used to restore the data to a new PostgreSQL cluster.
- `--keep-data` - this retains the PostgreSQL data directory (aka `PGDATA`) from the primary PostgreSQL instance in the cluster. This can be used to recreate the PostgreSQL cluster of the same name.
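For example, a minimal sketch of deleting the `hippo` cluster while retaining its pgBackRest repository, so that a new cluster could later be bootstrapped from it with `--restore-from`:

```shell
# Delete the cluster but keep the pgBackRest repository for future restores.
pgo delete cluster hippo --keep-backups
```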
- -When the PostgreSQL cluster is deleted, the following takes place: - -- All PostgreSQL instances are stopped. By default, the data is explicitly wiped -out unless the `--keep-data` flag on `pgo scaledown` is specified. Once the data -is removed, the PersistentVolumeClaim (PVC) is also deleted -- Any Services, ConfigMaps, Secrets, etc. Kubernetes objects are all deleted -- The Kubernetes Deployments associated with the PostgreSQL instances are -removed, as well as the Kubernetes Deployments associated with pgBackRest -repository and, if deployed, the pgBouncer connection pooler diff --git a/docs/content/architecture/storage-overview.md b/docs/content/architecture/storage-overview.md deleted file mode 100644 index d9741f584f..0000000000 --- a/docs/content/architecture/storage-overview.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -title: "Storage" -date: -draft: false -weight: 700 ---- - -## Storage and the PostgreSQL Operator - -The PostgreSQL Operator allows for a variety of different configurations of persistent storage that can be leveraged by the PostgreSQL instances or clusters it deploys. - -The PostgreSQL Operator works with several different storage types, HostPath, Network File System(NFS), and Dynamic storage. - -* Hostpath is the simplest storage and useful for single node testing. - -* NFS provides the ability to do single and multi-node testing. - -Hostpath and NFS both require you to configure persistent volumes so that you can make claims towards those volumes. You will need to monitor the persistent volumes so that you do not run out of available volumes to make claims against. - -Dynamic storage classes provide a means for users to request persistent volume claims and have the persistent volume dynamically created for you. You will need to monitor disk space with dynamic storage to make sure there is enough space for users to request a volume. There are multiple providers of dynamic storage classes to choose from. You will need to configure what works for your environment and size the Physical Volumes, Persistent Volumes (PVs), appropriately. - -Once you have determined the type of storage you will plan on using and setup PV’s you need to configure the Operator to know about it. You will do this in the pgo.yaml file. - -If you are deploying to a cloud environment with multiple zones, for instance Google Kubernetes Engine (GKE), you will want to review topology aware storage class configurations. diff --git a/docs/content/architecture/tablespaces.md b/docs/content/architecture/tablespaces.md deleted file mode 100644 index 8bb3213290..0000000000 --- a/docs/content/architecture/tablespaces.md +++ /dev/null @@ -1,185 +0,0 @@ ---- -title: "Tablespaces" -date: -draft: false -weight: 850 ---- - -A [Tablespace](https://www.postgresql.org/docs/current/manage-ag-tablespaces.html) -is a PostgreSQL feature that is used to store data on a volume that is different -from the primary data directory. While most workloads do not require them, -tablespaces can be particularly helpful for larger data sets or utilizing -particular hardware to optimize performance on a particular PostgreSQL object -(a table, index, etc.). Some examples of use cases for tablespaces include: - -- Partitioning larger data sets across different volumes -- Putting data onto archival systems -- Utilizing hardware (or a storage class) for a particular database -- Storing sensitive data on a volume that supports transparent data-encryption -(TDE) - -and others. 
- -In order to use PostgreSQL tablespaces properly in a highly-available, -distributed system, there are several considerations that need to be accounted -for to ensure proper operations: - -- Each tablespace must have its own volume; this means that every tablespace for -every replica in a system must have its own volume. -- The filesystem map must be consistent across the cluster -- The backup & disaster recovery management system must be able to safely backup -and restore data to tablespaces - -Additionally, a tablespace is a critical piece of a PostgreSQL instance: if -PostgreSQL expects a tablespace to exist and it is unavailable, this could -trigger a downtime scenario. - -While there are certain challenges with creating a PostgreSQL cluster with -high-availability along with tablespaces in a Kubernetes-based environment, the -PostgreSQL Operator adds many conveniences to make it easier to use -tablespaces in applications. - -## How Tablespaces Work in the PostgreSQL Operator - -As stated above, it is important to ensure that every tablespace created has its -own volume (i.e. its own [persistent volume claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)). -This is especially true for any replicas in a cluster: you don't want multiple -PostgreSQL instances writing to the same volume, as this is a recipe for -disaster! - -One of the keys to working with tablespaces in a high-availability cluster is to -ensure the filesystem that the tablespaces map to is consistent. Specifically, -it is imperative to have the `LOCATION` parameter that is used by PostgreSQL to -indicate where a tablespace resides to match in each instance in a cluster. - -The PostgreSQL Operator achieves this by mounting all of its tablespaces to a -directory called `/tablespaces` in the container. While each tablespace will -exist in a unique PVC across all PostgreSQL instances in a cluster, each -instance's tablespaces will mount in a predictable way in `/tablespaces`. - -The PostgreSQL Operator takes this one step further and abstracts this away from -you. When your PostgreSQL cluster initialized, the tablespace definition is -automatically created in PostgreSQL; you can start using it immediately! An -example of this is demonstrated in the next section. - -The PostgreSQL Operator ensures the availability of the tablespaces across the -different lifecycle events that occur on a PostgreSQL cluster, including: - -- High-Availability: Data in the tablespaces is replicated across the cluster, -and is available after a downtime event -- Disaster Recovery: Tablespaces are backed up and are properly restored during -a recovery -- Clone: Tablespaces are created in any cloned or restored cluster -- Deprovisioining: Tablespaces are deleted when a PostgreSQL instance or cluster -is deleted - -## Adding Tablespaces to a New Cluster - -Tablespaces can be used in a cluster with the [`pgo create cluster`](/pgo-client/reference/pgo_create_cluster/) -command. 
The command follows this general format: - -```shell -pgo create cluster hacluster \ - --tablespace=name=tablespace1:storageconfig=storageconfigname \ - --tablespace=name=tablespace2:storageconfig=storageconfigname -``` - -For example, to create tablespaces name `faststorage1` and `faststorage2` on -PVCs that use the `nfsstorage` storage type, you would execute the following -command: - -```shell -pgo create cluster hacluster \ - --tablespace=name=faststorage1:storageconfig=nfsstorage \ - --tablespace=name=faststorage2:storageconfig=nfsstorage -``` - -Once the cluster is initialized, you can immediately interface with the -tablespaces! For example, if you wanted to create a table called `sensor_data` -on the `faststorage1` tablespace, you could execute the following SQL: - -```sql -CREATE TABLE sensor_data ( - sensor_id int, - sensor_value numeric, - created_at timestamptz DEFAULT CURRENT_TIMESTAMP -) -TABLESPACE faststorage1; -``` - -## Adding Tablespaces to Existing Clusters - -You can also add a tablespace to an existing PostgreSQL cluster with the -[`pgo update cluster`](/pgo-client/reference/pgo_update_cluster/) command. -Adding a tablespace to a cluster uses a similar syntax to creating a cluster -with tablespaces, for example: - -```shell -pgo update cluster hacluster \ - --tablespace=name=tablespace3:storageconfig=storageconfigname -``` - -**NOTE**: This operation can cause downtime. In order to add a tablespace to a -PostgreSQL cluster, persistent volume claims (PVCs) need to be created and -mounted to each PostgreSQL instance in the cluster. The act of mounting a new -PVC to a Kubernetes Deployment causes the Pods in the deployment to restart. - -When the operation completes, the tablespace will be set up and accessible to -use within the PostgreSQL cluster. - -## Removing Tablespaces - -Removing a tablespace is a nontrivial operation. PostgreSQL does not provide a -`DROP TABLESPACE .. CASCADE` command that would drop any associated objects with -a tablespace. Additionally, the PostgreSQL documentation covering the -[`DROP TABLESPACE`](https://www.postgresql.org/docs/current/sql-droptablespace.html) -command goes on to note: - -> A tablespace can only be dropped by its owner or a superuser. The tablespace -> must be empty of all database objects before it can be dropped. It is possible -> that objects in other databases might still reside in the tablespace even if -> no objects in the current database are using the tablespace. Also, if the -> tablespace is listed in the temp_tablespaces setting of any active session, -> the DROP might fail due to temporary files residing in the tablespace. - -Because of this, and to avoid a situation where a PostgreSQL cluster is left in -an inconsistent state due to trying to remove a tablespace, the PostgreSQL -Operator does not provide any means to remove tablespaces automatically. If you -do need to remove a tablespace from a PostgreSQL deployment, we recommend -following this procedure: - -1. As a database administrator: - 1. Log into the primary instance of your cluster. - 1. Drop any objects that reside within the tablespace you wish to delete. - These can be tables, indexes, and even databases themselves - 1. When you believe you have deleted all objects that depend on the tablespace - you wish to remove, you can delete this tablespace from the PostgreSQL cluster - using the `DROP TABLESPACE` command. -1. 
As a Kubernetes user who can modify Deployments and edit an entry in the - pgclusters.crunchydata.com CRD in the Namespace that the PostgreSQL cluster is - in: - 1. For each Deployment that represents a PostgreSQL instance in the cluster - (i.e. `kubectl -n get deployments --selector=pgo-pg-database=true,pg-cluster=`), - edit the Deployment and remove the Volume and VolumeMount entry for the - tablespace. If the tablespace is called `hippo-ts`, the Volume entry will look - like: - ```yaml - - name: tablespace-hippo-ts - persistentVolumeClaim: - claimName: -tablespace-hippo-ts - ``` - and the VolumeMount entry will look like: - ```yaml - - mountPath: /tablespaces/hippo-ts - name: tablespace-hippo-ts - ``` - 2. Modify the CR entry for the PostgreSQL cluster and remove the - `tablespaceMounts` entry. If your PostgreSQL cluster is called `hippo`, then - the name of the CR entry is also called `hippo`. If your tablespace is called - `hippo-ts`, then you would remove the YAML stanza called `hippo-ts` from the - `tablespaceMounts` entry. - -## More Information - -For more information on how tablespaces work in PostgreSQL please refer to the -[PostgreSQL manual](https://www.postgresql.org/docs/current/manage-ag-tablespaces.html). diff --git a/docs/content/architecture/users-role-overview.md b/docs/content/architecture/users-role-overview.md deleted file mode 100644 index 1362b85f8f..0000000000 --- a/docs/content/architecture/users-role-overview.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: "User & Roles" -date: -draft: false -weight: 800 ---- - -## User Roles in the PostgreSQL Operator - -The PostgreSQL Operator, when used in conjunction with the associated PostgreSQL Containers and Kubernetes, provides you with the ability to host your own open source, Kubernetes -native PostgreSQL-as-a-Service infrastructure. - -In installing, configuring and operating the PostgreSQL Operator as a PostgreSQL-as-a-Service capability, the following user roles will be required: - - -|Role | Applicable Component | Authorized Privileges and Functions Performed | -|-----------|---------------------------|-----------------------------------------------| -|Platform Admininistrator (Privileged User)| PostgreSQL Operator | The Platform Admininistrator is able to control all aspects of the PostgreSQL Operator functionality, including: provisioning and scaling clusters, adding PostgreSQL Administrators and PostgreSQL Users to clusters, setting PostgreSQL cluster security privileges, managing other PostgreSQL Operator users, and more. This user can have access to any database that is deployed and managed by the PostgreSQL Operator.| -|Platform User | PostgreSQL Operator | The Platform User has access to a limited subset of PostgreSQL Operator functionality that is defined by specific RBAC rules. A Platform Administrator manages the specific permissions for an Platform User specific permissions. A Platform User only receives a permission if its is explicitly granted to them.| -|PostgreSQL Administrator(Privileged Account) | PostgreSQL Containers | The PostgreSQL Administrator is the equivalent of a PostgreSQL superuser (e.g. 
the "postgres" user) and can perform all the actions that a PostgreSQL superuser is permitted to do, which includes adding additional PostgreSQL Users, creating databases within the cluster.| -|PostgreSQL User|PostgreSQL Containers | The PostgreSQL User has access to a PostgreSQL Instance or Cluster but must be granted explicit permissions to perform actions in PostgreSQL based upon their role membership. | - -As indicated in the above table, both the Operator Administrator and the PostgreSQL Administrators represent privilege users with components within the PostgreSQL Operator. - -### Platform Administrator - -For purposes of this User Guide, the "Platform Administrator" is a Kubernetes system user with PostgreSQL Administrator privileges and has PostgreSQL Operator admin rights. While -PostgreSQL Operator admin rights are not required, it is helpful to have admin rights to be able to verify that the installation completed successfully. The Platform Administrator -will be responsible for managing the installation of the Crunchy PostgreSQL Operator service in Kubernetes. That installation can be on RedHat OpenShift 3.11+, Kubeadm, or even -Google’s Kubernetes Engine. - -### Platform User - -For purposes of this User Guide, a "Platform User" is a Kubernetes system user and has PostgreSQL Operator admin rights. While admin rights are not required for a typical user, -testing out functiontionality will be easier, if you want to limit functionality to specific actions section 2.4.5 covers roles. The Platform User is anyone that is interacting with -the Crunchy PostgreSQL Operator service in Kubernetes via the PGO CLI tool. Their rights to carry out operations using the PGO CLI tool is governed by PGO Roles(discussed in more -detail later) configured by the Platform Administrator. If this is you, please skip to section 2.3.1 where we cover configuring and installing PGO. - -### PostgreSQL User - -In the context of the PostgreSQL Operator, the "PostgreSQL User" is any person interacting with the PostgreSQL database using database specific connections, such as a language -driver or a database management GUI. - -The default PostgreSQL instance installation via the PostgreSQL Operator comes with the following users: - -|Role name | Attributes | -----------------|----------------------------------------------------------------| -|postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | -|primaryuser | Replication | -|testuser | | - -The postgres user will be the admin user for the database instance. The primary user is used for replication between primary and replicas. The testuser is a normal user that has -access to the database “userdb” that is created for testing purposes. diff --git a/docs/content/contributing/_index.md b/docs/content/contributing/_index.md deleted file mode 100644 index ccfdae562d..0000000000 --- a/docs/content/contributing/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: "Contributing" -date: -draft: false -weight: 90 ---- diff --git a/docs/content/contributing/developer-setup.md b/docs/content/contributing/developer-setup.md deleted file mode 100644 index 113139e607..0000000000 --- a/docs/content/contributing/developer-setup.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -title: "Development Environment" -date: -draft: false -weight: 305 ---- - -The [PostgreSQL Operator](https://github.com/crunchydata/postgres-operator) is an open source project hosted on GitHub. 
- -This guide is intended for those wanting to build the Operator from source or contribute via pull requests. - - -# Prerequisites - -The target development host for these instructions is a CentOS 7 or RHEL 7 host. Others operating systems -are possible, however we do not support building or running the Operator on others at this time. - -## Environment Variables - -The following environment variables are expected by the steps in this guide: - -Variable | Example | Description --------- | ------- | ----------- -`GOPATH` | $HOME/odev | Golang project directory -`PGOROOT` | $GOPATH/src/github.com/crunchydata/postgres-operator | Operator repository location -`PGO_CONF_DIR` | $PGOROOT/installers/ansible/roles/pgo-operator/files | Operator Config Template Directory -`PGO_BASEOS` | {{< param centosBase >}} | Base OS for container images -`PGO_CMD` | kubectl | Cluster management tool executable -`PGO_IMAGE_PREFIX` | crunchydata | Container image prefix -`PGO_OPERATOR_NAMESPACE` | pgo | Kubernetes namespace for the operator -`PGO_VERSION` | {{< param operatorVersion >}} | Operator version - -{{% notice tip %}} -`examples/envs.sh` contains the above variable definitions as well as others used by postgres-operator tools -{{% /notice %}} - - -## Other requirements - -* The development host has been created, has access to `yum` updates, and has a regular user account with `sudo` rights to run `yum`. -* `GOPATH` points to a directory containing `src`,`pkg`, and `bin` directories. -* The development host has `$GOPATH/bin` added to its `PATH` environment variable. Development tools will be installed to this path. Defining a `GOBIN` environment variable other than `$GOPATH/bin` may yield unexpected results. -* The development host has `git` installed and has cloned the postgres-operator repository to `$GOPATH/src/github.com/crunchydata/postgres-operator`. Makefile targets below are run from the repository directory. -* Deploying the Operator will require deployment access to a Kubernetes or OpenShift cluster -* Once you have cloned the git repository, you will need to download the CentOS 7 repository files and GPG keys and place them in the `$PGOROOT/conf` directory. You can do so with the following code: - -```shell -cd $PGOROOT -curl https://api.developers.crunchydata.com/downloads/repo/rpm-centos/postgresql12/crunchypg12.repo > conf/crunchypg12.repo -curl https://api.developers.crunchydata.com/downloads/repo/rpm-centos/postgresql11/crunchypg11.repo > conf/crunchypg11.repo -curl https://api.developers.crunchydata.com/downloads/gpg/RPM-GPG-KEY-crunchydata-dev > conf/RPM-GPG-KEY-crunchydata-dev -``` - -# Building - -## Dependencies - -Configuring build dependencies is automated via the `setup` target in the project Makefile: - - make setup - -The setup target ensures the presence of: - -* `GOPATH` and `PATH` as described in the prerequisites -* EPEL yum repository -* [`go`](https://golang.org/) compiler version 1.13+ -* [`dep`](https://golang.github.io/dep/) dependency manager -* NSQ messaging binaries -* `docker` container tool -* `buildah` OCI image building tool version 1.14.9+ - -By default, docker is not configured to run its daemon. Refer to the [docker post-installation instructions](https://docs.docker.com/install/linux/linux-postinstall/) to configure it to run once or at system startup. This is not done automatically. 
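On a systemd-based development host such as CentOS 7 or RHEL 7, that typically amounts to the following; this is a general systemd sketch rather than a project-specific step:

```shell
# Start the Docker daemon now...
sudo systemctl start docker
# ...and have it start automatically at boot.
sudo systemctl enable docker
```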
- -## Code Generation - -Code generation is leveraged to generate the clients and informers utilized to interact with the -various [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) -(e.g. `pgclusters`) comprising the PostgreSQL Operator declarative API. Code generation is provided -by the [Kubernetes code-generator project](https://github.com/kubernetes/code-generator), -and the following two `Make` targets are included within the PostgreSQL Operator project to both -determine if any generated code within the project requires an update, and then update that code -as needed: - -```bash -# Check to see if an update to generated code is needed: -make verify-codegen - -# Update any generated code: -make update-codegen -``` - -Therefore, in the event that a Custom Resource defined within the PostgreSQL Operator API -(`$PGOROOT/pkg/apis/crunchydata.com`) is updated, the `verify-codegen` target will indicate that -an update is needed, and the `update-codegen` target should then be utilized to generate the -updated code prior to compiling. - -## Compile - -{{% notice tip %}} -Please be sure to have your GPG Key and `.repo` file in the `conf` directory -before proceeding. -{{% /notice %}} - -You will build all the Operator binaries and Docker images by running: - - make all - -This assumes you have Docker installed and running on your development host. - -By default, the Makefile will use buildah to build the container images, to override this default to use docker to build the images, set the IMGBUILDER variable to `docker` - - -The project uses the golang dep package manager to vendor all the golang source dependencies into the `vendor` directory. You typically do not need to run any `dep` commands unless you are adding new golang package dependencies into the project outside of what is within the project for a given release. - -After a full compile, you will have a `pgo` binary in `$HOME/odev/bin` and the Operator images in your local Docker registry. - -# Deployment - -Now that you have built the PostgreSQL Operator images, you can now deploy them -to your Kubernetes cluster. To deploy the image and associated Kubernetes -manifests, you can execute the following command: - -```shell -make deployoperator -``` - -If your Kubernetes cluster is not local to your development host, you will need -to specify a config file that will connect you to your Kubernetes cluster. See -the [Kubernetes documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) -for details. - -# Testing - -Once the PostgreSQL Operator is deployed, you can run the end-to-end regression -test suite interface with the PostgreSQL client. You need to ensure -that the `pgo` client executable is in your `$PATH`. The test suite can be run -using the following commands: - -```shell -cd $PGOROOT/testing/pgo_cli -GO111MODULE=on go test -count=1 -parallel=2 -timeout=30m -v . -``` - -For more information, please follow the [testing README](https://github.com/CrunchyData/postgres-operator/blob/master/testing/pgo_cli/README.md) -in the source repository. - -# Troubleshooting - -Debug level logging in turned on by default when deploying the Operator. - -Sample bash functions are supplied in `examples/envs.sh` to view -the Operator logs. - -You can view the Operator REST API logs with the `alog` bash function. - -You can view the Operator core logic logs with the `olog` bash function. 
You can view the Scheduler logs with the `slog` bash function.

These logs contain the following details:

    Timestamp
    Logging Level
    Message Content
    Function Information
    File Information
    PGO version

Additionally, you can view the Operator deployment Event logs with the `elog` bash function.

You can enable `pgo` CLI debugging with the following flag:

    pgo version --debug

You can set the REST API URL as follows after a deployment, if you are developing on your local host, by executing the `setip` bash function.

diff --git a/docs/content/contributing/documentation-updates.md b/docs/content/contributing/documentation-updates.md
deleted file mode 100644
index 1b40e770c1..0000000000
--- a/docs/content/contributing/documentation-updates.md
+++ /dev/null
@@ -1,34 +0,0 @@
---
title: "Updating Documentation"
date:
draft: false
weight: 901
---

## Documentation

The [documentation website](/) is generated using [Hugo](https://gohugo.io/).

## Hosting Hugo Locally (Optional)

If you would like to build the documentation locally, view the [official Installing Hugo](https://gohugo.io/getting-started/installing/) guide to set up Hugo locally.

You can then start the server by running the following commands:

```
cd $PGOROOT/docs/
hugo server
```

The local version of the Hugo server is accessible by default at *localhost:1313*. Once you've run *hugo server*, you can interactively make changes to the documentation as desired and view the updates in real time.

## Contributing to the Documentation

All documentation is in Markdown format and uses Hugo weights for positioning of the pages.

The current production release documentation is updated for every tagged major release.

When you're ready to commit a change, please verify that the documentation generates locally.

diff --git a/docs/content/contributing/issues.md b/docs/content/contributing/issues.md
deleted file mode 100644
index aad3017dc8..0000000000
--- a/docs/content/contributing/issues.md
+++ /dev/null
@@ -1,13 +0,0 @@
---
title: "Submitting Issues"
date:
draft: false
weight: 902
---

If you would like to submit a feature request or an issue for us to consider, please submit it to the official [GitHub Repository](https://github.com/CrunchyData/postgres-operator/issues/new/choose).

If you would like to work on the issue, please note that in the issue itself so that we can confirm we are not already working on it and can avoid duplicating efforts.

If you have any questions, you can submit a Support - Question and Answer issue and we will work with you on how you can get more involved.

diff --git a/docs/content/contributing/pull-requests.md b/docs/content/contributing/pull-requests.md
deleted file mode 100644
index 99a76ddd6a..0000000000
--- a/docs/content/contributing/pull-requests.md
+++ /dev/null
@@ -1,12 +0,0 @@
---
title: "Submitting Pull Requests"
date:
draft: false
weight: 903
---

So you decided to submit an issue and work on it. Great! Let's get it merged into the codebase. The following will go a long way toward helping get the fix merged in quicker:

1. Create a pull request from your fork to the `master` branch.
2. Update the checklists in the Pull Request Description.
3. Reference which issues this Pull Request is resolving.
diff --git a/docs/content/custom-resources/_index.md b/docs/content/custom-resources/_index.md
deleted file mode 100644
index e27adfd3e6..0000000000
--- a/docs/content/custom-resources/_index.md
+++ /dev/null
@@ -1,692 +0,0 @@
---
title: "Using Custom Resources"
date:
draft: false
weight: 55
---

![Operator Architecture with CRDs](/Operator-Architecture-wCRDs.png)

As discussed in the [architecture overview]({{< relref "/architecture/overview.md" >}}), the heart of the [PostgreSQL Operator]({{< relref "_index.md" >}}), and of any Kubernetes Operator, is one or more [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions), also known as "CRDs". CRDs provide extensions to the Kubernetes API and, in the case of the PostgreSQL Operator, allow you to perform actions such as:

- Creating a PostgreSQL Cluster
- Updating PostgreSQL Cluster resource allocations
- Adding additional utilities to a PostgreSQL cluster, e.g. [pgBouncer]({{< relref "/pgo-client/reference/pgo_create_pgbouncer.md" >}}) for connection pooling and more.

The PostgreSQL Operator provides the [`pgo` client]({{< relref "/pgo-client/_index.md" >}}) as a convenience for interfacing with the CRDs, as manipulating the CRDs directly can be a tedious process. For example, there are several Kubernetes objects that need to be set up prior to creating a `pgcluster` [custom resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) in order to successfully deploy a new PostgreSQL cluster.

The Kubernetes community trend has been to move towards supporting a "custom resource only" workflow for using Operators, and this is something that the PostgreSQL Operator aims to do as well. Certain workflows are fully driven by Custom Resources (e.g. creating a PostgreSQL cluster), while others still need to interface through the [`pgo` client]({{< relref "/pgo-client/_index.md" >}}) (e.g. adding a PostgreSQL user).

The following sections will describe the functionality that is available today when manipulating the PostgreSQL Operator Custom Resources directly.

## PostgreSQL Operator Custom Resource Definitions

There are several PostgreSQL Operator Custom Resource Definitions (CRDs) that are installed in order for the PostgreSQL Operator to successfully function:

- `pgclusters.crunchydata.com`: Stores information required to manage a PostgreSQL cluster. This includes things like the cluster name, what storage and resource classes to use, which version of PostgreSQL to run, information about how to maintain a high-availability cluster, etc.
- `pgreplicas.crunchydata.com`: Stores information required to manage the replicas within a PostgreSQL cluster. This includes things like the number of replicas, what storage and resource classes to use, special affinity rules, etc.
- `pgtasks.crunchydata.com`: A general purpose CRD that accepts a type of task that is needed to run against a cluster (e.g. take a backup) and tracks the state of said task through its workflow.
- `pgpolicies.crunchydata.com`: Stores a reference to a SQL file that can be executed against a PostgreSQL cluster. In the past, this was used to manage RLS policies on PostgreSQL clusters.

Below is an in-depth look at what each attribute does in a Custom Resource Definition, and how it can be used in the creation and update workflows.
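Before digging into the individual attributes, a quick way to confirm that these CRDs are installed, and to see any custom resources created from them, is with `kubectl`; a minimal sketch, assuming the Operator is deployed in the `pgo` namespace as in earlier examples:

```shell
# List the PostgreSQL Operator CRDs registered with the Kubernetes API.
kubectl get crd | grep crunchydata.com

# List any pgcluster custom resources in the Operator's namespace.
kubectl get pgclusters.crunchydata.com -n pgo
```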
- -### Glossary - -- `create`: if an attribute is listed as `create`, it means it can affect what -happens when a new Custom Resource is created. -- `update`: if an attribute is listed as `update`, it means it can affect the -Custom Resource, and by extension the objects it manages, when the attribute is -updated. - -### `pgclusters.crunchydata.com` - -The `pgclusters.crunchydata.com` Custom Resource Definition is the fundamental -definition of a PostgreSQL cluster. Most attributes only affect the deployment -of a PostgreSQL cluster at the time the PostgreSQL cluster is created. Some -attributes can be modified during the lifetime of the PostgreSQL cluster and -make changes, as described below. - -#### Specification (`Spec`) - -| Attribute | Action | Description | -|-----------|--------|-------------| -| Annotations | `create`, `update` | Specify Kubernetes [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) that can be applied to the different deployments managed by the PostgreSQL Operator (PostgreSQL, pgBackRest, pgBouncer). For more information, please see the "Annotations Specification" below. | -| BackrestConfig | `create` | Optional references to pgBackRest configuration files -| BackrestLimits | `create`, `update` | Specify the container resource limits that the pgBackRest repository should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). | -| BackrestResources | `create`, `update` | Specify the container resource requests that the pgBackRest repository should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). | -| BackrestS3Bucket | `create` | An optional parameter that specifies a S3 bucket that pgBackRest should use. | -| BackrestS3Endpoint | `create` | An optional parameter that specifies the S3 endpoint pgBackRest should use. | -| BackrestS3Region | `create` | An optional parameter that specifies a cloud region that pgBackRest should use. | -| BackrestS3URIStyle | `create` | An optional parameter that specifies if pgBackRest should use the `path` or `host` S3 URI style. | -| BackrestS3VerifyTLS | `create` | An optional parameter that specifies if pgBackRest should verify the TLS endpoint. | -| BackrestStorage | `create` | A specification that gives information about the storage attributes for the pgBackRest repository, which stores backups and archives, of the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This is required. | -| CCPImage | `create` | The name of the PostgreSQL container image to use, e.g. `crunchy-postgres-ha` or `crunchy-postgres-ha-gis`. | -| CCPImagePrefix | `create` | If provided, the image prefix (or registry) of the PostgreSQL container image, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. | -| CCPImageTag | `create` | The tag of the PostgreSQL container image to use, e.g. `{{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}`. 
| -| CollectSecretName | `create` | An optional attribute unless `crunchy-postgres-exporter` is specified in the `UserLabels`; contains the name of a Kubernetes Secret that contains the credentials for a PostgreSQL user that is used for metrics collection, and is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.| -| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. | -| CustomConfig | `create` | If specified, references a custom ConfigMap to use when bootstrapping a PostgreSQL cluster. For the shape of this file, please see the section on [Custom Configuration]({{< relref "/advanced/custom-configuration.md" >}}) | -| Database | `create` | The name of a database that the PostgreSQL user can log into after the PostgreSQL cluster is created. | -| ExporterLimits | `create`, `update` | Specify the container resource limits that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). | -| ExporterPort | `create` | If the `"crunchy-postgres-exporter"` label is set in `UserLabels`, then this specifies the port that the metrics sidecar runs on (e.g. `9187`) | -| ExporterResources | `create`, `update` | Specify the container resource requests that the `crunchy-postgres-exporter` sidecar uses when it is deployed with a PostgreSQL instance. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). | -| Limits | `create`, `update` | Specify the container resource limits that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). | -| Name | `create` | The name of the PostgreSQL instance that is the primary. On creation, this should be set to be the same as `ClusterName`. | -| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. | -| PGBadgerPort | `create` | If the `"crunchy-pgbadger"` label is set in `UserLabels`, then this specifies the port that the pgBadger sidecar runs on (e.g. `10000`) | -| PGDataSource | `create` | Used to indicate if a PostgreSQL cluster should bootstrap its data from a pgBackRest repository. This uses the PostgreSQL Data Source Specification, described below. | -| PGOImagePrefix | `create` | If provided, the image prefix (or registry) of any PostgreSQL Operator images that are used for jobs, e.g. `registry.developers.crunchydata.com/crunchydata`. The default is to use the image prefix set in the PostgreSQL Operator configuration. | -| PgBouncer | `create`, `update` | If specified, defines the attributes to use for the pgBouncer connection pooling deployment that can be used in conjunction with this PostgreSQL cluster. Please see the specification defined below. | -| PodAntiAffinity | `create` | A required section. Sets the [pod anti-affinity rules]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}}) for the PostgreSQL cluster and associated deployments. 
Please see the `Pod Anti-Affinity Specification` section below. | -| Policies | `create` | If provided, a comma-separated list referring to `pgpolicies.crunchydata.com.Spec.Name` that should be run once the PostgreSQL primary is first initialized. | -| Port | `create` | The port that PostgreSQL will run on, e.g. `5432`. | -| PrimaryStorage | `create` | A specification that gives information about the storage attributes for the primary instance in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This is required. | -| RootSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a PostgreSQL _replication user_ that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.| -| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section below. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to that of `PrimaryStorage`. | -| Replicas | `create` | The number of replicas to create after a PostgreSQL primary is first initialized. This only works on create; to scale a cluster after it is initialized, please use the [`pgo scale`]({{< relref "/pgo-client/reference/pgo_scale.md" >}}) command. | -| Resources | `create`, `update` | Specify the container resource requests that the PostgreSQL cluster should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). | -| RootSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a PostgreSQL superuser that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.| -| SyncReplication | `create` | If set to `true`, specifies the PostgreSQL cluster to use [synchronous replication]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity#synchronous-replication-guarding-against-transactions-loss" >}}).| -| User | `create` | The name of the PostgreSQL user that is created when the PostgreSQL cluster is first created. | -| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actually CR labels. If you want to set up metrics collection or pgBadger, you would specify `"crunchy-postgres-exporter": "true"` and `"crunchy-pgbadger": "true"` here, respectively. However, this structure does need to be set, so just follow whatever is in the example. | -| UserSecretName | `create` | The name of a Kubernetes Secret that contains the credentials for a standard PostgreSQL user that is created when the PostgreSQL cluster is first bootstrapped. For more information, please see `User Secret Specification`.| -| TablespaceMounts | `create`,`update` | Lists any tablespaces that are attached to the PostgreSQL cluster. Tablespaces can be added at a later time by updating the `TablespaceMounts` entry, but they cannot be removed. 
Stores a map of information, with the key being the name of the tablespace, and the value being a Storage Specification, defined below. | -| TLS | `create` | Defines the attributes for enabling TLS for a PostgreSQL cluster. See TLS Specification below. | -| TLSOnly | `create` | If set to true, requires client connections to use only TLS to connect to the PostgreSQL database. | -| Standby | `create`, `update` | If set to true, indicates that the PostgreSQL cluster is a "standby" cluster, i.e. is in read-only mode entirely. Please see [Kubernetes Multi-Cluster Deployments]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}) for more information. | -| Shutdown | `create`, `update` | If set to true, indicates that a PostgreSQL cluster should shutdown. If set to false, indicates that a PostgreSQL cluster should be up and running. | - -##### Storage Specification - -The storage specification is a spec that defines attributes about the storage to -be used for a particular function of a PostgreSQL cluster (e.g. a primary -instance or for the pgBackRest backup repository). The below describes each -attribute and how it works. - -| Attribute | Action | Description | -|-----------|--------|-------------| -| AccessMode| `create` | The name of the Kubernetes Persistent Volume [Access Mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) to use. | -| MatchLabels | `create` | Only used with `StorageType` of `create`, used to match a particular subset of provisioned Persistent Volumes. | -| Name | `create` | Only needed for `PrimaryStorage` in `pgclusters.crunchydata.com`.Used to identify the name of the PostgreSQL cluster. Should match `ClusterName`. | -| Size | `create` | The size of the [Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC). Must use a Kubernetes resource value, e.g. `20Gi`. | -| StorageClass | `create` | The name of the Kubernetes [StorageClass](https://kubernetes.io/docs/concepts/storage/storage-classes/) to use. | -| StorageType | `create` | Set to `create` if storage is provisioned (e.g. using `hostpath`). Set to `dynamic` if using a dynamic storage provisioner, e.g. via a `StorageClass`. | -| SupplementalGroups | `create` | If provided, a comma-separated list of group IDs to use in case it is needed to interface with a particular storage system. Typically used with NFS or hostpath storage. | - -##### Pod Anti-Affinity Specification - -Sets the [pod anti-affinity]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}}) -for the PostgreSQL cluster and associated deployments. Each attribute can -contain one of the following values: - -- `required` -- `preferred` (which is also the recommended default) -- `disabled` - -For a detailed explanation for how this works. Please see the [high-availability]({{< relref "/architecture/high-availability/_index.md#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity" >}}) -documentation. - -| Attribute | Action | Description | -|-----------|--------|-------------| -| Default | `create` | The default pod anti-affinity to use for all Pods managed in a given PostgreSQL cluster. | -| PgBackRest | `create` | If set to a value that differs from `Default`, specifies the pod anti-affinity to use for just the pgBackRest repository. 
| -| PgBouncer | `create` | If set to a value that differs from `Default`, specifies the pod anti-affinity to use for just the pgBouncer Pods. | - -##### PostgreSQL Data Source Specification - -This specification is used when one wants to bootstrap the data in a PostgreSQL -cluster from a pgBackRest repository. This can be a pgBackRest repository that -is attached to an active PostgreSQL cluster or is kept around to be used for -spawning new PostgreSQL clusters. - -| Attribute | Action | Description | -|-----------|--------|-------------| -| RestoreFrom | `create` | The name of a PostgreSQL cluster, active or former, that will be used for bootstrapping the data of a new PostgreSQL cluster. | -| RestoreOpts | `create` | Additional pgBackRest [restore options](https://pgbackrest.org/command.html#command-restore) that can be used as part of the bootstrapping operation, for example, point-in-time-recovery options. | - -##### TLS Specification - -The TLS specification makes a reference to the various secrets that are required -to enable TLS in a PostgreSQL cluster. For more information on how these secrets -should be structured, please see [Enabling TLS in a PostgreSQL Cluster]({{< relref "/pgo-client/common-tasks.md#enable-tls" >}}). - -| Attribute | Action | Description | -|-----------|--------|-------------| -| CASecret | `create` | A reference to the name of a Kubernetes Secret that specifies a certificate authority for the PostgreSQL cluster to trust. | -| ReplicationTLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair for authenticating the replication user. Must be used with `CASecret` and `TLSSecret`. | -| TLSSecret | `create` | A reference to the name of a Kubernetes TLS Secret that contains a keypair that is used for the PostgreSQL instance to identify itself and perform TLS communications with PostgreSQL clients. Must be used with `CASecret`. | - -##### pgBouncer Specification - -The pgBouncer specification defines how a pgBouncer deployment can be deployed -alongside the PostgreSQL cluster. pgBouncer is a PostgreSQL connection pooler -that can also help manage connection state, and is helpful to deploy alongside -a PostgreSQL cluster to help with failover scenarios too. - -| Attribute | Action | Description | -|-----------|--------|-------------| -| Limits | `create`, `update` | Specify the container resource limits that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). | -| Replicas | `create`, `update` | The number of pgBouncer instances to deploy. Must be set to at least `1` to deploy pgBouncer. Setting to `0` removes an existing pgBouncer deployment for the PostgreSQL cluster. | -| Resources | `create`, `update` | Specify the container resource requests that the pgBouncer Pods should use. Follows the [Kubernetes definitions of resource requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-requests-and-limits-of-pod-and-container). 
|
-
-##### Annotations Specification
-
-The `pgcluster.crunchydata.com` specification contains a block that allows for
-custom [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
-to be applied to the Deployments that are managed by the PostgreSQL Operator,
-including:
-
-- PostgreSQL
-- pgBackRest
-- pgBouncer
-
-This also includes the option to apply Annotations globally across the three
-different deployment groups.
-
-| Attribute | Action | Description |
-|-----------|--------|-------------|
-| Backrest | `create`, `update` | Specify annotations that are only applied to the pgBackRest deployments |
-| Global | `create`, `update` | Specify annotations that are applied to the PostgreSQL, pgBackRest, and pgBouncer deployments |
-| PgBouncer | `create`, `update` | Specify annotations that are only applied to the pgBouncer deployments |
-| Postgres | `create`, `update` | Specify annotations that are only applied to the PostgreSQL deployments |
-
-### `pgreplicas.crunchydata.com`
-
-The `pgreplicas.crunchydata.com` Custom Resource Definition contains information
-pertaining to the structure of PostgreSQL replicas associated with a PostgreSQL
-cluster. All of the attributes only affect the replica when it is created.
-
-#### Specification (`Spec`)
-
-| Attribute | Action | Description |
-|-----------|--------|-------------|
-| ClusterName | `create` | The name of the PostgreSQL cluster, e.g. `hippo`. This is used to group PostgreSQL instances (primary, replicas) together. |
-| Name | `create` | The name of this PostgreSQL replica. It should be unique within a `ClusterName`. |
-| Namespace | `create` | The Kubernetes Namespace that the PostgreSQL cluster is deployed in. |
-| ReplicaStorage | `create` | A specification that gives information about the storage attributes for any replicas in the PostgreSQL cluster. For details, please see the `Storage Specification` section in the `pgclusters.crunchydata.com` description. This will likely be changed in the future based on the nature of the high-availability system, but presently it is still required that you set it. It is recommended you use similar settings to those of `PrimaryStorage`. |
-| UserLabels | `create` | A set of key-value string pairs that are used as a sort of "catch-all" for things that really should be modeled in the CRD. These values do get copied to the actual CR labels. If you want to set up metrics collection, you would specify `"crunchy-postgres-exporter": "true"` here. This also allows for node selector pinning using `NodeLabelKey` and `NodeLabelValue`. However, this structure does need to be set, so just follow whatever is in the example. |
-
-## Custom Resource Workflows
-
-### Create a PostgreSQL Cluster
-
-The fundamental workflow for interfacing with a PostgreSQL Operator Custom
-Resource Definition is creating a PostgreSQL cluster. However, this is also
-one of the most complicated workflows to go through, as there are several
-Kubernetes objects that need to be created prior to using this method. These
-include:
-
-- Secrets
-  - Information for setting up a pgBackRest repository
-  - PostgreSQL superuser bootstrap credentials
-  - PostgreSQL replication user bootstrap credentials
-  - PostgreSQL standard user bootstrap credentials
-
-Additionally, if you want to add some of the other sidecars, you may need to
-create additional secrets.
-
-The following guide goes through how to create a PostgreSQL cluster called
-`hippo` by creating a new custom resource.
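-
-Before creating the custom resource, it can help to confirm that the PostgreSQL
-Operator is running and that the target namespace exists. As a rough check
-(assuming the Operator was deployed into the `pgo` namespace under its default
-deployment name), you might run:
-
-```
-kubectl get deployment postgres-operator -n pgo
-kubectl get namespace pgo
-```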
-
-#### Step 1: Create the pgBackRest Secret
-
-pgBackRest is a fundamental part of a PostgreSQL deployment with the PostgreSQL
-Operator: not only is it a backup and archive repository, but it also helps with
-operations such as self-healing. A PostgreSQL instance and pgBackRest communicate
-using SSH, and as such, we need to generate a unique SSH keypair for
-communication for each PostgreSQL cluster we deploy.
-
-In this example, we generate an SSH keypair using ED25519 keys, but if your
-environment requires it, you can also use RSA keys.
-
-In your working directory, run the following commands:
-
-# this variable is the name of the cluster being created
-export pgo_cluster_name=hippo
-# this variable is the namespace the cluster is being deployed into
-export cluster_namespace=pgo
-
-# generate an SSH public/private keypair for use by pgBackRest
-ssh-keygen -t ed25519 -N '' -f "${pgo_cluster_name}-key"
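-
-# (if your environment requires RSA keys instead, an equivalent keypair can be
-# generated with, for example: ssh-keygen -t rsa -b 4096 -N '' -f "${pgo_cluster_name}-key")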
-
-# base64 encode the keys for the generation of the Kubernetes Secret, and place
-# them into variables temporarily
-public_key_temp=$(cat "${pgo_cluster_name}-key.pub" | base64)
-private_key_temp=$(cat "${pgo_cluster_name}-key" | base64)
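-
-# strip the newlines that some base64 implementations insert when wrapping long
-# output, so that each value is stored as a single line in the Secret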
-export pgbackrest_public_key="${public_key_temp//[$'\n']}" pgbackrest_private_key="${private_key_temp//[$'\n']}"
-
-# create the backrest-repo-config example file and substitute in the newly
-# created keys
-#
-# (Note: that the "config" / "sshd_config" entries contain configuration to
-# ensure that PostgreSQL instances are able to communicate with the pgBackRest
-# repository, which houses backups and archives, and vice versa. Most of the
-# settings follow the sshd defaults, with a few overrides. Edit at your own
-# discretion.)
-cat <<-EOF > "${pgo_cluster_name}-backrest-repo-config.yaml"
-apiVersion: v1
-kind: Secret
-type: Opaque
-metadata:
-  labels:
-    pg-cluster: ${pgo_cluster_name}
-    pgo-backrest-repo: "true"
-  name: ${pgo_cluster_name}-backrest-repo-config
-  namespace: ${cluster_namespace}
-data:
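-  # the public key is used as the authorized key, while the private key serves
-  # as both the client identity (id_ed25519) and the SSH host key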
-  authorized_keys: ${pgbackrest_public_key}
-  id_ed25519: ${pgbackrest_private_key}
-  ssh_host_ed25519_key: ${pgbackrest_private_key}
-  config: SG9zdCAqClN0cmljdEhvc3RLZXlDaGVja2luZyBubwpJZGVudGl0eUZpbGUgL3RtcC9pZF9lZDI1NTE5ClBvcnQgMjAyMgpVc2VyIHBnYmFja3Jlc3QK
-  sshd_config: IwkkT3BlbkJTRDogc3NoZF9jb25maWcsdiAxLjEwMCAyMDE2LzA4LzE1IDEyOjMyOjA0IG5hZGR5IEV4cCAkCgojIFRoaXMgaXMgdGhlIHNzaGQgc2VydmVyIHN5c3RlbS13aWRlIGNvbmZpZ3VyYXRpb24gZmlsZS4gIFNlZQojIHNzaGRfY29uZmlnKDUpIGZvciBtb3JlIGluZm9ybWF0aW9uLgoKIyBUaGlzIHNzaGQgd2FzIGNvbXBpbGVkIHdpdGggUEFUSD0vdXNyL2xvY2FsL2JpbjovdXNyL2JpbgoKIyBUaGUgc3RyYXRlZ3kgdXNlZCBmb3Igb3B0aW9ucyBpbiB0aGUgZGVmYXVsdCBzc2hkX2NvbmZpZyBzaGlwcGVkIHdpdGgKIyBPcGVuU1NIIGlzIHRvIHNwZWNpZnkgb3B0aW9ucyB3aXRoIHRoZWlyIGRlZmF1bHQgdmFsdWUgd2hlcmUKIyBwb3NzaWJsZSwgYnV0IGxlYXZlIHRoZW0gY29tbWVudGVkLiAgVW5jb21tZW50ZWQgb3B0aW9ucyBvdmVycmlkZSB0aGUKIyBkZWZhdWx0IHZhbHVlLgoKIyBJZiB5b3Ugd2FudCB0byBjaGFuZ2UgdGhlIHBvcnQgb24gYSBTRUxpbnV4IHN5c3RlbSwgeW91IGhhdmUgdG8gdGVsbAojIFNFTGludXggYWJvdXQgdGhpcyBjaGFuZ2UuCiMgc2VtYW5hZ2UgcG9ydCAtYSAtdCBzc2hfcG9ydF90IC1wIHRjcCAjUE9SVE5VTUJFUgojClBvcnQgMjAyMgojQWRkcmVzc0ZhbWlseSBhbnkKI0xpc3RlbkFkZHJlc3MgMC4wLjAuMAojTGlzdGVuQWRkcmVzcyA6OgoKSG9zdEtleSAvc3NoZC9zc2hfaG9zdF9lZDI1NTE5X2tleQoKIyBDaXBoZXJzIGFuZCBrZXlpbmcKI1Jla2V5TGltaXQgZGVmYXVsdCBub25lCgojIExvZ2dpbmcKI1N5c2xvZ0ZhY2lsaXR5IEFVVEgKU3lzbG9nRmFjaWxpdHkgQVVUSFBSSVYKI0xvZ0xldmVsIElORk8KCiMgQXV0aGVudGljYXRpb246CgojTG9naW5HcmFjZVRpbWUgMm0KUGVybWl0Um9vdExvZ2luIG5vClN0cmljdE1vZGVzIG5vCiNNYXhBdXRoVHJpZXMgNgojTWF4U2Vzc2lvbnMgMTAKClB1YmtleUF1dGhlbnRpY2F0aW9uIHllcwoKIyBUaGUgZGVmYXVsdCBpcyB0byBjaGVjayBib3RoIC5zc2gvYXV0aG9yaXplZF9rZXlzIGFuZCAuc3NoL2F1dGhvcml6ZWRfa2V5czIKIyBidXQgdGhpcyBpcyBvdmVycmlkZGVuIHNvIGluc3RhbGxhdGlvbnMgd2lsbCBvbmx5IGNoZWNrIC5zc2gvYXV0aG9yaXplZF9rZXlzCiNBdXRob3JpemVkS2V5c0ZpbGUJL3BnY29uZi9hdXRob3JpemVkX2tleXMKQXV0aG9yaXplZEtleXNGaWxlCS9zc2hkL2F1dGhvcml6ZWRfa2V5cwoKI0F1dGhvcml6ZWRQcmluY2lwYWxzRmlsZSBub25lCgojQXV0aG9yaXplZEtleXNDb21tYW5kIG5vbmUKI0F1dGhvcml6ZWRLZXlzQ29tbWFuZFVzZXIgbm9ib2R5CgojIEZvciB0aGlzIHRvIHdvcmsgeW91IHdpbGwgYWxzbyBuZWVkIGhvc3Qga2V5cyBpbiAvZXRjL3NzaC9zc2hfa25vd25faG9zdHMKI0hvc3RiYXNlZEF1dGhlbnRpY2F0aW9uIG5vCiMgQ2hhbmdlIHRvIHllcyBpZiB5b3UgZG9uJ3QgdHJ1c3Qgfi8uc3NoL2tub3duX2hvc3RzIGZvcgojIEhvc3RiYXNlZEF1dGhlbnRpY2F0aW9uCiNJZ25vcmVVc2VyS25vd25Ib3N0cyBubwojIERvbid0IHJlYWQgdGhlIHVzZXIncyB+Ly5yaG9zdHMgYW5kIH4vLnNob3N0cyBmaWxlcwojSWdub3JlUmhvc3RzIHllcwoKIyBUbyBkaXNhYmxlIHR1bm5lbGVkIGNsZWFyIHRleHQgcGFzc3dvcmRzLCBjaGFuZ2UgdG8gbm8gaGVyZSEKI1Bhc3N3b3JkQXV0aGVudGljYXRpb24geWVzCiNQZXJtaXRFbXB0eVBhc3N3b3JkcyBubwpQYXNzd29yZEF1dGhlbnRpY2F0aW9uIG5vCgojIENoYW5nZSB0byBubyB0byBkaXNhYmxlIHMva2V5IHBhc3N3b3JkcwpDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIHllcwojQ2hhbGxlbmdlUmVzcG9uc2VBdXRoZW50aWNhdGlvbiBubwoKIyBLZXJiZXJvcyBvcHRpb25zCiNLZXJiZXJvc0F1dGhlbnRpY2F0aW9uIG5vCiNLZXJiZXJvc09yTG9jYWxQYXNzd2QgeWVzCiNLZXJiZXJvc1RpY2tldENsZWFudXAgeWVzCiNLZXJiZXJvc0dldEFGU1Rva2VuIG5vCiNLZXJiZXJvc1VzZUt1c2Vyb2sgeWVzCgojIEdTU0FQSSBvcHRpb25zCiNHU1NBUElBdXRoZW50aWNhdGlvbiB5ZXMKI0dTU0FQSUNsZWFudXBDcmVkZW50aWFscyBubwojR1NTQVBJU3RyaWN0QWNjZXB0b3JDaGVjayB5ZXMKI0dTU0FQSUtleUV4Y2hhbmdlIG5vCiNHU1NBUElFbmFibGVrNXVzZXJzIG5vCgojIFNldCB0aGlzIHRvICd5ZXMnIHRvIGVuYWJsZSBQQU0gYXV0aGVudGljYXRpb24sIGFjY291bnQgcHJvY2Vzc2luZywKIyBhbmQgc2Vzc2lvbiBwcm9jZXNzaW5nLiBJZiB0aGlzIGlzIGVuYWJsZWQsIFBBTSBhdXRoZW50aWNhdGlvbiB3aWxsCiMgYmUgYWxsb3dlZCB0aHJvdWdoIHRoZSBDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIGFuZAojIFBhc3N3b3JkQXV0aGVudGljYXRpb24uICBEZXBlbmRpbmcgb24geW91ciBQQU0gY29uZmlndXJhdGlvbiwKIyBQQU0gYXV0aGVudGljYXRpb24gdmlhIENoYWxsZW5nZVJlc3BvbnNlQXV0aGVudGljYXRpb24gbWF5IGJ5cGFzcwojIHRoZSBzZXR0aW5nIG9mICJQZXJtaXRSb290TG9naW4gd2l0aG91dC1wYXNzd29yZCIuCiMgSWYgeW91IGp1c3Qgd2FudCB0aGUgUEFNIGFjY291bnQgYW5kIHNlc3Npb24gY2hlY2tzIHRvIHJ1biB3aXRob3V0CiMgUEFNIGF1dGhlbnRpY2F0aW9uLCB0aGVuIGVuYWJsZSB0aGlzIGJ1dCBzZXQgUGFzc3dvcmR
BdXRoZW50aWNhdGlvbgojIGFuZCBDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIHRvICdubycuCiMgV0FSTklORzogJ1VzZVBBTSBubycgaXMgbm90IHN1cHBvcnRlZCBpbiBSZWQgSGF0IEVudGVycHJpc2UgTGludXggYW5kIG1heSBjYXVzZSBzZXZlcmFsCiMgcHJvYmxlbXMuClVzZVBBTSB5ZXMKCiNBbGxvd0FnZW50Rm9yd2FyZGluZyB5ZXMKI0FsbG93VGNwRm9yd2FyZGluZyB5ZXMKI0dhdGV3YXlQb3J0cyBubwpYMTFGb3J3YXJkaW5nIHllcwojWDExRGlzcGxheU9mZnNldCAxMAojWDExVXNlTG9jYWxob3N0IHllcwojUGVybWl0VFRZIHllcwojUHJpbnRNb3RkIHllcwojUHJpbnRMYXN0TG9nIHllcwojVENQS2VlcEFsaXZlIHllcwojVXNlTG9naW4gbm8KI1Blcm1pdFVzZXJFbnZpcm9ubWVudCBubwojQ29tcHJlc3Npb24gZGVsYXllZAojQ2xpZW50QWxpdmVJbnRlcnZhbCAwCiNDbGllbnRBbGl2ZUNvdW50TWF4IDMKI1Nob3dQYXRjaExldmVsIG5vCiNVc2VETlMgeWVzCiNQaWRGaWxlIC92YXIvcnVuL3NzaGQucGlkCiNNYXhTdGFydHVwcyAxMDozMDoxMDAKI1Blcm1pdFR1bm5lbCBubwojQ2hyb290RGlyZWN0b3J5IG5vbmUKI1ZlcnNpb25BZGRlbmR1bSBub25lCgojIG5vIGRlZmF1bHQgYmFubmVyIHBhdGgKI0Jhbm5lciBub25lCgojIEFjY2VwdCBsb2NhbGUtcmVsYXRlZCBlbnZpcm9ubWVudCB2YXJpYWJsZXMKQWNjZXB0RW52IExBTkcgTENfQ1RZUEUgTENfTlVNRVJJQyBMQ19USU1FIExDX0NPTExBVEUgTENfTU9ORVRBUlkgTENfTUVTU0FHRVMKQWNjZXB0RW52IExDX1BBUEVSIExDX05BTUUgTENfQUREUkVTUyBMQ19URUxFUEhPTkUgTENfTUVBU1VSRU1FTlQKQWNjZXB0RW52IExDX0lERU5USUZJQ0FUSU9OIExDX0FMTCBMQU5HVUFHRQpBY2NlcHRFbnYgWE1PRElGSUVSUwoKIyBvdmVycmlkZSBkZWZhdWx0IG9mIG5vIHN1YnN5c3RlbXMKU3Vic3lzdGVtCXNmdHAJL3Vzci9saWJleGVjL29wZW5zc2gvc2Z0cC1zZXJ2ZXIKCiMgRXhhbXBsZSBvZiBvdmVycmlkaW5nIHNldHRpbmdzIG9uIGEgcGVyLXVzZXIgYmFzaXMKI01hdGNoIFVzZXIgYW5vbmN2cwojCVgxMUZvcndhcmRpbmcgbm8KIwlBbGxvd1RjcEZvcndhcmRpbmcgbm8KIwlQZXJtaXRUVFkgbm8KIwlGb3JjZUNvbW1hbmQgY3ZzIHNlcnZlcgo=
-EOF
-
-# remove the pgBackRest ssh keypair from the shell session
-unset pgbackrest_public_key pgbackrest_private_key
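-
-# (optional) the key files generated above remain in the working directory; once
-# the Secret has been applied they can be removed as well, for example:
-# rm "${pgo_cluster_name}-key" "${pgo_cluster_name}-key.pub"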
-
-# create the pgBackRest secret
-kubectl apply -f "${pgo_cluster_name}-backrest-repo-config.yaml"
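-
-# (optional) confirm that the Secret now exists in the target namespace
-kubectl get secret -n "${cluster_namespace}" "${pgo_cluster_name}-backrest-repo-config"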
-
-
-#### Step 2: Creating the PostgreSQL User Secrets
-
-As mentioned above, you must create a minimum of three PostgreSQL user accounts
-in order to bootstrap a PostgreSQL cluster. These are:
-
-- A PostgreSQL superuser
-- A replication user
-- A standard PostgreSQL user
-
-The code below will help you set up these Secrets.
-
-```
-# this variable is the name of the cluster being created
-pgo_cluster_name=hippo
-# this variable is the namespace the cluster is being deployed into
-cluster_namespace=pgo
-
-# this is the superuser secret
-kubectl create secret generic -n "${cluster_namespace}" "${pgo_cluster_name}-postgres-secret" \
- --from-literal=username=postgres \
- --from-literal=password=Supersecurepassword*
-
-# this is the replication user secret
-kubectl create secret generic -n "${cluster_namespace}" "${pgo_cluster_name}-primaryuser-secret" \
- --from-literal=username=primaryuser \
- --from-literal=password=Anothersecurepassword*
-
-# this is the standard user secret
-kubectl create secret generic -n "${cluster_namespace}" "${pgo_cluster_name}-hippo-secret" \
- --from-literal=username=hippo \
- --from-literal=password=Moresecurepassword*
-
-
-kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-postgres-secret" "pg-cluster=${pgo_cluster_name}"
-kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-primaryuser-secret" "pg-cluster=${pgo_cluster_name}"
-kubectl label secrets -n "${cluster_namespace}" "${pgo_cluster_name}-hippo-secret" "pg-cluster=${pgo_cluster_name}"
-```
-
-#### Step 3: Create the PostgreSQL Cluster
-
-With the Secrets in place, it is now time to create the PostgreSQL cluster.
-
-The manifest below references the Secrets created in the previous step to add a
-custom resource to the `pgclusters.crunchydata.com` custom resource definition.
-
-**NOTE**: You will need to modify the storage sections to match your storage
-configuration.
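-
-If you are not sure what to put in the storage sections, it may help to first
-check which storage classes (if any) are available in your cluster, for example:
-
-```
-kubectl get storageclass
-```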
- -``` -# this variable is the name of the cluster being created -export pgo_cluster_name=hippo -# this variable is the namespace the cluster is being deployed into -export cluster_namespace=pgo - -cat <<-EOF > "${pgo_cluster_name}-pgcluster.yaml" -apiVersion: crunchydata.com/v1 -kind: Pgcluster -metadata: - annotations: - current-primary: ${pgo_cluster_name} - labels: - autofail: "true" - crunchy-pgbadger: "false" - crunchy-pgha-scope: ${pgo_cluster_name} - crunchy-postgres-exporter: "false" - deployment-name: ${pgo_cluster_name} - name: ${pgo_cluster_name} - pg-cluster: ${pgo_cluster_name} - pg-pod-anti-affinity: "" - pgo-backrest: "true" - pgo-version: {{< param operatorVersion >}} - pgouser: admin - name: ${pgo_cluster_name} - namespace: ${cluster_namespace} -spec: - BackrestStorage: - accessmode: ReadWriteMany - matchLabels: "" - name: "" - size: 1G - storageclass: "" - storagetype: create - supplementalgroups: "" - PrimaryStorage: - accessmode: ReadWriteMany - matchLabels: "" - name: ${pgo_cluster_name} - size: 1G - storageclass: "" - storagetype: create - supplementalgroups: "" - ReplicaStorage: - accessmode: ReadWriteMany - matchLabels: "" - name: "" - size: 1G - storageclass: "" - storagetype: create - supplementalgroups: "" - annotations: - backrestLimits: {} - backrestRepoPath: "" - backrestResources: - memory: 48Mi - backrestS3Bucket: "" - backrestS3Endpoint: "" - backrestS3Region: "" - backrestS3URIStyle: "" - backrestS3VerifyTLS: "" - ccpimage: crunchy-postgres-ha - ccpimageprefix: registry.developers.crunchydata.com/crunchydata - ccpimagetag: {{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}} - clustername: ${pgo_cluster_name} - customconfig: "" - database: ${pgo_cluster_name} - exporterport: "9187" - limits: {} - name: ${pgo_cluster_name} - namespace: ${cluster_namespace} - pgBouncer: - limits: {} - replicas: 0 - pgDataSource: - restoreFrom: "" - restoreOpts: "" - pgbadgerport: "10000" - pgoimageprefix: registry.developers.crunchydata.com/crunchydata - podAntiAffinity: - default: preferred - pgBackRest: preferred - pgBouncer: preferred - policies: "" - port: "5432" - primarysecretname: ${pgo_cluster_name}-primaryuser-secret - replicas: "0" - rootsecretname: ${pgo_cluster_name}-postgres-secret - shutdown: false - standby: false - tablespaceMounts: {} - tls: - caSecret: "" - replicationTLSSecret: "" - tlsSecret: "" - tlsOnly: false - user: hippo - userlabels: - crunchy-postgres-exporter: "false" - pg-pod-anti-affinity: "" - pgo-version: {{< param operatorVersion >}} - usersecretname: ${pgo_cluster_name}-hippo-secret -EOF - -kubectl apply -f "${pgo_cluster_name}-pgcluster.yaml" -``` - -### Modify a Cluster - -There following modification operations are supported on the -`pgclusters.crunchydata.com` custom resource definition: - -#### Modify Resource Requests & Limits - -Modifying the `resources`, `limits`, `backrestResources`, `backRestLimits`, -`pgBouncer.resources`, or `pgbouncer.limits` will cause the PostgreSQL Operator -to apply the new values to the affected [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/). 
-
-For example, if we wanted to make a memory request of 256Mi for the `hippo`
-cluster created in the previous example, we could do the following:
-
-```
-# this variable is the name of the cluster being created
-export pgo_cluster_name=hippo
-# this variable is the namespace the cluster is being deployed into
-export cluster_namespace=pgo
-
-kubectl edit pgclusters.crunchydata.com -n "${cluster_namespace}" "${pgo_cluster_name}"
-```
-
-This will open up your editor. Find the `resources` block, and have it read as
-the following:
-
-```
-resources:
-  memory: 256Mi
-```
-
-The PostgreSQL Operator will respond and modify the PostgreSQL instances to
-request 256Mi of memory.
-
-Be careful when editing these values: modifying them will cause the Pods to
-restart, which in turn can create potential downtime events. It's best to modify
-the values for a deployment group together and not mix and match, i.e.
-
-- PostgreSQL instances: `resources`, `limits`
-- pgBackRest: `backrestResources`, `backrestLimits`
-- pgBouncer: `pgBouncer.resources`, `pgBouncer.limits`
-
-### Scale
-
-Once you have created a PostgreSQL cluster, you may want to add a replica to
-create a high-availability environment. Replicas are added and removed using the
-`pgreplicas.crunchydata.com` custom resource definition. Each replica must have
-a unique name, e.g. `hippo-rpl1` could be one unique replica for a PostgreSQL
-cluster.
-
-Using the above example cluster, `hippo`, let's add a replica called
-`hippo-rpl1` using the configuration below. Be sure to change the
-`replicastorage` block to match the storage configuration for your environment:
-
-```
-# this variable is the name of the cluster being created
-export pgo_cluster_name=hippo
-# this helps to name the replica, in this case "rpl1"
-export pgo_cluster_replica_suffix=rpl1
-# this variable is the namespace the cluster is being deployed into
-export cluster_namespace=pgo
-
-cat <<-EOF > "${pgo_cluster_name}-${pgo_cluster_replica_suffix}-pgreplica.yaml"
-apiVersion: crunchydata.com/v1
-kind: Pgreplica
-metadata:
-  labels:
-    name: ${pgo_cluster_name}-${pgo_cluster_replica_suffix}
-    pg-cluster: ${pgo_cluster_name}
-    pgouser: admin
-  name: ${pgo_cluster_name}-${pgo_cluster_replica_suffix}
-  namespace: ${cluster_namespace}
-spec:
-  clustername: ${pgo_cluster_name}
-  name: ${pgo_cluster_name}-${pgo_cluster_replica_suffix}
-  namespace: ${cluster_namespace}
-  replicastorage:
-    accessmode: ReadWriteMany
-    matchLabels: ""
-    name: ${pgo_cluster_name}-${pgo_cluster_replica_suffix}
-    size: 1G
-    storageclass: ""
-    storagetype: create
-    supplementalgroups: ""
-  userlabels:
-    NodeLabelKey: ""
-    NodeLabelValue: ""
-    crunchy-postgres-exporter: "false"
-    pg-pod-anti-affinity: ""
-    pgo-version: {{< param operatorVersion >}}
-EOF
-
-kubectl apply -f "${pgo_cluster_name}-${pgo_cluster_replica_suffix}-pgreplica.yaml"
-```
-
-At this time, removing a replica must be handled through the [`pgo` client]({{< relref "/pgo-client/common-tasks.md#high-availability-scaling-up-down">}}).
-
-### Add a Tablespace
-
-Tablespaces can be added during the lifetime of a PostgreSQL cluster (tablespaces can be removed as well, but for a detailed explanation as to how, please see the [Tablespaces]({{< relref "/architecture/tablespaces.md">}}) section).
-
-To add a tablespace, you need to add an entry to the `tablespaceMounts` section
-of the custom resource, where the key is the name of the tablespace (unique to the
-`pgclusters.crunchydata.com` custom resource entry) and the value is a storage
-configuration as defined in the `pgclusters.crunchydata.com` section above.
-
-For example, to add a tablespace named `lake` to our `hippo` cluster, we can
-open up the editor with the following code:
-
-```
-# this variable is the name of the cluster being created
-export pgo_cluster_name=hippo
-# this variable is the namespace the cluster is being deployed into
-export cluster_namespace=pgo
-
-kubectl edit pgclusters.crunchydata.com -n "${cluster_namespace}" "${pgo_cluster_name}"
-```
-
-and add an entry to the `tablespaceMounts` block that looks similar to this,
-with the addition of the correct storage configuration for your environment:
-
-```
-tablespaceMounts:
-  lake:
-    accessmode: ReadWriteMany
-    matchLabels: ""
-    size: 5Gi
-    storageclass: ""
-    storagetype: create
-    supplementalgroups: ""
-```
-
-### pgBouncer
-
-[pgBouncer](https://www.pgbouncer.org/) is a PostgreSQL connection pooler and
-state manager that can be useful for high-availability setups as well as
-managing overall performance of a PostgreSQL cluster. A pgBouncer deployment for
-a PostgreSQL cluster can be fully managed from a `pgclusters.crunchydata.com`
-custom resource.
-
-For example, to add a pgBouncer deployment to our `hippo` cluster with two
-instances and a memory limit of 36Mi, you can edit the custom resource:
-
-```
-# this variable is the name of the cluster being created
-export pgo_cluster_name=hippo
-# this variable is the namespace the cluster is being deployed into
-export cluster_namespace=pgo
-
-kubectl edit pgclusters.crunchydata.com -n "${cluster_namespace}" "${pgo_cluster_name}"
-```
-
-And modify the `pgBouncer` block to look like this:
-
-```
-pgBouncer:
-  limits:
-    memory: 36Mi
-  replicas: 2
-```
-
-Likewise, to remove pgBouncer from a PostgreSQL cluster, you would set
-`replicas` to `0`:
-
-```
-pgBouncer:
-  replicas: 0
-```
-
-### Start / Stop a Cluster
-
-A PostgreSQL cluster can be started and stopped by toggling the `shutdown`
-parameter in a `pgclusters.crunchydata.com` custom resource. Setting `shutdown`
-to `true` will stop a PostgreSQL cluster, whereas a value of `false` will make
-a cluster available. This affects all of the associated instances of a
-PostgreSQL cluster.
-
-### Manage Annotations
-
-Kubernetes [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
-can be managed for the PostgreSQL, pgBackRest, and pgBouncer Deployments, and
-can also be applied across all three. This is done via the
-`annotations` block in the `pgclusters.crunchydata.com` custom resource
-definition. For example, to apply Annotations to the `hippo` cluster (some
-global, some specific to each Deployment type), you could do the
-following.
-
-First, start editing the `hippo` custom resource:
-
-```
-# this variable is the name of the cluster being created
-export pgo_cluster_name=hippo
-# this variable is the namespace the cluster is being deployed into
-export cluster_namespace=pgo
-
-kubectl edit pgclusters.crunchydata.com -n "${cluster_namespace}" "${pgo_cluster_name}"
-```
-
-In the `hippo` specification, add an `annotations` block similar to the one
-below (note that this explicitly shows the `spec` block; **do not modify the
-`annotations` block in the `metadata` section**).
- - -``` -spec: - annotations: - global: - favorite: hippo - backrest: - chair: comfy - pgBouncer: - pool: swimming - postgres: - elephant: cool -``` - -Save your edits, and in a short period of time, you should see these annotations -applied to the managed Deployments. diff --git a/docs/content/installation/_index.md b/docs/content/installation/_index.md deleted file mode 100644 index 909561ce46..0000000000 --- a/docs/content/installation/_index.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: "Installation" -date: -draft: false -weight: 40 ---- - -There are several different ways to install and deploy the [PostgreSQL Operator](https://www.crunchydata.com/developers/download-postgres/containers/postgres-operator) -based upon your use case. - -For the vast majority of use cases, we recommend using the [PostgreSQL Operator Installer]({{< relref "/installation/postgres-operator.md" >}}), -which uses the `pgo-deployer` container to set up all of the objects required to -run the PostgreSQL Operator. - -For advanced use cases, such as for development, one may want to set up a -[development environment]({{< relref "/contributing/developer-setup.md" >}}) -that is created using a series of scripts controlled by the Makefile. - -Before selecting your installation method, it's important that you first read -the [prerequisites]({{< relref "/installation/prerequisites.md" >}}) for your -deployment environment to ensure that your setup meets the needs for installing -the PostgreSQL Operator. diff --git a/docs/content/installation/configuration.md b/docs/content/installation/configuration.md deleted file mode 100644 index 1a09b4afaa..0000000000 --- a/docs/content/installation/configuration.md +++ /dev/null @@ -1,342 +0,0 @@ ---- -title: "Configuration Reference" -date: -draft: false -weight: 40 ---- - -# PostgreSQL Operator Installer Configuration - -When installing the PostgreSQL Operator you have many configuration options, these -options are listed in this section. - -## General Configuration - -These variables affect the general configuration of the PostgreSQL -Operator. - -| Name | Default | Required | Description | -|------|---------|----------|-------------| -| `archive_mode` | true | **Required** | Set to true enable archive logging on all newly created clusters. | -| `archive_timeout` | 60 | **Required** | Set to a value in seconds to configure the timeout threshold for archiving. | -| `backrest_aws_s3_bucket` | | | Set to configure the *bucket* used by pgBackRest with Amazon Web Service S3 for backups and restoration in S3. | -| `backrest_aws_s3_endpoint` | | | Set to configure the *endpoint* used by pgBackRest with Amazon Web Service S3 for backups and restoration in S3. | -| `backrest_aws_s3_key` | | | Set to configure the *key* used by pgBackRest with Amazon Web Service S3 for backups and restoration in S3. | -| `backrest_aws_s3_region` | | | Set to configure the *region* used by pgBackRest with Amazon Web Service S3 for backups and restoration in S3. | -| `backrest_aws_s3_secret` | | | Set to configure the *secret* used by pgBackRest with Amazon Web Service S3 for backups and restoration in S3. | -| `backrest_aws_s3_uri_style` | | | Set to configure whether “host” or “path” style URIs will be used when connecting to S3. | -| `backrest_aws_s3_verify_tls` | | | Set this value to true to enable TLS verification when making a pgBackRest connection to S3.f | -| `backrest_port` | 2022 | **Required** | Defines the port where pgBackRest will run. 
| -| `badger` | false | **Required** | Set to true enable pgBadger capabilities on all newly created clusters. This can be disabled by the client. | -| `ccp_image_prefix` | registry.developers.crunchydata.com/crunchydata | **Required** | Configures the image prefix used when creating containers from Crunchy Container Suite. | -| `ccp_image_pull_secret` | | | Name of a Secret containing credentials for container image registries. | -| `ccp_image_pull_secret_manifest` | | | Provide a path to the Secret manifest to be installed in each namespace. (optional) | -| `ccp_image_tag` | {{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}} | **Required** | Configures the image tag (version) used when creating containers from Crunchy Container Suite. | -| `create_rbac` | true | **Required** | Set to true if the installer should create the RBAC resources required to run the PostgreSQL Operator. | -| `crunchy_debug` | false | | Set to configure Operator to use debugging mode. Note: this can cause sensitive data such as passwords to appear in Operator logs. | -| `db_name` | | | Set to a value to configure the default database name on all newly created clusters. By default, the PostgreSQL Operator will set it to the name of the cluster that is being created. | -| `db_password_age_days` | 0 | | Set to a value in days to configure the expiration age on PostgreSQL role passwords on all newly created clusters. If set to “0”, this is the same as saying the password never expires | -| `db_password_length` | 24 | | Set to configure the size of passwords generated by the operator on all newly created roles. | -| `db_port` | 5432 | **Required** | Set to configure the default port used on all newly created clusters. | -| `db_replicas` | 0 | **Required** | Set to configure the amount of replicas provisioned on all newly created clusters. | -| `db_user` | testuser | **Required** | Set to configure the username of the dedicated user account on all newly created clusters. | -| `default_instance_memory` | 128Mi | | Represents the memory request for a PostgreSQL instance. | -| `default_pgbackrest_memory` | 48Mi | | Represents the memory request for a pgBackRest repository. | -| `default_pgbouncer_memory` | 24Mi | | Represents the memory request for a pgBouncer instance. | -| `delete_operator_namespace` | false | | Set to configure whether or not the PGO operator namespace (defined using variable `pgo_operator_namespace`) is deleted when uninstalling the PGO. | -| `delete_watched_namespaces` | false | | Set to configure whether or not the PGO watched namespaces (defined using variable `namespace`) are deleted when uninstalling the PGO. | -| `disable_auto_failover` | false | | If set, will disable autofail capabilities by default in any newly created cluster | -| `disable_fsgroup` | false | | Set to `true` for deployments where you do not want to have the default PostgreSQL fsGroup (26) set. The typical usage is in OpenShift environments that have a `restricted` Security Context Constraints. | -| `exporterport` | 9187 | **Required** | Set to configure the default port used to connect to postgres exporter. | -| `metrics` | false | **Required** | Set to true enable performance metrics on all newly created clusters. This can be disabled by the client. | -| `namespace` | pgo | | Set to a comma delimited string of all the namespaces Operator will manage. | -| `namespace_mode` | dynamic | | Determines which namespace permissions are assigned to the PostgreSQL Operator using a ClusterRole. 
Options: `dynamic`, `readonly`, and `disabled` | -| `pgbadgerport` | 10000 | **Required** | Set to configure the default port used to connect to pgbadger. | -| `pgo_add_os_ca_store` | false | **Required** | When true, includes system default certificate authorities. | -| `pgo_admin_password` | examplepassword | **Required** | Configures the pgo administrator password. | -| `pgo_admin_perms` | * | **Required** | Sets the access control rules provided by the PostgreSQL Operator RBAC resources for the PostgreSQL Operator administrative account that is created by this installer. Defaults to allowing all of the permissions, which is represented with the * | -| `pgo_admin_role_name` | pgoadmin | **Required** | Sets the name of the PostgreSQL Operator role that is utilized for administrative operations performed by the PostgreSQL Operator. | -| `pgo_admin_username` | admin | **Required** | Configures the pgo administrator username. | -| `pgo_apiserver_port` | 8443 | | Set to configure the port used by the Crunchy PostgreSQL Operator apiserver. | -| `pgo_apiserver_url` | https://postgres-operator | | Sets the `pgo_apiserver_url` for the `pgo-client` deployment. | -| `pgo_client_cert_secret` | pgo.tls | | Sets the secret that the `pgo-client` will use when connecting to the PostgreSQL Operator. | -| `pgo_client_container_install` | false | | Run the `pgo-client` deployment with the PostgreSQL Operator. | -| `pgo_client_install` | true | | Enable to download the `pgo` client binary as part of the Ansible install | -| `pgo_client_version` | {{< param operatorVersion >}} | **Required** | | -| `pgo_cluster_admin` | false | **Required** | Determines whether or not the cluster-admin role is assigned to the PGO service account. Must be true to enable PGO namespace & role creation when installing in OpenShift. | -| `pgo_disable_eventing` | false | | Set to configure whether or not eventing should be enabled for the Crunchy PostgreSQL Operator installation. | -| `pgo_disable_tls` | false | | Set to configure whether or not TLS should be enabled for the Crunchy PostgreSQL Operator apiserver. | -| `pgo_image_prefix` | registry.developers.crunchydata.com/crunchydata | **Required** | Configures the image prefix used when creating containers for the Crunchy PostgreSQL Operator (apiserver, operator, scheduler..etc). | -| `pgo_image_pull_secret` | | | Name of a Secret containing credentials for container image registries. | -| `pgo_image_pull_secret_manifest` | | | Provide a path to the Secret manifest to be installed in each namespace. (optional) | -| `pgo_image_tag` | {{< param centosBase >}}-{{< param operatorVersion >}} | **Required** | Configures the image tag used when creating containers for the Crunchy PostgreSQL Operator (apiserver, operator, scheduler..etc) | -| `pgo_installation_name` | devtest | **Required** | The name of the PGO installation. | -| `pgo_noauth_routes` | | | Configures URL routes with mTLS and HTTP BasicAuth disabled. | -| `pgo_operator_namespace` | pgo | **Required** | Set to configure the namespace where Operator will be deployed. | -| `pgo_tls_ca_store` | | | Set to add additional Certificate Authorities for Operator to trust (PEM-encoded file). | -| `pgo_tls_no_verify` | false | | Set to configure Operator to verify TLS certificates. | -| `reconcile_rbac` | true | | Determines whether or not the PostgreSQL Operator will granted the permissions needed to reconcile RBAC within targeted namespaces. 
| -| `scheduler_timeout` | 3600 | **Required** | Set to a value in seconds to configure the `pgo-scheduler` timeout threshold when waiting for schedules to complete. | -| `service_type` | ClusterIP | | Set to configure the type of Kubernetes service provisioned on all newly created clusters. | -| `sync_replication` | false | | If set to `true` will automatically enable synchronous replication in new PostgreSQL clusters. | - -## Storage Settings - -The store configuration options defined in this section can be used to specify -the storage configurations that are used by the PostgreSQL Operator. - -## Storage Configuration Options - -Kubernetes and OpenShift offer support for a wide variety of different storage -types and we provide suggested configurations for different environments. These -storage types can be modified or removed as needed, while additional storage -configurations can also be added to meet the specific storage requirements for -your PostgreSQL clusters. - -The following storage variables are utilized to add or modify operator storage -configurations in the with the installer: - -| Name | Required | Description | -|------|----------|-------------| -| `storage_name` | Yes | Set to specify a name for the storage configuration. | -| `storage_access_mode` | Yes | Set to configure the access mode of the volumes created when using this storage definition. | -| `storage_size` | Yes | Set to configure the size of the volumes created when using this storage definition. | -| `storage_class` | Required when using the `dynamic` storage type | Set to configure the storage class name used when creating dynamic volumes. | -| `storage_supplemental_groups` | Required when using NFS storage | Set to configure any supplemental groups that should be added to security contexts on newly created clusters. | -| `storage_type` | Yes | Set to either `create` or `dynamic` to configure the operator to create persistent volumes or have them created dynamically by a storage class. | - -The ID portion of storage prefix for each variable name above should be an -integer that is used to group the various storage variables into a single -storage configuration. - -### Example Storage Configuration - -```yaml -storage3_name: 'nfsstorage' -storage3_access_mode: 'ReadWriteMany' -storage3_size: '1G' -storage3_type: 'create' -storage3_supplemental_groups: 65534 -``` - -As this example storage configuration shows, integer `3` is used as the ID for -each of the `storage` variables, which together form a single storage -configuration called `nfsstorage`. This approach allows different storage -configurations to be created by defining the proper `storage` variables with a -unique ID for each required storage configuration. - -### PostgreSQL Cluster Storage Defaults - -You can specify the default storage to use for PostgreSQL, pgBackRest, and other -elements that require storage that can outlast the lifetime of a Pod. While the -PostgreSQL Operator defaults to using `hostpathstorage` to work with -environments that are typically used to test, we recommend using one of the -other storage classes in production deployments. - -| Name | Default | Required | Description | -|------|---------|----------|-------------| -| `backrest_storage` | hostpathstorage | **Required** | Set the value of the storage configuration to use for the pgbackrest shared repository deployment created when a user specifies pgbackrest to be enabled on a cluster. 
| -| `backup_storage` | hostpathstorage | **Required** | Set the value of the storage configuration to use for backups, including the storage for pgbackrest repo volumes. | -| `primary_storage` | hostpathstorage | **Required** | Set to configure which storage definition to use when creating volumes used by PostgreSQL primaries on all newly created clusters. | -| `replica_storage` | hostpathstorage | **Required** | Set to configure which storage definition to use when creating volumes used by PostgreSQL replicas on all newly created clusters. | -| `wal_storage` | | | Set to configure which storage definition to use when creating volumes used for PostgreSQL Write-Ahead Log. | - -#### Example Defaults - -```yaml -backrest_storage: 'nfsstorage' -backup_storage: 'nfsstorage' -primary_storage: 'nfsstorage' -replica_storage: 'nfsstorage' -``` - -With the configuration shown above, the `nfsstorage` storage configuration would -be used by default for the various containers created for a PG cluster -(i.e. containers for the primary DB, replica DB's, backups and/or `pgBackRest`). - -### Considerations for Multi-Zone Cloud Environments - -When using the Operator in a Kubernetes cluster consisting of nodes that span -multiple zones, special consideration must be taken to ensure all pods and the -volumes they require are scheduled and provisioned within the same zone. Specifically, -being that a pod is unable mount a volume that is located in another zone, any -volumes that are dynamically provisioned must be provisioned in a topology-aware -manner according to the specific scheduling requirements for the pod. For instance, -this means ensuring that the volume containing the database files for the primary -database in a new PostgreSQL cluster is provisioned in the same zone as the node -containing the PostgreSQL primary pod that will be using it. 
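-
-As an illustrative sketch (the provisioner shown is only an example; substitute
-the CSI driver used in your environment), a topology-aware StorageClass
-typically sets `volumeBindingMode: WaitForFirstConsumer` so that each volume is
-provisioned in the same zone as the node the Pod is scheduled to:
-
-```yaml
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: zone-aware-storage
-provisioner: ebs.csi.aws.com  # example CSI provisioner
-volumeBindingMode: WaitForFirstConsumer
-```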
- -### Default Storage Configuration Types - -#### Default StorageClass - -| Name | Value | -|------|-------| -| storage1_name | default | -| storage1_access_mode | ReadWriteOnce | -| storage1_size | 1G | -| storage1_type | dynamic | - -#### Host Path Storage - -| Name | Value | -|------|-------| -| storage2_name | hostpathstorage | -| storage2_access_mode | ReadWriteMany | -| storage2_size | 1G | -| storage2_type | create | - -#### NFS Storage - -| Name | Value | -|------|-------| -| storage3_name | nfsstorage | -| storage3_access_mode | ReadWriteMany | -| storage3_size | 1G | -| storage3_type | create | -| storage3_supplemental_groups | 65534 | - -#### NFS Storage Red - -| Name | Value | -|------|-------| -| storage4_name | nfsstoragered | -| storage4_access_mode | ReadWriteMany | -| storage4_size | 1G | -| storage4_match_labels | crunchyzone=red | -| storage4_type | create | -| storage4_supplemental_groups | 65534 | - -#### StorageOS - -| Name | Value | -|------|-------| -| storage5_name | storageos | -| storage5_access_mode | ReadWriteOnce | -| storage5_size | 5Gi | -| storage5_type | dynamic | -| storage5_class | fast | - -#### Primary Site - -| Name | Value | -|------|-------| -| storage6_name | primarysite | -| storage6_access_mode | ReadWriteOnce | -| storage6_size | 4G | -| storage6_type | dynamic | -| storage6_class | primarysite | - -#### Alternate Site - -| Name | Value | -|------|-------| -| storage7_name | alternatesite | -| storage7_access_mode | ReadWriteOnce | -| storage7_size | 4G | -| storage7_type | dynamic | -| storage7_class | alternatesite | - -#### GCE - -| Name | Value | -|------|-------| -| storage8_name | gce | -| storage8_access_mode | ReadWriteOnce | -| storage8_size | 300M | -| storage8_type | dynamic | -| storage8_class | standard | - -#### Rook - -| Name | Value | -|------|-------| -| storage9_name | rook | -| storage9_access_mode | ReadWriteOnce | -| storage9_size | 1Gi | -| storage9_type | dynamic | -| storage9_class | rook-ceph-block | - -## Pod Anti-affinity Settings -This will set the default pod anti-affinity for the deployed PostgreSQL -clusters. Pod Anti-Affinity is set to determine where the PostgreSQL Pods are -deployed relative to each other There are three levels: - -- required: Pods *must* be scheduled to different Nodes. If a Pod cannot be - scheduled to a different Node from the other Pods in the anti-affinity - group, then it will not be scheduled. -- preferred (default): Pods *should* be scheduled to different Nodes. There is - a chance that two Pods in the same anti-affinity group could be scheduled to - the same node -- disabled: Pods do not have any anti-affinity rules - -The `POD_ANTI_AFFINITY` label sets the Pod anti-affinity for all of the Pods -that are managed by the Operator in a PostgreSQL cluster. In addition to the -PostgreSQL Pods, this also includes the pgBackRest repository and any -pgBouncer pods. By default, the pgBackRest and pgBouncer pods inherit the -value of `POD_ANTI_AFFINITY`, but one can override the default by setting -the `POD_ANTI_AFFINITY_PGBACKREST` and `POD_ANTI_AFFINITY_PGBOUNCER` variables -for pgBackRest and pgBouncer respectively - -| Name | Default | Required | Description | -|------|---------|----------|-------------| -| `pod_anti_affinity` | preferred | | This will set the default pod anti-affinity for the deployed PostgreSQL clusters. | -| `pod_anti_affinity_pgbackrest` | | | This will set the default pod anti-affinity for the pgBackRest pods. 
| -| `pod_anti_affinity_pgbouncer` | | | This will set the default pod anti-affinity for the pgBouncer pods. | - -## Understanding `pgo_operator_namespace` & `namespace` - -The Crunchy PostgreSQL Operator can be configured to be deployed and manage a single -namespace or manage several namespaces. The following are examples of different types -of deployment models: - -### Single Namespace - -To deploy the Crunchy PostgreSQL Operator to work with a single namespace (in this example -our namespace is named `pgo`), configure the following settings: - -```yaml -pgo_operator_namespace: 'pgo' -namespace: 'pgo' -``` - -### Multiple Namespaces - -To deploy the Crunchy PostgreSQL Operator to work with multiple namespaces (in this example -our namespaces are named `pgo`, `pgouser1` and `pgouser2`), configure the following settings: - -```yaml -pgo_operator_namespace: 'pgo' -namespace: 'pgouser1,pgouser2' -``` - -## Deploying Multiple Operators - -The 4.0 release of the Crunchy PostgreSQL Operator allows for multiple operator deployments in the same cluster. -To install the Crunchy PostgreSQL Operator to multiple namespaces, it's recommended to have an configuration file -for each deployment of the operator. - -For each operator deployment the following variables should be configured uniquely for each install. - -For example, operator could be deployed twice by changing the `pgo_operator_namespace` and `namespace` for those -deployments: - -Install A would deploy operator to the `pgo` namespace and it would manage the `pgo` target namespace. - -```yaml -# Config A -pgo_operator_namespace: 'pgo' -namespace: 'pgo' -... -``` - -Install B would deploy operator to the `pgo2` namespace and it would manage the `pgo2` and `pgo3` target namespaces. -```yaml -# Config B -pgo_operator_namespace: 'pgo2' -namespace: 'pgo2,pgo3' -... -``` - -Each install of the operator will create a corresponding directory in `$HOME/.pgo/` which will contain -the TLS and `pgouser` client credentials. diff --git a/docs/content/installation/metrics/_index.md b/docs/content/installation/metrics/_index.md deleted file mode 100644 index 19ad2dccaf..0000000000 --- a/docs/content/installation/metrics/_index.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: "PostgreSQL Operator Monitoring" -date: -draft: false -weight: 60 ---- - -The PostgreSQL Operator Monitoring infrastructure is a fully integrated solution for monitoring -and visualizing metrics captured from PostgreSQL clusters created using the PostgreSQL Operator. -By leveraging [pgMonitor][] to configure and integrate -the various tools, components and metrics needed to effectively monitor PostgreSQL clusters, -the PostgreSQL Operator Monitoring infrastructure provides an powerful and easy-to-use solution -to effectively monitor and visualize pertinent PostgreSQL database and container metrics. 
-Included in the monitoring infrastructure are the following components: - -- [pgMonitor][] - Provides the configuration -needed to enable the effective capture and visualization of PostgreSQL database metrics using -the various tools comprising the PostgreSQL Operator Monitoring infrastructure -- [Grafana](https://grafana.com/) - Enables visual dashboard capabilities for monitoring -PostgreSQL clusters, specifically using Crunchy PostgreSQL Exporter data stored within Prometheus -- [Prometheus](https://prometheus.io/) - A multi-dimensional data model with time series data, -which is used in collaboration with the Crunchy PostgreSQL Exporter to provide and store -metrics -- [Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/) - Handles alerts -sent by Prometheus by deduplicating, grouping, and routing them to reciever integrations. - -When installing the monitoring infrastructure, various configuration options and settings -are available to tailor the installation according to your needs. For instance, custom dashboards -and datasources can be utilized with Grafana, while custom scrape configurations can be utilized -with Promtheus. Please see the -[monitoring configuration reference](<{{< relref "/installation/metrics/metrics-configuration.md">}}>) -for additional details. - -By leveraging the various installation methods described in this section, the PostgreSQL Operator -Metrics infrastructure can be deployed alongside the PostgreSQL Operator. There are several -different ways to install and deploy the -[PostgreSQL Operator Monitoring infrastructure](https://www.crunchydata.com/developers/download-postgres/containers/postgres-operator) -based upon your use case. - -For the vast majority of use cases, we recommend using the -[PostgreSQL Operator Monitoring Installer]({{< relref "/installation/metrics/postgres-operator-metrics.md" >}}), -which uses the `pgo-deployer` container to set up all of the objects required to -run the PostgreSQL Operator Monitoring infrastructure. -Additionally, [Ansible](<{{< relref "/installation/metrics/metrics-configuration.md">}}>) and -[Helm](<{{< relref "/installation/metrics/other/ansible">}}>) installers are available. - -Before selecting your installation method, it's important that you first read -the [prerequisites]({{< relref "/installation/metrics/metrics-prerequisites.md" >}}) for your -deployment environment to ensure that your setup meets the needs for installing -the PostgreSQL Operator Monitoring infrastructure. - -[pgMonitor]: https://github.com/CrunchyData/pgmonitor diff --git a/docs/content/installation/metrics/metrics-configuration.md b/docs/content/installation/metrics/metrics-configuration.md deleted file mode 100644 index 7d343480cf..0000000000 --- a/docs/content/installation/metrics/metrics-configuration.md +++ /dev/null @@ -1,130 +0,0 @@ ---- -title: "Monitoring Configuration Reference" -date: -draft: false -weight: 30 ---- - -# PostgreSQL Operator Monitoring Installer Configuration - -When installing the PostgreSQL Operator Monitoring infrastructure you have various configuration options available, which -are defined below. - -## General Configuration - -These variables affect the general configuration of PostgreSQL Operator Monitoring. - -| Name | Default | Required | Description | -|------|---------|----------|-------------| -| `alertmanager_log_level` | info | | Set the log level for Alertmanager logging. 
| -| `alertmanager_service_type` | ClusterIP | **Required** | How to [expose][k8s-service-type] the Alertmanager service. | -| `alertmanager_storage_access_mode` | ReadWriteOnce | **Required** | Set to the access mode used by the configured storage class for Alertmanager persistent volumes. | -| `alertmanager_storage_class_name` | fast | | Set to the name of the storage class used when creating Alertmanager persistent volumes. Omit if not using storage classes. | -| `alertmanager_supplemental_groups` | 65534 | | Set to configure any supplemental groups that should be added to security contexts for Alertrmanager. | -| `alertmanager_volume_size` | 1Gi | **Required** | Set to the size of persistent volume to create for Alertmanager. | -| `create_rbac` | true | **Required** | Set to true if the installer should create the RBAC resources required to run the PostgreSQL Operator Monitoring infrastructure. | -| `db_port` | 5432 | **Required** | Set to configure the PostgreSQL port used by all PostgreSQL clusters. | -| `delete_metrics_namespace` | false | | Set to configure whether or not the metrics namespace (defined using variable `metrics_namespace`) is deleted when uninstalling the monitoring infrastructure. | -| `disable_fsgroup` | false | | Set to `true` for deployments where you do not want to have the default PostgreSQL fsGroup (26) set. The typical usage is in OpenShift environments that have a `restricted` Security Context Constraints. | -| `grafana_admin_password` | admin | **Required** | Set to configure the login password for the Grafana administrator. | -| `grafana_admin_username` | admin | **Required** | Set to configure the login username for the Grafana administrator. | -| `grafana_install` | true | **Required** | Set to true to install Grafana to visualize metrics. | -| `grafana_service_type` | ClusterIP | **Required** | How to [expose][k8s-service-type] the Grafana service. | -| `grafana_storage_access_mode` | ReadWriteOnce | **Required** | Set to the access mode used by the configured storage class for Grafana persistent volumes. | -| `grafana_storage_class_name` | fast | | Set to the name of the storage class used when creating Grafana persistent volumes. Omit if not using storage classes. | -| `grafana_supplemental_groups` | 65534 | | Set to configure any supplemental groups that should be added to security contexts for Grafana. | -| `grafana_volume_size` | 1Gi | **Required** | Set to the size of persistent volume to create for Grafana. | -| `metrics_image_pull_secret` | | | Name of a Secret containing credentials for container image registries. | -| `metrics_image_pull_secret_manifest` | | | Provide a path to the image Secret manifest to be installed in the metrics namespace. | -| `metrics_namespace` | 1G | **Required** | The namespace that should be created (if it doesn't already exist) and utilized for installation of the Matrics infrastructure. | -| `pgbadgerport` | 10000 | **Required** | Set to configure the port used by pgbadger in any PostgreSQL clusters. | -| `prometheus_install` | false | **Required** | Set to true to install Promotheus in order to capture metrics exported from PostgreSQL clusters. | -| `prometheus_service_type` | true | **Required** | How to [expose][k8s-service-type] the Prometheus service. | -| `prometheus_storage_access_mode` | ReadWriteOnce | **Required** | Set to the access mode used by the configured storage class for Prometheus persistent volumes. 
| -| `prometheus_storage_class_name` | fast | | Set to the name of the storage class used when creating Prometheus persistent volumes. Omit if not using storage classes. | -| `prometheus_supplemental_groups` | 65534 | | Set to configure any supplemental groups that should be added to security contexts for Prometheus. | -| `prometheus_volume_size` | 1Gi | **Required** | Set to the size of persistent volume to create for Prometheus. | - -## Custom Configuration - -When installing the PostgreSQL Operator Monitoring infrastructure, it is possible to further customize -the various Deployments included (e.g. Alertmanager, Grafana, and/or Prometheus) using custom configuration files. -Specifically, by pointing the PostgreSQL Operator Monitoring installer to one or more ConfigMaps -containing any desired custom configuration settings, those settings will then be applied during -configuration and installation of the PostgreSQL Operator Monitoring infrastructure. - -The specific custom configuration settings available are as follows: - -| Name | Default | Required | Description | -|------|---------|----------|-------------| -| `alertmanager_custom_config` | alertmanager-config | | The name of a ConfigMap containing a custom `alertmanager.yml` configuration file. | -| `alertmanager_custom_rules_config` | alertmanager-rules-config | | The name of a ConfigMap container custom alerting rules for Prometheus. | -| `grafana_datasources_custom_config` | grafana-datasources | | The name of a ConfigMap containing custom Grafana datasources. | -| `grafana_dashboards_custom_config` | grafana-dashboards | | The name of a ConfigMap containing custom Grafana dashboards. | -| `prometheus_custom_configmap` | crunchy-prometheus | | The name of a ConfigMap containing a custom `prometheus.yml` configuration file. | - -_Please note that when using custom ConfigMaps per the above configuration settings, any defaults -for the specific configuration being customized are no longer applied._ - -## Using Alertmanager - -The Alertmanager deployment requires a custom configuration file to configure reciever -integrations that are supported by Prometheus Alertmanager. The installer will create -a configmap containing an example Alertmanager configuration file created by -the pgMonitor project, this file can be found in the [pgMonitor](https://github.com/CrunchyData/pgmonitor/blob/master/prometheus/crunchy-alertmanager.yml) -repository. This example file, along with the [Alertmanager configuration docs](https://prometheus.io/docs/alerting/latest/configuration/), -will help you to configure alerting for you specific use cases. - -{{% notice tip %}} -Alertmanager cannot be installed without also deploying the Crunchy Prometheus deployment. -Once both are deployed, Prometheus is automatically configured to send alerts to -the Alertmanager. 
-{{% /notice %}} - -## Using Red Hat Certified Containers & Custom Images - -By default, the PostgreSQL Operator Monitoring installer will deploy the official Grafana, -Prometheus, and Alertmanager containers that are publicly available on [Docker Hub](https://hub.docker.com/): - -- https://hub.docker.com/r/grafana/grafana -- https://hub.docker.com/r/prom/prometheus -- https://hub.docker.com/r/prom/alertmanager - -However, if Red Hat certified containers are needed, the following certified images have also -been verified with the PostgreSQL Operator Metrics infrastructure, and can therefore be -utilized instead: - -- https://catalog.redhat.com/software/containers/openshift4/ose-grafana/5cdc17d55a13467289f58321 -- https://catalog.redhat.com/software/containers/openshift4/ose-prometheus/5cdc1e585a13467289f5841a -- https://catalog.redhat.com/software/containers/openshift4/ose-prometheus-alertmanager/5cdc1cfbbed8bd5717d60b17 - -The following configuration settings can be applied to properly configure the image prefix, name, -and tag as needed to use the Red Hat certified containers: - -| Name | Default | Required | Description | -|------|---------|----------|-------------| -| `alertmanager_image_prefix` | prom | **Required** | Configures the image prefix to use for the Alertmanager container. | -| `alertmanager_image_name` | alertmanager | **Required** | Configures the image name to use for the Alertmanager container. | -| `alertmanager_image_tag` | v0.21.0 | **Required** | Configures the image tag to use for the Alertmanager container. | -| `grafana_image_prefix` | grafana | **Required** | Configures the image prefix to use for the Grafana container. | -| `grafana_image_name` | grafana | **Required** | Configures the image name to use for the Grafana container. | -| `grafana_image_tag` | 6.7.4 | **Required** | Configures the image tag to use for the Grafana container. | -| `prometheus_image_prefix` | prom | **Required** | Configures the image prefix to use for the Prometheus container. | -| `prometheus_image_name` | prometheus | **Required** | Configures the image name to use for the Prometheus container. | -| `prometheus_image_tag` | v2.20.0 | **Required** | Configures the image tag to use for the Prometheus container. | - -Additionally, these same settings can be utilized as needed to support custom image names, -tags, and additional container registries.
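For example, with the Ansible or Helm installers these settings are overridden in `values.yaml`. The snippet below is only a sketch: the `registry.redhat.io/openshift4` image prefix and the tags shown are assumptions, so confirm the exact repository paths and tags against the Red Hat catalog entries listed above.

```
alertmanager_image_prefix: "registry.redhat.io/openshift4"
alertmanager_image_name: "ose-prometheus-alertmanager"
alertmanager_image_tag: "v4.7"  # illustrative tag only
grafana_image_prefix: "registry.redhat.io/openshift4"
grafana_image_name: "ose-grafana"
grafana_image_tag: "v4.7"  # illustrative tag only
prometheus_image_prefix: "registry.redhat.io/openshift4"
prometheus_image_name: "ose-prometheus"
prometheus_image_tag: "v4.7"  # illustrative tag only
```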
- -## Helm Only Configuration Settings - -When using Helm, the following settings can be defined to control the image prefix and image tag -utilized for the `pgo-deployer` container that is run to install, update or uninstall the -PostgreSQL Operator Monitoring infrastructure: - -| Name | Default | Required | Description | -|------|---------|----------|-------------| -| `pgo_image_prefix` | registry.developers.crunchydata.com/crunchydata | **Required** | Configures the image prefix used by the `pgo-deployer` container | -| `pgo_image_tag` | {{< param centosBase >}}-{{< param operatorVersion >}} | **Required** | Configures the image tag used by the `pgo-deployer` container | - -[k8s-service-type]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types \ No newline at end of file diff --git a/docs/content/installation/metrics/metrics-prerequisites.md b/docs/content/installation/metrics/metrics-prerequisites.md deleted file mode 100644 index 7c9cb8a19e..0000000000 --- a/docs/content/installation/metrics/metrics-prerequisites.md +++ /dev/null @@ -1,31 +0,0 @@ ---- -title: "Monitoring Prerequisites" -date: -draft: false -weight: 10 ---- - -# Prerequisites - -The following is required prior to installing PostgreSQL Operator Monitoring. - -## Environment - -PostgreSQL Operator Monitoring is tested in the following environments: - -* Kubernetes v1.13+ -* Red Hat OpenShift v3.11+ -* Red Hat OpenShift v4.3+ -* VMWare Enterprise PKS 1.3+ - -### Application Ports - -The PostgreSQL Operator Monitoring installer deploys different services as needed to support -PostgreSQL Operator Monitoring collection and monitoring. Below is a list of the applications -and their default Service ports. - -| Service | Port | -| --- | --- | -| Grafana | 3000 | -| Prometheus | 9090 | -| Alertmanager | 9093 | diff --git a/docs/content/installation/metrics/other/_index.md b/docs/content/installation/metrics/other/_index.md deleted file mode 100644 index 368221d1c7..0000000000 --- a/docs/content/installation/metrics/other/_index.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "Other Methods" -date: -draft: false -weight: 100 ---- - -This section provides additional methods for installing the PostgreSQL Operator -Metrics infrastructure. diff --git a/docs/content/installation/metrics/other/ansible/_index.md b/docs/content/installation/metrics/other/ansible/_index.md deleted file mode 100644 index cede0bb875..0000000000 --- a/docs/content/installation/metrics/other/ansible/_index.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: "Ansible" -date: -draft: false -weight: 10 ---- - -# Crunchy Data PostgreSQL Operator Monitoring Playbooks - -The Crunchy Data PostgreSQL Operator Monitoring Playbooks contain [Ansible](https://www.ansible.com/) -roles for installing and managing the [Crunchy Data PostgreSQL Operator Monitoring infrastructure]({{< relref "/installation/other/ansible/installing-operator.md" >}}). 
- -## Features - -The playbooks provided allow users to: - -* install PostgreSQL Operator Monitoring on Kubernetes and OpenShift -* install PostgreSQL Operator from a Linux, Mac or Windows (Ubuntu subsystem) host -* support a variety of deployment models - -## Resources - -* [Ansible](https://www.ansible.com/) -* [Crunchy Data](https://www.crunchydata.com/) -* [Crunchy Data PostgreSQL Operator Project](https://github.com/CrunchyData/postgres-operator) diff --git a/docs/content/installation/metrics/other/ansible/installing-metrics.md b/docs/content/installation/metrics/other/ansible/installing-metrics.md deleted file mode 100644 index 254eadb881..0000000000 --- a/docs/content/installation/metrics/other/ansible/installing-metrics.md +++ /dev/null @@ -1,128 +0,0 @@ ---- -title: "Installing the Monitoring Infrastructure" -date: -draft: false -weight: 22 ---- - -# Installing the Monitoring Infrastructure - -PostgreSQL clusters created by the Crunchy PostgreSQL Operator can optionally be -configured to serve performance metrics via Prometheus Exporters. The metric exporters -included in the database pod serve realtime metrics for the database container. In -order to store and view this data, Grafana and Prometheus are required. The Crunchy -PostgreSQL Operator does not create this infrastructure, however, they can be installed -using the provided Ansible roles. - -## Prerequisites - -The following assumes the proper [prerequisites are satisfied][ansible-prerequisites] -we can now install the PostgreSQL Operator. - -## Installing on Linux - -On a Linux host with Ansible installed we can run the following command to install -the Metrics stack: - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=install-metrics main.yml -``` - -## Installing on macOS - -On a macOS host with Ansible installed we can run the following command to install -the Metrics stack: - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=install-metrics main.yml -``` - -## Installing on Windows - -On a Windows host with the Ubuntu subsystem we can run the following commands to install -the Metrics stack: - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=install-metrics main.yml -``` - -## Verifying the Installation - -This may take a few minutes to deploy. To check the status of the deployment run -the following: - -```bash -# Kubernetes -kubectl get deployments -n -kubectl get pods -n - -# OpenShift -oc get deployments -n -oc get pods -n -``` - -## Verify Alertmanager - -In a separate terminal we need to setup a port forward to the Crunchy Alertmanager deployment -to ensure connection can be made outside of the cluster: - -```bash -# If deployed to Kubernetes -kubectl port-forward -n svc/crunchy-alertmanager 9093:9093 - -# If deployed to OpenShift -oc port-forward -n svc/crunchy-alertmanager 9093:9093 -``` - -In a browser navigate to `http://127.0.0.1:9093` to access the Alertmanager dashboard. - -## Verify Grafana - -In a separate terminal we need to setup a port forward to the Crunchy Grafana deployment -to ensure connection can be made outside of the cluster: - -```bash -# If deployed to Kubernetes -kubectl port-forward -n svc/crunchy-grafana 3000:3000 - -# If deployed to OpenShift -oc port-forward -n svc/crunchy-grafana 3000:3000 -``` - -In a browser navigate to `http://127.0.0.1:3000` to access the Grafana dashboard. - -{{% notice tip %}} -No metrics will be scraped if no exporters are available. 
To create a PostgreSQL -cluster with metric exporters, run the following command following installation -of the PostgreSQL Operator: - -```bash -pgo create cluster --metrics --namespace= -``` -{{% /notice %}} - -## Verify Prometheus - -In a separate terminal we need to setup a port forward to the Crunchy Prometheus deployment -to ensure connection can be made outside of the cluster: - -```bash -# If deployed to Kubernetes -kubectl port-forward -n svc/crunchy-prometheus 9090:9090 - -# If deployed to OpenShift -oc port-forward -n svc/crunchy-prometheus 9090:9090 -``` - -In a browser navigate to `http://127.0.0.1:9090` to access the Prometheus dashboard. - -{{% notice tip %}} -No metrics will be scraped if no exporters are available. To create a PostgreSQL -cluster with metric exporters run the following command: - -```bash -pgo create cluster --metrics --namespace= -``` -{{% /notice %}} - -[ansible-prerequisites]: {{< relref "/installation/metrics/other/ansible/metrics-prerequisites.md" >}} diff --git a/docs/content/installation/metrics/other/ansible/metrics-prerequisites.md b/docs/content/installation/metrics/other/ansible/metrics-prerequisites.md deleted file mode 100644 index 1e9d31164d..0000000000 --- a/docs/content/installation/metrics/other/ansible/metrics-prerequisites.md +++ /dev/null @@ -1,92 +0,0 @@ ---- -title: "Metrics Prerequisites" -date: -draft: false -weight: 10 ---- - -# Prerequisites - -The following is required prior to installing the Crunchy PostgreSQL Operator Monitoring infrastructure using Ansible: - -* [postgres-operator playbooks](https://github.com/CrunchyData/postgres-operator/) source code for the target version -* Ansible 2.9.0+ - -## Kubernetes Installs - -* Kubernetes v1.11+ -* Cluster admin privileges in Kubernetes -* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) configured to communicate with Kubernetes - -## OpenShift Installs - -* OpenShift v3.11+ -* Cluster admin privileges in OpenShift -* [oc](https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html) configured to communicate with OpenShift - -## Installing from a Windows Host - -If the Crunchy PostgreSQL Operator is being installed from a Windows host the following -are required: - -* [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install-win10) -* [Ubuntu for Windows](https://www.microsoft.com/en-us/p/ubuntu/9nblggh4msv6) - -## Permissions - -The installation of the Crunchy PostgreSQL Operator Monitoring infrastructure requires elevated -privileges, as the following objects need to be created: - -* RBAC for use by Prometheus and/or Grafana -* The metrics namespace - -## Obtaining Operator Ansible Role - -* Clone the [postgres-operator project](https://github.com/CrunchyData/postgres-operator) - -### GitHub Installation - -All necessary files (inventory.yaml, values.yaml, main playbook and roles) can be found in the -[`installers/metrics/ansible`](https://github.com/CrunchyData/postgres-operator/tree/master/installers/metrics/ansible) -directory in the [source code](https://github.com/CrunchyData/postgres-operator). - -## Configuring the Inventory File - -The `inventory.yaml` file included with the PostgreSQL Operator Monitoring Playbooks allows installers -to configure how Ansible will connect to your Kubernetes cluster. 
This file -should contain the following connection variables: - -{{% notice tip %}} -You will have to uncomment either the `kubernetes` or `openshift` variables -if you are using them for your environment. Both sets of variables cannot -be used at the same time. The unused variables should be left commented out or removed. -{{% /notice %}} - - -| Name | Default | Required | Description | -|-----------------------------------|-------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `kubernetes_context` | | **Required**, if deploying to Kubernetes | When deploying to Kubernetes, set to configure the context name of the kubeconfig to be used for authentication. | -| `openshift_host` | | **Required**, if deploying to OpenShift | When deploying to OpenShift, set to configure the hostname of the OpenShift cluster to connect to. | -| `openshift_password` | | **Required**, if deploying to OpenShift | When deploying to OpenShift, set to configure the password used for login. | -| `openshift_skip_tls_verify` | | **Required**, if deploying to OpenShift | When deploying to OpenShift, set to ignore the integrity of TLS certificates for the OpenShift cluster. | -| `openshift_token` | | **Required**, if deploying to OpenShift | When deploying to OpenShift, set to configure the token used for login (when not using username/password authentication). | -| `openshift_user` | | **Required**, if deploying to OpenShift | When deploying to OpenShift, set to configure the username used for login. | - -{{% notice tip %}} -To retrieve the `kubernetes_context` value for Kubernetes installs, run the following command: - -```bash -kubectl config current-context -``` -{{% /notice %}} - -## Configuring - `values.yaml` - -The `values.yaml` file contains all of the configuration parameters -for deploying the PostgreSQL Operator Monitoring infrastructure. -The [example file](https://github.com/CrunchyData/postgres-operator/blob/v{{< param operatorVersion >}}/installers/metrics/ansible/values.yaml) -contains defaults that should work in most Kubernetes environments, but it may -require some customization. - -For a detailed description of each configuration parameter, please read the -[PostgreSQL Operator Monitoring Installer Configuration Reference](<{{< relref "/installation/metrics/metrics-configuration.md">}}>) diff --git a/docs/content/installation/metrics/other/ansible/uninstalling-metrics.md b/docs/content/installation/metrics/other/ansible/uninstalling-metrics.md deleted file mode 100644 index 137eb34a88..0000000000 --- a/docs/content/installation/metrics/other/ansible/uninstalling-metrics.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: "Uninstalling the Monitoring Infrastructure" -date: -draft: false -weight: 41 ---- - -# Uninstalling the Monitoring Infrastructure - -The following assumes the proper [prerequisites are satisfied][ansible-prerequisites], -so we can now uninstall the PostgreSQL Operator Monitoring infrastructure. - -First, it is recommended to use the playbooks tagged with the same version -of the Metrics infrastructure currently deployed.
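For example, assuming the playbooks were obtained by cloning the upstream repository, you can check out the tag matching the deployed version before running the uninstall:

```bash
# Check out the playbooks at the tag that matches the deployed Metrics infrastructure
git clone https://github.com/CrunchyData/postgres-operator.git
cd postgres-operator
git checkout v{{< param operatorVersion >}}
cd installers/metrics/ansible
```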
- -With the correct playbooks acquired and prerequisites satisfied, simply run -the following command: - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=uninstall-metrics main.yml -``` - -[ansible-prerequisites]: {{< relref "/installation/metrics/other/ansible/metrics-prerequisites.md" >}} diff --git a/docs/content/installation/metrics/other/ansible/updating-metrics.md b/docs/content/installation/metrics/other/ansible/updating-metrics.md deleted file mode 100644 index 4c8387093b..0000000000 --- a/docs/content/installation/metrics/other/ansible/updating-metrics.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: "Updating the Monitoring Infrastructure" -date: -draft: false -weight: 30 ---- - -# Updating the Monitoring Infrastructure - -Updating the PostgreSQL Operator Monitoring infrastrcutre is essential to the lifecycle management -of the service. Using the `update-metrics` flag will: - -* Update and redeploy the monitoring infrastructure deployments -* Recreate configuration maps and/or secrets used by the monitoring infrastructure -* Remove any deprecated objects -* Allow administrators to change settings configured in the `values.yaml` - -The following assumes the proper [prerequisites are satisfied][ansible-prerequisites] -we can now update the PostgreSQL Operator. - -The commands should be run in the directory where the Crunchy PostgreSQL Operator -playbooks is stored. See the `ansible` directory in the Crunchy PostgreSQL Operator -project for the inventory file, values file, main playbook and ansible roles. - -## Updating on Linux - -On a Linux host with Ansible installed we can run the following command to update -the PostgreSQL Operator: - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=update --ask-become-pass main.yml -``` - -## Updating on macOS - -On a macOS host with Ansible installed we can run the following command to update -the PostgreSQL Operator. - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=update --ask-become-pass main.yml -``` - -## Updating on Windows Ubuntu Subsystem - -On a Windows host with an Ubuntu subsystem we can run the following commands to update -the PostgreSQL Operator. - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=update --ask-become-pass main.yml -``` - -## Verifying the Update - -This may take a few minutes to deploy. To check the status of the deployment run -the following: - -```bash -# Kubernetes -kubectl get deployments -n -kubectl get pods -n - -# OpenShift -oc get deployments -n -oc get pods -n -``` - -## Verify Alertmanager - -In a separate terminal we need to setup a port forward to the Crunchy Alertmanager deployment -to ensure connection can be made outside of the cluster: - -```bash -# If deployed to Kubernetes -kubectl port-forward -n svc/crunchy-alertmanager 9093:9093 - -# If deployed to OpenShift -oc port-forward -n svc/crunchy-alertmanager 9093:9093 -``` - -In a browser navigate to `http://127.0.0.1:9093` to access the Alertmanager dashboard. - -## Verify Grafana - -In a separate terminal we need to setup a port forward to the Crunchy Grafana deployment -to ensure connection can be made outside of the cluster: - -```bash -# If deployed to Kubernetes -kubectl port-forward -n svc/crunchy-grafana 3000:3000 - -# If deployed to OpenShift -oc port-forward -n svc/crunchy-grafana 3000:3000 -``` - -In a browser navigate to `http://127.0.0.1:3000` to access the Grafana dashboard. - -{{% notice tip %}} -No metrics will be scraped if no exporters are available. 
To create a PostgreSQL -cluster with metric exporters, run the following command following installation -of the PostgreSQL Operator: - -```bash -pgo create cluster --metrics --namespace= -``` -{{% /notice %}} - -## Verify Prometheus - -In a separate terminal we need to setup a port forward to the Crunchy Prometheus deployment -to ensure connection can be made outside of the cluster: - -```bash -# If deployed to Kubernetes -kubectl port-forward -n svc/crunchy-prometheus 9090:9090 - -# If deployed to OpenShift -oc port-forward -n svc/crunchy-prometheus 9090:9090 -``` - -In a browser navigate to `http://127.0.0.1:9090` to access the Prometheus dashboard. - -{{% notice tip %}} -No metrics will be scraped if no exporters are available. To create a PostgreSQL -cluster with metric exporters run the following command: - -```bash -pgo create cluster --metrics --namespace= -``` -{{% /notice %}} - -[ansible-prerequisites]: {{< relref "/installation/metrics/other/ansible/metrics-prerequisites.md" >}} diff --git a/docs/content/installation/metrics/other/helm-metrics.md b/docs/content/installation/metrics/other/helm-metrics.md deleted file mode 100644 index fb94918003..0000000000 --- a/docs/content/installation/metrics/other/helm-metrics.md +++ /dev/null @@ -1,137 +0,0 @@ ---- -title: "Monitoring Helm Chart" -date: -draft: false -weight: 20 ---- - -# The PostgreSQL Operator Monitoring Helm Chart - -## Overview - -The PostgreSQL Operator comes with a container called `pgo-deployer` which -handles a variety of lifecycle actions for the PostgreSQL Operator Monitoring infrastructure, -including: - -- Installation -- Upgrading -- Uninstallation - -After configuring the `values.yaml` file with you configuration options, the -installer will be run using the `helm` command line tool and takes care of -setting up all of the objects required to run the PostgreSQL Operator. - -The PostgreSQL Operator Monitoring Helm chart is available in the -[Helm](https://github.com/CrunchyData/postgres-operator/tree/master/installers/metrics/helm) -directory in the PostgreSQL Operator repository. - -## Requirements - -### RBAC - -The Helm chart will create the ServiceAccount, ClusterRole, and ClusterRoleBinding -that are required to run the `pgo-deployer`. If you have already configured the -ServiceAccount and ClusterRoleBinding for the installation process (e.g. from a -previous installation), you can disable their creation using the `rbac.create` -and `serviceAccount.create` variables in the `values.yaml` file. If these options -are disabled, you must provide the name of your preconfigured ServiceAccount using -`serviceAccount.name`. - -### Namespace - -In order to install the PostgreSQL Operator using the Helm chart you will need -to first create the namespace in which the `pgo-deployer` will be run. By default, -it will run in the namespace that is provided to `helm` at the command line. - -``` -kubectl create namespace -helm install postgres-operator-metrics -n /path/to/chart_dir -``` - -### Config Map - -The `pgo-deployer` uses a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) -to pass configuration options into the installer. The values in your `values.yaml` -file will be used to populate the configuation options in the ConfigMap. - -### Configuration - `values.yaml` - -The `values.yaml` file contains all of the configuration parameters for deploying -the PostgreSQL Operator Monitoring infrastructure. 
-The [values.yaml file](https://github.com/CrunchyData/postgres-operator/blob/master/installers/metrics/helm/values.yaml) -contains the defaults that should work in most Kubernetes environments, but it may require some customization. - -For a detailed description of each configuration parameter, please read the -[PostgreSQL Operator Monitoring Installer Configuration Reference](<{{< relref "/installation/metrics/metrics-configuration.md">}}>) - -## Installation - -Once you have configured the PostgreSQL Operator Monitoring installer to your -specification, you can install the PostgreSQL Operator Monitoring infrastructure -with the following command: - -```shell -helm install -n /path/to/chart_dir -``` - -{{% notice tip %}} -Take note of the `name` used when installing, as this `name` will be used to -upgrade and uninstall the PostgreSQL Operator Monitoring infrastructure. -{{% /notice %}} - -## Upgrade and Uninstall - -Once the install has been completed using Helm, Helm will also be used to upgrade and -uninstall the PostgreSQL Operator Monitoring infrastructure. - -{{% notice tip %}} -The `name` and `namespace` in the following sections should match the options -provided at install. -{{% /notice %}} - -### Upgrade - -To make changes to your deployment of the PostgreSQL Operator Monitoring infrastructure you will use the -`helm upgrade` command. Once the configuration changes have been made to your -`values.yaml` file, you can run the following command to implement them in the -deployment: - -```shell -helm upgrade -n /path/to/updated_chart -``` - -### Uninstall - -To uninstall the PostgreSQL Operator Monitoring infrastructure you will use the `helm uninstall` command. -This will uninstall the monitoring infrastructure and clean up resources used by the `pgo-deployer`. - -```shell -helm uninstall -n -``` - -## Debugging - -When the `pgo-deployer` job does not complete successfully, the resources that -are created and normally cleaned up by Helm will be left in your -Kubernetes cluster. This will allow you to use the failed job and its logs to -debug the issue. The following command will show the logs for the `pgo-deployer` -job: - -```shell -kubectl logs -n job.batch/pgo-metrics-deploy -``` - -{{% notice tip %}} -You can also view the logs as the job is running by using the `kubectl logs -f` -follow flag: -```shell -kubectl logs -n job.batch/pgo-metrics-deploy -f -``` -{{% /notice %}} - - -These logs will provide feedback if there are any misconfigurations in your -install. Once you have finished debugging the failed job and fixed any configuration -issues, you can take steps to re-run your install, upgrade, or uninstall. By -running another command, the resources from the failed install will be cleaned up -so that a successful install can run.
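For example, here is a sketch of a debug-and-retry sequence. It assumes the chart was installed under the release name `postgres-operator-metrics` in the `pgo` namespace, as in the earlier examples; substitute your own name and namespace.

```shell
# Review the logs of the failed deployer job
kubectl logs -n pgo job.batch/pgo-metrics-deploy

# After correcting values.yaml, remove the failed release and install again;
# this also cleans up the resources left behind by the failed job
helm uninstall postgres-operator-metrics -n pgo
helm install postgres-operator-metrics -n pgo /path/to/chart_dir
```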
diff --git a/docs/content/installation/metrics/postgres-operator-metrics.md b/docs/content/installation/metrics/postgres-operator-metrics.md deleted file mode 100644 index a077862ba9..0000000000 --- a/docs/content/installation/metrics/postgres-operator-metrics.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -title: Install PostgreSQL Operator Monitoring -date: -draft: false -weight: 20 ---- - -# PostgreSQL Operator Monitoring Installer - -## Quickstart - -If you believe that all the default settings in the installation manifest work -for you, you can take a chance by running the metrics manifest directly from the -repository: - -``` -kubectl create namespace pgo -kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/metrics/kubectl/postgres-operator-metrics.yml -``` - -However, we still advise that you read onward to see how to properly configure -the PostgreSQL Operator Monitoring infrastructure. - -## Overview - -The PostgreSQL Operator comes with a container called `pgo-deployer` which -handles a variety of lifecycle actions for the PostgreSQL Operator Monitoring infrastructure, -including: - -- Installation -- Upgrading -- Uninstallation - -After configuring the Job template, the installer can be run using -[`kubectl apply`](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#apply) -and takes care of setting up all of the objects required to run the PostgreSQL -Operator. - -The installation manifest, called [`postgres-operator-metrics.yml`](https://github.com/CrunchyData/postgres-operator/blob/v{{< param operatorVersion >}}/installers/metrics/kubectl/postgres-operator-metrics.yml), is available in the [`installers/metrics/kubectl/postgres-operator-metrics.yml`](https://github.com/CrunchyData/postgres-operator/blob/v{{< param operatorVersion >}}/installers/metrics/kubectl/postgres-operator-metrics.yml) -path in the PostgreSQL Operator repository. - - -## Requirements - -### RBAC - -The `pgo-deployer` requires a [ServiceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) -and [ClusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) -to run the installation job. Both of these resources are already defined -in the `postgres-operator-metrics.yml`, but can be updated based on your specific -environmental requirements. - -By default, the `pgo-deployer` uses a ServiceAccount called `pgo-metrics-deployer-sa` -that has a ClusterRoleBinding (`pgo-metrics-deployer-crb`) with several ClusterRole -permissions. This ClusterRole is needed for the initial configuration and deployment -of the various applications comprising the monitoring infrastructure. This includes permissions -to create: - -* RBAC for use by Prometheus and/or Grafana -* The metrics namespace - -The required list of privileges are available in the -[postgres-operator-metrics.yml](https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/metrics/kubectl/postgres-operator-metrics.yml) -file: - -[https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/metrics/kubectl/postgres-operator-metrics.yml](https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml) - -If you have already configured the ServiceAccount and ClusterRoleBinding for the -installation process (e.g. 
from a previous installation), then you can remove -these objects from the `postgres-operator-metrics.yml` manifest. - -### Config Map - -The `pgo-deployer` uses a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) -to pass configuration options into the installer. The ConfigMap is defined in -the `postgres-operator-metrics.yaml` file and can be updated based on your configuration -preferences. - -### Namespaces - -By default, the PostgreSQL Operator Monitoring installer will run in the `pgo` Namespace. This can be -updated in the `postgres-operator-metrics.yml` file. **Please ensure that this namespace -exists before the job is run**. - -For example, to create the `pgo` namespace: - -``` -kubectl create namespace pgo -``` - -## Configuration - `postgres-operator-metrics.yml` - -The `postgres-operator-metrics.yml` file contains all of the configuration parameters -for deploying PostgreSQL Operator Monitoring. The [example file](https://github.com/CrunchyData/postgres-operator/blob/v{{< param operatorVersion >}}/installers/metrics/kubectl/postgres-operator-metrics.yml) -contains defaults that should work in most Kubernetes environments, but it may -require some customization. - -For a detailed description of each configuration parameter, please read the -[PostgreSQL Operator Monitoring Installer Configuration Reference](<{{< relref "/installation/metrics/metrics-configuration.md">}}>) - -#### Configuring to Update and Uninstall - -The deploy job can be used to perform different deployment actions for the -PostgreSQL Operator Monitoring infrastructure. When you run the job it will install -the monitoring infrastructure by default but you can change the deployment action to -uninstall or update. The `DEPLOY_ACTION` environment variable in the `postgres-operator-metrics.yml` -file can be set to `install-metrics`, `update-metrics`, and `uninstall-metrics`. - -### Image Pull Secrets - -If you are pulling PostgreSQL Operator Monitoring images from a private registry, you -will need to setup an -[imagePullSecret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) -with access to the registry. The image pull secret will need to be added to the -installer service account to have access. The secret will need to be created in -each namespace that the PostgreSQL Operator will be using. - -After you have configured your image pull secret in the Namespace the installer -runs in (by default, this is `pgo`), -add the name of the secret to the job yaml that you are using. You can update -the existing section like this: - -``` -apiVersion: v1 -kind: ServiceAccount -metadata: - name: pgo-metrics-deployer-sa - namespace: pgo -imagePullSecrets: - - name: -``` - -If the service account is configured without using the job yaml file, you -can link the secret to an existing service account with the `kubectl` or `oc` -clients. 
- -``` -# kubectl -kubectl patch serviceaccount -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}' -n - -# oc -oc secrets link --for=pull --namespace= -``` - -## Installation - -Once you have configured the PostgreSQL Operator Monitoring installer to your -specification, you can install the PostgreSQL Operator Monitoring infrastructure -with the following command: - -```shell -kubectl apply -f /path/to/postgres-operator-metrics.yml -``` - -## Post-Installation - -To clean up the installer artifacts, you can simply run: - -```shell -kubectl delete -f /path/to/postgres-operator-metrics.yml -``` - -Note that if you still have the ServiceAccount and ClusterRoleBinding in there, -you will need to have elevated privileges. diff --git a/docs/content/installation/other/_index.md b/docs/content/installation/other/_index.md deleted file mode 100644 index 54722b3e61..0000000000 --- a/docs/content/installation/other/_index.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "Other Methods" -date: -draft: false -weight: 50 ---- - -Though the years, we have built up several other methods for installing the -PostgreSQL Operator. The next few sections provide some alternative ways of -deploying the PostgreSQL Operator. Some of these methods are deprecated and may -be removed in a future release. diff --git a/docs/content/installation/other/ansible/_index.md b/docs/content/installation/other/ansible/_index.md deleted file mode 100644 index 0cd09a034d..0000000000 --- a/docs/content/installation/other/ansible/_index.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: "Ansible" -date: -draft: false -weight: 100 ---- - -# Crunchy Data PostgreSQL Operator Playbooks - -The Crunchy Data PostgreSQL Operator Playbooks contain [Ansible](https://www.ansible.com/) -roles for installing and managing the [Crunchy Data PostgreSQL Operator]({{< relref "/installation/other/ansible/installing-operator.md" >}}). - -## Features - -The playbooks provided allow users to: - -* install PostgreSQL Operator on Kubernetes and OpenShift -* install PostgreSQL Operator from a Linux, Mac or Windows (Ubuntu subsystem) host -* generate TLS certificates required by the PostgreSQL Operator -* support a variety of deployment models - -## Resources - -* [Ansible](https://www.ansible.com/) -* [Crunchy Data](https://www.crunchydata.com/) -* [Crunchy Data PostgreSQL Operator Project](https://github.com/CrunchyData/postgres-operator) diff --git a/docs/content/installation/other/ansible/installing-ansible.md b/docs/content/installation/other/ansible/installing-ansible.md deleted file mode 100644 index 50ab5d6584..0000000000 --- a/docs/content/installation/other/ansible/installing-ansible.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: "Installing Ansible" -date: -draft: false -weight: 20 ---- - -## Installing Ansible on Linux, macOS or Windows Ubuntu Subsystem - -To install Ansible on Linux or macOS, [see the official documentation](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#intro-installation-guide) -provided by Ansible. - -## Install Google Cloud SDK (Optional) - -If Crunchy PostgreSQL Operator is going to be installed in a Google Kubernetes -Environment the Google Cloud SDK is required. - -To install the Google Cloud SDK on Linux or macOS, see the -[official Google Cloud documentation](https://cloud.google.com/sdk/install). 
- -When installing the Google Cloud SDK on the Windows Ubuntu Subsystem, run the following -commands to install: - -```bash -wget https://sdk.cloud.google.com --output-document=/tmp/install-gsdk.sh -# Review the /tmp/install-gsdk.sh prior to running -chmod +x /tmp/install-gsdk.sh -/tmp/install-gsdk.sh -``` diff --git a/docs/content/installation/other/ansible/installing-operator.md b/docs/content/installation/other/ansible/installing-operator.md deleted file mode 100644 index 8cc82d1448..0000000000 --- a/docs/content/installation/other/ansible/installing-operator.md +++ /dev/null @@ -1,117 +0,0 @@ ---- -title: "Installing PostgreSQL Operator" -date: -draft: false -weight: 21 ---- - -# Installing - -The following assumes the proper [prerequisites are satisfied][ansible-prerequisites] -we can now install the PostgreSQL Operator. - -The commands should be run in the directory where the Crunchy PostgreSQL Operator -playbooks are stored. See the `installers/ansible` directory in the Crunchy PostgreSQL Operator -project for the inventory file, values file, main playbook and ansible roles. - -## Installing on Linux - -On a Linux host with Ansible installed we can run the following command to install -the PostgreSQL Operator: - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=install --ask-become-pass main.yml -``` - -## Installing on macOS - -On a macOS host with Ansible installed we can run the following command to install -the PostgreSQL Operator. - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=install --ask-become-pass main.yml -``` - -## Installing on Windows Ubuntu Subsystem - -On a Windows host with an Ubuntu subsystem we can run the following commands to install -the PostgreSQL Operator. - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=install --ask-become-pass main.yml -``` - -## Verifying the Installation - -This may take a few minutes to deploy. To check the status of the deployment run -the following: - -```bash -# Kubernetes -kubectl get deployments -n -kubectl get pods -n - -# OpenShift -oc get deployments -n -oc get pods -n -``` - -## Configure Environment Variables - -After the Crunchy PostgreSQL Operator has successfully been installed we will need -to configure local environment variables before using the `pgo` client. - -{{% notice info %}} - -If TLS authentication was disabled during installation, please see the [TLS Configuration Page] ({{< relref "Configuration/tls.md" >}}) for additional configuration information. - -{{% / notice %}} - -To configure the environment variables used by `pgo` run the following command: - -Note: `` should be replaced with the namespace the Crunchy PostgreSQL -Operator was deployed to. 
- -```bash -cat <> ~/.bashrc -export PGOUSER="${HOME?}/.pgo//pgouser" -export PGO_CA_CERT="${HOME?}/.pgo//client.crt" -export PGO_CLIENT_CERT="${HOME?}/.pgo//client.crt" -export PGO_CLIENT_KEY="${HOME?}/.pgo//client.key" -export PGO_APISERVER_URL='https://127.0.0.1:8443' -EOF -``` - -Apply those changes to the current session by running: - -```bash -source ~/.bashrc -``` - -## Verify `pgo` Connection - -In a separate terminal we need to setup a port forward to the Crunchy PostgreSQL -Operator to ensure connection can be made outside of the cluster: - -```bash -# If deployed to Kubernetes -kubectl port-forward -n pgo svc/postgres-operator 8443:8443 - -# If deployed to OpenShift -oc port-forward -n pgo svc/postgres-operator 8443:8443 -``` - -You can subsitute `pgo` in the above examples with the namespace that you -deployed the PostgreSQL Operator into. - -On a separate terminal verify the PostgreSQL client can communicate with the Crunchy PostgreSQL -Operator: - -```bash -pgo version -``` - -If the above command outputs versions of both the client and API server, the Crunchy -PostgreSQL Operator has been installed successfully. - -[ansible-prerequisites]: {{< relref "/installation/other/ansible/prerequisites.md" >}} diff --git a/docs/content/installation/other/ansible/prerequisites.md b/docs/content/installation/other/ansible/prerequisites.md deleted file mode 100644 index b055226594..0000000000 --- a/docs/content/installation/other/ansible/prerequisites.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -title: "Prerequisites" -date: -draft: false -weight: 10 ---- - -# Prerequisites - -The following is required prior to installing Crunchy PostgreSQL Operator using Ansible: - -* [postgres-operator playbooks](https://github.com/CrunchyData/postgres-operator/) source code for the target version -* Ansible 2.9.0+ - -## Kubernetes Installs - -* Kubernetes v1.11+ -* Cluster admin privileges in Kubernetes -* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) configured to communicate with Kubernetes - -## OpenShift Installs - -* OpenShift v3.09+ -* Cluster admin privileges in OpenShift -* [oc](https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html) configured to communicate with OpenShift - -## Installing from a Windows Host - -If the Crunchy PostgreSQL Operator is being installed from a Windows host the following -are required: - -* [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install-win10) -* [Ubuntu for Windows](https://www.microsoft.com/en-us/p/ubuntu/9nblggh4msv6) - -## Permissions - -The installation of the Crunchy PostgreSQL Operator requires elevated -privileges, as the following objects need to be created: - -* Custom Resource Definitions -* Cluster RBAC for using one of the multi-namespace modes -* Create required namespaces - -{{% notice warning %}}In Kubernetes versions prior to 1.12 (including Openshift up through 3.11), there is a limitation that requires an extra step during installation for the operator to function properly with watched namespaces. This limitation does not exist when using Kubernetes 1.12+. When a list of namespaces are provided through the NAMESPACE environment variable, the setupnamespaces.sh script handles the limitation properly in both the bash and ansible installation. 
- -However, if the user wishes to add a new watched namespace after installation, where the user would normally use pgo create namespace to add the new namespace, they should instead run the add-targeted-namespace.sh script or they may give themselves cluster-admin privileges instead of having to run setupnamespaces.sh script. Again, this is only required when running on a Kubernetes distribution whose version is below 1.12. In Kubernetes version 1.12+ the pgo create namespace command works as expected. - -{{% /notice %}} - -## Obtaining Operator Ansible Role - -* Clone the [postgres-operator project](https://github.com/CrunchyData/postgres-operator) - -### GitHub Installation - -All necessary files (inventory.yaml, values.yaml, main playbook and roles) can be found in the -[`installers/ansible`](https://github.com/CrunchyData/postgres-operator/tree/master/installers/ansible) directory -in the [source code](https://github.com/CrunchyData/postgres-operator). - -## Configuring the Inventory File - -The `inventory.yaml` file included with the PostgreSQL Operator Playbooks allows installers -to configure how Ansible will connect to your Kubernetes cluster. This file -should contain the following connection variables: - -{{% notice tip %}} -You will have to uncomment out either the `kubernetes` or `openshift` variables -if you are being using them for your environment. Both sets of variables cannot -be used at the same time. The unused variables should be left commented out or removed. -{{% /notice %}} - - -| Name | Default | Required | Description | -|-----------------------------------|-------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `kubernetes_context` | | **Required**, if deploying to Kubernetes |When deploying to Kubernetes, set to configure the context name of the kubeconfig to be used for authentication. | -| `openshift_host` | | **Required**, if deploying to OpenShift | When deploying to OpenShift, set to configure the hostname of the OpenShift cluster to connect to. | -| `openshift_password` | | **Required**, if deploying to OpenShift | When deploying to OpenShift, set to configure the password used for login. | -| `openshift_skip_tls_verify` | | **Required**, if deploying to OpenShift | When deploying to Openshift, set to ignore the integrity of TLS certificates for the OpenShift cluster. | -| `openshift_token` | | **Required**, if deploying to OpenShift | When deploying to OpenShift, set to configure the token used for login (when not using username/password authentication). | -| `openshift_user` | | **Required**, if deploying to OpenShift | When deploying to OpenShift, set to configure the username used for login. | - -{{% notice tip %}} -To retrieve the `kubernetes_context` value for Kubernetes installs, run the following command: - -```bash -kubectl config current-context -``` -{{% /notice %}} - -## Configuring - `values.yaml` - -The `values.yaml` file contains all of the configuration parameters -for deploying the PostgreSQL Operator. The [example file](https://github.com/CrunchyData/postgres-operator/blob/v{{< param operatorVersion >}}/installers/ansible/values.yaml) -contains defaults that should work in most Kubernetes environments, but it may -require some customization. 
- -For a detailed description of each configuration parameter, please read the -[PostgreSQL Operator Installer Configuration Reference](<{{< relref "/installation/configuration.md">}}>) diff --git a/docs/content/installation/other/ansible/uninstalling-operator.md b/docs/content/installation/other/ansible/uninstalling-operator.md deleted file mode 100644 index 657bc6d0cf..0000000000 --- a/docs/content/installation/other/ansible/uninstalling-operator.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: "Uninstalling PostgreSQL Operator" -date: -draft: false -weight: 40 ---- - -# Uninstalling PostgreSQL Operator - -The following assumes the proper [prerequisites are satisfied][ansible-prerequisites] -we can now uninstall the PostgreSQL Operator. - -First, it is recommended to use the playbooks tagged with the same version -of the PostgreSQL Operator currently deployed. - -With the correct playbooks acquired and prerequisites satisfied, simply run -the following command: - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=uninstall --ask-become-pass main.yml -``` - -## Deleting `pgo` Client - -If variable `pgo_client_install` is set to `true` in the `values.yaml` file, the `pgo` client will also be removed when uninstalling. - -Otherwise, the `pgo` client can be manually uninstalled by running the following command: - -``` -rm /usr/local/bin/pgo -``` - -[ansible-prerequisites]: {{< relref "/installation/other/ansible/prerequisites.md" >}} diff --git a/docs/content/installation/other/ansible/updating-operator.md b/docs/content/installation/other/ansible/updating-operator.md deleted file mode 100644 index 5f4c62c963..0000000000 --- a/docs/content/installation/other/ansible/updating-operator.md +++ /dev/null @@ -1,120 +0,0 @@ ---- -title: "Updating PostgreSQL Operator" -date: -draft: false -weight: 30 ---- - -# Updating - -Updating the Crunchy PostgreSQL Operator is essential to the lifecycle management -of the service. Using the `update` flag will: - -* Update and redeploy the operator deployment -* Recreate configuration maps used by operator -* Remove any deprecated objects -* Allow administrators to change settings configured in the `values.yaml` -* Reinstall the `pgo` client if a new version is specified - -The following assumes the proper [prerequisites are satisfied][ansible-prerequisites] -we can now update the PostgreSQL Operator. - -The commands should be run in the directory where the Crunchy PostgreSQL Operator -playbooks is stored. See the `ansible` directory in the Crunchy PostgreSQL Operator -project for the inventory file, values file, main playbook and ansible roles. - -## Updating on Linux - -On a Linux host with Ansible installed we can run the following command to update -the PostgreSQL Operator: - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=update --ask-become-pass main.yml -``` - -## Updating on macOS - -On a macOS host with Ansible installed we can run the following command to update -the PostgreSQL Operator. - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=update --ask-become-pass main.yml -``` - -## Updating on Windows Ubuntu Subsystem - -On a Windows host with an Ubuntu subsystem we can run the following commands to update -the PostgreSQL Operator. - -```bash -ansible-playbook -i /path/to/inventory.yaml --tags=update --ask-become-pass main.yml -``` - -## Verifying the Update - -This may take a few minutes to deploy. 
To check the status of the deployment run -the following: - -```bash -# Kubernetes -kubectl get deployments -n -kubectl get pods -n - -# OpenShift -oc get deployments -n -oc get pods -n -``` - -## Configure Environment Variables - -After the Crunchy PostgreSQL Operator has successfully been updated we will need -to configure local environment variables before using the `pgo` client. - -To configure the environment variables used by `pgo` run the following command: - -Note: `` should be replaced with the namespace the Crunchy PostgreSQL -Operator was deployed to. -Also, if TLS was disabled, or if the port was changed, update PGO_APISERVER_URL accordingly. - -```bash -cat <> ~/.bashrc -export PGOUSER="${HOME?}/.pgo//pgouser" -export PGO_CA_CERT="${HOME?}/.pgo//client.crt" -export PGO_CLIENT_CERT="${HOME?}/.pgo//client.crt" -export PGO_CLIENT_KEY="${HOME?}/.pgo//client.key" -export PGO_APISERVER_URL='https://127.0.0.1:8443' -EOF -``` - -Apply those changes to the current session by running: - -```bash -source ~/.bashrc -``` - -## Verify `pgo` Connection - -In a separate terminal we need to setup a port forward to the Crunchy PostgreSQL -Operator to ensure connection can be made outside of the cluster: - -```bash -# If deployed to Kubernetes -kubectl port-forward -n pgo svc/postgres-operator 8443:8443 - -# If deployed to OpenShift -oc port-forward -n pgo svc/postgres-operator 8443:8443 -``` -In the above examples, you can substitute `pgo` for the namespace that you -deployed the PostgreSQL Operator into. - -On a separate terminal verify the PostgreSQL Operator client can communicate -with the PostgreSQL Operator: - -```bash -pgo version -``` - -If the above command outputs versions of both the client and API server, the Crunchy -PostgreSQL Operator has been updated successfully. - -[ansible-prerequisites]: {{< relref "/installation/other/ansible/prerequisites.md" >}} diff --git a/docs/content/installation/other/bash.md b/docs/content/installation/other/bash.md deleted file mode 100644 index 52f4b5cdc6..0000000000 --- a/docs/content/installation/other/bash.md +++ /dev/null @@ -1,290 +0,0 @@ ---- -title: "Bash Scripts" -date: -draft: false -weight: 100 ---- - -A full installation of the Operator includes the following steps: - - - create a project structure - - configure your environment variables - - configure Operator templates - - create security resources - - deploy the operator - - install pgo CLI (end user command tool) - -Operator end-users are only required to install the pgo CLI client on their host and can skip the server-side installation steps. pgo CLI clients are provided for Linux, Mac, and Windows clients. - -The Operator can be deployed by multiple methods including: - - * default installation - * Ansible playbook installation - * Openshift Console installation using OLM - - -## Default Installation - Create Project Structure - -The Operator follows a golang project structure, you can create a structure as follows on your local Linux host: - - mkdir -p $HOME/odev/src/github.com/crunchydata $HOME/odev/bin $HOME/odev/pkg - cd $HOME/odev/src/github.com/crunchydata - git clone https://github.com/CrunchyData/postgres-operator.git - cd postgres-operator - git checkout v{{< param operatorVersion >}} - - -This creates a directory structure under your HOME directory name *odev* and clones the current Operator version to that structure. - -## Default Installation - Configure Environment - -Environment variables control aspects of the Operator installation. 
You can copy a sample set of Operator environment variables and aliases to your *.bashrc* file to work with. - - cat $HOME/odev/src/github.com/crunchydata/postgres-operator/examples/envs.sh >> $HOME/.bashrc - source $HOME/.bashrc - -## Default Installation - Namespace Creation - -Creating Kubernetes namespaces is typically something that only a -privileged Kubernetes user can perform so log into your Kubernetes cluster as a user -that has the necessary privileges. - -The *NAMESPACE* environment variable is a comma separated list -of namespaces that specify where the Operator will be provisioing -PG clusters into, specifically, the namespaces the Operator is watching -for Kubernetes events. This value is set as follows: - - export NAMESPACE=pgouser1,pgouser2 - -This means namespaces called *pgouser1* and *pgouser2* will be -created as part of the default installation. - -{{% notice warning %}}In Kubernetes versions prior to 1.12 (including Openshift up through 3.11), there is a limitation that requires an extra step during installation for the operator to function properly with watched namespaces. This limitation does not exist when using Kubernetes 1.12+. When a list of namespaces are provided through the NAMESPACE environment variable, the setupnamespaces.sh script handles the limitation properly in both the bash and ansible installation. - -However, if the user wishes to add a new watched namespace after installation, where the user would normally use pgo create namespace to add the new namespace, they should instead run the add-targeted-namespace.sh script or they may give themselves cluster-admin privileges instead of having to run setupnamespaces.sh script. Again, this is only required when running on a Kubernetes distribution whose version is below 1.12. In Kubernetes version 1.12+ the pgo create namespace command works as expected. - -{{% /notice %}} - -The *PGO_OPERATOR_NAMESPACE* environment variable is the name of the namespace -that the Operator will be installed into. For the installation example, this -value is set as follows: - - export PGO_OPERATOR_NAMESPACE=pgo - -This means a *pgo* namespace will be created and the Operator will -be deployed into that namespace. - -Create the Operator namespaces using the Makefile target: - - make setupnamespaces - -**Note**: The setupnamespaces target only creates the namespace(s) specified in PGO_OPERATOR_NAMESPACE environment variable - -The [Design](/design) section of this documentation talks further about -the use of namespaces within the Operator. - -## Default Installation - Configure Operator Templates - -Within the Operator [*PGO_CONF_DIR*](/developer-setup/) directory are several configuration files and templates used by the Operator to determine the various resources that it deploys on your Kubernetes cluster, specifically the PostgreSQL clusters it deploys. - -When you install the Operator you must make choices as to what kind of storage the Operator has to work with for example. Storage varies with each installation. As an installer, you would modify these configuration templates used by the Operator to customize its behavior. - -**Note**: when you want to make changes to these Operator templates and configuration files after your initial installation, you will need to re-deploy the Operator in order for it to pick up any future configuration changes. 
- -Here are some common examples of configuration changes most installers would make: - -### Storage -Inside `conf/postgres-operator/pgo.yaml` there are various storage configurations defined. - - PrimaryStorage: gce - WALStorage: gce - BackupStorage: gce - ReplicaStorage: gce - gce: - AccessMode: ReadWriteOnce - Size: 1G - StorageType: dynamic - StorageClass: standard - -Listed above are the *pgo.yaml* sections related to storage choices. *PrimaryStorage* specifies the name of the storage configuration used for PostgreSQL primary database volumes to be provisioned. In the example above, a NFS storage configuration is picked. That same storage configuration is selected for the other volumes that the Operator will create. - -This sort of configuration allows for a PostgreSQL primary and replica to use different storage if you want. Other storage settings like *AccessMode*, *Size*, *StorageType*, and *StorageClass* further define the storage configuration. Currently, NFS, HostPath, and Storage Classes are supported in the configuration. - -As part of the Operator installation, you will need to adjust these storage settings to suit your deployment requirements. For users wanting to try -out the Operator on Google Kubernetes Engine you would make the -following change to the storage configuration in pgo.yaml: - - - -For NFS Storage, it is assumed that there are sufficient Persistent Volumes (PV) created for the Operator to use when it creates Persistent Volume Claims (PVC). The creation of Persistent Volumes is something a Kubernetes cluster-admin user would typically provide before installing the Operator. There is an example script which can be used to create NFS Persistent Volumes located here: - - ./pv/create-nfs-pv.sh - -That script looks for the IP address of an NFS server using the -environment variable PGO_NFS_IP you would set in your .bashrc environment. - -A similar script is provided for HostPath persistent volume creation if -you wanted to use HostPath for testing: -``` -./pv/create-pv.sh -``` - -Adjust the above PV creation scripts to suit your local requirements, the -purpose of these scripts are solely to produce a test set of Volume to test the -Operator. - -Other settings in *pgo.yaml* are described in the [pgo.yaml Configuration](/configuration/pgo-yaml-configuration) section of the documentation. - -## Operator Security - -The Operator implements its own RBAC (Role Based Access Controls) for authenticating Operator users access to the Operator REST API. - -A default admin user is created when the operator is deployed. Create a .pgouser in your home directory and insert the text from below: - -``` -admin:examplepassword -``` - -The format of the .pgouser client file is: - -``` -: -``` - -To create a unique administrator user on deployment of the operator edit this file and update the .pgouser file accordingly: - -``` -$PGOROOT/deploy/install-bootstrap-creds.sh -``` - -After installation users can create optional Operator users as follows: - -``` -pgo create pgouser someuser --pgouser-namespaces="pgouser1,pgouser2" --pgouser-password=somepassword --pgouser-roles="somerole,someotherrole" -``` - -Note, you can also store the pgouser file in alternate locations, see the -Security documentation for details. - -Operator security is discussed in the Security section [Security](/security) of the documentation. - -Adjust these settings to meet your local requirements. 
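For instance, here is a minimal sketch that reuses the example credentials above; the `/etc/pgo/pgouser` path is only illustrative, and the `PGOUSER` environment variable is how the `pgo` client is pointed at a credentials file kept outside the default location.

```
echo "admin:examplepassword" > $HOME/.pgouser
chmod 600 $HOME/.pgouser

# Optionally keep the file elsewhere and point the pgo client at it
export PGOUSER="/etc/pgo/pgouser"
```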
- -## Default Installation - Create Kubernetes RBAC Controls - -The Operator installation requires Kubernetes administrators to create Resources required by the Operator. These resources are only allowed to be created by a cluster-admin user. To install on Google Cloud, you will need a user -account with cluster-admin privileges. If you own the GKE cluster you -are installing on, you can add cluster-admin role to your account as -follows: - - kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account) - -Specifically, Custom Resource Definitions for the Operator, and Service Accounts used by the Operator are created which require cluster permissions. - -Tor create the Kubernetes RBAC used by the Operator, run the following as a cluster-admin Kubernetes user: - - make installrbac - -This set of Resources is created a single time unless a new Operator -release requires these Resources to be recreated. Note that when you -run *make installrbac* the set of keys used by the Operator REST API and -also the pgbackrest ssh keys are generated. - -Verify the Operator Custom Resource Definitions are created as follows: - - kubectl get crd - -You should see the *pgclusters* CRD among the listed CRD resource types. - -See the Security documentation for a description of the various RBAC -resources created and used by the Operator. - -## Default Installation - Deploy the Operator -At this point, you as a normal Kubernetes user should be able to deploy the Operator. To do this, run the following Makefile target: - - make deployoperator - -This will cause any existing Operator to be removed first, then the configuration to be bundled into a ConfigMap, then the Operator Deployment to be created. - -This will create a postgres-operator Deployment and a postgres-operator Service.Operator administrators needing to make changes to the Operator -configuration would run this make target to pick up any changes to pgo.yaml, -pgo users/roles, or the Operator templates. - -## Default Installation - Completely Cleaning Up - -You can completely remove all the namespaces you have previously -created using the default installation by running the following: - - make cleannamespaces - -This will permanently delete each namespace the Operator installation -created previously. - - -## pgo CLI Installation -Most users will work with the Operator using the *pgo* CLI tool. That tool is downloaded from the GitHub Releases page for the Operator (https://github.com/crunchydata/postgres-operator/releases). Crunchy Enterprise Customer can download the pgo binaries from https://access.crunchydata.com/ on the downloads page. - -The *pgo* client is provided in Mac, Windows, and Linux binary formats, -download the appropriate client to your local laptop or workstation to work -with a remote Operator. - -{{% notice info %}} - -If TLS authentication was disabled during installation, please see the [TLS Configuration Page] ({{< relref "Configuration/tls.md" >}}) for additional configuration information. - -{{% / notice %}} - -Prior to using *pgo*, users testing the Operator on a single host can specify the -*postgres-operator* URL as follows: - -``` - $ kubectl get service postgres-operator -n pgo - NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE - postgres-operator 10.104.47.110 8443/TCP 7m - $ export PGO_APISERVER_URL=https://10.104.47.110:8443 - pgo version -``` - -That URL address needs to be reachable from your local *pgo* client host. 
Your Kubernetes administrator will likely need to create a network route, ingress, or LoadBalancer service to expose the Operator REST API to applications outside of the Kubernetes cluster. Your Kubernetes administrator might also allow you to run the Kubernetes port-forward command, contact your administrator for details. - -Next, the *pgo* client needs to reference the keys used to secure the Operator REST API: - -``` - export PGO_CA_CERT=$PGOROOT/conf/postgres-operator/server.crt - export PGO_CLIENT_CERT=$PGOROOT/conf/postgres-operator/server.crt - export PGO_CLIENT_KEY=$PGOROOT/conf/postgres-operator/server.key -``` - -You can also specify these keys on the command line as follows: - - pgo version --pgo-ca-cert=$PGOROOT/conf/postgres-operator/server.crt --pgo-client-cert=$PGOROOT/conf/postgres-operator/server.crt --pgo-client-key=$PGOROOT/conf/postgres-operator/server.key - -{{% notice tip %}} if you are running the Operator on Google Cloud, you would open up another terminal and run *kubectl port-forward ...* to forward the Operator pod port 8443 to your localhost where you can access the Operator API from your local workstation. -{{% /notice %}} - -At this point, you can test connectivity between your laptop or workstation and the Postgres Operator deployed on a Kubernetes cluster as follows: - - pgo version - -You should get back a valid response showing the client and server version numbers. - -## Verify the Installation - -Now that you have deployed the Operator, you can verify that it is running correctly. - -You should see a pod running that contains the Operator: - - kubectl get pod --selector=name=postgres-operator -n pgo - NAME READY STATUS RESTARTS AGE - postgres-operator-79bf94c658-zczf6 3/3 Running 0 47s - - -That pod should show 3 of 3 containers in *running* state and that the operator is installed into the *pgo* namespace. - -The sample environment script, examples/env.sh, if used creates some bash functions that you can use to view the Operator logs. This is useful in case you find one of the Operator containers not in a running status. - -Using the pgo CLI, you can verify the versions of the client and server match as follows: - - pgo version - -This also tests connectivity between your pgo client host and the Operator server. diff --git a/docs/content/installation/other/google-cloud-marketplace.md b/docs/content/installation/other/google-cloud-marketplace.md deleted file mode 100644 index 64fa52a684..0000000000 --- a/docs/content/installation/other/google-cloud-marketplace.md +++ /dev/null @@ -1,197 +0,0 @@ ---- -title: "Google Cloud Marketplace" -date: -draft: false -weight: 200 ---- - -The PostgreSQL Operator is installed as part of [Crunchy PostgreSQL for GKE][gcm-listing] -that is available in the Google Cloud Marketplace. - -[gcm-listing]: https://console.cloud.google.com/marketplace/details/crunchydata/crunchy-postgresql-operator - - -## Step 1: Install - -Install [Crunchy PostgreSQL for GKE][gcm-listing] to a Google Kubernetes Engine cluster using -Google Cloud Marketplace. - - -## Step 2: Verify Installation - -Install `kubectl` using the `gcloud components` command of the [Google Cloud SDK][sdk-install] or -by following the [Kubernetes documentation][kubectl-install]. 
- -[kubectl-install]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ -[sdk-install]: https://cloud.google.com/sdk/docs/install - -Using the `gcloud` utility, ensure you are logged into the GKE cluster in which you installed the -PostgreSQL Operator, and see that it is running in the namespace in which you installed it. -For example, in the `pgo` namespace: - -```shell -kubectl -n pgo get deployments,pods -``` - -If successful, you should see output similar to this: - -``` -NAME READY UP-TO-DATE AVAILABLE AGE -deployment.apps/postgres-operator 1/1 1 1 16h - -NAME READY STATUS RESTARTS AGE -pod/postgres-operator-56d6ccb97-tmz7m 4/4 Running 0 2m -``` - - -## Step 3: Install the PostgreSQL Operator User Keys - -You will need to get TLS keys used to secure the Operator REST API. Again, in the `pgo` namespace: - -```shell -kubectl -n pgo get secret pgo.tls -o 'go-template={{ index .data "tls.crt" | base64decode }}' > /tmp/client.crt -kubectl -n pgo get secret pgo.tls -o 'go-template={{ index .data "tls.key" | base64decode }}' > /tmp/client.key -``` - - -## Step 4: Setup PostgreSQL Operator User - -The PostgreSQL Operator implements its own role-based access control (RBAC) system for authenticating and authorization PostgreSQL Operator users access to its REST API. A default PostgreSQL Operator user (aka a "pgouser") is created as part of the marketplace installation (these credentials are set during the marketplace deployment workflow). - -Create the pgouser file in `${HOME?}/.pgo//pgouser` and insert the user and password you created on deployment of the PostgreSQL Operator via GCP Marketplace. For example, if you set up a user with the username of `username` and a password of `hippo`: - -```shell -username:hippo -``` - - -## Step 5: Setup Environment variables - -The PostgreSQL Operator Client uses several environmental variables to make it easier for interfacing with the PostgreSQL Operator. - -Set the environmental variables to use the key / certificate pair that you pulled in Step 3 was deployed via the marketplace. Using the previous examples, You can set up environment variables with the following command: - -```shell -export PGOUSER="${HOME?}/.pgo/pgo/pgouser" -export PGO_CA_CERT="/tmp/client.crt" -export PGO_CLIENT_CERT="/tmp/client.crt" -export PGO_CLIENT_KEY="/tmp/client.key" -export PGO_APISERVER_URL='https://127.0.0.1:8443' -export PGO_NAMESPACE=pgo -``` - -If you wish to permanently add these variables to your environment, you can run the following command: - -```shell -cat <> ~/.bashrc -export PGOUSER="${HOME?}/.pgo/pgo/pgouser" -export PGO_CA_CERT="/tmp/client.crt" -export PGO_CLIENT_CERT="/tmp/client.crt" -export PGO_CLIENT_KEY="/tmp/client.key" -export PGO_APISERVER_URL='https://127.0.0.1:8443' -export PGO_NAMESPACE=pgo -EOF - -source ~/.bashrc -``` - -**NOTE**: For macOS users, you must use `~/.bash_profile` instead of `~/.bashrc` - - -## Step 6: Install the PostgreSQL Operator Client `pgo` - -The [`pgo` client](/pgo-client/) provides a helpful command-line interface to perform key operations on a PostgreSQL Operator, such as creating a PostgreSQL cluster. - -The `pgo` client can be downloaded from GitHub [Releases](https://github.com/crunchydata/postgres-operator/releases) (subscribers can download it from the [Crunchy Data Customer Portal](https://access.crunchydata.com)). - -Note that the `pgo` client's version must match the version of the PostgreSQL Operator that you have deployed. 
For example, if you have deployed version {{< param operatorVersion >}} of the PostgreSQL Operator, you must use the `pgo` for {{< param operatorVersion >}}. - -Once you have download the `pgo` client, change the permissions on the file to be executable if need be as shown below: - -```shell -chmod +x pgo -``` - -## Step 7: Connect to the PostgreSQL Operator - -Finally, let's see if we can connect to the PostgreSQL Operator from the `pgo` client. In order to communicate with the PostgreSQL Operator API server, you will first need to set up a [port forward](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to your local environment. - -In a new console window, run the following command to set up a port forward: - -```shell -kubectl -n pgo port-forward svc/postgres-operator 8443:8443 -``` - -Back to your original console window, you can verify that you can connect to the PostgreSQL Operator using the following command: - -```shell -pgo version -``` - -If successful, you should see output similar to this: - -``` -pgo client version {{< param operatorVersion >}} -pgo-apiserver version {{< param operatorVersion >}} -``` - -## Step 8: Create a Namespace - -We are almost there! You can optionally add a namespace that can be managed by the PostgreSQL Operator to watch and to deploy a PostgreSQL cluster into. - -```shell -pgo create namespace wateringhole -``` - -verify the operator has access to the newly added namespace - -```shell -pgo show namespace --all -``` - -you should see out put similar to this: - -```shell -pgo username: admin -namespace useraccess installaccess -application-system accessible no access -default accessible no access -kube-public accessible no access -kube-system accessible no access -pgo accessible no access -wateringhole accessible accessible -``` - -## Step 9: Have Some Fun - Create a PostgreSQL Cluster - -You are now ready to create a new cluster in the `wateringhole` namespace, try the command below: - -```shell -pgo create cluster -n wateringhole hippo -``` - -If successful, you should see output similar to this: - -``` -created Pgcluster hippo -workflow id 1cd0d225-7cd4-4044-b269-aa7bedae219b -``` - -This will create a PostgreSQL cluster named `hippo`. It may take a few moments for the cluster to be provisioned. You can see the status of this cluster using the `pgo test` command: - -```shell -pgo test -n wateringhole hippo -``` - -When everything is up and running, you should see output similar to this: - -``` -cluster : hippo - Services - primary (10.97.140.113:5432): UP - Instances - primary (hippo-7b64747476-6dr4h): UP -``` - -The `pgo test` command provides you the basic information you need to connect to your PostgreSQL cluster from within your Kubernetes environment. For more detailed information, you can use `pgo show cluster -n wateringhole hippo`. 
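From here you can keep experimenting with the cluster. For example, the short sketch below adds a replica and checks disk usage, using the `pgo scale` and `pgo df` commands described elsewhere in this documentation:

```shell
# add a replica to the hippo cluster
pgo scale -n wateringhole hippo

# review the disk status/capacity of the cluster
pgo df -n wateringhole hippo
```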
- diff --git a/docs/content/installation/other/helm.md b/docs/content/installation/other/helm.md deleted file mode 100644 index b4bab8ff26..0000000000 --- a/docs/content/installation/other/helm.md +++ /dev/null @@ -1,234 +0,0 @@ ---- -title: "Helm Chart" -date: -draft: false -weight: 100 ---- - -# The PostgreSQL Operator Helm Chart - -## Overview - -The PostgreSQL Operator comes with a container called `pgo-deployer` which -handles a variety of lifecycle actions for the PostgreSQL Operator, including: - -- Installation -- Upgrading -- Uninstallation - -After configuring the `values.yaml` file with you configuration options, the -installer will be run using the `helm` command line tool and takes care of -setting up all of the objects required to run the PostgreSQL Operator. - -The `postgres-operator` Helm chart is available in the [Helm](https://github.com/CrunchyData/postgres-operator/tree/master/installers/helm) -directory in the PostgreSQL Operator repository. - -## Requirements - -### RBAC - -The Helm chart will create the ServiceAccount, ClusterRole, and ClusterRoleBinding -that are required to run the `pgo-deployer`. If you have already configured the -ServiceAccount and ClusterRoleBinding for the installation process (e.g. from a -previous installation), you can disable their creation using the `rbac.create` -and `serviceAccount.create` variables in the `values.yaml` file. If these options -are disabled, you must provide the name of your preconfigured ServiceAccount using -`serviceAccount.name`. - -### Namespace - -In order to install the PostgreSQL Operator using the Helm chart you will need -to first create the namespace in which the `pgo-deployer` will be run. By default, -it will run in the namespace that is provided to `helm` at the command line. - -``` -kubectl create namespace -helm install postgres-operator -n /path/to/chart_dir -``` - -The PostgreSQL Operator has the ability to manage PostgreSQL clusters across -multiple Kubernetes [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), -including the ability to add and remove Namespaces that it watches. Doing so -does require the PostgreSQL Operator to have elevated privileges, and as such, -the PostgreSQL Operator comes with three "namespace modes" to select what level -of privileges to provide. Detailed information about these "namespace modes" -can be found in the [Namespace](<{{< relref "/installation/postgres-operator.md" >}}>) -section here. - -### Config Map - -The `pgo-deployer` uses a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) -to pass configuration options into the installer. The values in your `values.yaml` -file will be used to populate the configuation options in the ConfigMap. - -### Configuration - `values.yaml` - -The `values.yaml` file contains all of the configuration parametes for deploying -the PostgreSQL Operator. The [values.yaml file](https://github.com/CrunchyData/postgres-operator/blob/master/installers/helm/values.yaml) contains the defaults that -should work in most Kubernetes environments, but it may require some customization. 
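As an illustration, the excerpt below sketches how the RBAC-related keys mentioned above might be set in `values.yaml` to reuse a preconfigured ServiceAccount. The nesting is inferred from the dotted key names, the ServiceAccount name is a placeholder, and all other defaults are left untouched:

```
# values.yaml excerpt (sketch): disable RBAC/ServiceAccount creation and
# reuse an existing ServiceAccount ("pgo-existing-sa" is a placeholder name)
rbac:
  create: false
serviceAccount:
  create: false
  name: pgo-existing-sa
```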
- -For a detailed description of each configuration parameter, please read the -[PostgreSQL Operator Installer Configuration Reference](<{{< relref "/installation/configuration.md">}}>) - -## Installation - -Once you have configured the PostgreSQL Operator Installer to your -specification, you can install the PostgreSQL Operator with the following -command: - -```shell -helm install -n /path/to/chart_dir -``` - -{{% notice tip %}} -Take note of the `name` used when installing, this `name` will be used to -upgrade and uninstall the PostgreSQL Operator. -{{% /notice %}} - -### Install the [`pgo` Client]({{< relref "/installation/pgo-client" >}}) - -To use the [`pgo` Client]({{< relref "/installation/pgo-client" >}}), -there are a few additional steps to take in order to get it to work with your -PostgreSQL Operator installation. For convenience, you can download and run the -[`client-setup.sh`](https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/installers/kubectl/client-setup.sh) -script in your local environment: - -```shell -curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/installers/kubectl/client-setup.sh > client-setup.sh -chmod +x client-setup.sh -./client-setup.sh -``` - -{{% notice tip %}} -Running this script can cause existing `pgo` client binary, `pgouser`, -`client.crt`, and `client.key` files to be overwritten. -{{% /notice %}} - -The `client-setup.sh` script performs the following tasks: - -- Sets `$PGO_OPERATOR_NAMESPACE` to `pgo` if it is unset. This is the default -namespace that the PostgreSQL Operator is deployed to -- Checks for valid Operating Systems and determines which `pgo` binary to -download -- Creates a directory in `$HOME/.pgo/$PGO_OPERATOR_NAMESPACE` (e.g. `/home/hippo/.pgo/pgo`) -- Downloads the `pgo` binary, saves it to in `$HOME/.pgo/$PGO_OPERATOR_NAMESPACE`, -and sets it to be executable -- Pulls the TLS keypair from the PostgreSQL Operator `pgo.tls` Secret so that -the `pgo` client can communicate with the PostgreSQL Operator. These are saved -as `client.crt` and `client.key` in the `$HOME/.pgo/$PGO_OPERATOR_NAMESPACE` -path. -- Pulls the `pgouser` credentials from the `pgouser-admin` secret and saves them -in the format `username:password` in a file called `pgouser` -- `client.crt`, `client.key`, and `pgouser` are all set to be read/write by the -file owner. All other permissions are removed. -- Sets the following environmental variables with the following values: - -```shell -export PGOUSER=$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/pgouser -export PGO_CA_CERT=$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.crt -export PGO_CLIENT_CERT=$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.crt -export PGO_CLIENT_KEY=$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.key -``` - -For convenience, after the script has finished, you can permanently add these -environmental variables to your environment: - - -```shell -cat <> ~/.bashrc -export PATH="$HOME/.pgo/$PGO_OPERATOR_NAMESPACE:$PATH" -export PGOUSER="$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/pgouser" -export PGO_CA_CERT="$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.crt" -export PGO_CLIENT_CERT="$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.crt" -export PGO_CLIENT_KEY="$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.key" -EOF -``` - -By default, the `client-setup.sh` script targets the user that is stored in the -`pgouser-admin` secret in the `pgo` (`$PGO_OPERATOR_NAMESPACE`) Namespace. If -you wish to use a different Secret, you can set the `PGO_USER_ADMIN` -environmental variable. 
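For example, a hypothetical invocation that targets a different pgouser Secret might look like the sketch below; `pgouser-someuser` is a placeholder name, and the exact value the script expects is documented in `client-setup.sh` itself:

```shell
# point the setup script at an alternate pgouser Secret (placeholder name)
export PGO_USER_ADMIN=pgouser-someuser
./client-setup.sh
```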
- -For more detailed information about [installing the `pgo` client]({{< relref "/installation/pgo-client" >}}), -please see [Installing the `pgo` client]({{< relref "/installation/pgo-client" >}}). - -### Verify the Installation - -One way to verify the installation was successful is to execute the -[`pgo version`]({{< relref "/pgo-client/reference/pgo_version.md" >}}) command. - -In a new console window, run the following command to set up a port forward: - -```shell -kubectl -n pgo port-forward svc/postgres-operator 8443:8443 -``` - -In another console window, run the `pgo version` command: - -```shell -pgo version -``` - -If successful, you should see output similar to this: - -``` -pgo client version {{< param operatorVersion >}} -pgo-apiserver version {{< param operatorVersion >}} -``` - -## Upgrade and Uninstall - -Once install has be completed using Helm, it will also be used to upgrade and -uninstall your PostgreSQL Operator. - -{{% notice tip %}} -The `name` and `namespace` in the following sections should match the options -provided at install. -{{% /notice %}} - -### Upgrade - -To make changes to your deployment of the PostgreSQL Operator you will use the -`helm upgrade` command. Once the configuration changes have been made to you -`values.yaml` file, you can run the following command to implement them in the -deployment: - -```shell -helm upgrade -n /path/to/updated_chart -``` - -### Uninstall - -To uninstall the PostgreSQL Operator you will use the `helm uninstall` command. -This will uninstall the operator and clean up resources used by the `pgo-deployer`. - -```shell -helm uninstall -n -``` - -## Debugging - -When the `pgo-deployer` job does not complete successfully, the resources that -are created and normally cleaned up by Helm will be left in your -Kubernetes cluster. This will allow you to use the failed job and its logs to -debug the issue. The following command will show the logs for the `pgo-deployer` -job: - -```shell -kubectl logs -n job.batch/pgo-deploy -``` - -{{% notice tip %}} -You can also view the logs as the job is running by using the `kubectl -f` -follow flag: -```shell -kubectl logs -n job.batch/pgo-deploy -f -``` -{{% /notice %}} - - -These logs will provide feedback if there are any misconfigurations in your -install. Once you have finished debugging the failed job and fixed any configuration -issues, you can take steps to re-run your install, upgrade, or uninstall. By -running another command the resources from the failed install will be cleaned up -so that a successfull install can run. diff --git a/docs/content/installation/other/operator-hub.md b/docs/content/installation/other/operator-hub.md deleted file mode 100644 index 9b077ef073..0000000000 --- a/docs/content/installation/other/operator-hub.md +++ /dev/null @@ -1,155 +0,0 @@ ---- -title: "OperatorHub.io" -date: -draft: false -weight: 200 ---- - -If your Kubernetes cluster is already running the [Operator Lifecycle Manager][OLM], -the PostgreSQL Operator can be installed as part of [Crunchy PostgreSQL for Kubernetes][hub-listing] -that is available in OperatorHub.io. - -[hub-listing]: https://operatorhub.io/operator/postgresql -[OLM]: https://olm.operatorframework.io/ - - -## Before You Begin - -There are a few manual steps that the cluster administrator must perform prior to installing the PostgreSQL Operator. -At the very least, it must be provided with an initial configuration. 
- -First, make sure OLM and the OperatorHub.io catalog are installed by running -`kubectl get CatalogSources --all-namespaces`. You should see something similar to the following: - -``` -NAMESPACE NAME DISPLAY TYPE PUBLISHER -olm operatorhubio-catalog Community Operators grpc OperatorHub.io -``` - -Take note of the name and namespace above, you will need them later on. - -Next, select a namespace in which to install the PostgreSQL Operator. PostgreSQL clusters will also be deployed here. -If it does not exist, create it now. - -``` -export PGO_OPERATOR_NAMESPACE=pgo -kubectl create namespace "$PGO_OPERATOR_NAMESPACE" -``` - -Next, clone the PostgreSQL Operator repository locally. - -``` -git clone -b v{{< param operatorVersion >}} https://github.com/CrunchyData/postgres-operator.git -cd postgres-operator -``` - -### PostgreSQL Operator Configuration - -Edit `conf/postgres-operator/pgo.yaml` to configure the deployment. Look over all of the options and make any -changes necessary for your environment. A full description of each option is available in the -[`pgo.yaml` configuration guide]({{< relref "configuration/pgo-yaml-configuration.md" >}}). - -When the file is ready, upload the entire directory to the `pgo-config` ConfigMap. - -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" create configmap pgo-config \ - --from-file=./conf/postgres-operator -``` - -### Secrets - -Configure pgBackRest for your environment. If you do not plan to use AWS S3 to store backups, you can omit -the `aws-s3` keys below. - -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" create secret generic pgo-backrest-repo-config \ - --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config \ - --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config \ - --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt \ - --from-literal=aws-s3-key="" \ - --from-literal=aws-s3-key-secret="" -``` - -### Certificates (optional) - -The PostgreSQL Operator has an API that uses TLS to communicate securely with clients. If you have -a certificate bundle validated by your organization, you can install it now. If not, the API will -automatically generate and use a self-signed certificate. - -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" create secret tls pgo.tls \ - --cert=/path/to/server.crt \ - --key=/path/to/server.key -``` - -Once these resources are in place, the PostgreSQL Operator can be installed into the cluster. - - -## Installation - -Create an `OperatorGroup` and a `Subscription` in your chosen namespace. -Make sure the `source` and `sourceNamespace` match the CatalogSource from earlier. - -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" create -f- <}} -YAML -``` - - -## After You Install - -Once the PostgreSQL Operator is installed in your Kubernetes cluster, you will need to do a few things -to use the [PostgreSQL Operator Client]({{< relref "/pgo-client/_index.md" >}}). - -Install the first set of client credentials and download the `pgo` binary and client certificates. - -``` -PGO_CMD=kubectl ./deploy/install-bootstrap-creds.sh -PGO_CMD=kubectl ./installers/kubectl/client-setup.sh -``` - -The client needs to be able to reach the PostgreSQL Operator API from outside the Kubernetes cluster. -Create an external service or forward a port locally. 
- -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" expose deployment postgres-operator --type=LoadBalancer - -export PGO_APISERVER_URL="https://$( - kubectl -n "$PGO_OPERATOR_NAMESPACE" get service postgres-operator \ - -o jsonpath="{.status.loadBalancer.ingress[*]['ip','hostname']}" -):8443" -``` -_or_ -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" port-forward deployment/postgres-operator 8443 - -export PGO_APISERVER_URL="https://127.0.0.1:8443" -``` - -Verify connectivity using the `pgo` command. - -``` -pgo version -# pgo client version {{< param operatorVersion >}} -# pgo-apiserver version {{< param operatorVersion >}} -``` - diff --git a/docs/content/installation/pgo-client.md b/docs/content/installation/pgo-client.md deleted file mode 100644 index 69dae759e1..0000000000 --- a/docs/content/installation/pgo-client.md +++ /dev/null @@ -1,292 +0,0 @@ ---- -title: "Install `pgo` Client" -date: -draft: false -weight: 30 ---- - -# Install the PostgreSQL Operator (`pgo`) Client - -The following will install and configure the `pgo` client on all systems. For the -purpose of these instructions it's assumed that the Crunchy PostgreSQL Operator -is already deployed. - -## Prerequisites - -* For Kubernetes deployments: [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) configured to communicate with Kubernetes -* For OpenShift deployments: [oc](https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html) configured to communicate with OpenShift - -To authenticate with the Crunchy PostgreSQL Operator API: - -* Client CA Certificate -* Client TLS Certificate -* Client Key -* `pgouser` file containing `:` - -All of the requirements above should be obtained from an administrator who installed the Crunchy -PostgreSQL Operator. - -## Linux and macOS - -The following will setup the `pgo` client to be used on a Linux or macOS system. - -### Installing the Client - -First, download the `pgo` client from the -[GitHub official releases](https://github.com/CrunchyData/postgres-operator/releases). Crunchy Enterprise Customers can download the pgo binaries from https://access.crunchydata.com/ on the downloads page. - -Next, install `pgo` in `/usr/local/bin` by running the following: - -```bash -sudo mv /PATH/TO/pgo /usr/local/bin/pgo -sudo chmod +x /usr/local/bin/pgo -``` - -Verify the `pgo` client is accessible by running the following in the terminal: - -```bash -pgo --help -``` - -#### Configuring Client TLS - -With the client TLS requirements satisfied we can setup `pgo` to use them. - -First, create a directory to hold these files by running the following command: - -```bash -mkdir ${HOME?}/.pgo -chmod 700 ${HOME?}/.pgo -``` - -Next, copy the certificates to this new directory: - -```bash -cp /PATH/TO/client.crt ${HOME?}/.pgo/client.crt && chmod 600 ${HOME?}/.pgo/client.crt -cp /PATH/TO/client.key ${HOME?}/.pgo/client.key && chmod 400 ${HOME?}/.pgo/client.key -``` - -Finally, set the following environment variables to point to the client TLS files: - -```bash -cat <> ${HOME?}/.bashrc -export PGO_CA_CERT="${HOME?}/.pgo/client.crt" -export PGO_CLIENT_CERT="${HOME?}/.pgo/client.crt" -export PGO_CLIENT_KEY="${HOME?}/.pgo/client.key" -EOF -``` - -Apply those changes to the current session by running: - -```bash -source ~/.bashrc -``` - -#### Configuring `pgouser` - -The `pgouser` file contains the username and password used for authentication with the Crunchy -PostgreSQL Operator. 
- -To setup the `pgouser` file, run the following: - -```bash -echo ":" > ${HOME?}/.pgo/pgouser -``` - -```bash -cat <> ${HOME?}/.bashrc -export PGOUSER="${HOME?}/.pgo/pgouser" -EOF -``` - -Apply those changes to the current session by running: - -```bash -source ${HOME?}/.bashrc -``` - -#### Configuring the API Server URL - -If the Crunchy PostgreSQL Operator is not accessible outside of the cluster, it's required -to setup a port-forward tunnel using the `kubectl` or `oc` binary. - -In a separate terminal we need to setup a port forward to the Crunchy PostgreSQL Operator to -ensure connection can be made outside of the cluster: - -```bash -# If deployed to Kubernetes -kubectl port-forward -n pgo svc/postgres-operator 8443:8443 - -# If deployed to OpenShift -oc port-forward -n pgo svc/postgres-operator 8443:8443 -``` - -In the above examples, you can substitute `pgo` for the namespace that you -deployed the PostgreSQL Operator into. - -**Note**: The port-forward will be required for the duration of using the -PostgreSQL client. - -Next, set the following environment variable to configure the API server address: - -```bash -cat <> ${HOME?}/.bashrc -export PGO_APISERVER_URL="https://:8443" -EOF -``` - -**Note**: if port-forward is being used, the IP of the Operator API is `127.0.0.1` - -Apply those changes to the current session by running: - -```bash -source ${HOME?}/.bashrc -``` - -## PGO-Client Container - -The following will setup the `pgo` client image in a Kubernetes or Openshift -environment. The image must be installed using the Ansible installer. - -### Installing the PGO-Client Container -The pgo-client container can be installed with the Ansible installer by updating -the `pgo_client_container_install` variable in the inventory file. Set this -variable to true in the inventory file and run the ansible-playbook. As part of -the install the `pgo.tls` and `pgouser-` secrets are used to configure -the `pgo` client. - -### Using the PGO-Client Deployment -Once the container has been installed you can access it by exec'ing into the -pod. You can run single commands with the kubectl or oc command line tools -or multiple commands by exec'ing into the pod with bash. - -``` -kubectl exec -it -n pgo deploy/pgo-client -- pgo version - -# or - -kubectl exec -it -n pgo deploy/pgo-client bash -``` - -The deployment does not require any configuration to connect to the operator. - -## Windows - -The following will setup the `pgo` client to be used on a Windows system. - -### Installing the Client - -First, download the `pgo.exe` client from the -[GitHub official releases](https://github.com/CrunchyData/postgres-operator/releases). 
- -Next, create a directory for `pgo` using the following: - -* Left click the _Start_ button in the bottom left corner of the taskbar -* Type `cmd` to search for _Command Prompt_ -* Right click the _Command Prompt_ application and click "Run as administrator" -* Enter the following command: `mkdir "%ProgramFiles%\postgres-operator"` - -Within the same terminal copy the `pgo.exe` binary to the directory created above using the -following command: - -```bash -copy %HOMEPATH%\Downloads\pgo.exe "%ProgramFiles%\postgres-operator" -``` - -Finally, add `pgo.exe` to the system path by running the following command in the terminal: - -```bash -setx path "%path%;C:\Program Files\postgres-operator" -``` - -Verify the `pgo.exe` client is accessible by running the following in the terminal: - -```bash -pgo --help -``` - -#### Configuring Client TLS - -With the client TLS requirements satisfied we can setup `pgo` to use them. - -First, create a directory to hold these files using the following: - -* Left click the _Start_ button in the bottom left corner of the taskbar -* Type `cmd` to search for _Command Prompt_ -* Right click the _Command Prompt_ application and click "Run as administrator" -* Enter the following command: `mkdir "%HOMEPATH%\pgo"` - -Next, copy the certificates to this new directory: - -```bash -copy \PATH\TO\client.crt "%HOMEPATH%\pgo" -copy \PATH\TO\client.key "%HOMEPATH%\pgo" -``` - -Finally, set the following environment variables to point to the client TLS files: - -```bash -setx PGO_CA_CERT "%HOMEPATH%\pgo\client.crt" -setx PGO_CLIENT_CERT "%HOMEPATH%\pgo\client.crt" -setx PGO_CLIENT_KEY "%HOMEPATH%\pgo\client.key" -``` - -#### Configuring `pgouser` - -The `pgouser` file contains the username and password used for authentication with the Crunchy -PostgreSQL Operator. - -To setup the `pgouser` file, run the following: - -* Left click the _Start_ button in the bottom left corner of the taskbar -* Type `cmd` to search for _Command Prompt_ -* Right click the _Command Prompt_ application and click "Run as administrator" -* Enter the following command: `echo USERNAME_HERE:PASSWORD_HERE > %HOMEPATH%\pgo\pgouser` - -Finally, set the following environment variable to point to the `pgouser` file: - -``` -setx PGOUSER "%HOMEPATH%\pgo\pgouser" -``` - -#### Configuring the API Server URL - -If the Crunchy PostgreSQL Operator is not accessible outside of the cluster, it's required -to setup a port-forward tunnel using the `kubectl` or `oc` binary. - -In a separate terminal we need to setup a port forward to the Crunchy PostgreSQL Operator to -ensure connection can be made outside of the cluster: - -```bash -# If deployed to Kubernetes -kubectl port-forward -n pgo svc/postgres-operator 8443:8443 - -# If deployed to OpenShift -oc port-forward -n pgo svc/postgres-operator 8443:8443 -``` - -In the above examples, you can substitute `pgo` for the namespace that you -deployed the PostgreSQL Operator into. - -**Note**: The port-forward will be required for the duration of using the -PostgreSQL client. 
- -Next, set the following environment variable to configure the API server address: - -* Left click the _Start_ button in the bottom left corner of the taskbar -* Type `cmd` to search for _Command Prompt_ -* Right click the _Command Prompt_ application and click "Run as administrator" -* Enter the following command: `setx PGO_APISERVER_URL "https://:8443"` - * Note: if port-forward is being used, the IP of the Operator API is `127.0.0.1` - -## Verify the Client Installation - -After completing all of the steps above we can verify `pgo` is configured -properly by simply running the following: - -```bash -pgo version -``` - -If the above command outputs versions of both the client and API server, the Crunchy PostgreSQL -Operator client has been installed successfully. diff --git a/docs/content/installation/postgres-operator.md b/docs/content/installation/postgres-operator.md deleted file mode 100644 index 0cbd542dd5..0000000000 --- a/docs/content/installation/postgres-operator.md +++ /dev/null @@ -1,304 +0,0 @@ ---- -title: Install the PostgreSQL Operator -date: -draft: false -weight: 20 ---- - -# The PostgreSQL Operator Installer - -## Quickstart - -If you believe that all the default settings in the installation manifest work -for you, you can take a chance by running the manifest directly from the -repository: - -``` -kubectl create namespace pgo -kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml -``` - -However, we still advise that you read onward to see how to properly configure -the PostgreSQL Operator. - -## Overview - -The PostgreSQL Operator comes with a container called `pgo-deployer` which -handles a variety of lifecycle actions for the PostgreSQL Operator, including: - -- Installation -- Upgrading -- Uninstallation - -After configuring the Job template, the installer can be run using -[`kubectl apply`](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#apply) -and takes care of setting up all of the objects required to run the PostgreSQL -Operator. - -The installation manifest, called [`postgres-operator.yaml`](https://github.com/CrunchyData/postgres-operator/blob/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml), is available in the [`installers/kubectl/postgres-operator.yml`](https://github.com/CrunchyData/postgres-operator/blob/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml) -path in the PostgreSQL Operator repository. - - -## Requirements - -### RBAC - -The `pgo-deployer` requires a [ServiceAccount](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) -and [ClusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) -to run the installation job. Both of these resources are already defined -in the `postgres-operator.yml`, but can be updated based on your specific -environmental requirements. - -By default, the `pgo-deployer` uses a ServiceAccount called `pgo-deployer-sa` -that has a ClusterRoleBinding (`pgo-deployer-crb`) with several ClusterRole -permissions. This is required to create the [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) -that power the PostgreSQL Operator. 
While the PostgreSQL Operator itself can be -scoped to a specific namespace, you will need to have `cluster-admin` for the -initial deployment, or privileges that allow you to install Custom Resource -Definitions. The required list of privileges are available in the [postgres-operator.yml](https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml) file: - -[https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml](https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml) - -If you have already configured the ServiceAccount and ClusterRoleBinding for the -installation process (e.g. from a previous installation), then you can remove -these objects from the `postgres-operator.yml` manifest. - -### Config Map - -The `pgo-deployer` uses a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) -to pass configuration options into the installer. The ConfigMap is defined in -the `postgres-operator.yaml` file and can be updated based on your configuration -preferences. - -### Namespaces - -By default, the installer will run in the `pgo` Namespace. This can be -updated in the `postgres-operator.yml` file. **Please ensure that this namespace -exists before the job is run**. - -For example, to create the `pgo` namespace: - -``` -kubectl create namespace pgo -``` - -The PostgreSQL Operator has the ability to manage PostgreSQL clusters across -multiple Kubernetes [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), -including the ability to add and remove Namespaces that it watches. Doing so -does require the PostgreSQL Operator to have elevated privileges, and as such, -the PostgreSQL Operator comes with three "namespace modes" to select what level -of privileges to provide: - -- `dynamic`: The default is the default mode. This enables full dynamic Namespace -management capabilities, in which the PostgreSQL Operator can create, delete and -update any Namespaces within the Kubernetes cluster, while then also having the -ability to create the Roles, RoleBindings andService Accounts within those -Namespaces for normal operations. The PostgreSQL Operator can also listen for -Namespace events and create or remove controllers for various Namespaces as -changes are made to Namespaces from Kubernetes and the PostgreSQL Operator's -management. - -- `readonly`: In this mode, the PostgreSQL Operator is able to listen for -namespace events within the Kubernetes cluster, and then manage controllers -as Namespaces are added, updated or deleted. While this still requires a -ClusterRole, the permissions mirror those of a "read-only" environment, and as -such the PostgreSQL Operator is unable to create, delete or update Namespaces -itself nor create RBAC that it requires in any of those Namespaces. Therefore, -while in readonly, mode namespaces must be preconfigured with the proper RBAC -as the PostgreSQL Operator cannot create the RBAC itself. - -- `disabled`: Use this mode if you do not want to deploy the PostgreSQL Operator -with any ClusterRole privileges, especially if you are only deploying the -PostgreSQL Operator to a single namespace. This disables any Namespace -management capabilities within the PostgreSQL Operator and will simply attempt -to work with the target Namespaces specified during installation. 
If no target -Namespaces are specified, then the Operator will be configured to work within -the namespace in which it is deployed. As with the readonly mode, while in -this mode, Namespaces must be preconfigured with the proper RBAC, since the -PostgreSQL Operator cannot create the RBAC itself. - -## Configuration - `postgres-operator.yml` - -The `postgres-operator.yml` file contains all of the configuration parameters -for deploying the PostgreSQL Operator. The [example file](https://github.com/CrunchyData/postgres-operator/blob/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml) -contains defaults that should work in most Kubernetes environments, but it may -require some customization. - -For a detailed description of each configuration parameter, please read the -[PostgreSQL Operator Installer Configuration Reference](<{{< relref "/installation/configuration.md">}}>) - -#### Configuring to Update and Uninstall - -The deploy job can be used to perform different deployment actions for the -PostgreSQL Operator. When you run the job it will install the operator by -default but you can change the deployment action to uninstall or update. The -`DEPLOY_ACTION` environment variable in the `postgres-operator.yml` file can be -set to `install`, `update`, and `uninstall`. - - -### Image Pull Secrets - -If you are pulling the PostgreSQL Operator images from a private registry, you -will need to setup an -[imagePullSecret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) -with access to the registry. The image pull secret will need to be added to the -installer service account to have access. The secret will need to be created in -each namespace that the PostgreSQL Operator will be using. - -After you have configured your image pull secret in the Namespace the installer -runs in (by default, this is `pgo`), -add the name of the secret to the job yaml that you are using. You can update -the existing section like this: - -``` -apiVersion: v1 -kind: ServiceAccount -metadata: - name: pgo-deployer-sa - namespace: pgo -imagePullSecrets: - - name: -``` - -If the service account is configured without using the job yaml file, you -can link the secret to an existing service account with the `kubectl` or `oc` -clients. - -``` -# kubectl -kubectl patch serviceaccount -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}' -n - -# oc -oc secrets link --for=pull --namespace= -``` - -## Installation - -Once you have configured the PostgreSQL Operator Installer to your -specification, you can install the PostgreSQL Operator with the following -command: - -```shell -kubectl apply -f /path/to/postgres-operator.yml -``` - -### Install the [`pgo` Client]({{< relref "/installation/pgo-client" >}}) - -To use the [`pgo` Client]({{< relref "/installation/pgo-client" >}}), -there are a few additional steps to take in order to get it to work with you -PostgreSQL Operator installation. 
For convenience, you can download and run the -[`client-setup.sh`](https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/client-setup.sh) -script in your local environment: - -```shell -curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/client-setup.sh > client-setup.sh -chmod +x client-setup.sh -./client-setup.sh -``` - -{{% notice tip %}} -Running this script can cause existing `pgo` client binary, `pgouser`, -`client.crt`, and `client.key` files to be overwritten. -{{% /notice %}} - -The `client-setup.sh` script performs the following tasks: - -- Sets `$PGO_OPERATOR_NAMESPACE` to `pgo` if it is unset. This is the default -namespace that the PostgreSQL Operator is deployed to -- Checks for valid Operating Systems and determines which `pgo` binary to -download -- Creates a directory in `$HOME/.pgo/$PGO_OPERATOR_NAMESPACE` (e.g. `/home/hippo/.pgo/pgo`) -- Downloads the `pgo` binary, saves it to in `$HOME/.pgo/$PGO_OPERATOR_NAMESPACE`, -and sets it to be executable -- Pulls the TLS keypair from the PostgreSQL Operator `pgo.tls` Secret so that -the `pgo` client can communicate with the PostgreSQL Operator. These are saved -as `client.crt` and `client.key` in the `$HOME/.pgo/$PGO_OPERATOR_NAMESPACE` -path. -- Pulls the `pgouser` credentials from the `pgouser-admin` secret and saves them -in the format `username:password` in a file called `pgouser` -- `client.crt`, `client.key`, and `pgouser` are all set to be read/write by the -file owner. All other permissions are removed. -- Sets the following environmental variables with the following values: - -```shell -export PGOUSER=$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/pgouser -export PGO_CA_CERT=$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.crt -export PGO_CLIENT_CERT=$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.crt -export PGO_CLIENT_KEY=$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.key -``` - -For convenience, after the script has finished, you can permanently at these -environmental variables to your environment: - - -```shell -cat <> ~/.bashrc -export PATH="$HOME/.pgo/$PGO_OPERATOR_NAMESPACE:$PATH" -export PGOUSER="$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/pgouser" -export PGO_CA_CERT="$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.crt" -export PGO_CLIENT_CERT="$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.crt" -export PGO_CLIENT_KEY="$HOME/.pgo/$PGO_OPERATOR_NAMESPACE/client.key" -EOF -``` - -By default, the `client-setup.sh` script targets the user that is stored in the -`pgouser-admin` secret in the `pgo` (`$PGO_OPERATOR_NAMESPACE`) Namespace. If -you wish to use a different Secret, you can set the `PGO_USER_ADMIN` -environmental variable. - -For more detailed information about [installing the `pgo` client]({{< relref "/installation/pgo-client" >}}), -please see [Installing the `pgo` client]({{< relref "/installation/pgo-client" >}}). - -### Verify the Installation - -One way to verify the installation was successful is to execute the -[`pgo version`]({{< relref "/pgo-client/reference/pgo_version.md" >}}) command. 
- -In a new console window, run the following command to set up a port forward: - -```shell -kubectl -n pgo port-forward svc/postgres-operator 8443:8443 -``` - -Next, in another console window, set the following environment variable to configure the API server address: - -```bash -cat <> ${HOME?}/.bashrc -export PGO_APISERVER_URL="https://127.0.0.1:8443" -EOF -``` - -Apply those changes to the current session by running: - -```bash -source ${HOME?}/.bashrc -``` - -Now run the `pgo version` command: - -```shell -pgo version -``` - -If successful, you should see output similar to this: - -``` -pgo client version {{< param operatorVersion >}} -pgo-apiserver version {{< param operatorVersion >}} -``` - -## Post-Installation - -To clean up the installer artifacts, you can simply run: - -```shell -kubectl delete -f /path/to/postgres-operator.yml -``` - -Note that if you still have the ServiceAccount and ClusterRoleBinding in there, -you will need to have elevated privileges. - -## Installing the PostgreSQL Operator Monitoring Infrastructure - -Please see the [PostgreSQL Operator Monitoring installation section]({{< relref "/installation/metrics" >}}) -for instructions on how to install the PostgreSQL Operator Monitoring infrastructure. diff --git a/docs/content/installation/prerequisites.md b/docs/content/installation/prerequisites.md deleted file mode 100644 index 2df54859d9..0000000000 --- a/docs/content/installation/prerequisites.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: "Prerequisites" -date: -draft: false -weight: 10 ---- - -# Prerequisites - -The following is required prior to installing PostgreSQL Operator. - -## Environment - -The PostgreSQL Operator is tested in the following environments: - -* Kubernetes v1.13+ -* Red Hat OpenShift v3.11+ -* Red Hat OpenShift v4.4+ -* Amazon EKS -* VMWare Enterprise PKS 1.3+ -* IBM Cloud Pak Data - -#### IBM Cloud Pak Data - -If you install the PostgreSQL Operator, which comes with Crunchy -PostgreSQL for Kubernetes, on IBM Cloud Pak Data, please note the following -additional requirements: - -* Cloud Pak Data Version 2.5 -* Minimum Node Requirements (Cloud Paks Cluster): 3 -* Crunchy PostgreSQL for Kuberentes (Service): - * Minimum CPU Requirements: 0.2 CPU - * Minimum Memory Requirements: 120MB - * Minimum Storage Requirements: 5MB - -**Note**: PostgreSQL clusters deployed by the PostgreSQL Operator with -Crunchy PostgreSQL for Kubernetes are workload dependent. As such, users should -allocate enough resources for their PostgreSQL clusters. - -## Client Interfaces - -The PostgreSQL Operator installer will install the [`pgo` client]({{< relref "/pgo-client/_index.md" >}}) interface -to help with using the PostgreSQL Operator. However, it is also recommend that -you have access to [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) -or [`oc`](https://www.okd.io/download.html) and are able to communicate with the -Kubernetes or OpenShift cluster that you are working with. - -## Ports - -There are several application ports to note when using the PostgreSQL Operator. -These ports allow for the [`pgo` client]({{< relref "/pgo-client/_index.md" >}}) -to interface with the PostgreSQL Operator API as well as for users of the event -stream to connect to `nsqd` and `nsqdadmin`: - -| Container | Port | -| --- | --- | -| API Server | 8443 | -| nsqadmin | 4151 | -| nsqd | 4150 | - -If you are using these services, ensure your cluster administrator has given you -access to these ports. 
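One quick way to see which of these ports are actually exposed in your environment is to list the Services in the Operator's namespace (assuming the default `pgo` namespace):

```
kubectl -n pgo get svc
```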
- -### Application Ports - -The PostgreSQL Operator deploys different services to support a production -PostgreSQL environment. Below is a list of the applications and their default -Service ports. - -| Service | Port | -| --- | --- | -| PostgreSQL | 5432 | -| pgbouncer | 5432 | -| pgBackRest | 2022 | -| postgres-exporter | 9187 | -| pgbadger | 10000 | diff --git a/docs/content/pgo-client/_index.md b/docs/content/pgo-client/_index.md deleted file mode 100644 index 99c1040882..0000000000 --- a/docs/content/pgo-client/_index.md +++ /dev/null @@ -1,177 +0,0 @@ ---- -title: "Using the pgo Client" -date: -draft: false -weight: 50 ---- - -The PostgreSQL Operator Client, aka `pgo`, is the most convenient way to -interact with the PostgreSQL Operator. `pgo` provides many convenience methods -for creating, managing, and deleting PostgreSQL clusters through a series of -simple commands. The `pgo` client interfaces with the API that is provided by -the PostgreSQL Operator and can leverage the RBAC and TLS systems that are -provided by the PostgreSQL Operator - -![Architecture](/Operator-Architecture.png) - -The `pgo` client is available for Linux, macOS, and Windows, as well as a -`pgo-client` container that can be deployed alongside the PostgreSQL Operator. - -You can download `pgo` from the [releases page](https://github.com/crunchydata/postgres-operator/releases), -or have it installed in your preferred binary format or as a container in your -Kubernetes cluster using the [Ansible Installer](/installation/install-with-ansible/). - -## General Notes on Using the `pgo` Client - -Many of the `pgo` client commands require you to specify a namespace via the -`-n` or `--namespace` flag. While this is a very helpful tool when managing -PostgreSQL deployments across many Kubernetes namespaces, this can become -onerous for the intents of this guide. - -If you install the PostgreSQL Operator using the [quickstart](/quickstart/) -guide, you will install the PostgreSQL Operator to a namespace called `pgo`. We -can choose to always use one of these namespaces by setting the `PGO_NAMESPACE` -environmental variable, which is detailed in the global [`pgo` Client](/pgo-client/) -reference, - -For convenience, we will use the `pgo` namespace in the examples below. -For even more convenience, we recommend setting `pgo` to be the value of -the `PGO_NAMESPACE` variable. In the shell that you will be executing the `pgo` -commands in, run the following command: - -```shell -export PGO_NAMESPACE=pgo -``` - -If you do not wish to set this environmental variable, or are in an environment -where you are unable to use environmental variables, you will have to use the -`--namespace` (or `-n`) flag for most commands, e.g. - -`pgo version -n pgo` - -## Syntax - -The syntax for `pgo` is similar to what you would expect from using the -`kubectl` or `oc` binaries. This is by design: one of the goals of the -PostgreSQL Operator project is to allow for seamless management of PostgreSQL -clusters in Kubernetes-enabled environments, and by following the command -patterns that users are familiar with, the learning curve is that much easier! 
- -To get an overview of everything that is available at the top-level of `pgo`, -execute: - -```shell -pgo -``` - -The syntax for the commands that `pgo` executes typicall follow this format: - -``` -pgo [command] ([TYPE] [NAME]) [flags] -``` - -Where *command* is a verb like: - -- `create` -- `show` -- `delete` - -And *type* is a resource type like: - -- `cluster` -- `backup` -- `user` - -And *name* is the name of the resource type like: - -- hacluster -- gisdba - -There are several global flags that are available to every `pgo` command as well -as flags that are specific to particular commands. To get a list of all the -options and flags available to a command, you can use the `--help` flag. For -example, to see all of the options available to the `pgo create cluster` -command, you can run the following: - -```shell -pgo create cluster --help -``` - -## Command Overview - -The following table provides an overview of the commands that the `pgo` client -provides: - -| Operation | Syntax | Description | -| :---------- | :------------- | :------ | -| apply | `pgo apply mypolicy --selector=name=mycluster` | Apply a SQL policy on a Postgres cluster(s) that have a label matching `service-name=mycluster` | -| backup | `pgo backup mycluster` | Perform a backup on a Postgres cluster(s) | -| cat | `pgo cat mycluster filepath` | Perform a Linux `cat` command on the cluster. | -| clone | `pgo clone oldcluster newcluster` | DEPRECATED: Copies the primary database of an existing cluster to a new cluster. For a more robust method to copy data, use `pgo create cluster newcluster --restore-from oldcluster ` | -| create | `pgo create cluster mycluster` | Create an Operator resource type (e.g. cluster, policy, schedule, user, namespace, pgouser, pgorole) | -| delete | `pgo delete cluster mycluster` | Delete an Operator resource type (e.g. cluster, policy, user, schedule, namespace, pgouser, pgorole) | -| df | `pgo df mycluster` | Display the disk status/capacity of a Postgres cluster. | -| failover | `pgo failover mycluster` | Perform a manual failover of a Postgres cluster. | -| help | `pgo help` | Display general `pgo` help information. | -| label | `pgo label mycluster --label=environment=prod` | Create a metadata label for a Postgres cluster(s). | -| reload | `pgo reload mycluster` | Perform a `pg_ctl` reload command on a Postgres cluster(s). | -| restore | `pgo restore mycluster` | Perform a `pgbackrest` or `pgdump` restore on a Postgres cluster. | -| scale | `pgo scale mycluster` | Create a Postgres replica(s) for a given Postgres cluster. | -| scaledown | `pgo scaledown mycluster --query` | Delete a replica from a Postgres cluster. | -| show | `pgo show cluster mycluster` | Display Operator resource information (e.g. cluster, user, policy, schedule, namespace, pgouser, pgorole). | -| status | `pgo status` | Display Operator status. | -| test | `pgo test mycluster` | Perform a SQL test on a Postgres cluster(s). | -| update | `pgo update cluster mycluster --disable-autofail` | Update a Postgres cluster(s), pgouser, pgorole, user, or namespace. | -| upgrade | `pgo upgrade mycluster` | Perform a minor upgrade to a Postgres cluster(s). | -| version | `pgo version` | Display Operator version information. | - - -### Global Flags - -There are several global flags available to the `pgo` client. - -**NOTE**: Flags take precedence over environmental variables. 
- -| Flag | Description | -| :-- | :-- | -| `--apiserver-url` | The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. | -| `--debug` | Enable additional output for debugging. | -| `--disable-tls` | Disable TLS authentication to the Postgres Operator. | -| `--exclude-os-trust` | Exclude CA certs from OS default trust store. | -| `-h`, `--help` | Print out help for a command. | -| `-n`, `--namespace` | The namespace to execute the `pgo` command in. This is required for most `pgo` commands. | -| `--pgo-ca-cert` | The CA certificate file path for authenticating to the PostgreSQL Operator apiserver. | -| `--pgo-client-cert` | The client certificate file path for authenticating to the PostgreSQL Operator apiserver. | -| `--pgo-client-key` | The client key file path for authenticating to the PostgreSQL Operator apiserver. | - -### Global Environment Variables - -There are several environmental variables that can be used with the `pgo` -client. - -**NOTE**: Flags take precedence over environmental variables. - - -| Name | Description | -| :-- | :-- | -| `EXCLUDE_OS_TRUST` | Exclude CA certs from OS default trust store. | -| `GENERATE_BASH_COMPLETION` | If set, will allow `pgo` to leverage "bash completion" to help complete commands as they are typed. | -| `PGO_APISERVER_URL` | The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. | -| `PGO_CA_CERT` | The CA certificate file path for authenticating to the PostgreSQL Operator apiserver. | -| `PGO_CLIENT_CERT` | The client certificate file path for authenticating to the PostgreSQL Operator apiserver. | -| `PGO_CLIENT_KEY` | The client key file path for authenticating to the PostgreSQL Operator apiserver. | -| `PGO_NAMESPACE` | The namespace to execute the `pgo` command in. This is required for most `pgo` commands. | -| `PGOUSER` | The path to the pgouser file. Will be ignored if either `PGOUSERNAME` or `PGOUSERPASS` are set. | -| `PGOUSERNAME` | The username (role) used for auth on the operator apiserver. Requires that `PGOUSERPASS` be set. | -| `PGOUSERPASS` | The password used for auth on the operator apiserver. Requires that `PGOUSERNAME` be set. | - -## Additional Information - -How can you use the `pgo` client to manage your day-to-day PostgreSQL -operations? The next section covers many of the common types of tasks that -one needs to perform when managing production PostgreSQL clusters. Beyond that -is the full reference for all the available commands and flags for the `pgo` -client. - -- [Common `pgo` Client Tasks](/pgo-client/common-tasks/) -- [`pgo` Client Reference](/pgo-client/common-reference/) diff --git a/docs/content/pgo-client/common-tasks.md b/docs/content/pgo-client/common-tasks.md deleted file mode 100644 index a2ea5d1efe..0000000000 --- a/docs/content/pgo-client/common-tasks.md +++ /dev/null @@ -1,1606 +0,0 @@ ---- -title: "Common pgo Client Tasks" -date: -draft: false -weight: 20 ---- - -While the full [`pgo` client reference](/pgo-client/reference/) will tell you -everything you need to know about how to use `pgo`, it may be helpful to see -several examples of how to conduct "day-in-the-life" tasks for administering -PostgreSQL clusters with the PostgreSQL Operator. - -The below guide covers many of the common operations that are required when -managing PostgreSQL clusters. The guide is broken up by different administrative -topics, such as provisioning, high-availability, etc. 
- -## Setup Before Running the Examples - -Many of the `pgo` client commands require you to specify a namespace via the -`-n` or `--namespace` flag. While this is a very helpful tool when managing -PostgreSQL deployments across many Kubernetes namespaces, this can become -onerous for the intents of this guide. - -If you install the PostgreSQL Operator using the [quickstart](/quickstart/) -guide, you will install the PostgreSQL Operator to a namespace called `pgo`. We -can choose to always use one of these namespaces by setting the `PGO_NAMESPACE` -environmental variable, which is detailed in the global [`pgo` Client](/pgo-client/) -reference. - -For convenience, we will use the `pgo` namespace in the examples below. -For even more convenience, we recommend setting `pgo` to be the value of -the `PGO_NAMESPACE` variable. In the shell that you will be executing the `pgo` -commands in, run the following command: - -```shell -export PGO_NAMESPACE=pgo -``` - -If you do not wish to set this environmental variable, or are in an environment -where you are unable to use environmental variables, you will have to use the -`--namespace` (or `-n`) flag for most commands, e.g. - -`pgo version -n pgo` - -### JSON Output - -The default for the `pgo` client commands is to output their results in a -readable format. However, there are times when it may be helpful to you to have -the output in a machine-parseable format like JSON. - -Several commands support the `-o`/`--output` flags that deliver the results of -the command in the specified output. Presently, the only output that is -supported is `json`. - -As an example of using this feature, if you wanted to get the results of the -`pgo test` command in JSON, you could run the following: - -```shell -pgo test hacluster -o json -``` - -## PostgreSQL Operator System Basics - -To get started, it's first important to understand the basics of working with -the PostgreSQL Operator itself. You should know how to test if the PostgreSQL -Operator is working, check the overall status of the PostgreSQL Operator, view -the current configuration that the PostgreSQL Operator is using, and see -which Kubernetes Namespaces the PostgreSQL Operator has access to. - -While this may not be as fun as creating high-availability PostgreSQL clusters, -these commands will help you to perform basic troubleshooting tasks in your -environment. - -### Checking Connectivity to the PostgreSQL Operator - -A common task when working with the PostgreSQL Operator is to check connectivity -to the PostgreSQL Operator. This can be accomplished with the [`pgo version`](/pgo-client/reference/pgo_version/) -command: - -```shell -pgo version -``` - -which, if working, will yield results similar to: - -``` -pgo client version {{< param operatorVersion >}} -pgo-apiserver version {{< param operatorVersion >}} -``` - -### Inspecting the PostgreSQL Operator Configuration - -The [`pgo show config`](/pgo-client/reference/pgo_status/) command allows you to -view the current configuration that the PostgreSQL Operator is using. This can -be helpful for troubleshooting issues such as which PostgreSQL images are being -deployed by default, which storage classes are being used, etc. 
- -You can run the `pgo show config` command by running: - -```shell -pgo show config -``` - -which yields output similar to: - -```yaml -BasicAuth: "" -Cluster: - CCPImagePrefix: crunchydata - CCPImageTag: {{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}} - Policies: "" - Metrics: false - Badger: false - Port: "5432" - PGBadgerPort: "10000" - ExporterPort: "9187" - User: testuser - Database: userdb - PasswordAgeDays: "60" - PasswordLength: "8" - Replicas: "0" - ServiceType: ClusterIP - BackrestPort: 2022 - Backrest: true - BackrestS3Bucket: "" - BackrestS3Endpoint: "" - BackrestS3Region: "" - BackrestS3URIStyle: "" - BackrestS3VerifyTLS: true - DisableAutofail: false - PgmonitorPassword: "" - EnableCrunchyadm: false - DisableReplicaStartFailReinit: false - PodAntiAffinity: preferred - SyncReplication: false -Pgo: - Audit: false - PGOImagePrefix: crunchydata - PGOImageTag: {{< param centosBase >}}-{{< param operatorVersion >}} -PrimaryStorage: nfsstorage -BackupStorage: nfsstorage -ReplicaStorage: nfsstorage -BackrestStorage: nfsstorage -Storage: - nfsstorage: - AccessMode: ReadWriteMany - Size: 1G - StorageType: create - StorageClass: "" - SupplementalGroups: "65534" - MatchLabels: "" -``` - -### Viewing PostgreSQL Operator Managed Namespaces - -The PostgreSQL Operator has the ability to manage PostgreSQL clusters across -Kubernetes Namespaces. During the course of operations, it can be helpful to -know which namespaces the PostgreSQL Operator can use for deploying PostgreSQL -clusters. - -You can view which namespaces the PostgreSQL Operator can utilize by using -the [`pgo show namespace`](/pgo-client/reference/pgo_show_namespace/) command. To -list out the namespaces that the PostgreSQL Operator has access to, you can run -the following command: - -```shell -pgo show namespace --all -``` - -which yields output similar to: - -``` -pgo username: admin -namespace useraccess installaccess -default accessible no access -kube-node-lease accessible no access -kube-public accessible no access -kube-system accessible no access -pgo accessible no access -pgouser1 accessible accessible -pgouser2 accessible accessible -somethingelse no access no access -``` - -**NOTE**: Based on your deployment, your Kubernetes administrator may restrict -access to the multi-namespace feature of the PostgreSQL Operator. In this case, -you do not need to worry about managing your namespaces and as such do not need -to use this command, but we recommend setting the `PGO_NAMESPACE` variable as -described in the [general notes](#general-notes) on this page. - -## Provisioning: Create, View, Destroy - -### Creating a PostgreSQL Cluster - -You can create a cluster using the [`pgo create cluster`](/pgo-client/reference/pgo_create_cluster/) -command: - -```shell -pgo create cluster hacluster -``` - -which, if successful, will yield output similar to this: - -``` -created Pgcluster hacluster -workflow id ae714d12-f5d0-4fa9-910f-21944b41dec8 -``` - -#### Create a PostgreSQL Cluster with Different PVC Sizes - -You can also create a PostgreSQL cluster with an arbitrary PVC size using the -[`pgo create cluster`](/pgo-client/reference/pgo_create_cluster/) command. For -example, if you want to create a PostgreSQL cluster with a 128GiB PVC, you -can use the following command: - -```shell -pgo create cluster hacluster --pvc-size=128Gi -``` - -The above command sets the PVC size for all PostgreSQL instances in the cluster, -i.e. the primary and replicas. 
- -This also extends to the size of the pgBackRest repository as well, if you are -using the local Kubernetes cluster storage for your backup repository. To -create a PostgreSQL cluster with a pgBackRest repository that uses a 1TB PVC, -you can use the following command: - -```shell -pgo create cluster hacluster --pgbackrest-pvc-size=1Ti -``` - -#### Specify CPU / Memory for a PostgreSQL Cluster - -To specify the amount of CPU and memory to request for a PostgreSQL cluster, you -can use the `--cpu` and `--memory` flags of the -[`pgo create cluster`](/pgo-client/reference/pgo_create_cluster/) command. Both -of these values utilize the [Kubernetes quantity format](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) -for specifying how to allocate resources. - -For example, to create a PostgreSQL cluster that requests 4 CPU cores and has 16 -gibibytes of memory, you can use the following command: - -```shell -pgo create cluster hacluster --cpu=4 --memory=16Gi -``` - -#### Create a PostgreSQL Cluster with PostGIS - -To create a PostgreSQL cluster that uses the geospatial extension PostGIS, you -can execute the following command, updated with your desired image tag. In the -example below, the cluster will use PostgreSQL {{< param postgresVersion >}} and PostGIS {{< param postgisVersion >}}: - -```shell -pgo create cluster hagiscluster \ - --ccp-image=crunchy-postgres-gis-ha \ - --ccp-image-tag={{< param centosBase >}}-{{< param postgresVersion >}}-{{< param postgisVersion >}}-{{< param operatorVersion >}} -``` - -#### Create a PostgreSQL Cluster with a Tablespace - -Tablespaces are a PostgreSQL feature that allows a user to select specific -volumes to store data to, which is helpful in [several types of scenarios](/architecture/tablespaces/). -Often your workload does not require a tablespace, but the PostgreSQL Operator -provides support for tablespaces throughout the lifecycle of a PostgreSQL -cluster. - -To create a PostgreSQL cluster that uses the [tablespace](/architecture/tablespaces/) -feature with NFS storage, you can execute the following command: - -```shell -pgo create cluster hactsluster --tablespace=name=ts1:storageconfig=nfsstorage -``` - -You can use your preferred storage engine instead of `nfsstorage`. For example, -to create multiple tablespaces on GKE, you can execute the following command: - -```shell -pgo create cluster hactsluster \ - --tablespace=name=ts1:storageconfig=gce \ - --tablespace=name=ts2:storageconfig=gce -``` - -Tablespaces are immediately available once the PostgreSQL cluster is -provisioned. For example, to create a table using the tablespace `ts1`, you can -run the following SQL on your PostgreSQL cluster: - -```sql -CREATE TABLE sensor_data ( - id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, - sensor1 numeric, - sensor2 numeric, - sensor3 numeric, - sensor4 numeric -) -TABLESPACE ts1; -``` - -You can also create tablespaces that have different sized PVCs from the ones -defined in the storage specification. For instance, to create two tablespaces, -one that uses a 10GiB PVC and one that uses a 20GiB PVC, you can execute the -following command: - -```shell -pgo create cluster hactsluster \ - --tablespace=name=ts1:storageconfig=gce:pvcsize=10Gi \ - --tablespace=name=ts2:storageconfig=gce:pvcsize=20Gi -``` - -#### Create a PostgreSQL Cluster Using a Backup from Another PostgreSQL Cluster - -It is also possible to create a new PostgreSQL Cluster using a backup from another -PostgreSQL cluster. 
To do so, simply specify the cluster containing the backup -that you would like to utilize using the `restore-from` option: - - -```shell -pgo create cluster hacluster2 --restore-from=hacluster1 -``` - -When using this approach, a `pgbackrest restore` will be performed using the pgBackRest -repository for the `restore-from` cluster specified in order to populate the initial -`PGDATA` directory for the new PostgreSQL cluster. By default, pgBackRest will restore -to the latest backup available and replay all WAL. However, a `restore-opts` option -is also available that allows the `restore` command to be further customized, e.g. to -perform a point-in-time restore and/or restore from an S3 storage bucket: - -```shell -pgo create cluster hacluster2 \ - --restore-from=hacluster1 \ - --restore-opts="--repo-type=s3 --type=time --target='2020-07-02 20:19:36.13557+00'" -``` - -#### Tracking a Newly Provisioned Cluster - -A new PostgreSQL cluster can take a few moments to provision. You may have -noticed that the `pgo create cluster` command returns something called a -"workflow id". This workflow ID allows you to track the progress of your new -PostgreSQL cluster while it is being provisioned using the [`pgo show workflow`](/pgo-client/reference/pgo_show_workflow/) -command: - -```shell -pgo show workflow ae714d12-f5d0-4fa9-910f-21944b41dec8 -``` - -which can yield output similar to: - -``` -parameter value ---------- ----- -pg-cluster hacluster -task completed 2019-12-27T02:10:14Z -task submitted 2019-12-27T02:09:46Z -workflowid ae714d12-f5d0-4fa9-910f-21944b41dec8 -``` - -### View PostgreSQL Cluster Details - -To see details about your PostgreSQL cluster, you can use the [`pgo show cluster`](/pgo-client/reference/pgo_show_cluster/) -command. These details include elements such as: - -- The version of PostgreSQL that the cluster is using -- The PostgreSQL instances that comprise the cluster -- The Pods assigned to the cluster for all of the associated components, -including the nodes that the pods are assigned to -- The Persistent Volume Claims (PVC) that are being consumed by the cluster -- The Kubernetes Deployments associated with the cluster -- The Kubernetes Services associated with the cluster -- The Kubernetes Labels that are assigned to the PostgreSQL instances - -and more. 
- -You can view the details of the cluster by executing the following command: - -```shell -pgo show cluster hacluster -``` - -which will yield output similar to: - -``` -cluster : hacluster (crunchy-postgres-ha:{{< param centosBase >}}-{{< param postgresVersion >}}-{{< param operatorVersion >}}) - pod : hacluster-6dc6cfcfb9-f9knq (Running) on node01 (1/1) (primary) - pvc : hacluster - resources : CPU Limit= Memory Limit=, CPU Request= Memory Request= - storage : Primary=200M Replica=200M - deployment : hacluster - deployment : hacluster-backrest-shared-repo - service : hacluster - ClusterIP (10.102.20.42) - labels : pg-pod-anti-affinity= archive-timeout=60 crunchy-pgbadger=false crunchy-postgres-exporter=false deployment-name=hacluster pg-cluster=hacluster crunchy-pgha-scope=hacluster autofail=true pgo-backrest=true pgo-version={{< param operatorVersion >}} current-primary=hacluster name=hacluster pgouser=admin workflowid=ae714d12-f5d0-4fa9-910f-21944b41dec8 -``` - -### Deleting a Cluster - -You can delete a PostgreSQL cluster that is managed by the PostgreSQL Operator -by executing the following command: - -```shell -pgo delete cluster hacluster -``` - -This will remove the cluster from being managed by the PostgreSQL Operator, as -well as delete the root data Persistent Volume Claim (PVC) and backup PVCs -associated with the cluster. - -If you wish to keep your PostgreSQL data PVC, you can delete the cluster with -the following command: - -```shell -pgo delete cluster hacluster --keep-data -``` - -You can then recreate the PostgreSQL cluster with the same data by using the -`pgo create cluster` command with a cluster of the same name: - -```shell -pgo create cluster hacluster -``` - -This technique is used when performing tasks such as upgrading the PostgreSQL -Operator. - -You can also keep the pgBackRest repository associated with the PostgreSQL -cluster by using the `--keep-backups` flag with the `pgo delete cluster` -command: - -```shell -pgo delete cluster hacluster --keep-backups -``` - -## Testing PostgreSQL Cluster Availability - -You can test the availability of your cluster by using the [`pgo test`](/pgo-client/reference/pgo_test/) -command. The `pgo test` command checks to see if the Kubernetes Services and -the Pods that comprise the PostgreSQL cluster are available to receive -connections. This includes: - -- Testing that the Kubernetes Endpoints are available and able to route requests -to healthy Pods -- Testing that each PostgreSQL instance is available and ready to accept client -connections by performing a connectivity check similar to the one performed by -`pg_isready` - -To test the availability of a PostgreSQL cluster, you can run the following -command: - -```shell -pgo test hacluster -``` - -which will yield output similar to: - -``` -cluster : hacluster - Services - primary (10.102.20.42:5432): UP - Instances - primary (hacluster-6dc6cfcfb9-f9knq): UP -``` - -## Disaster Recovery: Backups & Restores - -The PostgreSQL Operator supports sophisticated functionality for managing your -backups and restores. For more information for how this works, please see the -[disaster recovery](/architecture/disaster-recovery/) guide. - -### Creating a Backup - -The PostgreSQL Operator uses the open source [pgBackRest](https://www.pgbackrest.org) -backup and recovery utility for managing backups and PostgreSQL archives. 
These -backups are also used as part of managing the overall health and -high-availability of PostgreSQL clusters managed by the PostgreSQL Operator and -used as part of the cloning process as well. - -When a new PostgreSQL cluster is provisioned by the PostgreSQL Operator, a full -pgBackRest backup is taken by default. This is required in order to create new -replicas (via `pgo scale`) for the PostgreSQL cluster as well as for healing during -a [failover scenario](/architecture/high-availability/). - -To create a backup, you can run the following command: - -```shell -pgo backup hacluster -``` - -which, by default, will create an incremental pgBackRest backup. The reason for -this is that the PostgreSQL Operator initially creates a pgBackRest full backup -when the cluster is initially provisioned, and pgBackRest will take incremental -backups for each subsequent backup until a different backup type is specified. - -Most pgBackRest options are supported and can be passed in by the PostgreSQL -Operator via the `--backup-opts` flag. What follows are some examples of how -to utilize pgBackRest with the PostgreSQL Operator to help you create your -optimal disaster recovery setup. - -#### Creating a Full Backup - -You can create a full backup using the following command: - -```shell -pgo backup hacluster --backup-opts="--type=full" -``` - -#### Creating a Differential Backup - -You can create a differential backup using the following command: - -```shell -pgo backup hacluster --backup-opts="--type=diff" -``` - -#### Creating an Incremental Backup - -You can create an incremental backup using the following command: - -```shell -pgo backup hacluster --backup-opts="--type=incr" -``` - -An incremental backup is created without specifying any options after a full or -differential backup is taken. - -### Creating Backups in S3 - -The PostgreSQL Operator supports creating backups in S3 or any object storage -system that uses the S3 protocol. For more information, please read the section -on [PostgreSQL Operator Backups with S3](/architecture/disaster-recovery/#using-s3) -in the architecture section. - -### Displaying Backup Information - -You can see information about the current state of backups in a PostgreSQL -cluster managed by the PostgreSQL Operator by executing the following command: - -```shell -pgo show backup hacluster -``` - -### Setting Backup Retention - -By default, pgBackRest will allow you to keep on creating backups until you run -out of disk space. As such, it may be helpful to manage how many backups are -retained. - -pgBackRest comes with several flags for managing how backups can be retained: - -- `--repo1-retention-full`: how many full backups to retain -- `--repo1-retention-diff`: how many differential backups to retain -- `--repo1-retention-archive`: how many sets of WAL archives to retain alongside -the full and differential backups that are retained - -For example, to create a full backup and retain the previous 7 full backups, you -would execute the following command: - -```shell -pgo backup hacluster --backup-opts="--type=full --repo1-retention-full=7" -``` - -### Scheduling Backups - -Any effective disaster recovery strategy includes having regularly scheduled -backups. The PostgreSQL Operator enables this through its scheduling sidecar -that is deployed alongside the Operator. 
- -#### Creating a Scheduled Backup - -For example, to schedule a full backup once a day at midnight, you can execute -the following command: - -```shell -pgo create schedule hacluster --schedule="0 1 * * *" \ - --schedule-type=pgbackrest --pgbackrest-backup-type=full -``` - -To schedule an incremental backup once every 3 hours, you can execute the -following command: - -```shell -pgo create schedule hacluster --schedule="0 */3 * * *" \ - --schedule-type=pgbackrest --pgbackrest-backup-type=incr -``` - -You can also create regularly scheduled backups and combine it with a retention -policy. For example, using the above example of taking a nightly full backup, -you can specify a policy of retaining 21 backups by executing the following -command: - -```shell -pgo create schedule hacluster --schedule="0 0 * * *" \ - --schedule-type=pgbackrest --pgbackrest-backup-type=full \ - --schedule-opts="--repo1-retention-full=21" -``` - -### Restore a Cluster - -The PostgreSQL Operator supports the ability to perform a full restore on a -PostgreSQL cluster (i.e. a "clone" or "copy") as well as a -point-in-time-recovery. There are two types of ways to restore a cluster: - -- Restore to a new cluster using the `--restore-from` flag in the -[`pgo create cluster`]({{< relref "/pgo-client/reference/pgo_create_cluster.md" >}}) -command. This is effectively a [clone](#clone-a-postgresql-cluster) or a copy. -- Restore in-place using the [`pgo restore`]({{< relref "/pgo-client/reference/pgo_restore.md" >}}) -command. Note that this is **destructive**. - -It is typically better to perform a restore to a new cluster, particularly when -performing a point-in-time-recovery, as it can allow you to more effectively -manage your downtime and avoid making undesired changes to your production data. - -Additionally, the "restore to a new cluster" technique works so long as you have -a pgBackRest repository available: the pgBackRest repository does not need to be -attached to an active cluster! For example, if a cluster named `hippo` was -deleted as such: - -``` -pgo delete cluster hippo --keep-backups -``` - -you can create a new cluster from the backups like so: - -``` -pgo create cluster datalake --restore-from=hippo -``` - -Below provides guidance on how to perform a restore to a new PostgreSQL cluster -both as a full copy and to a specific point in time. Additionally, it also -shows how to restore in place to a specific point in time. - -#### Restore to a New Cluster (aka "copy" or "clone") - -Restoring to a new PostgreSQL cluster allows one to take a backup and create a -new PostgreSQL cluster that can run alongside an existing PostgreSQL cluster. -There are several scenarios where using this technique is helpful: - -- Creating a copy of a PostgreSQL cluster that can be used for other purposes. -Another way of putting this is "creating a clone." -- Restore to a point-in-time and inspect the state of the data without affecting -the current cluster - -and more. - -##### Full Restore - -To create a new PostgreSQL cluster from a backup and restore it fully, you can -execute the following command: - -``` -pgo create cluster newcluster --restore-from=oldcluster -``` - -##### Point-in-time-Recovery (PITR) - -To create a new PostgreSQL cluster and restore it to specific point-in-time -(e.g. 
before a key table was dropped), you can use the following command, -substituting the time that you wish to restore to: - -``` -pgo create cluster newcluster \ - --restore-from oldcluster \ - --restore-opts "--type=time --target='2019-12-31 11:59:59.999999+00'" -``` - -When the restore is complete, the cluster is immediately available for reads and -writes. To inspect the data before allowing connections, add pgBackRest's -`--target-action=pause` option to the `--restore-opts` parameter. - -The PostgreSQL Operator supports the full set of pgBackRest restore options, -which can be passed into the `--backup-opts` parameter. For more information, -please review the [pgBackRest restore options](https://pgbackrest.org/command.html#command-restore) - -#### Restore in-place - -Restoring a PostgreSQL cluster in-place is a **destructive** action that will -perform a recovery on your existing data directory. This is accomplished using -the [`pgo restore`]({{< relref "/pgo-client/reference/pgo_restore.md" >}}) -command. The most common scenario is to restore the database to a specific point -in time. - -##### Point-in-time-Recovery (PITR) - -The more likely scenario when performing a PostgreSQL cluster restore is to -recover to a particular point-in-time (e.g. before a key table was dropped). For -example, to restore a cluster to December 31, 2019 at 11:59pm: - -``` -pgo restore hacluster --pitr-target="2019-12-31 11:59:59.999999+00" \ - --backup-opts="--type=time" -``` - -When the restore is complete, the cluster is immediately available for reads and -writes. To inspect the data before allowing connections, add pgBackRest's -`--target-action=pause` option to the `--backup-opts` parameter. - -The PostgreSQL Operator supports the full set of pgBackRest restore options, -which can be passed into the `--backup-opts` parameter. For more information, -please review the [pgBackRest restore options](https://pgbackrest.org/command.html#command-restore) - -Using this technique, after a restore is complete, you will need to re-enable -high availability on the PostgreSQL cluster manually. You can re-enable high -availability by executing the following command: - -``` -pgo update cluster hacluster --autofail=true -``` - -### Logical Backups (`pg_dump` / `pg_dumpall`) - -The PostgreSQL Operator supports taking logical backups with `pg_dump` and -`pg_dumpall`. While they do not provide the same performance and storage -optimizations as the physical backups provided by pgBackRest, logical backups -are helpful when one wants to upgrade between major PostgreSQL versions, or -provide only a subset of a database, such as a table. - -#### Create a Logical Backup - -To create a logical backup of the 'postgres' database, you can run the following -command: - -```shell -pgo backup hacluster --backup-type=pgdump -``` - -To create a logical backup of a specific database, you can use the `--database` flag, -as in the following command: - -```shell -pgo backup hacluster --backup-type=pgdump --database=mydb -``` - -You can pass in specific options to `--backup-opts`, which can accept most of -the options that the [`pg_dump`](https://www.postgresql.org/docs/current/app-pgdump.html) -command accepts. 
For example, to only dump the data from a specific table called -`users`: - -```shell -pgo backup hacluster --backup-type=pgdump --backup-opts="-t users" -``` - -To use `pg_dumpall` to create a logical backup of all the data in a PostgreSQL -cluster, you must pass the `--dump-all` flag in `--backup-opts`, i.e.: - -```shell -pgo backup hacluster --backup-type=pgdump --backup-opts="--dump-all" -``` - -#### Viewing Logical Backups - -To view an available list of logical backups, you can use the `pgo show backup` -command: - -```shell -pgo show backup --backup-type=pgdump -``` - -This provides information about the PVC that the logical backups are stored on -as well as the timestamps required to perform a restore from a logical backup. - -#### Restore from a Logical Backup - -To restore from a logical backup, you need to reference the PVC that the logical -backup is stored to, as well as the timestamp that was created by the logical -backup. - -You can restore a logical backup using the following command: - -```shell -pgo restore hacluster --backup-type=pgdump --backup-pvc=hacluster-pgdump-pvc \ - --pitr-target="2019-01-15-00-03-25" -n pgouser1 -``` - -To restore to a specific database, add the `--pgdump-database` flag to the -command from above: - -```shell -pgo restore hacluster --backup-type=pgdump --backup-pvc=hacluster-pgdump-pvc \ - --pgdump-database=mydb --pitr-target="2019-01-15-00-03-25" -n pgouser1 -``` - -## High-Availability: Scaling Up & Down - -The PostgreSQL Operator supports a robust [high-availability](/architecture/high-availability) -set up to ensure that your PostgreSQL clusters can stay up and running. For -detailed information on how it works, please see the -[high-availability architecture]((/architecture/high-availability)) section. - -### Creating a New Replica - -To create a new replica, also known as "scaling up", you can execute the -following command: - -```shell -pgo scale hacluster --replica-count=1 -``` - -If you wanted to add two new replicas at the same time, you could execute the -following command: - -```shell -pgo scale hacluster --replica-count=2 -``` - -### Viewing Available Replicas - -You can view the available replicas in a few ways. First, you can use `pgo show cluster` -to see the overall information about the PostgreSQL cluster: - -```shell -pgo show cluster hacluster -``` - -You can also find specific replica names by using the `--query` flag on the -`pgo failover` and `pgo scaledown` commands, e.g.: - -```shell -pgo failover --query hacluster -``` - -### Manual Failover - -The PostgreSQL Operator is set up with an automated failover system based on -distributed consensus, but there may be times where you wish to have your -cluster manually failover. If you wish to have your cluster manually failover, -first, query your cluster to determine which failover targets are available. 
-The query command also provides information that may help your decision, such as -replication lag: - -```shell -pgo failover --query hacluster -``` - -Once you have selected the replica that is best for you to fail over to, you can -perform a failover with the following command: - -```shell -pgo failover hacluster --target=hacluster-abcd -``` - -where `hacluster-abcd` is the name of the PostgreSQL instance that you want to -promote to become the new primary. - -#### Destroying a Replica - -To destroy a replica, first query the available replicas by using the `--query` -flag on the `pgo scaledown` command, i.e.: - -```shell -pgo scaledown hacluster --query -``` - -Once you have picked the replica you want to remove, you can remove it by -executing the following command: - -```shell -pgo scaledown hacluster --target=hacluster-abcd -``` - -where `hacluster-abcd` is the name of the PostgreSQL replica that you want to -destroy. - -## Monitoring - -### PostgreSQL Metrics via pgMonitor - -You can view metrics about your PostgreSQL cluster using [PostgreSQL Operator Monitoring]({{< relref "/installation/metrics" >}}), -which uses open source [pgMonitor](https://github.com/CrunchyData/pgmonitor). -First, you need to install the [PostgreSQL Operator Monitoring]({{< relref "/installation/metrics" >}}) -stack for your PostgreSQL Operator environment. - -After that, you need to ensure that you deploy the `crunchy-postgres-exporter` -with each PostgreSQL cluster that you deploy: - -``` -pgo create cluster hippo --metrics -``` - -For more information on how monitoring with the PostgreSQL Operator works, -please see the [Monitoring]({{< relref "/architecture/monitoring.md" >}}) -section of the documentation. - -### View Disk Utilization - -You can see a comparison of Postgres data size versus the Persistent -Volume Claim size by entering the following: - -```shell -pgo df hacluster -n pgouser1 -``` - -## Cluster Maintenance & Resource Management - -There are several operations that you can perform to modify a PostgreSQL cluster -over its lifetime. - -#### Modify CPU / Memory for a PostgreSQL Cluster - -As database workloads change, it may be necessary to modify the CPU and memory -allocation for your PostgreSQL cluster. The PostgreSQL Operator allows for this -via the `--cpu` and `--memory` flags on the [`pgo update cluster`](/pgo-client/reference/pgo_update_cluster/) -command. Similar to the create command, both flags accept values that follow the -[Kubernetes quantity format](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/). - -For example, to update a PostgreSQL cluster to use 8 CPU cores and 32 -gibibytes of memory, you can use the following command: - -```shell -pgo update cluster hacluster --cpu=8 --memory=32Gi -``` - -The resource allocations apply to all instances in a PostgreSQL cluster: this -means your primary and any replicas will have the same cluster resource -allocations. Be sure to specify resource requests that your Kubernetes -environment can support. - -**NOTE**: This operation can cause downtime. Modifying the resource requests -allocated to a Deployment requires that the Pods in a Deployment be -restarted. Each PostgreSQL instance is safely shut down using the ["fast"](https://www.postgresql.org/docs/current/app-pg-ctl.html) -shutdown method to help ensure it will not enter crash recovery mode when a new -Pod is created. - -When the operation completes, each PostgreSQL instance will have the new -resource allocations. 
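If you would like to confirm that the new resource allocations have been applied, one option (a sketch using `kubectl` rather than the `pgo` client) is to inspect the cluster's Deployment directly. The names below follow the examples in this guide and are assumptions: the `hacluster` Deployment in the `pgo` namespace, with the PostgreSQL container named `database`; adjust them for your environment.

```shell
# Hypothetical verification step: print the resource requests/limits on the
# PostgreSQL container of the "hacluster" Deployment in the "pgo" namespace.
kubectl -n pgo get deployment hacluster \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="database")].resources}'
```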
- -#### Adding a Tablespace to a Cluster - -Based on your workload or volume of data, you may wish to add a -[tablespace](https://www.postgresql.org/docs/current/manage-ag-tablespaces.html) to -your PostgreSQL cluster. - -You can add a tablespace to an existing PostgreSQL cluster with the -[`pgo update cluster`](/pgo-client/reference/pgo_update_cluster/) command. -Adding a tablespace to a cluster uses a similar syntax to -[creating a cluster with a tablespace](#create-a-postgresql-cluster-with-a-tablespace), for example: - -```shell -pgo update cluster hacluster \ - --tablespace=name=tablespace3:storageconfig=storageconfigname -``` - -**NOTE**: This operation can cause downtime. In order to add a tablespace to a -PostgreSQL cluster, persistent volume claims (PVCs) need to be created and -mounted to each PostgreSQL instance in the cluster. The act of mounting a new -PVC to a Kubernetes Deployment causes the Pods in the deployment to restart. - -Each PostgreSQL instance is safely shut down using the ["fast"](https://www.postgresql.org/docs/current/app-pg-ctl.html) -shutdown method to help ensure it will not enter crash recovery mode when a new -Pod is created. - -When the operation completes, the tablespace will be set up and accessible to -use within the PostgreSQL cluster. - -For more information on tablespaces, please visit the [tablespace](/architecture/tablespaces/) -section of the documentation. - -## Clone a PostgreSQL Cluster - -You can create a copy of an existing PostgreSQL cluster in a new PostgreSQL -cluster by using the [`pgo create cluster`]({{< relref "/pgo-client/reference/pgo_create_cluster.md" >}}) -command with the `--restore-from` flag (and, if needed, `--restore-opts`). -The command copies the pgBackRest repository from either an active PostgreSQL -cluster, or a pgBackRest repository that exists from a former cluster that was -deleted using `pgo delete cluster --keep-backups`. - -You can clone a PostgreSQL cluster by running the following command: - -``` -pgo create cluster newhacluster --restore-from=hacluster -``` - -By leveraging `pgo create cluster`, you are able to copy the data from a -PostgreSQL cluster while creating the topology of a new cluster the way you want -to. For instance, if you want to copy data from an existing cluster that does -not have metrics to a new cluster that does, you can accomplish that with the -following command: - -``` -pgo create cluster newcluster --restore-from=oldcluster --metrics -``` - -### Clone a PostgreSQL Cluster to a Different PVC Size - -You can have a cloned PostgreSQL cluster use a different PVC size, which is -useful when moving your PostgreSQL cluster to a larger PVC. For example, to -clone a PostgreSQL cluster to a 256GiB PVC, you can execute the following -command: - -```shell -pgo create cluster bighippo --restore-from=hippo --pvc-size=256Gi -``` - -You can also have the cloned PostgreSQL cluster use a larger pgBackRest -backup repository by setting its PVC size. For example, to have a cloned -PostgreSQL cluster use a 1TiB pgBackRest repository, you can execute the -following command: - -```shell -pgo create cluster bighippo --restore-from=hippo --pgbackrest-pvc-size=1Ti -``` - -## Enable TLS - -TLS allows secure TCP connections to PostgreSQL, and the PostgreSQL Operator -makes it easy to enable this PostgreSQL feature. 
The TLS support in the -PostgreSQL Operator does not make an opinion about your PKI, but rather loads in -your TLS key pair that you wish to use for the PostgreSQL server as well as its -corresponding certificate authority (CA) certificate. Both of these Secrets are -required to enable TLS support for your PostgreSQL cluster when using the -PostgreSQL Operator, but it in turn allows seamless TLS support. - -### Setup - -There are three items that are required to enable TLS in your PostgreSQL -clusters: - -- A CA certificate -- A TLS private key -- A TLS certificate - -There are a variety of methods available to generate these items: in fact, -Kubernetes comes with its own [certificate management system](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/)! -It is up to you to decide how you want to manage this for your cluster. The -PostgreSQL documentation also provides an example for how to -[generate a TLS certificate](https://www.postgresql.org/docs/current/ssl-tcp.html#SSL-CERTIFICATE-CREATION) -as well. - -To set up TLS for your PostgreSQL cluster, you have to create two [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/): -one that contains the CA certificate, and the other that contains the server -TLS key pair. - -First, create the Secret that contains your CA certificate. Create the Secret -as a generic Secret, and note that the following requirements **must** be met: - -- The Secret must be created in the same Namespace as where you are deploying -your PostgreSQL cluster -- The `name` of the key that is holding the CA **must** be `ca.crt` - -There are optional settings for setting up the CA secret: - -- You can pass in a certificate revocation list (CRL) for the CA secret by -passing in the CRL using the `ca.crl` key name in the Secret. - -For example, to create a CA Secret with the trusted CA to use for the PostgreSQL -clusters, you could execute the following command: - -```shell -kubectl create secret generic postgresql-ca --from-file=ca.crt=/path/to/ca.crt -``` - -To create a CA Secret that includes a CRL, you could execute the following -command: - -```shell -kubectl create secret generic postgresql-ca \ - --from-file=ca.crt=/path/to/ca.crt \ - --from-file=ca.crl=/path/to/ca.crl -``` - -Note that you can reuse this CA Secret for other PostgreSQL clusters deployed by -the PostgreSQL Operator. - -Next, create the Secret that contains your TLS key pair. Create the Secret as a -a TLS Secret, and note the following requirement must be met: - -- The Secret must be created in the same Namespace as where you are deploying -your PostgreSQL cluster - -```shell -kubectl create secret tls hacluster-tls-keypair \ - --cert=/path/to/server.crt \ - --key=/path/to/server.key -``` - -Now you can create a TLS-enabled PostgreSQL cluster! - -### Create a TLS Enabled PostgreSQL Cluster - -Using the above example, to create a TLS-enabled PostgreSQL cluster that can -accept both TLS and non-TLS connections, execute the following command: - -```shell -pgo create cluster hacluster-tls \ - --server-ca-secret=postgresql-ca \ - --server-tls-secret=hacluster-tls-keypair -``` - -Including the `--server-ca-secret` and `--server-tls-secret` flags automatically -enable TLS connections in the PostgreSQL cluster that is deployed. These flags -should reference the CA Secret and the TLS key pair Secret, respectively. 
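Once the cluster is running, you can check the TLS setup by connecting with `psql`. The following is a minimal sketch; the `hacluster-tls` Service name, the `pgo` namespace, and the `testuser`/`userdb` credentials are assumptions based on the examples and default configuration shown earlier in this guide, so substitute the values for your environment.

```shell
# Forward the PostgreSQL Service to a local port (assumed names: namespace "pgo",
# Service "hacluster-tls").
kubectl -n pgo port-forward svc/hacluster-tls 5432:5432 &

# Connect with TLS required; libpq reads the PGSSLMODE environment variable.
PGSSLMODE=require psql -h localhost -p 5432 -U testuser userdb
```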
- -If deployed successfully, when you connect to the PostgreSQL cluster, assuming -your `PGSSLMODE` is set to `prefer` or higher, you will see something like this -in your `psql` terminal: - -``` -SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off) -``` - -### Force TLS in a PostgreSQL Cluster - -There are many environments where you want to force all remote connections to -occur over TLS, for example, if you deploy your PostgreSQL cluster's in a public -cloud or on an untrusted network. The PostgreSQL Operator lets you force all -remote connections to occur over TLS by using the `--tls-only` flag. - -For example, using the setup above, you can force TLS in a PostgreSQL cluster by -executing the following command: - -```shell -pgo create cluster hacluster-tls-only \ - --tls-only \ - --server-ca-secret=postgresql-ca --server-tls-secret=hacluster-tls-keypair -``` - -If deployed successfully, when you connect to the PostgreSQL cluster, assuming -your `PGSSLMODE` is set to `prefer` or higher, you will see something like this -in your `psql` terminal: - -``` -SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off) -``` - -If you try to connect to a PostgreSQL cluster that is deployed using the -`--tls-only` with TLS disabled (i.e. `PGSSLMODE=disable`), you will receive an -error that connections without TLS are unsupported. - -### TLS Authentication for PostgreSQL Replication - -PostgreSQL supports [certificate-based authentication](https://www.postgresql.org/docs/current/auth-cert.html), -which allows for PostgreSQL to authenticate users based on the common name (CN) -in a certificate. Using this feature, the PostgreSQL Operator allows you to -configure PostgreSQL replicas in a cluster to authenticate using a certificate -instead of a password. - -To use this feature, first you will need to set up a Kubernetes TLS Secret that -has a CN of `primaryuser`. If you do not wish to have this as your CN, you will -need to map the CN of this certificate to the value of `primaryuser` using a -[pg_ident](https://www.postgresql.org/docs/current/auth-username-maps.html) -username map, which you can configure as part of a -[custom PostgreSQL configuration]({{< relref "/advanced/custom-configuration.md" >}}). - -You also need to ensure that the certificate is verifiable by the certificate -authority (CA) chain that you have provided for your PostgreSQL cluster. The CA -is provided as part of the `--server-ca-secret` flag in the -[`pgo create cluster`]({{< relref "/pgo-client/reference/pgo_create_cluster.md" >}}) -command. - -To create a PostgreSQL cluster that uses TLS authentication for replication, -first create Kubernetes Secrets for the server and the CA. For the purposes of -this example, we will use the ones that were created earlier: `postgresql-ca` -and `hacluster-tls-keypair`. After generating a certificate that has a CN of -`primaryuser`, create a Kubernetes Secret that references this TLS keypair -called `hacluster-tls-replication-keypair`: - -``` -kubectl create secret tls hacluster-tls-replication-keypair \ - --cert=/path/to/replication.crt \ - --key=/path/to/replication.key -``` - -We can now create a PostgreSQL cluster and allow for it to use TLS -authentication for its replicas! 
Let's create a PostgreSQL cluster with two -replicas that also requires TLS for any connection: - -``` -pgo create cluster hippo \ - --tls-only \ - --server-ca-secret=postgresql-ca \ - --server-tls-secret=hacluster-tls-keypair \ - --replication-tls-secret=hacluster-tls-replication-keypair \ - --replica-count=2 -``` - -By default, the PostgreSQL Operator has each replica connect to PostgreSQL using -a [PostgreSQL TLS mode](https://www.postgresql.org/docs/current/libpq-ssl.html#LIBPQ-SSL-SSLMODE-STATEMENTS) -of `verify-ca`. If you wish to perform TLS mutual authentication between -PostgreSQL instances (i.e. certificate-based authentication with SSL mode of -`verify-full`), you will need to create a -[PostgreSQL custom configuration]({{< relref "/advanced/custom-configuration.md" >}}). - -## [Custom PostgreSQL Configuration]({{< relref "/advanced/custom-configuration.md" >}}) - -Customizing PostgreSQL configuration is currently not handled through the `pgo` -client, but given it is a common question, we thought it may be helpful to link -to how to do it from here. To find out more about how to -[customize your PostgreSQL configuration]({{< relref "/advanced/custom-configuration.md" >}}), -please refer to the [Custom PostgreSQL Configuration]({{< relref "/advanced/custom-configuration.md" >}}) -section of the documentation. - -## pgAdmin 4: PostgreSQL Administration - -[pgAdmin 4](https://www.pgadmin.org/) is a popular graphical user interface that -lets you work with PostgreSQL databases from either a desktop or web-based client. -In the case of the PostgreSQL Operator, the pgAdmin 4 web client can be deployed -and synchronized with PostgreSQL clusters so that users can administer their -databases with their PostgreSQL username and password. - -For example, let's work with a PostgreSQL cluster called `hippo` that has a user named `hippo` with password `datalake`, e.g.: - -``` -pgo create cluster hippo --username=hippo --password=datalake -``` - -Once the `hippo` PostgreSQL cluster is ready, create the pgAdmin 4 deployment -with the [`pgo create pgadmin`]({{< relref "/pgo-client/reference/pgo_create_pgadmin.md" >}}) -command: - -``` -pgo create pgadmin hippo -``` - -This creates a pgAdmin 4 deployment unique to this PostgreSQL cluster and -synchronizes the PostgreSQL user information into it. To access pgAdmin 4, you -can set up a port-forward to the Service, which follows the -pattern `<clusterName>-pgadmin`, to port `5050`: - -``` -kubectl port-forward svc/hippo-pgadmin 5050:5050 -``` - -Point your browser at `http://localhost:5050` and use your database username -(e.g. `hippo`) and password (e.g. `datalake`) to log in. - -![pgAdmin 4 Login Page](/images/pgadmin4-login.png) - -(Note: if your password does not appear to work, you can retry setting up the -user with the [`pgo update user`]({{< relref "/pgo-client/reference/pgo_update_user.md" >}}) -command: `pgo update user hippo --password=datalake`) - -The `pgo create user`, `pgo update user`, and `pgo delete user` commands are -synchronized with the pgAdmin 4 deployment. Any user with credentials to this -PostgreSQL cluster will be able to log in and use pgAdmin 4: - -![pgAdmin 4 Query](/images/pgadmin4-query.png) - -You can remove the pgAdmin 4 deployment with the [`pgo delete pgadmin`]({{< relref "/pgo-client/reference/pgo_delete_pgadmin.md" >}}) -command. - -For more information, please read the [pgAdmin 4 Architecture]({{< relref "/architecture/pgadmin4.md" >}}) -section of the documentation. 
- -## Standby Clusters: Multi-Cluster Kubernetes Deployments - -A [standby PostgreSQL cluster]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}) -can be used to create an advanced high-availability set with a PostgreSQL -cluster running in a different Kubernetes cluster, or used for other operations -such as migrating from one PostgreSQL cluster to another. Note: this is not -[high availability]({{< relref "/architecture/high-availability/_index.md" >}}) -per se: a high-availability PostgreSQL cluster will automatically fail over upon -a downtime event, whereas a standby PostgreSQL cluster must be explicitly -promoted. - -With that said, you can run multiple PostgreSQL Operators in different -Kubernetes clusters, and the below functionality will work! - -Below are some commands for setting up and using standby PostgreSQL clusters. -For more details on how standby clusters work, please review the section on -[Kubernetes Multi-Cluster Deployments]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}). - -### Creating a Standby Cluster - -Before creating a standby cluster, you will need to ensure that your primary -cluster is created properly. Standby clusters require the use of S3 or -equivalent S3-compatible storage system that is accessible to both the primary -and standby clusters. For example, to create a primary cluster to these -specifications: - -```shell -pgo create cluster hippo --pgbouncer --replica-count=2 \ - --pgbackrest-storage-type=local,s3 \ - --pgbackrest-s3-key= \ - --pgbackrest-s3-key-secret= \ - --pgbackrest-s3-bucket=watering-hole \ - --pgbackrest-s3-endpoint=s3.amazonaws.com \ - --pgbackrest-s3-region=us-east-1 \ - --pgbackrest-s3-uri-style=host \ - --pgbackrest-s3-verify-tls=true \ - --password-superuser=supersecrethippo \ - --password-replication=somewhatsecrethippo \ - --password=opensourcehippo - ``` - -Before setting up the standby PostgreSQL cluster, you will need to wait a few -moments for the primary PostgreSQL cluster to be ready. Once your primary -PostgreSQL cluster is available, you can create a standby cluster by using the -following command: - -```shell -pgo create cluster hippo-standby --standby --replica-count=2 \ - --pgbackrest-storage-type=s3 \ - --pgbackrest-s3-key= \ - --pgbackrest-s3-key-secret= \ - --pgbackrest-s3-bucket=watering-hole \ - --pgbackrest-s3-endpoint=s3.amazonaws.com \ - --pgbackrest-s3-region=us-east-1 \ - --pgbackrest-s3-uri-style=host \ - --pgbackrest-s3-verify-tls=true \ - --pgbackrest-repo-path=/backrestrepo/hippo-backrest-shared-repo \ - --password-superuser=supersecrethippo \ - --password-replication=somewhatsecrethippo \ - --password=opensourcehippo -``` - -The standby cluster will take a few moments to bootstrap, but it is now set up! - -### Promoting a Standby Cluster - -Before promoting a standby cluster, it is first necessary to shut down the -primary cluster, otherwise you can run into a potential "[split-brain](https://en.wikipedia.org/wiki/Split-brain_(computing))" -scenario (if your primary Kubernetes cluster is down, it may not be possible to -do this). - -To shutdown, run the following command: - -``` -pgo update cluster hippo --shutdown -``` - -Once it is shut down, you can promote the standby cluster: - -``` -pgo update cluster hippo-standby --promote-standby -``` - -The standby is now an active PostgreSQL cluster and can start to accept writes. 
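If you want to confirm that the promoted cluster is accepting connections before pointing applications at it, one option is the connectivity check described earlier in this guide:

```shell
pgo test hippo-standby
```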
- -To convert the previous active cluster into a standby cluster, you can run the -following command: - -``` -pgo update cluster hippo --enable-standby -``` - -It will take a few moments to convert this PostgreSQL cluster into a standby -cluster. When it is ready, you can start it up with the following command: - -``` -pgo update cluster hippo --startup -``` - -## Labels - -Labels are a helpful way to organize PostgreSQL clusters, such as by application -type or environment. The PostgreSQL Operator supports managing Kubernetes Labels -as a convenient way to group PostgreSQL clusters together. - -You can view which labels are assigned to a PostgreSQL cluster using the -[`pgo show cluster`](/pgo-client/reference/pgo_show_cluster/) command. You are also -able to see these labels when using `kubectl` or `oc`. - -### Add a Label to a PostgreSQL Cluster - -Labels can be added to PostgreSQL clusters using the [`pgo label`](/pgo-client/reference/pgo_label/) -command. For example, to add a label with a key/value pair of `env=production`, -you could execute the following command: - -```shell -pgo label hacluster --label=env=production -``` - -### Add a Label to Multiple PostgreSQL Clusters - -You can also add a label to multiple PostgreSQL clusters simultaneously -using the `--selector` flag on the `pgo label` command. For example, to add a -label with a key/value pair of `env=production` to clusters that have a label -key/value pair of `app=payment`, you could execute the following command: - -```shell -pgo label --selector=app=payment --label=env=production -``` - -## Custom Annotations - -There are a variety of reasons why one may want to add additional -[Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) -to the Deployments, and by extension Pods, managed by the PostgreSQL Operator: - -- External applications that extend functionality via details in an annotation -- Tracking purposes for an internal application - -etc. - -As such, the `pgo` client allows you to manage your own custom annotations on the -Operator. 
There are four different ways to add annotations: - -- On PostgreSQL instances -- On pgBackRest instances -- On pgBouncer instances -- On all of the above - -The custom annotation feature follows the same syntax as Kubernetes for adding -and removing annotations, e.g.: - -`--annotation=name=value` - -would add an annotation called `name` with a value of `value`, and: - -`--annotation=name-` - -would remove an annotation called `name`. - -### Adding an Annotation - -There are two ways to add an Annotation during the lifecycle of a PostgreSQL -cluster: - -- Cluster creation: ([`pgo create cluster`]({{< relref "/pgo-client/reference/pgo_create_cluster.md" >}})) -- Updating a cluster: ([`pgo update cluster`]({{< relref "/pgo-client/reference/pgo_update_cluster.md" >}})) - -There are several flags available for managing Annotations, i.e.: - -- `--annotation`: adds an Annotation to all managed Deployments (PostgreSQL, pgBackRest, pgBouncer) -- `--annotation-postgres`: adds an Annotation only to PostgreSQL Deployments -- `--annotation-pgbackrest`: adds an Annotation only to pgBackRest Deployments -- `--annotation-pgbouncer`: adds an Annotation only to pgBouncer Deployments - -To add an Annotation with key `hippo` and value `awesome` to all of the managed -Deployments when creating a cluster, you would run the following command: - -`pgo create cluster hippo --annotation=hippo=awesome` - -To add an Annotation with key `elephant` and value `cool` to only the PostgreSQL -Deployments when creating a cluster, you would run the following command: - -`pgo create cluster hippo --annotation-postgres=elephant=cool` - -To add an Annotation to all the managed Deployments in an existing cluster, you -can use the `pgo update cluster` command: - -`pgo update cluster hippo --annotation=zebra=nice` - -### Adding Multiple Annotations - -There are two syntaxes you could use to add multiple Annotations to a cluster: - -`pgo create cluster hippo --annotation=hippo=awesome,elephant=cool` - -or - -`pgo create cluster hippo --annotation=hippo=awesome --annotation=elephant=cool` - -### Updating Annotations - -To update an Annotation, you can use the [`pgo update cluster`]({{< relref "/pgo-client/reference/pgo_update_cluster.md" >}}) -command and reference the original Annotation key. For instance, if you wanted to -update the `hippo` annotation to be `rad`: - -`pgo update cluster hippo --annotation=hippo=rad` - -### Removing Annotations - -To remove an Annotation, you need to add a `-` to the end of the Annotation -name. For example, to remove the `hippo` annotation: - -`pgo update cluster hippo --annotation=hippo-` - -## Policy Management - -### Create a Policy - -To create a SQL policy, enter the following: - - pgo create policy mypolicy --in-file=mypolicy.sql -n pgouser1 - -This example creates a policy named *mypolicy* using the contents -of the file *mypolicy.sql*, which is assumed to be in the current -directory. 
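A policy file is plain SQL that the Operator runs against the target cluster. As an illustrative sketch (the role and grant below are hypothetical, not something the Operator requires), a policy file could be generated like this and then registered with the `pgo create policy` command shown above:

```shell
# Write a hypothetical policy that creates a read-only role.
cat > mypolicy.sql <<'SQL'
CREATE ROLE readonly NOLOGIN;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
SQL
```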
-
-You can view policies as follows:
-
-    pgo show policy --all -n pgouser1
-
-
-### Apply a Policy
-
-    pgo apply mypolicy --selector=environment=prod
-    pgo apply mypolicy --selector=name=hacluster
-
-## Advanced Operations
-
-### Connection Pooling via pgBouncer
-
-To add a pgBouncer Deployment to your Postgres cluster, enter:
-
-    pgo create cluster hacluster --pgbouncer -n pgouser1
-
-You can add pgBouncer after a Postgres cluster is created as follows:
-
-    pgo create pgbouncer hacluster
-    pgo create pgbouncer --selector=name=hacluster
-
-You can also specify a pgBouncer password as follows:
-
-    pgo create cluster hacluster --pgbouncer --pgbouncer-pass=somepass -n pgouser1
-
-You can remove pgBouncer from a cluster as follows:
-
-    pgo delete pgbouncer hacluster -n pgouser1
-
-### Query Analysis via pgBadger
-
-You can create a pgBadger sidecar container in your Postgres cluster
-pod as follows:
-
-    pgo create cluster hacluster --pgbadger -n pgouser1
-
-### Create a Cluster using Specific Storage
-
-    pgo create cluster hacluster --storage-config=somestorageconfig -n pgouser1
-
-Likewise, you can specify a storage configuration when creating
-a replica:
-
-    pgo scale hacluster --storage-config=someslowerstorage -n pgouser1
-
-This example specifies the *somestorageconfig* storage configuration
-to be used by the Postgres cluster. This lets you specify a storage
-configuration that is defined in the *pgo.yaml* file specifically for
-a given Postgres cluster.
-
-You can create a cluster using a preferred node as follows:
-
-    pgo create cluster hacluster --node-label=speed=superfast -n pgouser1
-
-That command will cause a node affinity rule to be added to the
-Postgres pod, which will influence the node upon which Kubernetes
-will schedule the Pod.
-
-Likewise, you can create a replica using a preferred node as follows:
-
-    pgo scale hacluster --node-label=speed=slowerthannormal -n pgouser1
-
-### Create a Cluster with LoadBalancer ServiceType
-
-    pgo create cluster hacluster --service-type=LoadBalancer -n pgouser1
-
-This command will cause the Postgres Service to be of a specific
-type instead of the default ClusterIP Service type.
-
-### Namespace Operations
-
-Create an Operator namespace where Postgres clusters can be created
-and managed by the Operator:
-
-    pgo create namespace mynamespace
-
-Update a Namespace so that it can be used by the Operator:
-
-    pgo update namespace somenamespace
-
-Delete a Namespace:
-
-    pgo delete namespace mynamespace
-
-### PostgreSQL Operator User Operations
-
-PGO users are users defined for authenticating to the PGO REST API. You
-can manage those users with the following commands:
-
-    pgo create pgouser someuser --pgouser-namespaces="pgouser1,pgouser2" --pgouser-password="somepassword" --pgouser-roles="pgoadmin"
-    pgo create pgouser otheruser --all-namespaces --pgouser-password="somepassword" --pgouser-roles="pgoadmin"
-
-Update a user:
-
-    pgo update pgouser someuser --pgouser-namespaces="pgouser1,pgouser2" --pgouser-password="somepassword" --pgouser-roles="pgoadmin"
-    pgo update pgouser otheruser --all-namespaces --pgouser-password="somepassword" --pgouser-roles="pgoadmin"
-
-Delete a PGO user:
-
-    pgo delete pgouser someuser
-
-PGO roles are also managed as follows:
-
-    pgo create pgorole somerole --permissions="Cat,Ls"
-
-Delete a PGO role with:
-
-    pgo delete pgorole somerole
-
-Update a PGO role with:
-
-    pgo update pgorole somerole --permissions="Cat,Ls"
-
-### PostgreSQL Cluster User Operations
-
-Managed Postgres users can be viewed using the following command:
-
-    pgo show user hacluster
-
-Postgres users can be created using the following command examples:
-
-    pgo create user hacluster --username=somepguser --password=somepassword --managed
-    pgo create user --selector=name=hacluster --username=somepguser --password=somepassword --managed
-
-Those commands are identical in function: each creates a user named *somepguser*
-with a password of *somepassword* on the hacluster Postgres cluster. The account is
-*managed*, meaning that these credentials are stored as a Secret on the Kubernetes
-cluster in the Operator namespace.
-
-Postgres users can be deleted using the following command:
-
-    pgo delete user hacluster --username=somepguser
-
-That command deletes the user on the hacluster Postgres cluster.
-
-Postgres users can be updated using the following command:
-
-    pgo update user hacluster --username=somepguser --password=frodo
-
-That command changes the password for the user on the hacluster Postgres cluster.
diff --git a/docs/content/pgo-client/reference/_index.md b/docs/content/pgo-client/reference/_index.md
deleted file mode 100644
index 58e997d4eb..0000000000
--- a/docs/content/pgo-client/reference/_index.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-title: "pgo Client Reference"
----
-## pgo
-
-The pgo command line interface.
-
-### Synopsis
-
-The pgo command line interface lets you create and manage PostgreSQL clusters.
-
-### Options
-
-```
-      --apiserver-url string      The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.
-      --debug                     Enable additional output for debugging.
-      --disable-tls               Disable TLS authentication to the Postgres Operator.
-      --exclude-os-trust          Exclude CA certs from OS default trust store
-  -h, --help                      help for pgo
-  -n, --namespace string          The namespace to use for pgo requests.
-      --pgo-ca-cert string        The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver.
-      --pgo-client-cert string    The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver.
-      --pgo-client-key string     The Client Key file path for authenticating to the PostgreSQL Operator apiserver.
-``` - -### SEE ALSO - -* [pgo apply](/pgo-client/reference/pgo_apply/) - Apply a policy -* [pgo backup](/pgo-client/reference/pgo_backup/) - Perform a Backup -* [pgo cat](/pgo-client/reference/pgo_cat/) - Perform a cat command on a cluster -* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource -* [pgo df](/pgo-client/reference/pgo_df/) - Display disk space for clusters -* [pgo failover](/pgo-client/reference/pgo_failover/) - Performs a manual failover -* [pgo label](/pgo-client/reference/pgo_label/) - Label a set of clusters -* [pgo reload](/pgo-client/reference/pgo_reload/) - Perform a cluster reload -* [pgo restart](/pgo-client/reference/pgo_restart/) - Restarts the PostgrSQL database within a PostgreSQL cluster -* [pgo restore](/pgo-client/reference/pgo_restore/) - Perform a restore from previous backup -* [pgo scale](/pgo-client/reference/pgo_scale/) - Scale a PostgreSQL cluster -* [pgo scaledown](/pgo-client/reference/pgo_scaledown/) - Scale down a PostgreSQL cluster -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster -* [pgo status](/pgo-client/reference/pgo_status/) - Display PostgreSQL cluster status -* [pgo test](/pgo-client/reference/pgo_test/) - Test cluster connectivity -* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster -* [pgo upgrade](/pgo-client/reference/pgo_upgrade/) - Perform a cluster upgrade. -* [pgo version](/pgo-client/reference/pgo_version/) - Print version information for the PostgreSQL Operator -* [pgo watch](/pgo-client/reference/pgo_watch/) - Print watch information for the PostgreSQL Operator - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_apply.md b/docs/content/pgo-client/reference/pgo_apply.md deleted file mode 100644 index 403d6c9d47..0000000000 --- a/docs/content/pgo-client/reference/pgo_apply.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: "pgo apply" ---- -## pgo apply - -Apply a policy - -### Synopsis - -APPLY allows you to apply a Policy to a set of clusters. For example: - - pgo apply mypolicy1 --selector=name=mycluster - pgo apply mypolicy1 --selector=someotherpolicy - pgo apply mypolicy1 --selector=someotherpolicy --dry-run - -``` -pgo apply [flags] -``` - -### Options - -``` - --dry-run Shows the clusters that the label would be applied to, without labelling them. - -h, --help help for apply - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. 
- -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_backup.md b/docs/content/pgo-client/reference/pgo_backup.md deleted file mode 100644 index 0e4c65a530..0000000000 --- a/docs/content/pgo-client/reference/pgo_backup.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: "pgo backup" ---- -## pgo backup - -Perform a Backup - -### Synopsis - -BACKUP performs a Backup, for example: - - pgo backup mycluster - -``` -pgo backup [flags] -``` - -### Options - -``` - --backup-opts string The options to pass into pgbackrest. - --backup-type string The backup type to perform. Default is pgbackrest. Valid backup types are pgbackrest and pgdump. (default "pgbackrest") - -d, --database string The name of the database pgdump will backup. (default "postgres") - -h, --help help for backup - --pgbackrest-storage-type string The type of storage to use when scheduling pgBackRest backups. Either "local", "s3" or both, comma separated. (default "local") - --pvc-name string The PVC name to use for the backup instead of the default. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_cat.md b/docs/content/pgo-client/reference/pgo_cat.md deleted file mode 100644 index cef3887e31..0000000000 --- a/docs/content/pgo-client/reference/pgo_cat.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: "pgo cat" ---- -## pgo cat - -Perform a cat command on a cluster - -### Synopsis - -CAT performs a Linux cat command on a cluster file. For example: - - pgo cat mycluster /pgdata/mycluster/postgresql.conf - -``` -pgo cat [flags] -``` - -### Options - -``` - -h, --help help for cat -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. 
- -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_clone.md b/docs/content/pgo-client/reference/pgo_clone.md deleted file mode 100644 index 6f07741010..0000000000 --- a/docs/content/pgo-client/reference/pgo_clone.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: "pgo clone" ---- -## pgo clone - -Copies the primary database of an existing cluster to a new cluster - -### Synopsis - -Clone makes a copy of an existing PostgreSQL cluster managed by the Operator and creates a new PostgreSQL cluster managed by the Operator, with the data from the old cluster. - - pgo clone oldcluster newcluster - -``` -pgo clone [flags] -``` - -### Options - -``` - --enable-metrics If sets, enables metrics collection on the newly cloned cluster - -h, --help help for clone - --pgbackrest-pvc-size string The size of the PVC capacity for the pgBackRest repository. Overrides the value set in the storage class. This is ignored if the storage type of "local" is not used. Must follow the standard Kubernetes format, e.g. "10.1Gi" - --pgbackrest-storage-source string The data source for the clone when both "local" and "s3" are enabled in the source cluster. Either "local", "s3" or both, comma separated. (default "local") - --pvc-size string The size of the PVC capacity for primary and replica PostgreSQL instances. Overrides the value set in the storage class. Must follow the standard Kubernetes format, e.g. "10.1Gi" -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. - -###### Auto generated by spf13/cobra on 2-Jul-2020 diff --git a/docs/content/pgo-client/reference/pgo_create.md b/docs/content/pgo-client/reference/pgo_create.md deleted file mode 100644 index 14cc07b5d0..0000000000 --- a/docs/content/pgo-client/reference/pgo_create.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -title: "pgo create" ---- -## pgo create - -Create a Postgres Operator resource - -### Synopsis - -CREATE allows you to create a new Operator resource. For example: - pgo create cluster - pgo create pgadmin - pgo create pgbouncer - pgo create pgouser - pgo create pgorole - pgo create policy - pgo create namespace - pgo create user - -``` -pgo create [flags] -``` - -### Options - -``` - -h, --help help for create -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. 
- --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. -* [pgo create cluster](/pgo-client/reference/pgo_create_cluster/) - Create a PostgreSQL cluster -* [pgo create namespace](/pgo-client/reference/pgo_create_namespace/) - Create a namespace -* [pgo create pgadmin](/pgo-client/reference/pgo_create_pgadmin/) - Create a pgAdmin instance -* [pgo create pgbouncer](/pgo-client/reference/pgo_create_pgbouncer/) - Create a pgbouncer -* [pgo create pgorole](/pgo-client/reference/pgo_create_pgorole/) - Create a pgorole -* [pgo create pgouser](/pgo-client/reference/pgo_create_pgouser/) - Create a pgouser -* [pgo create policy](/pgo-client/reference/pgo_create_policy/) - Create a SQL policy -* [pgo create schedule](/pgo-client/reference/pgo_create_schedule/) - Create a cron-like scheduled task -* [pgo create user](/pgo-client/reference/pgo_create_user/) - Create a PostgreSQL user - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_create_cluster.md b/docs/content/pgo-client/reference/pgo_create_cluster.md deleted file mode 100644 index 265b3c5517..0000000000 --- a/docs/content/pgo-client/reference/pgo_create_cluster.md +++ /dev/null @@ -1,133 +0,0 @@ ---- -title: "pgo create cluster" ---- -## pgo create cluster - -Create a PostgreSQL cluster - -### Synopsis - -Create a PostgreSQL cluster consisting of a primary and a number of replica backends. For example: - - pgo create cluster mycluster - -``` -pgo create cluster [flags] -``` - -### Options - -``` - --annotation strings Add an Annotation to all of the managed deployments (PostgreSQL, pgBackRest, pgBouncer) - The format to add an annotation is "name=value" - The format to remove an annotation is "name-" - - For example, to add two annotations: "--annotation=hippo=awesome,elephant=cool" - --annotation-pgbackrest strings Add an Annotation specifically to pgBackRest deployments - The format to add an annotation is "name=value" - The format to remove an annotation is "name-" - --annotation-pgbouncer strings Add an Annotation specifically to pgBouncer deployments - The format to add an annotation is "name=value" - The format to remove an annotation is "name-" - --annotation-postgres strings Add an Annotation specifically to PostgreSQL deployments - The format to add an annotation is "name=value" - The format to remove an annotation is "name-" - --ccp-image string The CCPImage name to use for cluster creation. If specified, overrides the value crunchy-postgres. - --ccp-image-prefix string The CCPImagePrefix to use for cluster creation. If specified, overrides the global configuration. - -c, --ccp-image-tag string The CCPImageTag to use for cluster creation. If specified, overrides the pgo.yaml setting. - --cpu string Set the number of millicores to request for the CPU, e.g. "100m" or "0.1". - --cpu-limit string Set the number of millicores to limit for the CPU, e.g. "100m" or "0.1". - --custom-config string The name of a configMap that holds custom PostgreSQL configuration files used to override defaults. - -d, --database string If specified, sets the name of the initial database that is created for the user. 
Defaults to the value set in the PostgreSQL Operator configuration, or if that is not present, the name of the cluster - --disable-autofail Disables autofail capabitilies in the cluster following cluster initialization. - --exporter-cpu string Set the number of millicores to request for CPU for the Crunchy Postgres Exporter sidecar container, e.g. "100m" or "0.1". Defaults to being unset. - --exporter-cpu-limit string Set the number of millicores to limit for CPU for the Crunchy Postgres Exporter sidecar container, e.g. "100m" or "0.1". Defaults to being unset. - --exporter-memory string Set the amount of memory to request for the Crunchy Postgres Exporter sidecar container. Defaults to server value (24Mi). - --exporter-memory-limit string Set the amount of memory to limit for the Crunchy Postgres Exporter sidecar container. - -h, --help help for cluster - -l, --labels string The labels to apply to this cluster. - --memory string Set the amount of RAM to request, e.g. 1GiB. Overrides the default server value. - --memory-limit string Set the amount of RAM to limit, e.g. 1GiB. - --metrics Adds the crunchy-postgres-exporter container to the database pod. - --node-label string The node label (key=value) to use in placing the primary database. If not set, any node is used. - --password string The password to use for standard user account created during cluster initialization. - --password-length int If no password is supplied, sets the length of the automatically generated password. Defaults to the value set on the server. - --password-replication string The password to use for the PostgreSQL replication user. - --password-superuser string The password to use for the PostgreSQL superuser. - --pgbackrest-cpu string Set the number of millicores to request for CPU for the pgBackRest repository. - --pgbackrest-cpu-limit string Set the number of millicores to limit for CPU for the pgBackRest repository. - --pgbackrest-custom-config string The name of a ConfigMap containing pgBackRest configuration files. - --pgbackrest-memory string Set the amount of memory to request for the pgBackRest repository. Defaults to server value (48Mi). - --pgbackrest-memory-limit string Set the amount of memory to limit for the pgBackRest repository. - --pgbackrest-pvc-size string The size of the PVC capacity for the pgBackRest repository. Overrides the value set in the storage class. This is ignored if the storage type of "local" is not used. Must follow the standard Kubernetes format, e.g. "10.1Gi" - --pgbackrest-repo-path string The pgBackRest repository path that should be utilized instead of the default. Required for standby - clusters to define the location of an existing pgBackRest repository. - --pgbackrest-s3-bucket string The AWS S3 bucket that should be utilized for the cluster when the "s3" storage type is enabled for pgBackRest. - --pgbackrest-s3-ca-secret string If used, specifies a Kubernetes secret that uses a different CA certificate for S3 or a S3-like storage interface. Must contain a key with the value "aws-s3-ca.crt" - --pgbackrest-s3-endpoint string The AWS S3 endpoint that should be utilized for the cluster when the "s3" storage type is enabled for pgBackRest. - --pgbackrest-s3-key string The AWS S3 key that should be utilized for the cluster when the "s3" storage type is enabled for pgBackRest. - --pgbackrest-s3-key-secret string The AWS S3 key secret that should be utilized for the cluster when the "s3" storage type is enabled for pgBackRest. 
-      --pgbackrest-s3-region string             The AWS S3 region that should be utilized for the cluster when the "s3" storage type is enabled for pgBackRest.
-      --pgbackrest-s3-uri-style string          Specifies whether "host" or "path" style URIs will be used when connecting to S3.
-      --pgbackrest-s3-verify-tls                This sets if pgBackRest should verify the TLS certificate when connecting to S3. To disable, use "--pgbackrest-s3-verify-tls=false". (default true)
-      --pgbackrest-storage-config string        The name of the storage config in pgo.yaml to use for the pgBackRest local repository.
-      --pgbackrest-storage-type string          The type of storage to use with pgBackRest. Either "local", "s3" or both, comma separated. (default "local")
-      --pgbadger                                Adds the crunchy-pgbadger container to the database pod.
-      --pgbouncer                               Adds a crunchy-pgbouncer deployment to the cluster.
-      --pgbouncer-cpu string                    Set the number of millicores to request for CPU for pgBouncer. Defaults to being unset.
-      --pgbouncer-cpu-limit string              Set the number of millicores to limit for CPU for pgBouncer. Defaults to being unset.
-      --pgbouncer-memory string                 Set the amount of memory to request for pgBouncer. Defaults to server value (24Mi).
-      --pgbouncer-memory-limit string           Set the amount of memory to limit for pgBouncer.
-      --pgbouncer-replicas int32                Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.
-      --pgo-image-prefix string                 The PGOImagePrefix to use for cluster creation. If specified, overrides the global configuration.
-      --pod-anti-affinity string                Specifies the type of anti-affinity that should be utilized when applying default pod anti-affinity rules to PG clusters (default "preferred")
-      --pod-anti-affinity-pgbackrest string     Set the Pod anti-affinity rules specifically for the pgBackRest repository. Defaults to the default cluster pod anti-affinity (i.e. "preferred"), or the value set by --pod-anti-affinity
-      --pod-anti-affinity-pgbouncer string      Set the Pod anti-affinity rules specifically for the pgBouncer Pods. Defaults to the default cluster pod anti-affinity (i.e. "preferred"), or the value set by --pod-anti-affinity
-  -z, --policies string                         The policies to apply when creating a cluster, comma separated.
-      --pvc-size string                         The size of the PVC capacity for primary and replica PostgreSQL instances. Overrides the value set in the storage class. Must follow the standard Kubernetes format, e.g. "10.1Gi"
-      --replica-count int                       The number of replicas to create as part of the cluster.
-      --replica-storage-config string           The name of a Storage config in pgo.yaml to use for the cluster replica storage.
-      --replication-tls-secret string           The name of the secret that contains the TLS keypair to use for enabling certificate-based authentication between PostgreSQL instances, particularly for the purpose of replication. Must be used with "server-tls-secret" and "server-ca-secret".
-      --restore-from string                     The name of the cluster to restore from when bootstrapping a new cluster
-      --restore-opts string                     The options to pass into pgbackrest when performing a restore to bootstrap the cluster. Only applicable when a "restore-from" value is specified
-  -s, --secret-from string                      The cluster name to use when restoring secrets.
-      --server-ca-secret string                 The name of the secret that contains the certificate authority (CA) to use for enabling the PostgreSQL cluster to accept TLS connections. Must be used with "server-tls-secret".
- --server-tls-secret string The name of the secret that contains the TLS keypair to use for enabling the PostgreSQL cluster to accept TLS connections. Must be used with "server-ca-secret" - --service-type string The Service type to use for the PostgreSQL cluster. If not set, the pgo.yaml default will be used. - --show-system-accounts Include the system accounts in the results. - --standby Creates a standby cluster that replicates from a pgBackRest repository in AWS S3. - --storage-config string The name of a Storage config in pgo.yaml to use for the cluster storage. - --sync-replication Enables synchronous replication for the cluster. - --tablespace strings Create a PostgreSQL tablespace on the cluster, e.g. "name=ts1:storageconfig=nfsstorage". The format is a key/value map that is delimited by "=" and separated by ":". The following parameters are available: - - - name (required): the name of the PostgreSQL tablespace - - storageconfig (required): the storage configuration to use, as specified in the list available in the "pgo-config" ConfigMap (aka "pgo.yaml") - - pvcsize: the size of the PVC capacity, which overrides the value set in the specified storageconfig. Follows the Kubernetes quantity format. - - For example, to create a tablespace with the NFS storage configuration with a PVC of size 10GiB: - - --tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi - --tls-only If true, forces all PostgreSQL connections to be over TLS. Must also set "server-tls-secret" and "server-ca-secret" - -u, --username string The username to use for creating the PostgreSQL user with standard permissions. Defaults to the value in the PostgreSQL Operator configuration. - --wal-storage-config string The name of a storage configuration in pgo.yaml to use for PostgreSQL's write-ahead log (WAL). - --wal-storage-size string The size of the capacity for WAL storage, which overrides any value in the storage configuration. Follows the Kubernetes quantity format. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_create_namespace.md b/docs/content/pgo-client/reference/pgo_create_namespace.md deleted file mode 100644 index 90894e2b77..0000000000 --- a/docs/content/pgo-client/reference/pgo_create_namespace.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: "pgo create namespace" ---- -## pgo create namespace - -Create a namespace - -### Synopsis - -Create a namespace. 
For example: - - pgo create namespace somenamespace - - Note: For Kubernetes versions prior to 1.12, this command will not function properly - - use $PGOROOT/deploy/add_targted_namespace.sh scriptor or give the user cluster-admin privileges. - For more details, see the Namespace Creation section under Installing Operator Using Bash in the documentation. - -``` -pgo create namespace [flags] -``` - -### Options - -``` - -h, --help help for namespace -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_create_pgadmin.md b/docs/content/pgo-client/reference/pgo_create_pgadmin.md deleted file mode 100644 index 1e0c43d578..0000000000 --- a/docs/content/pgo-client/reference/pgo_create_pgadmin.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo create pgadmin" ---- -## pgo create pgadmin - -Create a pgAdmin instance - -### Synopsis - -Create a pgAdmin instance for mycluster. For example: - - pgo create pgadmin mycluster - -``` -pgo create pgadmin [flags] -``` - -### Options - -``` - -h, --help help for pgadmin - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_create_pgbouncer.md b/docs/content/pgo-client/reference/pgo_create_pgbouncer.md deleted file mode 100644 index ad406e60e0..0000000000 --- a/docs/content/pgo-client/reference/pgo_create_pgbouncer.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: "pgo create pgbouncer" ---- -## pgo create pgbouncer - -Create a pgbouncer - -### Synopsis - -Create a pgbouncer. For example: - - pgo create pgbouncer mycluster - -``` -pgo create pgbouncer [flags] -``` - -### Options - -``` - --cpu string Set the number of millicores to request for CPU for pgBouncer. 
Defaults to being unset. - --cpu-limit string Set the number of millicores to request for CPU for pgBouncer. - -h, --help help for pgbouncer - --memory string Set the amount of memory to request for pgBouncer. Defaults to server value (24Mi). - --memory-limit string Set the amount of memory to limit for pgBouncer. - --replicas int32 Set the total number of pgBouncer instances to deploy. If not set, defaults to 1. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_create_pgorole.md b/docs/content/pgo-client/reference/pgo_create_pgorole.md deleted file mode 100644 index 50bcc66915..0000000000 --- a/docs/content/pgo-client/reference/pgo_create_pgorole.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo create pgorole" ---- -## pgo create pgorole - -Create a pgorole - -### Synopsis - -Create a pgorole. For example: - - pgo create pgorole somerole --permissions="Cat,Ls" - -``` -pgo create pgorole [flags] -``` - -### Options - -``` - -h, --help help for pgorole - --permissions string specify a comma separated list of permissions for a pgorole -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_create_pgouser.md b/docs/content/pgo-client/reference/pgo_create_pgouser.md deleted file mode 100644 index 35513ea915..0000000000 --- a/docs/content/pgo-client/reference/pgo_create_pgouser.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: "pgo create pgouser" ---- -## pgo create pgouser - -Create a pgouser - -### Synopsis - -Create a pgouser. For example: - - pgo create pgouser someuser - -``` -pgo create pgouser [flags] -``` - -### Options - -``` - --all-namespaces specifies this user will have access to all namespaces. 
- -h, --help help for pgouser - --pgouser-namespaces string specify a comma separated list of Namespaces for a pgouser - --pgouser-password string specify a password for a pgouser - --pgouser-roles string specify a comma separated list of Roles for a pgouser -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_create_policy.md b/docs/content/pgo-client/reference/pgo_create_policy.md deleted file mode 100644 index ac17b059f6..0000000000 --- a/docs/content/pgo-client/reference/pgo_create_policy.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "pgo create policy" ---- -## pgo create policy - -Create a SQL policy - -### Synopsis - -Create a policy. For example: - - pgo create policy mypolicy --in-file=/tmp/mypolicy.sql - -``` -pgo create policy [flags] -``` - -### Options - -``` - -h, --help help for policy - -i, --in-file string The policy file path to use for adding a policy. - -u, --url string The url to use for adding a policy. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_create_schedule.md b/docs/content/pgo-client/reference/pgo_create_schedule.md deleted file mode 100644 index 4aeb07fe88..0000000000 --- a/docs/content/pgo-client/reference/pgo_create_schedule.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -title: "pgo create schedule" ---- -## pgo create schedule - -Create a cron-like scheduled task - -### Synopsis - -Schedule creates a cron-like scheduled task. For example: - - pgo create schedule --schedule="* * * * *" --schedule-type=pgbackrest --pgbackrest-backup-type=full mycluster - -``` -pgo create schedule [flags] -``` - -### Options - -``` - -c, --ccp-image-tag string The CCPImageTag to use for cluster creation. If specified, overrides the pgo.yaml setting. 
- --database string The database to run the SQL policy against. - -h, --help help for schedule - --pgbackrest-backup-type string The type of pgBackRest backup to schedule (full, diff or incr). - --pgbackrest-storage-type string The type of storage to use when scheduling pgBackRest backups. Either "local", "s3" or both, comma separated. (default "local") - --policy string The policy to use for SQL schedules. - --schedule string The schedule assigned to the cron task. - --schedule-opts string The custom options passed to the create schedule API. - --schedule-type string The type of schedule to be created (pgbackrest or policy). - --secret string The secret name for the username and password of the PostgreSQL role for SQL schedules. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_create_user.md b/docs/content/pgo-client/reference/pgo_create_user.md deleted file mode 100644 index cd38c71059..0000000000 --- a/docs/content/pgo-client/reference/pgo_create_user.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: "pgo create user" ---- -## pgo create user - -Create a PostgreSQL user - -### Synopsis - -Create a postgres user. For example: - - pgo create user --username=someuser --all --managed - pgo create user --username=someuser mycluster --managed - pgo create user --username=someuser -selector=name=mycluster --managed - pgo create user --username=user1 --selector=name=mycluster - -``` -pgo create user [flags] -``` - -### Options - -``` - --all Create a user on every cluster. - -h, --help help for user - --managed Creates a user with secrets that can be managed by the Operator. - -o, --output string The output format. Supported types are: "json" - --password string The password to use for creating a new user which overrides a generated password. - --password-length int If no password is supplied, sets the length of the automatically generated password. Defaults to the value set on the server. - --password-type string The type of password hashing to use.Choices are: (md5, scram-sha-256). (default "md5") - -s, --selector string The selector to use for cluster filtering. - --username string The username to use for creating a new user - --valid-days int Sets the number of days that a password is valid. Defaults to the server value. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. 
- --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo create](/pgo-client/reference/pgo_create/) - Create a Postgres Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete.md b/docs/content/pgo-client/reference/pgo_delete.md deleted file mode 100644 index b2233b9c50..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -title: "pgo delete" ---- -## pgo delete - -Delete an Operator resource - -### Synopsis - -The delete command allows you to delete an Operator resource. For example: - - pgo delete backup mycluster - pgo delete cluster mycluster - pgo delete cluster mycluster --delete-data - pgo delete cluster mycluster --delete-data --delete-backups - pgo delete label mycluster --label=env=research - pgo delete pgadmin mycluster - pgo delete pgbouncer mycluster - pgo delete pgbouncer mycluster --uninstall - pgo delete pgouser someuser - pgo delete pgorole somerole - pgo delete policy mypolicy - pgo delete namespace mynamespace - pgo delete schedule --schedule-name=mycluster-pgbackrest-full - pgo delete schedule --selector=name=mycluster - pgo delete schedule mycluster - pgo delete user --username=testuser --selector=name=mycluster - -``` -pgo delete [flags] -``` - -### Options - -``` - -h, --help help for delete -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. 
-* [pgo delete backup](/pgo-client/reference/pgo_delete_backup/) - Delete a backup -* [pgo delete cluster](/pgo-client/reference/pgo_delete_cluster/) - Delete a PostgreSQL cluster -* [pgo delete label](/pgo-client/reference/pgo_delete_label/) - Delete a label from clusters -* [pgo delete namespace](/pgo-client/reference/pgo_delete_namespace/) - Delete namespaces -* [pgo delete pgadmin](/pgo-client/reference/pgo_delete_pgadmin/) - Delete a pgAdmin instance from a cluster -* [pgo delete pgbouncer](/pgo-client/reference/pgo_delete_pgbouncer/) - Delete a pgbouncer from a cluster -* [pgo delete pgorole](/pgo-client/reference/pgo_delete_pgorole/) - Delete a pgorole -* [pgo delete pgouser](/pgo-client/reference/pgo_delete_pgouser/) - Delete a pgouser -* [pgo delete policy](/pgo-client/reference/pgo_delete_policy/) - Delete a SQL policy -* [pgo delete schedule](/pgo-client/reference/pgo_delete_schedule/) - Delete a schedule -* [pgo delete user](/pgo-client/reference/pgo_delete_user/) - Delete a user - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_backup.md b/docs/content/pgo-client/reference/pgo_delete_backup.md deleted file mode 100644 index 22bf95e3c9..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_backup.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: "pgo delete backup" ---- -## pgo delete backup - -Delete a backup - -### Synopsis - -Delete a backup. For example: - - pgo delete backup mydatabase - -``` -pgo delete backup [flags] -``` - -### Options - -``` - -h, --help help for backup -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_cluster.md b/docs/content/pgo-client/reference/pgo_delete_cluster.md deleted file mode 100644 index bf550cf53e..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_cluster.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: "pgo delete cluster" ---- -## pgo delete cluster - -Delete a PostgreSQL cluster - -### Synopsis - -Delete a PostgreSQL cluster. For example: - - pgo delete cluster --all - pgo delete cluster mycluster - -``` -pgo delete cluster [flags] -``` - -### Options - -``` - --all Delete all clusters. Backups and data subject to --delete-backups and --delete-data flags, respectively. - -h, --help help for cluster - --keep-backups Keeps the backups available for use at a later time (e.g. recreating the cluster). - --keep-data Keeps the data for the specified cluster. Can be reassigned to exact same cluster in the future. - --no-prompt No command line confirmation before delete. - -s, --selector string The selector to use for cluster filtering. 
-``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_label.md b/docs/content/pgo-client/reference/pgo_delete_label.md deleted file mode 100644 index b8ad151b73..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_label.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: "pgo delete label" ---- -## pgo delete label - -Delete a label from clusters - -### Synopsis - -Delete a label from clusters. For example: - - pgo delete label mycluster --label=env=research - pgo delete label all --label=env=research - pgo delete label --selector=group=southwest --label=env=research - -``` -pgo delete label [flags] -``` - -### Options - -``` - -h, --help help for label - --label string The label to delete for any selected or specified clusters. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_namespace.md b/docs/content/pgo-client/reference/pgo_delete_namespace.md deleted file mode 100644 index 63e9fa95db..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_namespace.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo delete namespace" ---- -## pgo delete namespace - -Delete namespaces - -### Synopsis - -Delete namespaces. For example: - - pgo delete namespace mynamespace - pgo delete namespace --selector=env=test - -``` -pgo delete namespace [flags] -``` - -### Options - -``` - -h, --help help for namespace -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. 
- --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_pgadmin.md b/docs/content/pgo-client/reference/pgo_delete_pgadmin.md deleted file mode 100644 index d48bacd9d0..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_pgadmin.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "pgo delete pgadmin" ---- -## pgo delete pgadmin - -Delete a pgAdmin instance from a cluster - -### Synopsis - -Delete a pgAdmin instance from a cluster. For example: - - pgo delete pgadmin mycluster - -``` -pgo delete pgadmin [flags] -``` - -### Options - -``` - -h, --help help for pgadmin - --no-prompt No command line confirmation before delete. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_pgbouncer.md b/docs/content/pgo-client/reference/pgo_delete_pgbouncer.md deleted file mode 100644 index bcf71def78..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_pgbouncer.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: "pgo delete pgbouncer" ---- -## pgo delete pgbouncer - -Delete a pgbouncer from a cluster - -### Synopsis - -Delete a pgbouncer from a cluster. For example: - - pgo delete pgbouncer mycluster - -``` -pgo delete pgbouncer [flags] -``` - -### Options - -``` - -h, --help help for pgbouncer - --no-prompt No command line confirmation before delete. - -s, --selector string The selector to use for cluster filtering. - --uninstall Used to remove any "pgbouncer" owned object and user from the PostgreSQL cluster -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. 
- --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_pgorole.md b/docs/content/pgo-client/reference/pgo_delete_pgorole.md deleted file mode 100644 index f67359235d..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_pgorole.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "pgo delete pgorole" ---- -## pgo delete pgorole - -Delete a pgorole - -### Synopsis - -Delete a pgorole. For example: - - pgo delete pgorole somerole - -``` -pgo delete pgorole [flags] -``` - -### Options - -``` - --all Delete all PostgreSQL Operator roles. - -h, --help help for pgorole - --no-prompt No command line confirmation before delete. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_pgouser.md b/docs/content/pgo-client/reference/pgo_delete_pgouser.md deleted file mode 100644 index 0a4bba911f..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_pgouser.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "pgo delete pgouser" ---- -## pgo delete pgouser - -Delete a pgouser - -### Synopsis - -Delete a pgouser. For example: - - pgo delete pgouser someuser - -``` -pgo delete pgouser [flags] -``` - -### Options - -``` - --all Delete all PostgreSQL Operator users. - -h, --help help for pgouser - --no-prompt No command line confirmation before delete. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. 
-``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_policy.md b/docs/content/pgo-client/reference/pgo_delete_policy.md deleted file mode 100644 index cf40d26835..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_policy.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "pgo delete policy" ---- -## pgo delete policy - -Delete a SQL policy - -### Synopsis - -Delete a policy. For example: - - pgo delete policy mypolicy - -``` -pgo delete policy [flags] -``` - -### Options - -``` - --all Delete all SQL policies. - -h, --help help for policy - --no-prompt No command line confirmation before delete. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_schedule.md b/docs/content/pgo-client/reference/pgo_delete_schedule.md deleted file mode 100644 index b7de536bbd..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_schedule.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: "pgo delete schedule" ---- -## pgo delete schedule - -Delete a schedule - -### Synopsis - -Delete a cron-like schedule. For example: - - pgo delete schedule mycluster - pgo delete schedule --selector=env=test - pgo delete schedule --schedule-name=mycluster-pgbackrest-full - -``` -pgo delete schedule [flags] -``` - -### Options - -``` - -h, --help help for schedule - --no-prompt No command line confirmation before delete. - --schedule-name string The name of the schedule to delete. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. 
-``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_delete_user.md b/docs/content/pgo-client/reference/pgo_delete_user.md deleted file mode 100644 index ea4f7f75ae..0000000000 --- a/docs/content/pgo-client/reference/pgo_delete_user.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: "pgo delete user" ---- -## pgo delete user - -Delete a user - -### Synopsis - -Delete a user. For example: - - pgo delete user --username=someuser --selector=name=mycluster - -``` -pgo delete user [flags] -``` - -### Options - -``` - --all Delete all PostgreSQL users from all clusters. - -h, --help help for user - --no-prompt No command line confirmation before delete. - -o, --output string The output format. Supported types are: "json" - -s, --selector string The selector to use for cluster filtering. - --username string The username to delete. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo delete](/pgo-client/reference/pgo_delete/) - Delete an Operator resource - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_df.md b/docs/content/pgo-client/reference/pgo_df.md deleted file mode 100644 index 3a744dfbe9..0000000000 --- a/docs/content/pgo-client/reference/pgo_df.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: "pgo df" ---- -## pgo df - -Display disk space for clusters - -### Synopsis - -Displays the disk status for PostgreSQL clusters. For example: - - pgo df mycluster - pgo df --selector=env=research - pgo df --all - -``` -pgo df [flags] -``` - -### Options - -``` - --all Get disk utilization for all managed clusters - -h, --help help for df - -o, --output string The output format. Supported types are: "json" - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. 
- -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_failover.md b/docs/content/pgo-client/reference/pgo_failover.md deleted file mode 100644 index d60cefd417..0000000000 --- a/docs/content/pgo-client/reference/pgo_failover.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: "pgo failover" ---- -## pgo failover - -Performs a manual failover - -### Synopsis - -Performs a manual failover. For example: - - pgo failover mycluster - -``` -pgo failover [flags] -``` - -### Options - -``` - -h, --help help for failover - --no-prompt No command line confirmation. - --query Prints the list of failover candidates. - --target string The replica target which the failover will occur on. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_label.md b/docs/content/pgo-client/reference/pgo_label.md deleted file mode 100644 index 14f6486ad7..0000000000 --- a/docs/content/pgo-client/reference/pgo_label.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: "pgo label" ---- -## pgo label - -Label a set of clusters - -### Synopsis - -LABEL allows you to add or remove a label on a set of clusters. For example: - - pgo label mycluster yourcluster --label=environment=prod - pgo label all --label=environment=prod - pgo label --label=environment=prod --selector=name=mycluster - pgo label --label=environment=prod --selector=status=final --dry-run - -``` -pgo label [flags] -``` - -### Options - -``` - --dry-run Shows the clusters that the label would be applied to, without labelling them. - -h, --help help for label - --label string The new label to apply for any selected or specified clusters. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. 
- -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_reload.md b/docs/content/pgo-client/reference/pgo_reload.md deleted file mode 100644 index ebc8dc2e1a..0000000000 --- a/docs/content/pgo-client/reference/pgo_reload.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "pgo reload" ---- -## pgo reload - -Perform a cluster reload - -### Synopsis - -RELOAD performs a PostgreSQL reload on a cluster or set of clusters. For example: - - pgo reload mycluster - -``` -pgo reload [flags] -``` - -### Options - -``` - -h, --help help for reload - --no-prompt No command line confirmation. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_restart.md b/docs/content/pgo-client/reference/pgo_restart.md deleted file mode 100644 index dc0517f1db..0000000000 --- a/docs/content/pgo-client/reference/pgo_restart.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: "pgo restart" ---- -## pgo restart - -Restarts the PostgrSQL database within a PostgreSQL cluster - -### Synopsis - -Restarts one or more PostgreSQL databases within a PostgreSQL cluster. - - For example, to restart the primary and all replicas: - pgo restart mycluster - - Or target a specific instance within the cluster: - pgo restart mycluster --target=mycluster-abcd - - And use the 'query' flag obtain a list of all instances within the cluster: - pgo restart mycluster --query - -``` -pgo restart [flags] -``` - -### Options - -``` - -h, --help help for restart - --no-prompt No command line confirmation. - -o, --output string The output format. Supported types are: "json" - --query Prints the list of instances that can be restarted. - --target stringArray The instance that will be restarted. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. 
- -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_restore.md b/docs/content/pgo-client/reference/pgo_restore.md deleted file mode 100644 index 2d8561c64d..0000000000 --- a/docs/content/pgo-client/reference/pgo_restore.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: "pgo restore" ---- -## pgo restore - -Perform a restore from previous backup - -### Synopsis - -RESTORE performs a restore to a new PostgreSQL cluster. This includes stopping the database and recreating a new primary with the restored data. Valid backup types to restore from are pgbackrest and pgdump. For example: - - pgo restore mycluster - -``` -pgo restore [flags] -``` - -### Options - -``` - --backup-opts string The restore options for pgbackrest or pgdump. - --backup-pvc string The PVC containing the pgdump to restore from. - --backup-type string The type of backup to restore from, default is pgbackrest. Valid types are pgbackrest or pgdump. - -h, --help help for restore - --no-prompt No command line confirmation. - --node-label string The node label (key=value) to use when scheduling the restore job, and in the case of a pgBackRest restore, also the new (i.e. restored) primary deployment. If not set, any node is used. - --pgbackrest-storage-type string The type of storage to use for a pgBackRest restore. Either "local", "s3". (default "local") - -d, --pgdump-database string The name of the database pgdump will restore. (default "postgres") - --pitr-target string The PITR target, being a PostgreSQL timestamp such as '2018-08-13 11:25:42.582117-04'. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_scale.md b/docs/content/pgo-client/reference/pgo_scale.md deleted file mode 100644 index 684d506cc8..0000000000 --- a/docs/content/pgo-client/reference/pgo_scale.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: "pgo scale" ---- -## pgo scale - -Scale a PostgreSQL cluster - -### Synopsis - -The scale command allows you to adjust a Cluster's replica configuration. For example: - - pgo scale mycluster --replica-count=1 - -``` -pgo scale [flags] -``` - -### Options - -``` - --ccp-image-tag string The CCPImageTag to use for cluster creation. If specified, overrides the .pgo.yaml setting. - -h, --help help for scale - --no-prompt No command line confirmation. - --node-label string The node label (key) to use in placing the replica database. If not set, any node is used. - --replica-count int The replica count to apply to the clusters. (default 1) - --service-type string The service type to use in the replica Service. If not set, the default in pgo.yaml will be used. 
- --storage-config string The name of a Storage config in pgo.yaml to use for the replica storage. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_scaledown.md b/docs/content/pgo-client/reference/pgo_scaledown.md deleted file mode 100644 index deef6123d9..0000000000 --- a/docs/content/pgo-client/reference/pgo_scaledown.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: "pgo scaledown" ---- -## pgo scaledown - -Scale down a PostgreSQL cluster - -### Synopsis - -The scale command allows you to scale down a Cluster's replica configuration. For example: - - To list targetable replicas: - pgo scaledown mycluster --query - - To scale down a specific replica: - pgo scaledown mycluster --target=mycluster-replica-xxxx - -``` -pgo scaledown [flags] -``` - -### Options - -``` - -h, --help help for scaledown - --keep-data Causes data for the scale down replica to *not* be deleted - --no-prompt No command line confirmation. - --query Prints the list of targetable replica candidates. - --target string The replica to target for scaling down -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show.md b/docs/content/pgo-client/reference/pgo_show.md deleted file mode 100644 index b71032f999..0000000000 --- a/docs/content/pgo-client/reference/pgo_show.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -title: "pgo show" ---- -## pgo show - -Show the description of a cluster - -### Synopsis - -Show allows you to show the details of a policy, backup, pvc, or cluster. 
For example: - - pgo show backup mycluster - pgo show backup mycluster --backup-type=pgbackrest - pgo show cluster mycluster - pgo show config - pgo show pgouser someuser - pgo show policy policy1 - pgo show pvc mycluster - pgo show namespace - pgo show workflow 25927091-b343-4017-be4b-71575f0b3eb5 - pgo show user --selector=name=mycluster - -``` -pgo show [flags] -``` - -### Options - -``` - -h, --help help for show -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. -* [pgo show backup](/pgo-client/reference/pgo_show_backup/) - Show backup information -* [pgo show cluster](/pgo-client/reference/pgo_show_cluster/) - Show cluster information -* [pgo show config](/pgo-client/reference/pgo_show_config/) - Show configuration information -* [pgo show namespace](/pgo-client/reference/pgo_show_namespace/) - Show namespace information -* [pgo show pgadmin](/pgo-client/reference/pgo_show_pgadmin/) - Show pgadmin deployment information -* [pgo show pgbouncer](/pgo-client/reference/pgo_show_pgbouncer/) - Show pgbouncer deployment information -* [pgo show pgorole](/pgo-client/reference/pgo_show_pgorole/) - Show pgorole information -* [pgo show pgouser](/pgo-client/reference/pgo_show_pgouser/) - Show pgouser information -* [pgo show policy](/pgo-client/reference/pgo_show_policy/) - Show policy information -* [pgo show pvc](/pgo-client/reference/pgo_show_pvc/) - Show PVC information for a cluster -* [pgo show schedule](/pgo-client/reference/pgo_show_schedule/) - Show schedule information -* [pgo show user](/pgo-client/reference/pgo_show_user/) - Show user information -* [pgo show workflow](/pgo-client/reference/pgo_show_workflow/) - Show workflow information - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_backup.md b/docs/content/pgo-client/reference/pgo_show_backup.md deleted file mode 100644 index a15c426d54..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_backup.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo show backup" ---- -## pgo show backup - -Show backup information - -### Synopsis - -Show backup information. For example: - - pgo show backup mycluser - -``` -pgo show backup [flags] -``` - -### Options - -``` - --backup-type string The backup type output to list. Valid choices are pgbackrest or pgdump. (default "pgbackrest") - -h, --help help for backup -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. 
- --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_cluster.md b/docs/content/pgo-client/reference/pgo_show_cluster.md deleted file mode 100644 index 291d3b6ff6..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_cluster.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: "pgo show cluster" ---- -## pgo show cluster - -Show cluster information - -### Synopsis - -Show a PostgreSQL cluster. For example: - - pgo show cluster --all - pgo show cluster mycluster - -``` -pgo show cluster [flags] -``` - -### Options - -``` - --all show all resources. - --ccp-image-tag string Filter the results based on the image tag of the cluster. - -h, --help help for cluster - -o, --output string The output format. Currently, json is the only supported value. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_config.md b/docs/content/pgo-client/reference/pgo_show_config.md deleted file mode 100644 index ae3cb75059..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_config.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: "pgo show config" ---- -## pgo show config - -Show configuration information - -### Synopsis - -Show configuration information for the Operator. For example: - - pgo show config - -``` -pgo show config [flags] -``` - -### Options - -``` - -h, --help help for config -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. 
- --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_namespace.md b/docs/content/pgo-client/reference/pgo_show_namespace.md deleted file mode 100644 index 9794a12bac..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_namespace.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo show namespace" ---- -## pgo show namespace - -Show namespace information - -### Synopsis - -Show namespace information for the Operator. For example: - - pgo show namespace - -``` -pgo show namespace [flags] -``` - -### Options - -``` - --all show all resources. - -h, --help help for namespace -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_pgadmin.md b/docs/content/pgo-client/reference/pgo_show_pgadmin.md deleted file mode 100644 index 73574045aa..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_pgadmin.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: "pgo show pgadmin" ---- -## pgo show pgadmin - -Show pgadmin deployment information - -### Synopsis - -Show service information about a pgadmin deployment. For example: - - pgo show pgadmin thecluster - pgo show pgadmin --selector=app=theapp - - -``` -pgo show pgadmin [flags] -``` - -### Options - -``` - -h, --help help for pgadmin - -o, --output string The output format. Supported types are: "json" - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. 
-``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_pgbouncer.md b/docs/content/pgo-client/reference/pgo_show_pgbouncer.md deleted file mode 100644 index 0a977097a8..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_pgbouncer.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: "pgo show pgbouncer" ---- -## pgo show pgbouncer - -Show pgbouncer deployment information - -### Synopsis - -Show user, password, and service information about a pgbouncer deployment. For example: - - pgo show pgbouncer hacluster - pgo show pgbouncer --selector=app=payment - - -``` -pgo show pgbouncer [flags] -``` - -### Options - -``` - -h, --help help for pgbouncer - -o, --output string The output format. Supported types are: "json" - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_pgorole.md b/docs/content/pgo-client/reference/pgo_show_pgorole.md deleted file mode 100644 index f8241d4e33..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_pgorole.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo show pgorole" ---- -## pgo show pgorole - -Show pgorole information - -### Synopsis - -Show pgorole information . For example: - - pgo show pgorole somerole - -``` -pgo show pgorole [flags] -``` - -### Options - -``` - --all show all resources. - -h, --help help for pgorole -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. 
-``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_pgouser.md b/docs/content/pgo-client/reference/pgo_show_pgouser.md deleted file mode 100644 index 4881d2f1fb..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_pgouser.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo show pgouser" ---- -## pgo show pgouser - -Show pgouser information - -### Synopsis - -Show pgouser information for an Operator user. For example: - - pgo show pgouser someuser - -``` -pgo show pgouser [flags] -``` - -### Options - -``` - --all show all resources. - -h, --help help for pgouser -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_policy.md b/docs/content/pgo-client/reference/pgo_show_policy.md deleted file mode 100644 index ddeaedbd09..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_policy.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "pgo show policy" ---- -## pgo show policy - -Show policy information - -### Synopsis - -Show policy information. For example: - - pgo show policy --all - pgo show policy policy1 - -``` -pgo show policy [flags] -``` - -### Options - -``` - --all show all resources. - -h, --help help for policy -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_pvc.md b/docs/content/pgo-client/reference/pgo_show_pvc.md deleted file mode 100644 index ea8312dd36..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_pvc.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "pgo show pvc" ---- -## pgo show pvc - -Show PVC information for a cluster - -### Synopsis - -Show PVC information. 
For example: - - pgo show pvc mycluster - pgo show pvc --all - -``` -pgo show pvc [flags] -``` - -### Options - -``` - --all show all resources. - -h, --help help for pvc -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_schedule.md b/docs/content/pgo-client/reference/pgo_show_schedule.md deleted file mode 100644 index 7d39ac4cff..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_schedule.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: "pgo show schedule" ---- -## pgo show schedule - -Show schedule information - -### Synopsis - -Show cron-like schedules. For example: - - pgo show schedule mycluster - pgo show schedule --selector=pg-cluster=mycluster - pgo show schedule --schedule-name=mycluster-pgbackrest-full - -``` -pgo show schedule [flags] -``` - -### Options - -``` - -h, --help help for schedule - --no-prompt No command line confirmation. - --schedule-name string The name of the schedule to show. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_user.md b/docs/content/pgo-client/reference/pgo_show_user.md deleted file mode 100644 index 7dcdfe2b31..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_user.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: "pgo show user" ---- -## pgo show user - -Show user information - -### Synopsis - -Show users on a cluster. For example: - - pgo show user --all - pgo show user mycluster - pgo show user --selector=name=nycluster - -``` -pgo show user [flags] -``` - -### Options - -``` - --all show all clusters. - --expired int Shows passwords that will expire in X days. - -h, --help help for user - -o, --output string The output format. 
Supported types are: "json" - -s, --selector string The selector to use for cluster filtering. - --show-system-accounts Include the system accounts in the results. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_show_workflow.md b/docs/content/pgo-client/reference/pgo_show_workflow.md deleted file mode 100644 index 28d5f11666..0000000000 --- a/docs/content/pgo-client/reference/pgo_show_workflow.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: "pgo show workflow" ---- -## pgo show workflow - -Show workflow information - -### Synopsis - -Show workflow information for a given workflow. For example: - - pgo show workflow 25927091-b343-4017-be4b-71575f0b3eb5 - -``` -pgo show workflow [flags] -``` - -### Options - -``` - -h, --help help for workflow -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo show](/pgo-client/reference/pgo_show/) - Show the description of a cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_status.md b/docs/content/pgo-client/reference/pgo_status.md deleted file mode 100644 index 21a4a84464..0000000000 --- a/docs/content/pgo-client/reference/pgo_status.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo status" ---- -## pgo status - -Display PostgreSQL cluster status - -### Synopsis - -Display namespace wide information for PostgreSQL clusters. For example: - - pgo status - -``` -pgo status [flags] -``` - -### Options - -``` - -h, --help help for status - -o, --output string The output format. Currently, json is the only supported value. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. 
- --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_test.md b/docs/content/pgo-client/reference/pgo_test.md deleted file mode 100644 index 36671b5f6e..0000000000 --- a/docs/content/pgo-client/reference/pgo_test.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: "pgo test" ---- -## pgo test - -Test cluster connectivity - -### Synopsis - -TEST allows you to test the availability of a PostgreSQL cluster. For example: - - pgo test mycluster - pgo test --selector=env=research - pgo test --all - -``` -pgo test [flags] -``` - -### Options - -``` - --all test all resources. - -h, --help help for test - -o, --output string The output format. Currently, json is the only supported value. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_update.md b/docs/content/pgo-client/reference/pgo_update.md deleted file mode 100644 index 669c841701..0000000000 --- a/docs/content/pgo-client/reference/pgo_update.md +++ /dev/null @@ -1,56 +0,0 @@ ---- -title: "pgo update" ---- -## pgo update - -Update a pgouser, pgorole, or cluster - -### Synopsis - -The update command allows you to update a pgouser, pgorole, or cluster. 
For example: - - pgo update cluster --selector=name=mycluster --autofail=false - pgo update cluster --all --autofail=true - pgo update namespace mynamespace - pgo update pgbouncer mycluster --rotate-password - pgo update pgorole somerole --pgorole-permission="Cat" - pgo update pgouser someuser --pgouser-password=somenewpassword - pgo update pgouser someuser --pgouser-roles="role1,role2" - pgo update pgouser someuser --pgouser-namespaces="pgouser2" - pgo update pgorole somerole --pgorole-permission="Cat" - pgo update user mycluster --username=testuser --selector=name=mycluster --password=somepassword - -``` -pgo update [flags] -``` - -### Options - -``` - -h, --help help for update -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. -* [pgo update cluster](/pgo-client/reference/pgo_update_cluster/) - Update a PostgreSQL cluster -* [pgo update namespace](/pgo-client/reference/pgo_update_namespace/) - Update a namespace, applying Operator RBAC -* [pgo update pgbouncer](/pgo-client/reference/pgo_update_pgbouncer/) - Update a pgBouncer deployment for a PostgreSQL cluster -* [pgo update pgorole](/pgo-client/reference/pgo_update_pgorole/) - Update a pgorole -* [pgo update pgouser](/pgo-client/reference/pgo_update_pgouser/) - Update a pgouser -* [pgo update user](/pgo-client/reference/pgo_update_user/) - Update a PostgreSQL user - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_update_cluster.md b/docs/content/pgo-client/reference/pgo_update_cluster.md deleted file mode 100644 index 007c34b0fc..0000000000 --- a/docs/content/pgo-client/reference/pgo_update_cluster.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: "pgo update cluster" ---- -## pgo update cluster - -Update a PostgreSQL cluster - -### Synopsis - -Update a PostgreSQL cluster. For example: - - pgo update cluster mycluster --autofail=false - pgo update cluster mycluster myothercluster --disable-autofail - pgo update cluster --selector=name=mycluster --disable-autofail - pgo update cluster --all --enable-autofail - -``` -pgo update cluster [flags] -``` - -### Options - -``` - --all all resources. 
- --annotation strings Add an Annotation to all of the managed deployments (PostgreSQL, pgBackRest, pgBouncer) - The format to add an annotation is "name=value" - The format to remove an annotation is "name-" - - For example, to add two annotations: "--annotation=hippo=awesome,elephant=cool" - --annotation-pgbackrest strings Add an Annotation specifically to pgBackRest deployments - The format to add an annotation is "name=value" - The format to remove an annotation is "name-" - --annotation-pgbouncer strings Add an Annotation specifically to pgBouncer deployments - The format to add an annotation is "name=value" - The format to remove an annotation is "name-" - --annotation-postgres strings Add an Annotation specifically to PostgreSQL deployments The format to add an annotation is "name=value" - The format to remove an annotation is "name-" - --cpu string Set the number of millicores to request for the CPU, e.g. "100m" or "0.1". - --cpu-limit string Set the number of millicores to limit for the CPU, e.g. "100m" or "0.1". - --disable-autofail Disables autofail capabilities in the cluster. - --enable-autofail Enables autofail capabilities in the cluster. - --enable-standby Enables standby mode in the cluster(s) specified. - --exporter-cpu string Set the number of millicores to request for CPU for the Crunchy Postgres Exporter sidecar container, e.g. "100m" or "0.1". - --exporter-cpu-limit string Set the number of millicores to limit for CPU for the Crunchy Postgres Exporter sidecar container, e.g. "100m" or "0.1". - --exporter-memory string Set the amount of memory to request for the Crunchy Postgres Exporter sidecar container. - --exporter-memory-limit string Set the amount of memory to limit for the Crunchy Postgres Exporter sidecar container. - -h, --help help for cluster - --memory string Set the amount of RAM to request, e.g. 1GiB. - --memory-limit string Set the amount of RAM to limit, e.g. 1GiB. - --no-prompt No command line confirmation. - --pgbackrest-cpu string Set the number of millicores to request for CPU for the pgBackRest repository. - --pgbackrest-cpu-limit string Set the number of millicores to limit for CPU for the pgBackRest repository. - --pgbackrest-memory string Set the amount of memory to request for the pgBackRest repository. - --pgbackrest-memory-limit string Set the amount of memory to limit for the pgBackRest repository. - --promote-standby Disables standby mode (if enabled) and promotes the cluster(s) specified. - -s, --selector string The selector to use for cluster filtering. - --shutdown Shutdown the database cluster if it is currently running. - --startup Restart the database cluster if it is currently shutdown. - --tablespace strings Add a PostgreSQL tablespace on the cluster, e.g. "name=ts1:storageconfig=nfsstorage". The format is a key/value map that is delimited by "=" and separated by ":". The following parameters are available: - - - name (required): the name of the PostgreSQL tablespace - - storageconfig (required): the storage configuration to use, as specified in the list available in the "pgo-config" ConfigMap (aka "pgo.yaml") - - pvcsize: the size of the PVC capacity, which overrides the value set in the specified storageconfig. Follows the Kubernetes quantity format.
- - For example, to create a tablespace with the NFS storage configuration with a PVC of size 10GiB: - - --tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_update_namespace.md b/docs/content/pgo-client/reference/pgo_update_namespace.md deleted file mode 100644 index 396cb9d30b..0000000000 --- a/docs/content/pgo-client/reference/pgo_update_namespace.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: "pgo update namespace" ---- -## pgo update namespace - -Update a namespace, applying Operator RBAC - -### Synopsis - -UPDATE allows you to update a Namespace. For example: - pgo update namespace mynamespace - -``` -pgo update namespace [flags] -``` - -### Options - -``` - -h, --help help for namespace -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_update_pgbouncer.md b/docs/content/pgo-client/reference/pgo_update_pgbouncer.md deleted file mode 100644 index ec51137fd2..0000000000 --- a/docs/content/pgo-client/reference/pgo_update_pgbouncer.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: "pgo update pgbouncer" ---- -## pgo update pgbouncer - -Update a pgBouncer deployment for a PostgreSQL cluster - -### Synopsis - -Used to update the pgBouncer deployment for a PostgreSQL cluster, such - as by rotating a password. For example: - - pgo update pgbouncer hacluster --rotate-password - - -``` -pgo update pgbouncer [flags] -``` - -### Options - -``` - --cpu string Set the number of millicores to request for CPU for pgBouncer. - --cpu-limit string Set the number of millicores to limit for CPU for pgBouncer. - -h, --help help for pgbouncer - --memory string Set the amount of memory to request for pgBouncer. 
- --memory-limit string Set the amount of memory to limit for pgBouncer. - --no-prompt No command line confirmation. - -o, --output string The output format. Supported types are: "json" - --replicas int32 Set the total number of pgBouncer instances to deploy. If not set, defaults to 1. - --rotate-password Used to rotate the pgBouncer service account password. Can cause interruption of service. - -s, --selector string The selector to use for cluster filtering. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_update_pgorole.md b/docs/content/pgo-client/reference/pgo_update_pgorole.md deleted file mode 100644 index 3c1706b76a..0000000000 --- a/docs/content/pgo-client/reference/pgo_update_pgorole.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo update pgorole" ---- -## pgo update pgorole - -Update a pgorole - -### Synopsis - -UPDATE allows you to update a pgo role. For example: - pgo update pgorole somerole --permissions="Cat,Ls - -``` -pgo update pgorole [flags] -``` - -### Options - -``` - -h, --help help for pgorole - --no-prompt No command line confirmation. - --permissions string The permissions to use for updating the pgorole permissions. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_update_pgouser.md b/docs/content/pgo-client/reference/pgo_update_pgouser.md deleted file mode 100644 index 2991939cd7..0000000000 --- a/docs/content/pgo-client/reference/pgo_update_pgouser.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: "pgo update pgouser" ---- -## pgo update pgouser - -Update a pgouser - -### Synopsis - -UPDATE allows you to update a pgo user. 
For example: - pgo update pgouser myuser --pgouser-roles=somerole - pgo update pgouser myuser --pgouser-password=somepassword --pgouser-roles=somerole - pgo update pgouser myuser --pgouser-password=somepassword --no-prompt - -``` -pgo update pgouser [flags] -``` - -### Options - -``` - --all-namespaces all namespaces. - -h, --help help for pgouser - --no-prompt No command line confirmation. - --pgouser-namespaces string The namespaces to use for updating the pgouser roles. - --pgouser-password string The password to use for updating the pgouser password. - --pgouser-roles string The roles to use for updating the pgouser roles. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_update_user.md b/docs/content/pgo-client/reference/pgo_update_user.md deleted file mode 100644 index 25c18b73da..0000000000 --- a/docs/content/pgo-client/reference/pgo_update_user.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: "pgo update user" ---- -## pgo update user - -Update a PostgreSQL user - -### Synopsis - -Allows the ability to perform various user management functions for PostgreSQL users. - -For example: - -//change a password, set valid days for 40 days from now -pgo update user mycluster --username=someuser --password=foo -//expire password for a user -pgo update user mycluster --username=someuser --expire-user -//Update all passwords older than the number of days specified -pgo update user mycluster --expired=45 --password-length=8 - -# Disable the ability for a user to log into the PostgreSQL cluster -pgo update user mycluster --username=foobar --disable-login - -# Enable the ability for a user to log into the PostgreSQL cluster -pgo update user mycluster --username=foobar --enable-login - - -``` -pgo update user [flags] -``` - -### Options - -``` - --all all clusters. - --disable-login Disables a PostgreSQL user from being able to log into the PostgreSQL cluster. - --enable-login Enables a PostgreSQL user to be able to log into the PostgreSQL cluster. - --expire-user Performs expiring a user if set to true. - --expired int Updates passwords that will expire in X days using an autogenerated password. - -h, --help help for user - -o, --output string The output format. Supported types are: "json" - --password string Specifies the user password when updating a user password or creating a new user. If --rotate-password is set as well, --password takes precedence. - --password-length int If no password is supplied, sets the length of the automatically generated password. Defaults to the value set on the server. 
- --password-type string The type of password hashing to use.Choices are: (md5, scram-sha-256). This only takes effect if the password is being changed. (default "md5") - --rotate-password Rotates the user's password with an automatically generated password. The length of the password is determine by either --password-length or the value set on the server, in that order. - -s, --selector string The selector to use for cluster filtering. - --username string Updates the postgres user on selective clusters. - --valid-always Sets a password to never expire based on expiration time. Takes precedence over --valid-days - --valid-days int Sets the number of days that a password is valid. Defaults to the server value. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo update](/pgo-client/reference/pgo_update/) - Update a pgouser, pgorole, or cluster - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_upgrade.md b/docs/content/pgo-client/reference/pgo_upgrade.md deleted file mode 100644 index 78d787f6f0..0000000000 --- a/docs/content/pgo-client/reference/pgo_upgrade.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: "pgo upgrade" ---- -## pgo upgrade - -Perform a cluster upgrade. - -### Synopsis - -UPGRADE allows you to perform a comprehensive PGCluster upgrade - (for use after performing a Postgres Operator upgrade). - For example: - - pgo upgrade mycluster - Upgrades the cluster for use with the upgraded Postgres Operator version. - -``` -pgo upgrade [flags] -``` - -### Options - -``` - --ccp-image-tag string The image tag to use for cluster creation. If specified, it overrides the default configuration setting and disables tag validation checking. - -h, --help help for upgrade - --ignore-validation Disables version checking against the image tags when performing an cluster upgrade. -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. 
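An illustrative invocation (the image tag below is a hypothetical example; substitute a tag that matches your upgraded Operator release) showing how the flags above can be combined to pin a specific container image while skipping tag validation:

```
pgo upgrade mycluster --ccp-image-tag=centos7-12.4-4.5.0 --ignore-validation
```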
- -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_version.md b/docs/content/pgo-client/reference/pgo_version.md deleted file mode 100644 index 5bf407bc73..0000000000 --- a/docs/content/pgo-client/reference/pgo_version.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo version" ---- -## pgo version - -Print version information for the PostgreSQL Operator - -### Synopsis - -VERSION allows you to print version information for the postgres-operator. For example: - - pgo version - -``` -pgo version [flags] -``` - -### Options - -``` - --client Only return the version of the pgo client. This does not make a call to the API server. - -h, --help help for version -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. - -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/pgo-client/reference/pgo_watch.md b/docs/content/pgo-client/reference/pgo_watch.md deleted file mode 100644 index 0f3e721545..0000000000 --- a/docs/content/pgo-client/reference/pgo_watch.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: "pgo watch" ---- -## pgo watch - -Print watch information for the PostgreSQL Operator - -### Synopsis - -WATCH allows you to watch event information for the postgres-operator. For example: - pgo watch --pgo-event-address=localhost:14150 alltopic - pgo watch alltopic - -``` -pgo watch [flags] -``` - -### Options - -``` - -h, --help help for watch - -a, --pgo-event-address string The address (host:port) where the event stream is. (default "localhost:14150") -``` - -### Options inherited from parent commands - -``` - --apiserver-url string The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client. - --debug Enable additional output for debugging. - --disable-tls Disable TLS authentication to the Postgres Operator. - --exclude-os-trust Exclude CA certs from OS default trust store - -n, --namespace string The namespace to use for pgo requests. - --pgo-ca-cert string The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-cert string The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver. - --pgo-client-key string The Client Key file path for authenticating to the PostgreSQL Operator apiserver. -``` - -### SEE ALSO - -* [pgo](/pgo-client/reference/pgo/) - The pgo command line interface. 
- -###### Auto generated by spf13/cobra on 1-Oct-2020 diff --git a/docs/content/quickstart/_index.md b/docs/content/quickstart/_index.md deleted file mode 100644 index dd29d467be..0000000000 --- a/docs/content/quickstart/_index.md +++ /dev/null @@ -1,338 +0,0 @@ ---- -title: "Quickstart" -date: -draft: false -weight: 10 ---- - -# PostgreSQL Operator Quickstart - -Can't wait to try out the PostgreSQL Operator? Let us show you the quickest possible path to getting up and running. - -There are two paths to quickly get you up and running with the PostgreSQL Operator: - -- [Installation via the PostgreSQL Operator Installer](#postgresql-operator-installer) -- Installation via a Marketplace - - Installation via [Operator Lifecycle Manager]({{< relref "/installation/other/operator-hub.md" >}}) - - Installation via [Google Cloud Marketplace]({{< relref "/installation/other/google-cloud-marketplace.md" >}}) - -Marketplaces can help you get more quickly started in your environment as they provide a mostly automated process, but there are a few steps you will need to take to ensure you can fully utilize your PostgreSQL Operator environment. You can find out more information about how to get started with one of those installers in the [Installation]({{< relref "/installation/_index.md" >}}) section. - -# PostgreSQL Operator Installer - -Below will guide you through the steps for installing and using the PostgreSQL Operator using an installer that works with Ansible. - -## Installation - -### Install the PostgreSQL Operator - -On environments that have a [default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/) set up (which is most modern Kubernetes environments), the below command should work: - -``` -kubectl create namespace pgo -kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml -``` - -This will launch the `pgo-deployer` container that will run the various setup and installation jobs. This can take a few minutes to complete depending on your Kubernetes cluster. - -If your install is unsuccessful, you may need to modify your configuration. Please read the ["Troubleshooting"](#troubleshooting) section. You can still get up and running fairly quickly with just a little bit of configuration. - -### Install the `pgo` Client - -During or after the installation of the PostgreSQL Operator, download the `pgo` client set up script. This will help set up your local environment for using the PostgreSQL Operator: - -``` -curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/client-setup.sh > client-setup.sh -chmod +x client-setup.sh -``` - -When the PostgreSQL Operator is done installing, run the client setup script: - -``` -./client-setup.sh -``` - -This will download the `pgo` client and provide instructions for how to easily use it in your environment. 
It will prompt you to add some environmental variables to set up in your session, which you can do with the following commands: - - -``` -export PGOUSER="${HOME?}/.pgo/pgo/pgouser" -export PGO_CA_CERT="${HOME?}/.pgo/pgo/client.crt" -export PGO_CLIENT_CERT="${HOME?}/.pgo/pgo/client.crt" -export PGO_CLIENT_KEY="${HOME?}/.pgo/pgo/client.key" -export PGO_APISERVER_URL='https://127.0.0.1:8443' -export PGO_NAMESPACE=pgo -``` - -If you wish to permanently add these variables to your environment, you can run the following: - -``` -cat <<EOF >> ~/.bashrc -export PGOUSER="${HOME?}/.pgo/pgo/pgouser" -export PGO_CA_CERT="${HOME?}/.pgo/pgo/client.crt" -export PGO_CLIENT_CERT="${HOME?}/.pgo/pgo/client.crt" -export PGO_CLIENT_KEY="${HOME?}/.pgo/pgo/client.key" -export PGO_APISERVER_URL='https://127.0.0.1:8443' -export PGO_NAMESPACE=pgo -EOF - -source ~/.bashrc -``` - -**NOTE**: For macOS users, you must use `~/.bash_profile` instead of `~/.bashrc` - -### Post-Installation Setup - -Below are a few steps to check if the PostgreSQL Operator is up and running. - -By default, the PostgreSQL Operator installs into a namespace called `pgo`. First, see that the Kubernetes Deployment of the Operator exists and is healthy: - -``` -kubectl -n pgo get deployments -``` - -If successful, you should see output similar to this: - -``` -NAME READY UP-TO-DATE AVAILABLE AGE -postgres-operator 1/1 1 1 16h -``` - -Next, see if the Pods that run the PostgreSQL Operator are up and running: - -``` -kubectl -n pgo get pods -``` - -If successful, you should see output similar to this: - -``` -NAME READY STATUS RESTARTS AGE -postgres-operator-56d6ccb97-tmz7m 4/4 Running 0 2m -``` - -Finally, let's see if we can connect to the PostgreSQL Operator from the `pgo` command-line client. The Ansible installer installs the `pgo` command line client into your environment, along with the username/password file that allows you to access the PostgreSQL Operator. In order to communicate with the PostgreSQL Operator API server, you will first need to set up a [port forward](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to your local environment. - -In a new console window, run the following command to set up a port forward: - -``` -kubectl -n pgo port-forward svc/postgres-operator 8443:8443 -``` - -Back in your original console window, you can verify that you can connect to the PostgreSQL Operator using the following command: - -``` -pgo version -``` - -If successful, you should see output similar to this: - -``` -pgo client version {{< param operatorVersion >}} -pgo-apiserver version {{< param operatorVersion >}} -``` - -## Create a PostgreSQL Cluster - -The quickstart installation method creates a namespace called `pgo` where the PostgreSQL Operator manages PostgreSQL clusters. Try creating a PostgreSQL cluster called `hippo`: - -``` -pgo create cluster -n pgo hippo -``` - -Alternatively, because we set the [`PGO_NAMESPACE`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}#general-notes-on-using-the-pgo-client) environmental variable in our `.bashrc` file, we could omit the `-n` flag from the [`pgo create cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}) command and just run this: - -``` -pgo create cluster hippo -``` - -Even with `PGO_NAMESPACE` set, you can always override which namespace to use by setting the `-n` flag for the specific command. For explicitness, we will continue to use the `-n` flag in the remaining examples of this quickstart.
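To illustrate that precedence, the two commands below are equivalent while `PGO_NAMESPACE=pgo` is exported; an explicit `-n` flag simply takes priority for that single invocation:

```
pgo show cluster hippo
pgo show cluster -n pgo hippo
```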
- -If your cluster creation command executed successfully, you should see output similar to this: - -``` -created Pgcluster hippo -workflow id 1cd0d225-7cd4-4044-b269-aa7bedae219b -``` - -This will create a PostgreSQL cluster named `hippo`. It may take a few moments for the cluster to be provisioned. You can see the status of this cluster using the [`pgo test`]({{< relref "pgo-client/reference/pgo_test.md" >}}) command: - -``` -pgo test -n pgo hippo -``` - -When everything is up and running, you should see output similar to this: - -``` -cluster : hippo - Services - primary (10.97.140.113:5432): UP - Instances - primary (hippo-7b64747476-6dr4h): UP -``` - -The `pgo test` command provides you the basic information you need to connect to your PostgreSQL cluster from within your Kubernetes environment. For more detailed information, you can use `pgo show cluster -n pgo hippo`. - -## Connect to a PostgreSQL Cluster - -By default, the PostgreSQL Operator creates a database inside the cluster with the same name of the cluster, in this case, `hippo`. Below demonstrates how we can connect to `hippo`. - -### How Users Work - -You can get information about the users in your cluster with the [`pgo show user`]({{< relref "pgo-client/reference/pgo_show_user.md" >}}) command: - -``` -pgo show user -n pgo hippo -``` - -This will give you all the unprivileged, non-system PostgreSQL users for the `hippo` PostgreSQL cluster, for example: - -``` -CLUSTER USERNAME PASSWORD EXPIRES STATUS ERROR -------- -------- ------------------------ ------- ------ ----- -hippo testuser datalake never ok -``` - -To get the information about all PostgreSQL users that the PostgreSQL Operator is managing, you will need to use the `--show-system-accounts` flag: - -``` -pgo show user -n pgo hippo --show-system-accounts -``` - -which returns something similar to: - -``` -CLUSTER USERNAME PASSWORD EXPIRES STATUS ERROR -------- -------------- ------------------------ ------- ------ ----- -hippo postgres never ok -hippo primaryuser never ok -hippo testuser datalake never ok -``` - -The `postgres` user represents the [database superuser](https://www.postgresql.org/docs/current/role-attributes.html) and has every privilege granted to it. The PostgreSQL Operator securely interfaces through the `postgres` account to perform certain actions, such as managing users. - -The `primaryuser` is the used for replication and [high availability]({{< relref "architecture/high-availability/_index.md" >}}). You should never need to interface with this user account. - -### Connecting via `psql` - -Let's see how we can connect to `hippo` using [`psql`](https://www.postgresql.org/docs/current/app-psql.html), the command-line tool for accessing PostgreSQL. Ensure you have [installed the `psql` client](https://www.crunchydata.com/developers/download-postgres/binaries/postgresql12). - -The PostgreSQL Operator creates a service with the same name as the cluster. See for yourself! Get a list of all of the Services available in the `pgo` namespace: - -``` -kubectl -n pgo get svc - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -hippo ClusterIP 10.96.218.63 2022/TCP,5432/TCP 59m -hippo-backrest-shared-repo ClusterIP 10.96.75.175 2022/TCP 59m -postgres-operator ClusterIP 10.96.121.246 8443/TCP,4171/TCP,4150/TCP 71m -``` - -Let's connect the `hippo` cluster. 
First, in a different console window, set up a port forward to the `hippo` service: - -``` -kubectl -n pgo port-forward svc/hippo 5432:5432 -``` - -You can connect to the database with the following command, substituting `datalake` for your actual password: - -``` -PGPASSWORD=datalake psql -h localhost -p 5432 -U testuser hippo -``` - -You should then be greeted with the PostgreSQL prompt: - -``` -psql ({{< param postgresVersion >}}) -Type "help" for help. - -hippo=> -``` - -### Connecting via [pgAdmin 4]({{< relref "architecture/pgadmin4.md" >}}) - -[pgAdmin 4]({{< relref "architecture/pgadmin4.md" >}}) is a graphical tool that can be used to manage and query a PostgreSQL database from a web browser. The PostgreSQL Operator provides a convenient integration with pgAdmin 4 for managing how users can log into the database. - -To add pgAdmin 4 to `hippo`, you can execute the following command: - -``` -pgo create pgadmin -n pgo hippo -``` - -It will take a few moments to create the pgAdmin 4 instance. The PostgreSQL Operator also creates a pgAdmin 4 service. See for yourself! Get a list of all of the Services available in the `pgo` namespace: - -``` -kubectl -n pgo get svc - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -hippo ClusterIP 10.96.218.63 2022/TCP,5432/TCP 59m -hippo-backrest-shared-repo ClusterIP 10.96.75.175 2022/TCP 59m -hippo-pgadmin ClusterIP 10.96.165.27 5050/TCP 5m1s -postgres-operator ClusterIP 10.96.121.246 8443/TCP,4171/TCP,4150/TCP 71m -``` - -Let's connect to our `hippo` cluster via pgAdmin 4! In a different terminal, set up a port forward to pgAdmin 4: - -``` -kubectl -n pgo port-forward svc/hippo-pgadmin 5050:5050 -``` - -Navigate your browser to http://localhost:5050 and use your database username (`testuser`) and password (e.g. `datalake`) to log in. Though the prompt says “email address”, using your PostgreSQL username will work: - -![pgAdmin 4 Login Page](/images/pgadmin4-login2.png) - -(There are occasions where the initial credentials do not properly get set in pgAdmin 4. If you have trouble logging in, try running the command `pgo update user -n pgo hippo --username=testuser --password=datalake`). - -Once logged into pgAdmin 4, you will be automatically connected to your database. Explore pgAdmin 4 and run some queries! - -For more information, please see the section on [pgAdmin 4]({{< relref "architecture/pgadmin4.md" >}}). - -## Troubleshooting - -### Installation Failures - -Some Kubernetes environments may require you to customize the configuration for the PostgreSQL Operator installer. The below provides a guide on the common parameters that require modification, though this may vary based on your installation. For a full reference, please visit the [Installation]({{< relref "/installation/_index.md" >}}) section. - -If you already attempted to install the PostgreSQL Operator and that failed, the easiest way to clean up that installation is to delete the [Namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) that you attempted to install the PostgreSQL Operator into. 
**Note: This deletes all of the other objects in the Namespace, so please be sure this is OK!** - -To delete the namespace, you can run the following command: - -``` -kubectl delete namespace pgo -``` - -#### Get the PostgreSQL Operator Installer Manifest - -You will need to download the PostgreSQL Operator Installer manifest to your environment, which you can do with the following command: - -``` -curl https://raw.githubusercontent.com/CrunchyData/postgres-operator/v{{< param operatorVersion >}}/installers/kubectl/postgres-operator.yml > postgres-operator.yml -``` - -#### Configure the PostgreSQL Operator Installer - -There are many [configuration parameters]({{< relref "/installation/configuration.md">}}) to help you fine tune your installation, but there are a few that you may want to change to get the PostgreSQL Operator to run in your environment. Open up the `postgres-operator.yml` file and edit a few variables. - -Find the `pgo_admin_password` variable. This is the password you will use with the [`pgo` client]({{< relref "/installation/pgo-client" >}}) to manage your PostgreSQL clusters. The default is `password`, but you can change it to something like `hippo-elephant`. - -You may also need to set the storage default storage classes that you would like the PostgreSQL Operator to use. These variables are called `primary_storage`, `replica_storage`, `backup_storage`, and `backrest_storage`. There are several storage configurations listed out in the configuration file under the heading `storage[1-9]_name`. Find the one that you want to use, and set it to that value. - -For example, if your Kubernetes environment is using NFS storage, you would set these variables to the following: - -``` -backrest_storage: "nfsstorage" -backup_storage: "nfsstorage" -primary_storage: "nfsstorage" -replica_storage: "nfsstorage" -``` - -If you are using either Openshift or CodeReady Containers and you have a `restricted` Security Context Constraint, you will need to set `disable_fsgroup` to `true` in order to deploy the PostgreSQL Operator. - -For a full list of available storage types that can be used with this installation method, please review the [configuration parameters]({{< relref "/installation/configuration.md">}}). - -When you are done editing the file, you can install the PostgreSQL Operator by running the following commands: - -``` -kubectl create namespace pgo -kubectl apply -f postgres-operator.yml -``` diff --git a/docs/content/releases/4.1.0.md b/docs/content/releases/4.1.0.md deleted file mode 100644 index efe45f8600..0000000000 --- a/docs/content/releases/4.1.0.md +++ /dev/null @@ -1,125 +0,0 @@ ---- -title: "4.1.0" -date: -draft: false -weight: 320 ---- - -[Crunchy Data](https://www.crunchydata.com) announces the release of [PostgreSQL Operator](https://www.crunchydata.com/products/crunchy-postgresql-operator/) 4.1 on October 15, 2019. - -In addition to new features, such as dynamic namespace manage by the Operator and the ability to subscribe to a stream of lifecycle events that occur with PostgreSQL clusters, this release adds many new features and bug fixes. - -The Postgres Operator 4.1 release also includes the following software versions upgrades: - -- The PostgreSQL now uses versions 11.5, 10.10, 9.6.15, and 9.5.19. The PostgreSQL container now includes support for PL/Python. -- pgBackRest is now 2.17 -- pgMonitor now uses version 3.2 - -To build Postgres Operator 4.1, you will need to utilize buildah version 1.9.0 and above. 
- -Postgres Operator is tested with Kubernetes 1.13 - 1.15, OpenShift 3.11+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. At present, Postgres Operator 4.1 is **not** compatible with Kubernetes 1.16. - -# Major Features - -## Dynamic Namespace Management - -Postgres Operator 4.1 introduces the ability to dynamically manage Kubernetes namespaces from the Postgres Operator itself. [Kubernetes namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) provide the ability to isolate workloads within a Kubernetes cluster, which can provide additional security and privacy amongst users. - -The previous version of the Postgres Operator allowed users to add Kubernetes namespaces to which the Postgres Operator could deploy and manage clusters. Postgres Operator 4.1 expands on this ability by allowing the Operator to dynamically add and remove namespaces using the `pgo create namespace` and `pgo delete namespace` commands. - -This allows for several different deployment patterns for PostgreSQL clusters, including: - -- Deploying PostgreSQL clusters within a single namespace for an individual user -- Deploying a PostgreSQL cluster into its own namespace - -Note that deleting a namespace in Kubernetes deletes all of the objects that reside within that namespace, **including active PostgreSQL clusters**. Ensure that you wish to delete everything inside a namespace before executing `pgo delete namespace`. - -This has also led to a change in terms of how role-based access control (RBAC) is handled. Traditionally, RBAC permissions were added to the `ClusterRole` objects, but in order to support dynamic namespace management, the RBAC has been moved to the `Role` objects. - -If you would like to use the dynamic namespace feature with Kubernetes 1.11 or OpenShift 3.11, you will also need to utilize the `add-targeted-namespace.sh` script that is bundled with Postgres Operator 4.1. To dynamically add a namespace to your Postgres Operator deployment in Kubernetes 1.11, you first need to create the namespace with `kubectl` (e.g. `kubectl create namespace yournamespace`) and then run the `add-targeted-namespace.sh` script (`./add-targeted-namespace.sh yournamespace`). - -## Lifecycle Events - -Postgres Operator 4.1 now provides PostgreSQL lifecycle events that occur during the operation of a cluster. Lifecycle events include things such as when a cluster is provisioned, a replica is added, a backup is taken, a cluster fails over, etc. Each deployed PostgreSQL cluster managed by the PostgreSQL Operator will report back to the Operator about these lifecycle events via the NSQ distributed messaging platform. - -You can subscribe to lifecycle events by topic using the `pgo watch` command. To subscribe to all events for clusters under management, you can run `pgo watch alltopic`. Eventing can be disabled using the `DISABLE_EVENTING` environmental variable within the `postgres-operator` deployment. - -For more information, please read the [Eventing]({{< relref "/architecture/eventing.md" >}}) section of the documentation. - -# Breaking Changes - -## Containers - -- The `node_exporter` container is no longer shipped with the PostgreSQL Operator. A detailed explanation of how node-style metrics are handled is available in the "Additional Features" section. - -## API - -- The `pgo update cluster` API endpoint now uses an HTTP `POST` instead of a `GET` -- The user management endpoints (e.g. `pgo create user`) now use an HTTP `POST` instead of a `GET`.
- -## Command-line interface - -- Removed the `-db` flag from `pgo create user` and `pgo update user` -- Removed `--update-password` flag from the `pgo user` command - -## Installation - -- Changed the Ansible installer to use `uninstall` and `uninstall-metrics` tags instead of `deprovision` and `deprovision-metrics` respectively - -## Builds - -- The `Makefile` now uses `buildah` for building the containers instead of `Docker`. The PostgreSQL Operator can be built with buildah v1.9.0 and above. - -# Additional Features - -## General PostgreSQL Operator Features - -- PostgreSQL Operator users and roles can now be dynamically managed (i.e. `pgouser` and `pgorole`) - -- Readiness probes have been added to the `apiserver` and `scheduler` and is now included in the new `event` container. The `scheduler` uses a special `heartbeat` task to provide its readiness. - -- Added the `DISABLE_TLS` environmental variable for `apiserver`, which allows the API server to run over HTTP. - -- Added the `NOAUTH_ROUTES` environmental variable for `apiserver`, which allows useres to bypass TLS authentication on certain routes (e.g. `/health`). At present, only `/health` can be used in this variable. - -- Services ports for the postgres\_exporter and pgBadger are now templated so a user can now customize them beyond the defaults. - -## PostgreSQL Upgrade Management - -- The process to perform a minor upgrade on a PostgreSQL deployment was modified in order to minimize downtime. Now, when a `pgo upgrade cluster` command is run, the PostgreSQL Operator will upgrade all the replicas to the desired version of PostgreSQL before upgrading the primary container. If `autofail` is enabled, the PostgreSQL Operator will failover to a pod that is already updated to a newer version, which minimizes downtime and allows the cluster to upgrade to the desired, updated version of PostgreSQL. - -- `pgo upgrade` now supports the `--all` flag, which will upgrade every single PostgreSQL cluster managed by the PostgreSQL Operator (i.e. `pgo upgrade --all`) - -## PostgreSQL User Management - -- All user passwords are now loaded in from Kubernetes Secrets. -- `pgo create user --managed` now supports any acceptable password for PostgreSQL -- Improved error message for calling the equivalent `pgo show user` command when interfacing with the API directly and there are no clusters found for th euser. - -## Monitoring - -- Updated the Grafana dashboards to use those found in pgMonitor v3.2 -- The `crunchy-collect` container now connects to PostgreSQL using a password that is stored in a Kubernetes secret -- Introduced support for collecting host-style metrics via the cAdvisor installations that are installed and running on each Kubelet. This requires for the `ClusterRole` to have the `nodes` and `nodes/metrics` resources granted to it. - -## Logging - -- Updated logging to provide additional details of where issues occurred, including file and line number where the issue originated. - -## Installation - -- The Ansible installer `uninstall` tag now has the option of preserving portions of the previous installation -- The Ansible installer supports NFS and hostpath storage options -- The Ansible installer can now set the fsgroup for the `metrics` tag -- The Ansible installer now has the same configuration options as the bash installer -- The Ansible installer now supports a separate RBAC installation -- Add a custom security context constraint (SCC) to the Ansible and bash installers that is applied to pods created by the Operator. 
This makes it possible to customize the control permissions for the PostgreSQL cluster pods managed by the Operator - -# Fixes - -- Fixed a bug where `testuser` was always created even if the username was modified in the `pgo.yaml` -- Fixed the `--expired` flag for `pgo show user` to show the number of days until a user's password expires -- Fixed the workflow for `pgo benchmark` jobs to show completion -- Modified creating a cluster via a custom resource definition (CRD) to use pgBackRest -- Fixed an issue with the `pgpool` label when a `pg_dump` is performed by calling the REST API -- Fixed the `pgo load` example, which previously used a hardcoded namespace. This has changed with the support of dynamic namespaces. diff --git a/docs/content/releases/4.1.1.md b/docs/content/releases/4.1.1.md deleted file mode 100644 index 92cddd0821..0000000000 --- a/docs/content/releases/4.1.1.md +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: "4.1.1" -date: -draft: false -weight: 310 ---- - -[Crunchy Data](https://www.crunchydata.com) announces the release of [PostgreSQL Operator](https://www.crunchydata.com/products/crunchy-postgresql-operator/) 4.1.1 on November 22, 2019. - -Postgres Operator 4.1.1 provides bug fixes and continued support for Postgres Operator 4.1 as well as continued compatibility with newer versions of PostgreSQL. - -The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/). - -The Postgres Operator 4.1.1 release includes the following software version upgrades: - -- PostgreSQL now uses versions 12.1, 11.6, 10.11, 9.6.16, and 9.5.20. - -Postgres Operator is tested with Kubernetes 1.13 - 1.15, OpenShift 3.11+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. At present, Postgres Operator 4.1 is **not** compatible with Kubernetes 1.16. - -# Fixes - -- Add the `--disable-tls` flag to the `pgo` command-line client, so as to be compatible with the Operator API server that is deployed with `DISABLE_TLS` enabled. This is backported due to this functionality being missed in the 4.1 release. -- Update the YAML library to v2.2.4 to mitigate issues presented in CVE-2019-11253 -- Specify the `pgbackrest` user by its ID number (2000) in the backrest-repo container image so that containers instantiated with the `runAsNonRoot` option enabled recognize the `pgbackrest` user as non-root. -- Ensure any Kubernetes Secret associated with a PostgreSQL backup is deleted when the `--delete-backups` flag is specified on `pgo delete cluster` -- Enable individual ConfigMap files to be customized without having to upload every single ConfigMap file available in `pgo-config`.
Patch by Conor Quin (@Conor-Quin) -- Skip the HTTP Basic Authorization check if the `BasicAuth` parameter in `pgo.yaml` is set to `false` diff --git a/docs/content/releases/4.1.2.md b/docs/content/releases/4.1.2.md deleted file mode 100644 index bbe3256685..0000000000 --- a/docs/content/releases/4.1.2.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: "4.1.2" -date: -draft: false -weight: 300 ---- - -This release was to update the supported PostgreSQL versions to 12.2, 11.7, 10.12, 9.6.17, and 9.5.21 diff --git a/docs/content/releases/4.2.0.md b/docs/content/releases/4.2.0.md deleted file mode 100644 index 5eedad7bd4..0000000000 --- a/docs/content/releases/4.2.0.md +++ /dev/null @@ -1,248 +0,0 @@ ---- -title: "4.2.0" -date: -draft: false -weight: 220 ---- - -[Crunchy Data](https://www.crunchydata.com) announces the release of the [PostgreSQL Operator](https://www.crunchydata.com/products/crunchy-postgresql-operator/) 4.2.0 on December, 31, 2019. - -The focus of the 4.2.0 release of the PostgreSQL Operator was on the resiliency and uptime of the [PostgreSQL](https://www.postgresql.org) clusters that the PostgreSQL Operator manages, with an emphasis on high-availability and removing the Operator from being a single-point-of-failure in the HA process. This release introduces support for a distributed-consensus based high-availability approach using [Kubernetes distributed consensus store](https://kubernetes.io/docs/tasks/administer-cluster/highly-available-master/) as the backing, which, in other words, allows for the PostgreSQL clusters to manage their own availability and **not** the PostgreSQL Operator. This is accomplished by leveraging the open source high-availability framework [Patroni](https://patroni.readthedocs.io) as well as the open source, high-performant PostgreSQL disaster recovery management tool [pgBackRest](https://pgbackrest.org/). - -To accomplish this, we have introduced a new container called `crunchy-postgres-ha` (and for geospatial workloads, `crunchy-postgres-gis-ha`). **If you are upgrading from an older version of the PostgreSQL Operator, you will need to modify your installation to use these containers**. - -Included in the PostgreSQL Operator 4.2.0 introduces the following new features: - -- An improved PostgreSQL HA (high-availability) solution using distributed consensus that is backed by Kubernetes. This includes: - - Elimination of the PostgreSQL Operator as the arbiter that decides when a cluster should fail over - - Support for Pod anti-affinity, which indicates to Kubernetes schedule pods (e.g. PostgreSQL instances) on separate nodes - - Failed primaries now automatically heal, which significantly reduces the time in which they can rejoin the cluster. - - Introduction of synchronous replication for workloads that are sensitive to transaction loss (with a tradeoff of performance and potentially availability) -- Standardization of physical backups and restores on pgBackRest, with native support for `pg_basebackup` removed. -- Introduction of the ability to clone PostgreSQL clusters using the `pgo clone` command. This feature copies the pgBackRest repository from a cluster and creates a new, single instance primary as its own cluster. -- Allow one to use their own certificate authority (CA) when interfacing with the Operator API, and to specify usage of the CA from the `pgo` command-line interface (CLI) - -The container building process has been optimized, with build speed ups reported to be 70% faster. 
- -The Postgres Operator 4.2.0 release also includes the following software versions upgrades: - -- The PostgreSQL containers now use versions 12.1, 11.6, 10.11, 9.6.16, and 9.5.20. -- [pgBackRest](https://access.crunchydata.com/documentation/pgbackrest/) is upgraded to use version 2.20 -- [pgBouncer](https://access.crunchydata.com/documentation/pgbouncer/) is upgraded to use version 1.12 -- [Patroni](https://access.crunchydata.com/documentation/patroni/) uses version 1.6.3 - -PostgreSQL Operator is tested with Kubernetes 1.13 - 1.15, OpenShift 3.11+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. We have taken steps to ensure the PostgreSQL Operator is compatible with Kubernetes 1.16+, but did not test thoroughly on it for this release. Cursory testing indicates that the PostgreSQL Operator is compatible with Kubernetes 1.16 and beyond, but we advise that you run your own tests. - -# Major Features - -## High-Availability & Disaster Recovery - -PostgreSQL Operator 4.2.0 makes significant enhancements to the high-availability and disaster recovery capabilities of the PostgreSQL Operator by moving to a distributed-consensus based model for maintaining availability, standardizing around pgBackRest for backups and restores, and removing the Operator itself as a single-point-of-failure in relation to PostgreSQL cluster resiliency. - -As the high-availability environment introduced by PostgreSQL Operator 4.2.0 is now the default, setting up a HA cluster is as easy as: - -``` -pgo create cluster hacluster -pgo scale hacluster --replica-count=2 -``` - -If you wish to disable high-availability for a cluster, you can use the following command: - -``` -pgo create cluster boringcluster --disable-autofail -``` - -### New Required HA PostgreSQL Containers: `crunchy-postgres-ha` and `crunchy-postgres-gis-ha` - -Using the PostgreSQL Operator 4.2.0 requires replacing your `crunchy-postgres` and `crunchy-postgres-gis` containers with the `crunchy-postgres-ha` and `crunchy-postgres-gis-ha` containres respectively. The underlying PostgreSQL installations in the container remain the same but are now optimized for Kubernetes environments to provide the new high-availability functionality. - -A major change to this container is that the PostgreSQL process is now managed by Patroni. This allows a PostgreSQL cluster that is deployed by the PostgreSQL Operator to manage its own uptime and availability, to elect a new leader in the event of a downtime scenario, and to automatically heal after a failover event. - -Upgrading to these new containers is as simple as modifying your CRD `ccpimage` parameter to use `crunchy-postgres-ha` to use the HA enabled containers. Please see our upgrade instructions to select your preferred upgrade strategy. - -### pgBackRest Standardization - -pgBackRest is now the only backup and restore method supported by the PostgreSQL Operator. This has allowed for the following features: - -- Faster creation of new replicas when a scale up request is made -- Automatic healing of PostgreSQL instances after a failover event, leveraging the pgBackRest [delta restore](https://pgbackrest.org/configuration.html#section-general/option-delta) feature. This allows for a significantly shorter healing process -- The ability to clone PostgreSQL clusters - -As part of this standardization, one change to note is that after a PostgreSQL cluster is created, the PostgreSQL Operator will schedule a full backup of the cluster. 
This is to ensure that a new replica can be created from a pgBackRest backup. If this initial backup fails, no new replicas will be provisioned. - -When upgrading from an earlier version, please ensure that you have at least one pgBackRest full backup in your backup repository. - -### Pod Anti-Affinity - -PostgreSQL Operator 4.2.0 adds support for Kubernetes pod anti-affinity, which provides guidance on how Kubernetes should schedule pods relative to each other. This is helpful in high-availability architectures to ensure that PostgreSQL pods are spread out in order to withstand node failures. For example, in a setup with two PostgreSQL instances, you would not want both instances scheduled to the same node: if that node goes down or becomes unreachable, then your cluster will be unavailable! - -The way the PostgreSQL Operator uses pod anti-affinity is that it tries to ensure that **none** of the managed pods within the same cluster are scheduled to the same node. These include: - -- Any PostgreSQL instances -- The pod that manages pgBackRest repository -- If deployed, any pgBouncer pods - -This helps improve the likelihood that a cluster can remain up even if a downtime event occurs. - -There are three options available for pod anti-affinity: - -- `preferred`: Kubernetes will try to schedule any pods within a PostgreSQL cluster to different nodes, but in the event it must schedule two pods on the same node, it will. This is the default option -- `required`: Kubernetes will schedule pods within a PostgreSQL cluster to different nodes, but in the event it cannot schedule a pod to a different node, it will not schedule the pod until a different node is available. While this guarantees that no pod will share the same node, it can also lead to downtime events as well. -- `disabled`: Pod anti-affinity is not used. - -These options can be combined with the existing node affinity functionality (`--node-label`) to group the scheduling of pods to particular node labels! - -### Synchronous Replication - -PostgreSQL Operator 4.2 introduces support for synchronous replication by leveraging the "synchronous mode" functionality provided by Patroni. Synchronous replication is useful for workloads that are sensitive to losing transactions, as PostgreSQL will not consider a transaction to be committed until it is committed to all synchronous replicas connected to a primary. This provides a higher guarantee of data consistency and, when a healthy synchronous replica is present, a guarantee of the most up-to-date data during a failover event. - -This comes at a cost of performance as PostgreSQL: as PostgreSQL has to wait for a transaction to be committed on all synchronous replicas, a connected client will have to wait longer than if the transaction only had to be committed on the primary (which is how asynchronous replication works). Additionally, there is a potential impact to availability: if a synchronous replica crashes, any writes to the primary will be blocked until a replica is promoted to become a new synchronous replica of the primary. - -You can enable synchronous replication by using the `--sync-replication` flag with the `pgo create` command. 
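For example, a minimal sketch (the cluster name `hippocampus` is illustrative) that creates a cluster with synchronous replication enabled and then adds a replica to serve as the synchronous standby:

```
pgo create cluster hippocampus --sync-replication
pgo scale hippocampus --replica-count=1
```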
- -### Updated pgo CLI Flags - -- `pgo create` now has a CLI flag for pod anti-affinity called `--pod-anti-affinity`, which accepts the values `required`, `preferred`, and `disabled` -- `pgo create --sync-replication` specifies to create a PostgreSQL HA cluster with synchronous replication - -### Global Configuration - -To support high-availability there are some new settings that you can manage from your `pgo.yaml` file: - -- `DisableAutofail` - when set to `true`, this will disable the new HA functionality in any newly created PostgreSQL clusters. By default, this is `false`. -- `DisableReplicaStartFailReinit` - when set to `true`, this will disable attempting to re-provision a PostgreSQL replica when it is stuck in a "start failed" state. By default, this `false`. -- `PodAntiAffinity` - Determines the type of pod anti-affinity rules to apply to the pods within a newly PostgreSQL cluster. If set to `required`, pods within a PostgreSQL cluster **must** be scheduled on different nodes, otherwise a pod will fail to schedule. If set to `preferred`, Kubernetes will make a best effort to schedule pods of the same PostgreSQL cluster on different nodes. If set to `disabled`, this feature is disabled. By default, this is `preferred`. -- `SyncReplication` - If set to `true`, enables synchronous replication in newly created PostgreSQL clusters. Default to `false`. - -## `pgo clone` - -PostgreSQL Operator 4.2.0 introduces the ability to clone the data from one PostgreSQL cluster into a brand new PostgreSQL cluster. The command to do so is simple: - -``` -pgo clone oldcluster newcluster -``` - -After the command is executed, the PostgreSQL Operator checks to see if a) the `oldcluster` exists and b) the `newcluster` does not exist. If both of these conditions hold, the PostgreSQL Operator creates two new PVCs the match the specs of the `oldcluster` PostgreSQL data PVC (`PrimaryStorage`) and its pgBackRest repository PVC (`BackrestStorage`). - -If these PVCs are successfully created, the PostgreSQL Operator will copy the contents of the pgBackRest repository from the `oldcluster` to the one setup for the `newcluster` by means of a Kubernetes Job that is running `rsync` provided by the `pgo-backrest-repo-sync` container. We are able to do this because all changes to the pgBackRest repository are atomic. - -If this successfully completes, the PostgreSQL Operator then runs a pgBackRest restore job to restore the PostgreSQL cluster. On a successful restore, the new PostgreSQL cluster is then scheduled and runs in recovery mode until it reaches a consistent state, and then comes online as a brand new cluster - -To optimize the time it takes to restore for a clone, we recommend taking a backup of the cluster you want to clone. You can do this with the `pgo backup` command, and choose if you want to take a full, differential, or incremental backup. - -Future work will be focused on additional options, such as being able to clone a PostgreSQL cluster to a particular point-in-time (so long as the backup is available to support it) and supporting other `pgo create` flags. - -## Schedule Backups With Retention Policies - -While the PostgreSQL Operator has had the ability to schedule full, incremental, and differential pgBackRest backups for awhile, it has not been possible to set the retention policy on these backups. 
Backup retention policies allow users to manage their backup storage whle maintaining enough backups to be able to recover to a specific point-in-time, or perform forensic work on data in a particular state. - -For example, one can schedule a full backup to occur nightly at midnight and keep up to 21 full backups (e.g. a 21 day retention policy): - -``` -pgo create schedule mycluster --schedule="0 0 * * *" --schedule-type="pgbackrest" --pgbackrest-backup-type=full --schedule-opts="--repo1-retention-full=21" -``` - -# Breaking Changes - -## Feature Removals - -- Physical backups using `pg_basebackup` are no longer supported. Any command-line option that references using this method has been removed. The API endpoints where one can specify a `pg_basebackup` remain, but will be removed in a future release (likely the next one). -- Removed the `pgo-lspvc` container. This container was used with the `pgo show pvc` and performed searches on the mounted filesystem. This would cause issues both on environments that could not support a PVC being mounted multiple times, and for underlying volumes that contained numerous files. Now, `pgo show pvc` only lists the PVCs associated with the PostgreSQL clusters managed by the PostgreSQL Operator. -- Native support for pgpool has been removed. - -## Command Line (`pgo`) - -### `pgo create cluster` - -- The `--pgbackrest` option is removed as it is no longer needed. pgBackRest is enabled by default - -### `pgo delete cluster` - -The default behavior for `pgo delete cluster` has changed so that **all backups and PostgreSQL data are deleted by default**. - -To keep a backup after a cluster is deleted, one can use the `--keep-backups` flag with `pgo delete cluster`, and to keep the PostgreSQL data directory, one can specify the `--keep-data` flag. There is a plan to remove the `--keep-data` flag in a future release, though this has not been determined yet. - -**The `-b`, `--delete-backups`, `-d`, and `--delete-data` flags are all deprecated and will be removed in the next release.** - -### `pgo scaledown` - -With this release, `pgo scaledown` will **delete the PostgreSQL data directory of the replica by default.** To keep the PostgreSQL directory after the replica has scaled down, one can use the `--keep-data` flag. - -### `pgo test` - -`pgo test` is optimized to provide faster results about the availability of a PostgreSQL cluster. Instead of attempting to make PostgreSQL connections to each PostgreSQL instance with each user, `pgo test` now checks the availability of the service endpoints for each PostgreSQL cluster as well as the output of the PostgreSQL readiness checks, which check the connectivity of a PostgreSQL cluster. - -Both the API and the output of `pgo test` are modified for this optimization. - -## Additional apiserver Changes - -- An authorization failure in the `apiserver` (i.e. not having the correct RBAC permission for a `pgouser`) will return a status code of `403` instead of `401` -- The pgorole permissions now support the `"*"` permission to specify _all_ pgorole RBAC permissions are granted to a pgouser. Users upgrading from an earlier version should note this change if they want to assign their users to access new features. - -# Additional Features - -## pgo (Operator CLI) - -- Support the pgBackRest options for backup retention, including `--repo1-retention-full`, `--repo1-retention-diff`, `--repo1-retention-archive`, `--repo1-retention-archive-type`, which can be added in the `--backup-opts` flag in the `pgo backup` command. 
For example: - -``` -# create a pgBackRest incremental backup with one full backup being retained and two differential backups being retained, along with incremental backups associated with each -pgo backup mycluster --backup-type="pgbackrest" --backup-opts="--type=incr --repo1-retention-diff=2 --repo1-retention-full=1" - -# create a pgBackRest full backup where 2 other full backups are retained, with WAL archive retained for full and differential backups -pgo backup mycluster --backup-opts="--type=full --repo1-retention-full=2 --repo1-retention-archive=4 --repo1-retention-archive-type=diff" -``` - -- Allow for users to define S3 credentials and other options for pgBackRest backups on a per-cluster basis, in addition to leveraging the globally provided templates. This introduces the following flags on the -`pgo create cluster` command: - - `--pgbackrest-s3-bucket` - specifics the AWS S3 bucket that should be utilized - - `--pgbackrest-s3-endpoint` specifies the S3 endpoint that should be utilized - - `--pgbackrest-s3-key` - specifies the AWS S3 key that should be utilized - - `--pgbackrest-s3-key-secret`- specifies the AWS S3 key secret that should be utilized - - `--pgbackrest-s3-region` - specifies the AWS S3 region that should be utilized -- Add the `--disable-tls` flag to the `pgo` command-line client, as to be compatible with the Operator API server that is deployed with `DISABLE_TLS` enabled. -- Improved output for the `pgo scaledown --query` and and `pgo failover --query` commands, including providing easy-to-understand results on replication lag - -- Containerized `pgo` via the `pgo-client` container. This can be installed from the Ansible installer using the `pgo_client_container_install` flag, and it installs into the same namespace as the PostgreSQL Operator. You can connect to the container via `kubectl exec` and execute pgo commands! - -## Builds - -- Refactor the Dockerfiles to rely on a "base" definition for ease of management and to ensure consistent builds across all container images during a full `make` -- Selecting which image builder to use is now argument based using the `IMGBUILDER` environmental variable. Default is `buildah` -- Optimize `yum clean` invocation to occur on same line as the `RUN`, which leads to smaller image builds. - -## Installation - -- Add the `pgo_noauth_routes` (Ansible) / `NOAUTH_ROUTES` (Bash) configuration variables to disable TLS/BasicAuth authentication on particular API routes. This defaults to `'/health'` -- Add the `pgo_tls_ca_store` Ansible / `TLS_CA_TRUST` (Bash) configuration variables to specify a PEM encoded list of trusted certificate authorities (CA) for the Operator to use when authenticating API requests over TLS -- Add the `pgo_add_os_ca_store` / `ADD_OS_TRUSTSTORE` (Bash) to specify to use the trusted CA that is provided by the operating system. Defaults to `true` - -## Configuration - -- Enable individual ConfigMap files to be customized without having to upload every single ConfigMap file available in `pgo-config`. Patch by Conor Quin (@Conor-Quin) -- Add `EXCLUDE_OS_TRUST` environmental variable that allows the `pgo` client to specify that it does not want to use the trusted certificate authorities (CA) provided by the operating system. - -## Miscellaneous - -- Migrated Kubernetes API groups using API version `extensions/v1beta1` to their respective updated API groups/versions. This improves compatibility with Kubernetes 1.16 and beyond. 
Original patch by Lucas Bickel (@hairmare) -- Add a Kubernetes Service Account to every Pod that is managed by the PostgreSQL Operator -- Add support for private repositories using `imagePullSecret`. This can be configured during installation by setting the `pgo_image_pull_secret` and `pgo_image_pull_secret_manifest` in the inventory file using the Ansible installer, or with the `PGO_IMAGE_PULL_SECRET` and `PGO_IMAGE_PULL_SECRET_MANIFEST` environmental variables using the Bash installer. The "pull secret" is the name of the pull secret, whereas the manifest is what is used to define the secret -- The pgorole permissions now support the `"*"` permission to specify _all_ pgorole RBAC permissions are granted to a pgouser -- Policies that are executed using `pgo apply` and `pgo create cluster --policies` are now executed over a UNIX socket directly on the Pod of the primary PostgreSQL instance. Reported by @yuanlinios -- A new sidecar, `crunchyadm`, is available for running management commands on a PostgreSQL cluster. As this is experimental, this feature is disabled by default. - -# Fixes - -- Update the YAML library to v2.2.4 to mitigate issues presented in CVE-2019-11253 -- Specify the `pgbackrest` user by its ID number (2000) in the backrest-repo container image so that containers instantiated with the `runAsNonRoot` option enabled recognize the `pgbackrest` user as non-root. -- Ensure any Kubernetes Secret associated with a PostgreSQL backup is deleted when the `--delete-backups` flag is specified on `pgo delete cluster` -- The pgBouncer pod can now support connecting to databases that are added after a PostgreSQL cluster is deployed -- Remove misleading error messages from the logs that were caused by the readiness/liveness probes on the `apiserver` and `event` containers in the `postgres-operator` pod -- Several fixes to the cleanup of a PostgreSQL cluster after a deletion event (e.g. `pgo delete cluster`) to ensure data is safely removed. This includes ensuring schedules managed by `pgo schedule` are removed, as well as PostgreSQL cluster and backup data -- Skip the HTTP Basic Authorization check if the `BasicAuth` parameter in `pgo.yaml` is set to `false` -- Ensure all available backup types (full, incr, diff) are listed in `pgo schedule` -- Ensure scheduled tasks created with `pgo create schedule` are deleted when `pgo delete cluster` is called -- Fix the missing readiness/liveness probes used to check the status of the `apiserver` and `event` containers in the `postgres-operator` pod -- Fix a race condition where the `pgo-rmdata` job could fail when doing its final pass on deleting PVCs. This became noticeable after adding in the task to clean up any configmaps that a PostgreSQL cluster relied on -- Improved logging around authorization failures in the apiserver diff --git a/docs/content/releases/4.2.1.md b/docs/content/releases/4.2.1.md deleted file mode 100644 index 83d94992b8..0000000000 --- a/docs/content/releases/4.2.1.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: "4.2.1" -date: -draft: false -weight: 210 ---- - -[Crunchy Data](https://www.crunchydata.com) announces the release of the [PostgreSQL Operator](https://www.crunchydata.com/products/crunchy-postgresql-operator/) 4.2.1 on January 16, 2020.
- -The PostgreSQL Operator 4.2.1 provides bug fixes and continued support to the 4.2 release line. - -The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/). - -PostgreSQL Operator is tested with Kubernetes 1.13+, OpenShift 3.11+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. - -# Fixes - -- Ensure Pod labels are updated after failover (#1218) -- Fix for scheduled tasks to continue executing even after `pgo delete schedule` is called (#1215) -- Ensure `pgo restore` carries through the `--node-label` to the new primary (#1206) -- Fix for displaying incorrect replica names with the `--query` flag on `pgo scaledown`/`pgo failover` after a failover occurred -- Fix for default CA exclusion for the Windows-based [pgo client]({{< relref "pgo-client/_index.md" >}}) -- Fix a race condition where the `pgo-rmdata` job could fail when doing its final pass on deleting PVCs. This went unnoticed as it was the final task to occur -- Fix image pull policy for the `pgo-client` container to match the project default (`IfNotPresent`) -- Update the "Create CRD Example" to reference the `crunchy-postgres-ha` container -- Update comments used for the API documentation generation via Swagger -- Update the directions for running the PostgreSQL Operator from the GCP Marketplace deployer -- Updated OLM installer example and added generation script -- Update the "cron" package to `v3.0.1` diff --git a/docs/content/releases/4.2.2.md b/docs/content/releases/4.2.2.md deleted file mode 100644 index 9b8b5ae4e9..0000000000 --- a/docs/content/releases/4.2.2.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: "4.2.2" -date: -draft: false -weight: 200 ---- - -[Crunchy Data](https://www.crunchydata.com) announces the release of the [PostgreSQL Operator](https://www.crunchydata.com/products/crunchy-postgresql-operator/) 4.2.2 on February, 18, 2020. - -The PostgreSQL Operator 4.2.2 release provides bug fixes and continued support to the 4.2 release line. - -This release includes updates for several packages supported by the PostgreSQL Operator, including: - -- The PostgreSQL containers now use versions 12.2, 11.7, 10.12, 9.6.17, and 9.5.21 -- The PostgreSQL containers now support PL/Python3 -- Patroni is updated to version 1.6.4 - -The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/). - -PostgreSQL Operator is tested with Kubernetes 1.13+, OpenShift 3.11+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. - -# Changes since 4.2.1 - -- Added the `--enable-autofail` flag to `pgo update` to make it clear how the auto-failover mechanism can be re-enabled for a PostgreSQL cluster. -- Remove using `expenv` in the `add-targeted-namespace.sh` script - -# Fixes since 4.2.1 - -- Ensure PostgreSQL clusters can be successfully restored via `pgo restore` after 'pgo scaledown' is executed -- Ensure all replicas are listed out via the `--query` flag in `pgo scaledown` and `pgo failover`. This now follows the pattern outlined by the [Kubernetes safe random string generator](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/rand/rand.go) (#1247) -- Honor the value of "PasswordLength" when it is set in the pgo.yaml file for password generation. 
The default is now set at `24` -- Set `UsePAM yes` in the `sshd_config` file to fix an issue with using SSHD in newer versions of Docker -- The backup task listed in the `pgtask` CRD is now only deleted if one already exists -- Ensure that a successful "rmdata" Job does not delete all cluster pgtasks listed in the CRD after a successful run -- Only add Operator labels to a managed namespace if the namespace already exists when executing the `add-targeted-namespace.sh` script -- Remove logging of PostgreSQL user credentials in the PostgreSQL Operator logs -- Consolidation of the Dockerfiles for RHEL7/UBI7 builds -- Several fixes to the documentation (#1233) diff --git a/docs/content/releases/4.3.0.md b/docs/content/releases/4.3.0.md deleted file mode 100644 index ab7ef604f9..0000000000 --- a/docs/content/releases/4.3.0.md +++ /dev/null @@ -1,425 +0,0 @@ ---- -title: "4.3.0" -date: -draft: false -weight: 100 ---- - -Crunchy Data announces the release of the [PostgreSQL Operator](https://www.crunchydata.com/products/crunchy-postgresql-operator/) 4.3.0 on May 1, 2020. - -The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/). - -The PostgreSQL Operator 4.3.0 release includes the following software versions upgrades: - -- The PostgreSQL containers now use versions 12.2, 11.7, 10.12, 9.6.17, and 9.5.21 - - This now includes support for using the JIT compilation feature introduced in PostgreSQL 11 -- PostgreSQL containers now support PL/Python3 -- pgBackRest is now at version 2.25 -- Patroni is now at version 1.6.5 -- postgres\_exporter is now at version 0.7.0 -- pgAdmin 4 is at 4.18 - -PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. - -# Major Features - -- [Standby Clusters + Multi-Kubernetes Deployments]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}) -- [Improved custom configuration for PostgreSQL clusters]({{< relref "/advanced/custom-configuration.md" >}}) -- [Installation via the `pgo-deployer` container]({{< relref "/installation/postgres-operator/_index.md" >}}) -- [Automatic Upgrades of the PostgreSQL Operator via `pgo upgrade`]({{< relref "/upgrade/_index.md" >}}) -- Set [custom PVC sizes]({{< relref "pgo-client/common-tasks/_index.md" >}}#create-a-postgresql-cluster-with-different-pvc-sizes) for PostgreSQL clusters on creation and clone -- Support for PostgreSQL [Tablespaces]({{< relref "/architecture/tablespaces.md" >}}) -- The ability to specify an external volume for write-ahead logs (WAL) -- [Elimination of `ClusterRole` requirement]({{< relref "/architecture/namespace.md" >}}) for using the PostgreSQL Operator -- [Easy TLS-enabled PostgreSQL cluster creation]({{< relref "pgo-client/common-tasks/_index.md" >}}#enable-tls) - - All Operator commands now support TLS-only PostgreSQL workflows -- Feature Preview: [pgAdmin 4 Integration + User Synchronization]({{< relref "/architecture/pgadmin4.md" >}}) - -## Standby Clusters + Multi-Kubernetes Deployments - -A key component of building database architectures that can ensure continuity of operations is to be able to have the database available across multiple data -centers. 
In Kubernetes, this would mean being able to have the PostgreSQL Operator run in multiple Kubernetes clusters, have PostgreSQL clusters exist in these Kubernetes clusters, and ensure that the "standby" deployment is only promoted in the event of an outage or planned switchover. - -As of this release, the PostgreSQL Operator now supports standby PostgreSQL clusters that can be deployed across namespaces or other Kubernetes or Kubernetes-enabled clusters (e.g. OpenShift). This is accomplished by leveraging the PostgreSQL Operator's support for -[pgBackRest]({{< relref "/architecture/disaster-recovery.md" >}}) and leveraging an intermediary, i.e. S3, to provide the ability for the standby cluster to read in the PostgreSQL archives and replicate the data. This allows a user to quickly promote a standby PostgreSQL cluster in the event that the primary cluster suffers downtime (e.g. a data center outage), or for planned switchovers such as Kubernetes cluster maintenance or moving a PostgreSQL workload from one data center to another. - -To support standby clusters, there are several new flags available on `pgo create cluster` that are required to set up a new standby cluster. These include: - -- `--standby`: If set, creates the PostgreSQL cluster as a standby cluster. -- `--pgbackrest-repo-path`: Allows the user to override the `pgBackRest` repository path for a cluster. While this setting can now be utilized when creating any cluster, it is typically required for the creation of standby clusters as the repository path will need to match that of the primary cluster. -- `--password-superuser`: When creating a standby cluster, allows the user to specify a password for the superuser that matches the superuser account in the cluster the standby is replicating from. -- `--password-replication`: When creating a standby cluster, allows the user to specify a password for the replication user that matches the replication account in the cluster the standby is replicating from. - -Note that the `--password` flag must be used to ensure the password of the main PostgreSQL user account matches that of the primary PostgreSQL cluster, if you are using Kubernetes to manage the user's password.
- -For example, if you have a cluster named `hippo` and wanted to create a standby cluster called `hippo-standby`, and assuming the S3 credentials are using the defaults provided to the PostgreSQL Operator, you could execute a command similar to: - -``` -pgo create cluster hippo-standby --standby \ - --pgbackrest-repo-path=/backrestrepo/hippo-backrest-shared-repo \ - --password-superuser=superhippo \ - --password-replication=replicahippo -``` - -To shut down the primary cluster (if you can), you can execute a command similar to: - -``` -pgo update cluster hippo --shutdown -``` - -To promote the standby cluster to be able to accept write traffic, you can execute the following command: - -``` -pgo update cluster hippo-standby --promote-standby -``` - -To convert the old primary cluster into a standby cluster, you can execute the following command: - -``` -pgo update cluster hippo --enable-standby -``` - -Once the old primary is converted to a standby cluster, you can bring it online with the following command: - -``` -pgo update cluster hippo --startup -``` - -For information on the architecture and how to -[set up a standby PostgreSQL cluster]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}), please refer to the [documentation]({{< relref "/architecture/high-availability/multi-cluster-kubernetes.md" >}}). - -At present, streaming replication between the primary and standby clusters is not supported, but the PostgreSQL instances within each cluster do support streaming replication. - -## Installation via the `pgo-deployer` container - -Installation and upgrading have long been two of the biggest challenges of using the PostgreSQL Operator. This release makes improvements on both (with upgrading being described in the next section). - -For installation, we have introduced a new container called [`pgo-deployer`]({{< relref "/installation/postgres-operator/_index.md" >}}). For environments that use hostpath storage (e.g. minikube), [installing the PostgreSQL Operator]({{< relref "/installation/postgres-operator/_index.md" >}}) can be as simple as: - -``` -kubectl create namespace pgo -kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.3.0/installers/kubectl/postgres-operator.yml -``` - -The `pgo-deployer` container can be configured by a manifest called [`postgres-operator.yml`](https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.3.0/installers/kubectl/postgres-operator.yml) and provides a set of [environmental variables]({{< relref "/installation/configuration/_index.md" >}}) that should be familiar from using the [other installers]({{< relref "/installation/other/_index.md" >}}). - -The `pgo-deployer` launches a Job in the namespace that the PostgreSQL Operator will be installed into and sets up the requisite Kubernetes objects: CRDs, Secrets, ConfigMaps, etc. - -The `pgo-deployer` container can also be used to uninstall the PostgreSQL Operator. For more information, please see the [installation documentation]({{< relref "/installation/_index.md" >}}). - -## Automatic PostgreSQL Operator Upgrade Process - -One of the biggest challenges to using a newer version of the PostgreSQL Operator was upgrading from an older version. - -This release introduces the ability to [automatically upgrade from an older version of the Operator]({{< relref "/upgrade/_index.md" >}}) (as early as 4.1.0) to the newest version (4.3.0) using the [`pgo upgrade`]({{< relref "/pgo-client/reference/pgo_upgrade.md" >}}) command.
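- -As a hedged sketch (assuming the 4.3.0 Operator is already deployed and that `hippo` is an existing cluster created by an older Operator version; consult the upgrade documentation for the full workflow), the upgrade could be triggered with: - -``` -pgo upgrade hippo -```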
- -The `pgo upgrade` command follows a process similar to the [manual PostgreSQL Operator upgrade]({{< relref "/upgrade/manual/upgrade4.md" >}}) process, but instead automates it. - -To find out more about how to upgrade the PostgreSQL Operator, please review the [upgrade documentation]({{< relref "/upgrade/_index.md" >}}). - -## Improved Custom Configuration for PostgreSQL Clusters - -The ability to customize the configuration for a PostgreSQL cluster with the PostgreSQL Operator can now be easily modified by making changes directly to the ConfigMap that is created with each PostgreSQL cluster. The ConfigMap, which follows the pattern `-pgha-config` (e.g. `hippo-pgha-config` for -`pgo create cluster hippo`), manages the user-facing configuration settings available for a PostgreSQL cluster, and when modified, it will automatically synchronize the settings across all primaries and replicas in a PostgreSQL cluster. - -Presently, the ConfigMap can be edited using the `kubectl edit cm` command, and future iterations will add functionality to the PostgreSQL Operator to make this process easier. - -## Customize PVC Size on PostgreSQL cluster Creation & Clone - -The PostgreSQL Operator provides the ability to set customization for how large the PVC can be via the "storage config" options available in the PostgreSQL Operator configuration file (aka `pgo.yaml`). While these provide a baseline level of customizability, it is often important to be able to set the size of the PVC that a PostgreSQL cluster should use at cluster creation time. In other words, users should be able to choose exactly how large they want their PostgreSQL PVCs ought to be. - -PostgreSQL Operator 4.3 introduces the ability to set the PVC sizes for the PostgreSQL cluster, the pgBackRest repository for the PostgreSQL cluster, and the PVC size for each tablespace at cluster creation time. Additionally, this behavior has been extended to the clone functionality as well, which is helpful when trying to resize a PostgreSQL cluster. Here is some information on the flags that have been added: - -### pgo create cluster - -`--pvc-size` - sets the PVC size for the PostgreSQL data directory -`--pgbackrest-pvc-size` - sets the PVC size for the PostgreSQL pgBackRest repository - -For tablespaces, one can use the `pvcsize` option to set the PVC size for that tablespace. - -### pgo clone cluster - -`--pvc-size` - sets the PVC size for the PostgreSQL data directory for the newly created cluster -`--pgbackrest-pvc-size` - sets the PVC size for the PostgreSQL pgBackRest repository for the newly created cluster - -## Tablespaces - -Tablespaces can be used to spread out PostgreSQL workloads across multiple volumes, which can be used for a variety of use cases: - -- Partitioning larger data sets -- Putting data onto archival systems -- Utilizing hardware (or a storage class) for a particular database object, e.g. an index - -and more. - -Tablespaces can be created via the `pgo create cluster` command using the `--tablespace` flag. The arguments to `--tablespace` can be passed in using one of several key/value pairs, including: - -- `name` (required) - the name of the tablespace -- `storageconfig` (required) - the storage configuration to use for the tablespace -- `pvcsize` - if specified, the size of the PVC. 
Defaults to the PVC size in the storage configuration - -Each value is separated by a `:`, for example: - -``` -pgo create cluster hacluster --tablespace=name=ts:storageconfig=nfsstorage -``` - -All tablespaces are mounted in the `/tablespaces` directory. The PostgreSQL Operator manages the mount points and persistent volume claims (PVCs) for the tablespaces, and ensures they are available throughout all of the PostgreSQL lifecycle operations, including: - -- Provisioning -- Backup & Restore -- High-Availability, Failover, Healing -- Clone - -etc. - -One additional value is added to the pgcluster CRD: - -- TablespaceMounts: a map of the name of the tablespace and its associated storage. - -Tablespaces are automatically created in the PostgreSQL cluster. You can access them as soon as the cluster is initialized. For example, using the tablespace created above, you could create a table on the tablespace `ts` with the following SQL: - -```sql -CREATE TABLE mytable (id int) TABLESPACE ts; -``` - -Tablespaces can also be added to existing PostgreSQL clusters by using the `pgo update cluster` command. The syntax is similar to that of creating a PostgreSQL cluster with a tablespace, i.e.: - -``` -pgo update cluster hacluster --tablespace=name=ts2:storageconfig=nfsstorage -``` - -As additional volumes need to be mounted to the Deployments, this action can cause downtime, though the expectation is that the downtime is brief. - -Based on usage, future work will look at making this more flexible. Dropping tablespaces can be tricky, as no objects may exist on a tablespace in order for PostgreSQL to drop it (i.e. there is no `DROP TABLESPACE ... CASCADE` command). - -## Easy TLS-Enabled PostgreSQL Clusters - -Securing connections to PostgreSQL clusters with TLS is a typical requirement when deploying to an untrusted network, such as a public cloud. The PostgreSQL Operator makes it easy to [enable TLS for PostgreSQL](https://access.crunchydata.com/documentation/postgres-operator/latest/latest/pgo-client/common-tasks/#enable-tls). To do this, one must first create two Secrets: one containing the trusted certificate authority (CA) and one containing the PostgreSQL server's TLS keypair, e.g.: - -``` -kubectl create secret generic postgresql-ca --from-file=ca.crt=/path/to/ca.crt -kubectl create secret tls hippo-tls-keypair \ - --cert=/path/to/server.crt \ - --key=/path/to/server.key -``` - -From there, one can create a PostgreSQL cluster that supports TLS with the following command: - -``` -pgo create cluster hippo-tls \ - --server-ca-secret=postgresql-ca \ - --server-tls-secret=hippo-tls-keypair -``` - -To create a PostgreSQL cluster that **only** accepts TLS connections and rejects any connection attempts made over an insecure channel, you can use the `--tls-only` flag on cluster creation, e.g.: - -``` -pgo create cluster hippo-tls \ - --tls-only \ - --server-ca-secret=postgresql-ca \ - --server-tls-secret=hippo-tls-keypair -``` - -### External WAL Volume - -An optimization used for improving PostgreSQL performance related to file system usage is to have the PostgreSQL write-ahead logs (WAL) written to a different mounted volume than other parts of the PostgreSQL system, such as the data directory. - -To support this, the PostgreSQL Operator now supports the ability to specify an external volume for writing the PostgreSQL write-ahead log (WAL) during cluster creation, which carries through to replicas and clones. When not specified, the WAL resides within the PGDATA directory and volume, which is the present behavior.
- -To create a PostgreSQL cluster to use an external volume, one can use the `--wal-storage-config` flag at cluster creation time to select the storage configuration to use, e.g. - -`pgo create cluster --wal-storage-config=nfsstorage hippo` - -Additionally, it is also possible to specify the size of the WAL storage on all newly created clusters. When in use, the size of the volume can be overridden per-cluster. This is specified with the `--wal-storage-size` flag, i.e. - -`pgo create cluster --wal-storage-config=nfsstorage --wal-storage-size=10Gi hippo` - -This implementation does not define the WAL volume in any deployment templates because the volume name and mount path are constant. - -## Elimination of `ClusterRole` Requirement for the PostgreSQL Operator - -PostgreSQL Operator 4.0 introduced the ability to manage PostgreSQL clusters across multiple Kubernetes Namespaces. PostgreSQL Operator 4.1 built on this functionality by allowing users to dynamically control which Namespaces it managed as well as the PostgreSQL clusters deployed to them. In order to leverage this feature, one must grant a [`ClusterRole`](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) level permission via a ServiceAccount to the PostgreSQL Operator. - -There are a lot of deployment environments for the PostgreSQL Operator that only need for it to exists within a single namespace and as such, granting cluster-wide privileges is superfluous, and in many cases, undesirable. As such, it should be possible to deploy the PostgreSQL Operator to a single namespace without requiring a `ClusterRole`. - -To do this, but maintain the aforementioned Namespace functionality for those who require it, PostgreSQL Operator 4.3 introduces the ability to opt into deploying it with minimum required `ClusterRole` privileges and in turn, the ability to deploy the PostgreSQL Operator without a `ClusterRole`. To do so, the PostgreSQL Operator introduces the concept of "namespace operating mode" which lets one select the type deployment to create. The namespace mode is set at the install time for the PostgreSQL Operator, and files into one of three options: - -- `dynamic`: **This is the default**. This enables full dynamic Namespace management capabilities, in which the PostgreSQL Operator can create, delete and update any Namespaces within the Kubernetes cluster, while then also having the ability to create the Roles, Role Bindings and Service Accounts within those Namespaces for normal operations. The PostgreSQL Operator can also listen for Namespace events and create or remove controllers for various Namespaces as changes are made to Namespaces from Kubernetes and the PostgreSQL Operator's management. - -- `readonly`: In this mode, the PostgreSQL Operator is able to listen for namespace events within the Kubernetetes cluster, and then manage controllers as Namespaces are added, updated or deleted. While this still requires a `ClusterRole`, the permissions mirror those of a "read-only" environment, and as such the PostgreSQL Operator is unable to create, delete or update Namespaces itself nor create RBAC that it requires in any of those Namespaces. Therefore, while in `readonly`, mode namespaces must be preconfigured with the proper RBAC as the PostgreSQL Operator cannot create the RBAC itself. - -- `disabled`: Use this mode if you do not want to deploy the PostgreSQL Operator with any `ClusterRole` privileges, especially if you are only deploying the PostgreSQL Operator to a single namespace. 
This disables any Namespace management capabilities within the PostgreSQL Operator and will simply attempt to work with the target Namespaces specified during installation. If no target Namespaces are specified, then the Operator will be configured to work within the namespace in which it is deployed. As with the `readonly` mode, while in this mode, Namespaces must be pre-configured with the proper RBAC, since the PostgreSQL Operator cannot create the RBAC itself. - -Based on the installer you use, the variables to set this mode are either named: - -- PostgreSQL Operator Installer: `NAMESPACE_MODE` -- Developer Installer: `PGO_NAMESPACE_MODE` -- Ansible Installer: `namespace_mode` - -## Feature Preview: pgAdmin 4 Integration + User Synchronization - -[pgAdmin 4](https://www.pgadmin.org/) is a popular graphical user interface that lets you work with PostgreSQL databases from both a desktop or web-based client. With its ability to manage and orchestrate changes for PostgreSQL users, the PostgreSQL Operator is a natural partner to keep a pgAdmin 4 environment synchronized with a PostgreSQL environment. - -This release introduces an integration with pgAdmin 4 that allows you to deploy a pgAdmin 4 environment alongside a PostgreSQL cluster and keeps the user's database credentials synchronized. You can simply log into pgAdmin 4 with your PostgreSQL username and password and immediately have access to your databases. - -For example, let's there is a PostgreSQL cluster called `hippo` that has a user named `hippo` with password `datalake`: - -``` -pgo create cluster hippo --username=hippo --password=datalake -``` - -After the PostgreSQL cluster becomes ready, you can create a pgAdmin 4 deployment with the [`pgo create pgadmin`]({{< relref "/pgo-client/reference/pgo_create_pgadmin.md" >}}) -command: - -``` -pgo create pgadmin hippo -``` - -This creates a pgAdmin 4 deployment unique to this PostgreSQL cluster and synchronizes the PostgreSQL user information into it. To access pgAdmin 4, you can set up a port-forward to the Service, which follows the pattern `-pgadmin`, to port `5050`: - -``` -kubectl port-forward svc/hippo-pgadmin 5050:5050 -``` - -Point your browser at `http://localhost:5050` and use your database username (e.g. `hippo`) and password (e.g. `datalake`) to log in. - -(Note: if your password does not appear to work, you can retry setting up the user with the [`pgo update user`]({{< relref "/pgo-client/reference/pgo_update_user.md" >}}) command: `pgo update user hippo --password=datalake`) - -The `pgo create user`, `pgo update user`, and `pgo delete user` commands are synchronized with the pgAdmin 4 deployment. Note that if you use `pgo create user` without the `--managed` flag prior to deploying pgAdmin 4, then the user's credentials will not be synchronized to the pgAdmin 4 deployment. However, a subsequent run of `pgo update user --password` will synchronize the credentials with pgAdmin 4. - -You can remove the pgAdmin 4 deployment with the [`pgo delete pgadmin`]({{< relref "/pgo-client/reference/pgo_delete_pgadmin.md" >}}) command. - -We have released the first version of this change under "feature preview" so you can try it out. As with all of our features, we open to feedback on how we can continue to improve the PostgreSQL Operator. - -## Enhanced `pgo df` - -`pgo df` provides information on the disk utilization of a PostgreSQL cluster, and previously, this was not reporting accurate numbers. 
The new `pgo df` looks at each PVC that is mounted to each PostgreSQL instance in a cluster, including the PVCs for tablespaces, and computes the overall utilization. Even better, the data is returned in a structured format for easy scraping. This implementation also leverages Golang concurrency to help compute the results quickly. - -## Enhanced pgBouncer Integration - -The pgBouncer integration was completely rewritten to support TLS-only operations via the PostgreSQL Operator. While most of the work was internal, you should now see a much more stable pgBouncer experience. - -The pgBouncer attributes in the `pgclusters.crunchydata.com` CRD are also declarative and any updates will be reflected by the PostgreSQL Operator. - -Additionally, a few new commands were added: - -- `pgo create pgbouncer --cpu` and `pgo create pgbouncer --memory` resource request flags for setting container resources for the pgBouncer instances. For CPU, this will also set the limit. -- `pgo create pgbouncer --enable-memory-limit` sets the Kubernetes resource limit for memory -- `pgo create pgbouncer --replicas` sets the number of pgBouncer Pods to deploy with a PostgreSQL cluster. The default is `1`. -- `pgo show pgbouncer` shows information about a pgBouncer deployment -- `pgo update pgbouncer --cpu` and `pgo update pgbouncer --memory` resource request flags for setting container resources for the pgBouncer instances after they are deployed. For CPU, this will also set the limit. -- `pgo update pgbouncer --disable-memory-limit` and `pgo update pgbouncer --enable-memory-limit` respectively unset and set the Kubernetes resource limit for memory -- `pgo update pgbouncer --replicas` sets the number of pgBouncer Pods to deploy with a PostgreSQL cluster. -- `pgo update pgbouncer --rotate-password` allows one to rotate the service -account password for pgBouncer - -## Rewritten pgo User Management commands - -The user management commands were rewritten to support the TLS-only workflow. These commands now return additional information about a user when actions are taken. Several new flags have been added too, including the option to view all output in JSON. Other flags include: - -- `pgo update user --rotate-password` to automatically rotate the password -- `pgo update user --disable-login` which disables the ability for a PostgreSQL user to log in -- `pgo update user --enable-login` which enables the ability for a PostgreSQL user to log in -- `pgo update user --valid-always` which sets a password to always be valid, i.e. it has no -expiration -- `pgo show user` does not show system accounts by default now, but can be made to show the system accounts by using `pgo show user --show-system-accounts` - -A major change as well is that the default password expiration is now unlimited (i.e. passwords never expire), which aligns with typical PostgreSQL workflows. - - -# Breaking Changes - -- `pgo create cluster` will now set the default database name to be the name of the cluster. For example, `pgo create cluster hippo` would create the initial database named `hippo`. -- The `Database` configuration parameter in `pgo.yaml` (`db_name` in the Ansible inventory) is now set to `""` by default. -- The `--password`/`-w` flag for `pgo create cluster` now only sets the password for the regular user account that is created, not all of the system accounts (e.g. the `postgres` superuser). -- A default `postgres-ha.yaml` file is no longer created by the Operator for every PostgreSQL cluster.
-- "Limit" resource parameters are no longer set on the containers, in particular, the PostgreSQL container, due to undesired behavior stemming from the host machine OOM killer. Further details can be found in the original [pull request](https://github.com/CrunchyData/postgres-operator/pull/1391). -- Added `DefaultInstanceMemory`, `DefaultBackrestMemory`, and `DefaultPgBouncerMemory` options to the `pgo.yaml` configuration to allow for the setting of default memory requests for PostgreSQL instances, the pgBackRest repository, and pgBouncer instances respectively. -- If unset by either the PostgreSQL Operator configuration or one-off, the default memory resource requests for the following applications are: - - PostgreSQL: The installers default to 128Mi (suitable for test environments), though the "default of last resort" is 512Mi to be consistent with the PostgreSQL default shared memory requirement - - pgBackRest: 48Mi - - pgBouncer: 24Mi -- Remove the `Default...ContainerResources` set of parameters from the `pgo.yaml` configuration file. -- The `pgbackups.crunchydata.com`, deprecated since 4.2.0, has now been completely removed, along with any code that interfaced with it. -- The `PreferredFailoverFeature` is removed. This had not been doing anything since 4.2.0, but some of the legacy bits and configuration were still there. -- `pgo status` no longer returns information about the nodes available in a Kubernetes cluster -- Remove `--series` flag from `pgo create cluster` command. This affects API calls more than actual usage of the `pgo` client. -- `pgo benchmark`, `pgo show benchmark`, `pgo delete benchmark` are removed. PostgreSQL benchmarks with `pgbench` can still be executed using the `crunchy-pgbench` container. -- `pgo ls` is removed. -- The API that is used by `pgo create cluster` now returns its contents in JSON. The output now includes information about the user that is created. -- The API that is used by `pgo show backup` now returns its contents in JSON. The output view of `pgo show backup` remains the same. -- Remove the `PreferredFailoverNode` feature, as it had already been effectively removed. -- Remove explicit `rm` calls when cleaning up PostgreSQL clusters. This behavior is left to the storage provisioner that one deploys with their PostgreSQL instances. -- Schedule backup job names have been shortened, and follow a pattern that looks like `--sch-backup` - -# Features - -- Several additions to `pgo create cluster` around PostgreSQL users and databases, including: - - `--ccp-image-prefix` sets the `CCPImagePrefix` that specifies the image prefix for the PostgreSQL related containers that are deployed by the PostgreSQL Operator - - `--cpu` flag that sets the amount of CPU to use for the PostgreSQL instances in the cluster. This also sets the limit. - -`--database` / `-d` flag that sets the name of the initial database created. - - `--enable-memory-limit`, `--enable-pgbackrest-memory-limit`, `--enable-pgbouncer-memory-limit` enable the Kubernetes memory resource limit for PostgreSQL, pgBackRest, and pgBouncer respectively - - `--memory` flag that sets the amount of memory to use for the PostgreSQL instances in the cluster - - `--user` / `-u` flag that sets the PostgreSQL username for the standard database user - - `--password-length` sets the length of the password that should be generated, if `--password` is not set. 
- - `--pgbackrest-cpu` flag that sets the amount of CPU to use for the pgBackRest repository - - `--pgbackrest-memory` flag that sets the amount of memory to use for the pgBackRest repository - - `--pgbackrest-s3-ca-secret` specifies the name of a Kubernetes Secret that contains a key (`aws-s3-ca.crt`) to override the default CA used for making connections to a S3 interface - - `--pgbackrest-storage-config` lets one specify a different storage configuration to use for a local pgBackRest repository - - `--pgbouncer-cpu` flag that sets the amount of CPU to use for the pgBouncer instances - - `--pgbouncer-memory` flag that sets the amount of memory to use for the pgBouncer instances - - `--pgbouncer-replicas` sets the number of pgBouncer Pods to deploy with the PostgreSQL cluster. The default is `1`. - - `--pgo-image-prefix` sets the `PGOImagePrefix` that specifies the image prefix for the PostgreSQL Operator containers that help to manage the PostgreSQL clusters - - `--show-system-accounts` returns the credentials of the system accounts (e.g. the `postgres` superuser) along with the credentials for the standard database user -- `pgo update cluster` now supports the `--cpu`, `--disable-memory-limit`, `--disable-pgbackrest-memory-limit`, `--enable-memory-limit`, `--enable-pgbackrest-memory-limit`, `--memory`, `--pgbackrest-cpu`, and `--pgbackrest-memory` flags to allow PostgreSQL instances and the pgBackRest repository to have their resources adjusted post deployment -- Added the `PodAntiAffinityPgBackRest` and `PodAntiAffinityPgBouncer` to the `pgo.yaml` configuration file to set specific Pod anti-affinity rules for pgBackRest and pgBouncer Pods that are deployed along with PostgreSQL clusters that are managed by the Operator. The default for pgBackRest and pgBouncer is to use the value that is set in `PodAntiAffinity`. -- `pgo create cluster` now supports the `--pod-anti-affinity-pgbackrest` and `--pod-anti-affinity-pgbouncer` flags to specifically overwrite the pgBackRest repository and pgBouncer Pod anti-affinity rules on a specific PostgreSQL cluster deployment, which overrides any values present in `PodAntiAffinityPgBackRest` and `PodAntiAffinityPgBouncer` respectfully. The default for pgBackRest and pgBouncer is to use the value for pod anti-affinity that is used for the PostgreSQL instances in the cluster. -- One can specify the "image prefix" (e.g. `crunchydata`) for the containers that are deployed by the PostgreSQL Operator. This adds two fields to the pgcluster CRD: `CCPImagePrefix` and `PGOImagePrefix -- Specify a different S3 Certificate Authority (CA) with `pgo create cluster` by using the `--pgbackrest-s3-ca-secret` flag, which refers to an existing Secret that contains a key called `aws-s3-ca.crt` that contains the CA. Reported by Aurelien Marie @(aurelienmarie) -- `pgo clone` now supports the `--enable-metrics` flag, which will deploy the monitoring sidecar along with the newly cloned PostgreSQL cluster. -- The pgBackRest repository now uses [ED25519](https://en.wikipedia.org/wiki/EdDSA#Ed25519) SSH key pairs. -- Add the `--enable-autofail` flag to `pgo update` to make it clear how the autofailover mechanism can be re-enabled for a PostgreSQL cluster. - -# Changes - -- Remove `backoffLimit` from Jobs that can be retried, which is most of them. -- POSIX shared memory is now used for the PostgreSQL Deployments. -- Increase the number of namespaces that can be watched by the PostgreSQL Operator. -- The number of unsupported pgBackRest flags on the deny list has been reduced. 
-- The liveness and readiness probes for a PostgreSQL cluster now reference the `/opt/cpm/bin/health` -- `wal_level` is now defaulted to `logical` to enable logical replication -- `archive_timeout` is now a default setting in the `crunchy-postgres-ha` and `crunchy-postgres-ha-gis` containers and is set to `60` -- `ArchiveTimeout`, `LogStatement`, `LogMinDurationStatement` are removed from `pgo.yaml`, as these can be customized either via a custom `postgresql.conf` file or `postgres-ha.yaml` file -- Quoted identifiers for the database name and user name in bootstrap scripts for the PostgreSQL containers -- Password generation now leverages cryptographically secure random number generation and uses the full set of typeable ASCII characters -- The `node` ClusterRole is no longer used -- The names of the scheduled backups are shortened to use the pattern `--sch-backup` -- The PostgreSQL Operator now logs its timestamps using RFC3339 formatting as implemented by Go -- SSH key pairs are no longer created as part of the Operator installation process. This was a legacy behavior that had not been removed -- The `pv/create-pv-nfs.sh` has been modified to create persistent volumes with their own directories on the NFS filesystems. This better mimics production environments. The older version of the script still exists as `pv/create-pv-nfs-legacy.sh` -- Load pgBackRest S3 credentials into environmental variables as Kubernetes Secrets, to avoid revealing their contents in Kubernetes commands or in logs -- Update how the pgBackRest and pgMonitor pamareters are loaded into Deployment templates to no longer use JSON fragments -- The `pgo-rmdata` Job no longer calls the `rm` command on any data within the PVC, but rather leaves this task to the storage provisioner -- Remove using `expenv` in the `add-targeted-namespace.sh` script - -# Fixes - -- Ensure PostgreSQL clusters can be successfully restored via `pgo restore` after 'pgo scaledown' is executed -- Allow the original primary to be removed with `pgo scaledown` after it is failed over -- The replica Service is now properly managed based on the existence of replicas in a PostgreSQL cluster, i.e. if there are replicas, the Service exists, if not, it is removed -- Report errors in a SQL policy at the time `pgo apply` is executed, which was the previous behavior. Reported by José Joye (@jose-joye) -- Ensure all replicas are listed out via the `--query` flag in `pgo scaledown` and `pgo failover`. This now follows the pattern outlined by the [Kubernetes safe random string generator](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/rand/rand.go) -- Default the recovery action to "promote" when performing a "point-in-time-recovery" (PITR), which will ensure that a PITR process completes -- The `stanza-create` Job now waits for both the PostgreSQL cluster and the pgBackRest repository to be ready before executing -- Remove `backoffLimit` from Jobs that can be retried, which is most of them. Reported by Leo Khomenko (@lkhomenk) -- The `pgo-rmdata` Job will not fail if a PostgreSQL cluster has not been properly initialized -- Fixed a separate `pgo-rmdata` crash related to an improper SecurityContext -- The `failover` ConfigMap for a PostgreSQL cluster is now removed when the cluster is deleted -- Allow the standard PostgreSQL user created with the Operator to be able to create and manage objects within its own user schema. 
Reported by Nicolas HAHN (@hahnn) -- Honor the value of "PasswordLength" when it is set in the pgo.yaml file for password generation. The default is now set at `24` -- Do not log pgBackRest environmental variables to the Kubernetes logs -- By default, exclude using the trusted OS certificate authority store for the Windows pgo client. -- Update the `pgo-client` imagePullPolicy to be `IfNotPresent`, which is the default for all of the managed containers across the project -- Set `UsePAM yes` in the `sshd_config` file to fix an issue with using SSHD in newer versions of Docker -- Only add Operator labels to a managed namespace if the namespace already exists when executing the `add-targeted-namespace.sh` script diff --git a/docs/content/releases/4.3.1.md b/docs/content/releases/4.3.1.md deleted file mode 100644 index 4b66056ff5..0000000000 --- a/docs/content/releases/4.3.1.md +++ /dev/null @@ -1,94 +0,0 @@ ---- -title: "4.3.1" -date: -draft: false -weight: 95 ---- - -Crunchy Data announces the release of the [PostgreSQL Operator](https://www.crunchydata.com/products/crunchy-postgresql-operator/) 4.3.1 on May 18, 2020. - -The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/). - -The PostgreSQL Operator 4.3.1 release includes the following software versions upgrades: - -- The PostgreSQL containers now use versions 12.3, 11.8, 10.13, 9.6.18, and 9.5.22 - -PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. - -# Changes - -## Initial Support for SCRAM - -[SCRAM](https://info.crunchydata.com/blog/how-to-upgrade-postgresql-passwords-to-scram) is a password authentication method in PostgreSQL that has been available since PostgreSQL 10 and is considered to be superior to the `md5` authentication method. The PostgreSQL Operator now introduces support for SCRAM on the [`pgo create user`](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_create_user/) and [`pgo update user`](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_update_user/) commands by means of the `--password-type` flag. The following values for `--password-type` will select the following authentication methods: - - -- `--password-type=""`, `--password-type="md5"` => md5 -- `--password-type="scram"`, `--password-type="scram-sha-256"` => SCRAM-SHA-256 - -In turn, the PostgreSQL Operator will hash the passwords based on the chosen method and store the computed hash in PostgreSQL. - -When using SCRAM support, it is important to note the following observations and limitations: - -- When using one of the password modifications commands on `pgo update user` (e.g. `--password`, `--rotate-password`, `--expires`) with the desire to keep the persisted password using SCRAM, it is necessary to specify the "--password-type=scram-sha-256" directive. -- SCRAM does not work with the current pgBouncer integration with the PostgreSQL Operator. pgBouncer presently supports only one password-based authentication type at a time. Additionally, to enable support for SCRAM, pgBouncer would require a list of plaintext passwords to be stored in a file that is accessible to it. Future work can evaluate how to leverage SCRAM support with pgBouncer. 
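- -As an illustrative sketch (the cluster name `hippo` and user name `hippo` are hypothetical), an existing user's stored password could be converted to SCRAM-SHA-256 by rotating it with the appropriate password type: - -```shell -pgo update user hippo --username=hippo --password-type=scram-sha-256 --rotate-password -```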
- -## `pgo restart` and `pgo reload` - -This release introduces the `pgo restart` command, which allows you to perform a PostgreSQL restart on one or more instances within a PostgreSQL cluster. - -You can restart all instances at the same time using the following command: - -```shell -pgo restart hippo -``` - -or specify a specific instance to restart using the `--target` flag (which follows a similar behavior to the `--target` flag on `pgo scaledown` and `pgo failover`): - -```shell -pgo restart hippo --target=hippo-abcd -``` - -The restart itself is performed by calling the Patroni `restart` REST endpoint on the specific instance (primary or replica) being restarted. - -As with the `pgo failover` and `pgo scaledown` commands, it is also possible to specify the `--query` flag to query instances available for restart: - -```shell -pgo restart mycluster --query -``` - -With the addition of the `pgo restart` command, using the `--query` flag with the `pgo failover` and `pgo scaledown` commands now also includes the `PENDING RESTART` information, which is returned along with the replication information. - - -This release also allows the `pgo reload` command to properly reload all instances (i.e. the primary and all replicas) within the cluster. - -## Dynamic Namespace Mode and Older Kubernetes Versions - -The dynamic namespace mode (e.g. `pgo create namespace` + `pgo delete namespace`) provides the ability to create and remove Kubernetes namespaces and automatically add them to the purview of the PostgreSQL Operator. Through the course of fixing usability issues with working with the other namespace modes (`readonly`, `disabled`), a change needed to be introduced that broke compatibility with Kubernetes 1.12 and earlier. - -The PostgreSQL Operator still supports managing PostgreSQL Deployments across multiple namespaces in Kubernetes 1.12 and earlier, but only with `readonly` mode. In `readonly` mode, a cluster administrator needs to create the namespace and the RBAC needed to run the PostgreSQL Operator in that namespace. However, it is now possible to define the RBAC required for the PostgreSQL Operator to manage clusters in a namespace via a ServiceAccount, as described in the [Namespace](https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/namespace/) section of the documentation. - -The usability change allows one to add a namespace to the PostgreSQL Operator's purview (or deploy the PostgreSQL Operator within a namespace) and automatically set up the appropriate RBAC for the PostgreSQL Operator to correctly operate. - -## Other Changes - -- The RBAC required for deploying the PostgreSQL Operator is now decomposed into the exact privileges that are needed. This removes the need for requiring a `cluster-admin` privilege for deploying the PostgreSQL Operator. Reported by (@obeyler). -- With namespace modes `disabled` and `readonly`, the PostgreSQL Operator will now dynamically create the required RBAC when a new namespace is added if that namespace has the RBAC defined in `local-namespace-rbac.yaml`. This occurs when `PGO_DYNAMIC_NAMESPACE` is set to `true`. -- If the PostgreSQL Operator has permissions to manage its own RBAC within a namespace, it will now reconcile and auto-heal that RBAC as needed (e.g. if it is invalid or has been removed) to ensure it can properly interact with and manage that namespace. -- Add default CPU and memory limits for the metrics collection and pgBadger sidecars to help deployments that wish to have a Pod QoS of `Guaranteed`.
The metrics defaults are 100m/24Mi and the pgBadger defaults are 500m/24Mi. Reported by (@jose-joye). -- Introduce `DISABLE_FSGROUP` option as part of the installation. When set to `true`, this does not add a FSGroup to the Pod [Security Context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) when deploying PostgreSQL related containers or pgAdmin 4. This is helpful when deploying the PostgreSQL Operator in certain environments, such as OpenShift with a `restricted` Security Context Constraint. Defaults to `false`. -- Remove the custom Security Context Constraint (SCC) that would be deployed with the PostgreSQL Operator, so now the PostgreSQL Operator can be deployed using default OpenShift SCCs (e.g. "restricted", though note that `DISABLE_FSGROUP` will need to be set to `true` for that). The example PostgreSQL Operator SCC is left in the [`examples`](https://raw.githubusercontent.com/CrunchyData/postgres-operator/master/examples/pgo-scc.yaml) directory for reference. -- When `PGO_DISABLE_TLS` is set to `true`, then `PGO_TLS_NO_VERIFY` is set to `true`. -- Some of the `pgo-deployer` environmental variables that we not needed to be set by a user were internalized. These include `ANSIBLE_CONFIG` and `HOME`. -- When using the `pgo-deployer` container to install the PostgreSQL Operator, update the default watched namespace to `pgo` as the example only uses this namespace. - -# Fixes -- Fix for cloning a PostgreSQL cluster when the pgBackRest repository is stored in S3. -- The `pgo show namespace` command now properly indicates which namespaces a user is able to access. -- Ensure the `pgo-apiserver` will successfully run if `PGO_DISABLE_TLS` is set to `true`. Reported by (@zhubx007). -- Prevent a run of `pgo-deployer` from failing if it detects the existence of dependent cluster-wide objects already present. -- Deployments with `pgo-deployer` using the default file with `hostpathstorage` will now successfully deploy PostgreSQL clusters without any adjustments. -- Ensure image pull secrets are attached to deployments of the `pgo-client` container. -- Ensure `client-setup.sh` executes to completion if existing PostgreSQL Operator credentials exist that were created by a different installation method -- Update the documentation to properly name `CCP_IMAGE_PULL_SECRET_MANIFEST` and `PGO_IMAGE_PULL_SECRET_MANIFEST` in the `pgo-deployer` configuration. -- Several fixes for selecting default storage configurations and sizes when using the `pgo-deployer` container. These include #1, #4, and #8 in the `STORAGE` family of variables. -- The custom setup example was updated to reflect the current state of bootstrapping the PostgreSQL container. diff --git a/docs/content/releases/4.3.2.md b/docs/content/releases/4.3.2.md deleted file mode 100644 index 216cc2727a..0000000000 --- a/docs/content/releases/4.3.2.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: "4.3.2" -date: -draft: false -weight: 94 ---- - -Crunchy Data announces the release of the [PostgreSQL Operator](https://www.crunchydata.com/products/crunchy-postgresql-operator/) 4.3.2 on May 27, 2020. - -The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/). - -Version 4.3.2 of the PostgreSQL Operator contains bug fixes to the installer container and changes to how CPU/memory requests and limits can be specified. 
- -PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. - -## Changes - -### Resource Limit Flags - -PostgreSQL Operator 4.3.0 introduced some new options to tune the resource requests for PostgreSQL instances under management and other associated deployments, including pgBackRest and pgBouncer. From some of our learnings of running PostgreSQL in Kubernetes, we heavily restricted how the limits on the Pods could be set, and tied them to be the same as the requests. - -Due to feedback from a variety of sources, this caused more issues than it helped, and as such, we decided to introduce a breaking change into a patch release and remove the `--enable-*-limit` and `--disable-*-limit` series of flags and replace them with flags that allow you to specifically choose CPU and memory limits. - -This release introduces several new flags to various commands, including: - -- `pgo create cluster --cpu-limit` -- `pgo create cluster --memory-limit` -- `pgo create cluster --pgbackrest-cpu-limit` -- `pgo create cluster --pgbackrest-memory-limit` -- `pgo create cluster --pgbouncer-cpu-limit` -- `pgo create cluster --pgbouncer-memory-limit` -- `pgo update cluster --cpu-limit` -- `pgo update cluster --memory-limit` -- `pgo update cluster --pgbackrest-cpu-limit` -- `pgo update cluster --pgbackrest-memory-limit` -- `pgo create pgbouncer --cpu-limit` -- `pgo create pgbouncer --memory-limit` -- `pgo update pgbouncer --cpu-limit` -- `pgo update pgbouncer --memory-limit` - -Additionally, these values can be modified directly in a pgcluster Custom Resource and the PostgreSQL Operator will react and make the modifications. - -### Other Changes - -- The `pgo-deployer` container can now run using an arbitrary UID. -- For deployments of the PostgreSQL Operator using the `pgo-deployer` container to OpenShift 3.11 environments, a new template YAML file, `postgresql-operator-ocp311.yml` is provided. This YAML file requires that the `pgo-deployer` is run with the `cluster-admin` role for OpenShift 3.11 environments due to the lack of support of the `escalate` RBAC verb. Other environments (e.g. Kubernetes, OpenShift 4+) still do not require `cluster-admin`. -- Allow for the resumption of downloading the `pgo` client if the `client-setup.sh` script gets interrupted. Contributed by Itay Grudev (@itay-grudev). - -## Fixes - -- The `pgo-deployer` container now assigns the required Service Account all the appropriate `get` RBAC privileges via the `postgres-operator.yml` file that it needs to properly install. This allows the `install` functionality to properly work across multiple runs. -- For OpenShift deployments, the `pgo-deployer` leverages version 4.4 of the `oc` client. -- Use numeric UIDs for users in the PostgreSQL Operator management containers to support `MustRunAsNonRoot` Pod Security Policies and the like. Reported by Olivier Beyler (@obeyler). diff --git a/docs/content/releases/4.3.3.md b/docs/content/releases/4.3.3.md deleted file mode 100644 index c76495049d..0000000000 --- a/docs/content/releases/4.3.3.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -title: "4.3.3" -date: -draft: false -weight: 93 ---- - -Crunchy Data announces the release of the [PostgreSQL Operator](https://www.crunchydata.com/products/crunchy-postgresql-operator/) 4.3.3 on August 17, 2020. - -The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/).
- -The PostgreSQL Operator 4.3.3 release includes the following software versions upgrades: - -- The PostgreSQL containers now use versions 12.4, 11.9, 10.14, 9.6.19, and 9.5.23 -- pgBouncer is now at version 1.14. - -PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. - - -## Changes - -- Perform a `pg_dump` from a specific database using the `--database` flag when using `pgo backup` with `--backup-type=pgdump`. -- Restore a `pg_dump` to a specific database using the `--pgdump-database` flag using `pgo restore` when `--backup-type=pgdump` is specified. -- Add the `--client` flag to `pgo version` to output the client version of `pgo`. -- The PostgreSQL cluster scope is now utilized to identify and sync the ConfigMap responsible for the DCS for a PostgreSQL cluster. -- The `PGMONITOR_PASSWORD` is now populated by an environmental variable secret. This environmental variable is only set on a primary instance as it is only needed at the time a PostgreSQL cluster is initialized. -- Remove "Operator Start Time" from `pgo status` as it is more convenient and accurate to get this information from `kubectl` and the like, and it was not working due to RBAC privileges. (Reported by @mw-0). -- `pgo-rmdata` container no longer runs as the `root` user, but as `daemon` (UID 2) -- Remove dependency on the `expenv` binary that was included in the PostgreSQL Operator release. All `expenv` calls were either replaced with the native `envsubst` program or removed. - -## Fixes - -- Add validation to ensure that limits for CPU/memory are greater-than-or-equal-to the requests. This applies to any command that can set a limit/request. -- Ensure WAL archives are pushed to all repositories when pgBackRest is set to use both a local and a S3-based repository -- Silence expected error conditions when a pgBackRest repository is being initialized. -- Add the `watch` permissions to the `pgo-deployer` ServiceAccount. -- Ensure `client-setup.sh` works with when there is an existing `pgo` client in the install path -- Ensure the PostgreSQL Operator can be uninstalled by adding `list` verb ClusterRole privileges to several Kubernetes objects. -- Bring up the correct number of pgBouncer replicas when `pgo update cluster --startup` is issued. -- Fixed issue where `pgo scale` would not work after `pgo update cluster --shutdown` and `pgo update cluster --startup` were run. -- Ensure `pgo scaledown` deletes external WAL volumes from the replica that is removed. -- Fix for PostgreSQL cluster startup logic when performing a restore. -- Do not consider non-running Pods as primary Pods when checking for multiple primaries (Reported by @djcooklup). -- Fix race condition that could occur while `pgo upgrade` was running while a HA configuration map attempted to sync. (Reported by Paul Heinen @v3nturetheworld). -- Silence "ConfigMap not found" error messages that occurred during PostgreSQL cluster initialization, as these were not real errors. -- Fix an issue with controller processing, which could manifest in PostgreSQL clusters not being deleted. -- Eliminate `gcc` from the `postgres-ha` and `pgadmin4` containers. -- Fix `pgo label` when applying multiple labels at once. 
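To illustrate the `pg_dump`-related flags listed in the Changes above, here is a hedged sketch; the cluster and database names are placeholders:

```shell
# take a logical backup (pg_dump) of a single database in the "hippo" cluster
pgo backup hippo --backup-type=pgdump --database=hippo

# restore that pg_dump into a specific target database
pgo restore hippo --backup-type=pgdump --pgdump-database=hippo
```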
diff --git a/docs/content/releases/4.4.0.md b/docs/content/releases/4.4.0.md deleted file mode 100644 index ae70ac391d..0000000000 --- a/docs/content/releases/4.4.0.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -title: "4.4.0" -date: -draft: false -weight: 80 ---- - -Crunchy Data announces the release of the PostgreSQL Operator 4.4.0 on July 17, 2020. - -The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/). - -The PostgreSQL Operator 4.4.0 release includes the following software versions upgrades: - -- PostGIS 3.0 is now supported. There is now a manual upgrade path between PostGIS containers. -- pgRouting is now included in the PostGIS containers. -- pgBackRest is now at version 2.27. -- pgBouncer is now at version 1.14. - -PostgreSQL Operator is tested with Kubernetes 1.15 - 1.18, OpenShift 3.11+, OpenShift 4.4+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. - -## Major Features - -- Create New PostgreSQL Clusters from pgBackRest Repositories -- Improvements to RBAC Reconciliation. -- TLS Authentication for PostgreSQL Instances. -- A Helm Chart is now available and support for deploying the PostgreSQL Operator. - -### Create New PostgreSQL Clusters from pgBackRest Repositories - -A technique frequently used in PostgreSQL data management is to have a pgBackRest repository that can be used to create new PostgreSQL clusters. This can be helpful for a variety of purposes: - -- Creating a development or test database from a production data set -- Performing a point-in-time-restore on a database that is separate from the primary database - -and more. - -This can be accomplished with the following new flags on `pgo create cluster`: - -- `--restore-from`: used to specify the name of the pgBackRest repository to restore from via the name of the PostgreSQL cluster (whether the PostgreSQL cluster is active or not). -- `--restore-opts`: used to specify additional options like the ones specified to `pgbackrest restore` (e.g. `--type` and `--target` if performing a point-in-time-recovery). - -Only one restore can be performed against a pgBackRest repository at a given time. - -### RBAC Reconciliation - -PostgreSQL Operator 4.3 introduced a change that allows for the Operator to manage the role-based access controls (RBAC) based upon the [Namespace Operating mode](https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/namespace/#namespace-operating-modes) that is selected. This ensures that the PostgreSQL Operator is able to function correctly within the Namespace or Namespaces that it is permitted to access. This includes Service Accounts, Roles, and Role Bindings within a Namespace. - -PostgreSQL Operator 4.4 removes the requirements of granting the PostgreSQL Operator `bind` and `escalate` privileges for being able to reconcile its own RBAC, and further defines which RBAC is specifically required to use the PostgreSQL Operator (i.e. the removal of wildcard `*` privileges). The permissions that the PostgreSQL Operator requires to perform the reconciliation are assigned when it is deployed and is a function of which `NAMESPACE_MODE` is selected (`dynamic`, `readonly`, or `disabled`). - -This change renames the `DYNAMIC_RBAC` parameter in the installer to `RECONCILE_RBAC` and is set to `true` by default. 
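As a quick, non-authoritative way to observe what the reconciliation manages, you can list the namespaced RBAC objects (Service Accounts, Roles, and Role Bindings) that the Operator maintains; the `pgo` namespace below is simply the example namespace used elsewhere in these docs:

```shell
# inspect the RBAC objects reconciled by the Operator in a managed namespace
kubectl -n pgo get serviceaccounts,roles,rolebindings
```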
- -For more information on how RBAC reconciliation works, please visit the [RBAC reconciliation documentation](https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/namespace/). - -### TLS Authentication for PostgreSQL Instances - -[Certificate-based authentication](https://www.postgresql.org/docs/current/auth-cert.html) is a powerful PostgreSQL feature that allows for a PostgreSQL client to authenticate using a TLS certificate. While there are a variety of permutations for this can be set up, we can at least create a standardized way for enabling the replication connection to authenticate with a certificate, as we do have a known certificate authority. - -PostgreSQL Operator 4.4 introduces the `--replication-tls-secret` flag on the `pgo create cluster` command, which, if specified and if the prerequisites are specified (`--server-tls-secret` and `--server-ca-secret`), then the replication account ("primaryuser") is configured to use certificate-based authentication. Combine with `--tls-only` for powerful results. - -Note that the common name (CN) on the certificate MUST be "primaryuser", otherwise one must specify a mapping in a `pg_ident` configuration block to map to "primary" user. - -When mounted to the container, the connection `sslmode` that the replication user uses is set to `verify-ca` by default. We can make that guarantee based on the certificate authority that is being mounted. Using `verify-full` would cause the Operator to make assumptions about the cluster that we cannot make, and as such a custom `pg_ident` configuration block is needed for that. However, using `verify-full` allows for mutual authentication between primary and replica. - -## Breaking Changes - -- The parameter to set the RBAC reconciliation settings is renamed to `RECONCILE_RBAC` (from `DYNAMIC_RBAC`). - -## Features - -- Added support for using the URI path style feature of pgBackRest. This includes: - - Adding the `BackrestS3URIStyle` configuration parameter to the PostgreSQL Operator ConfigMap (`pgo.yaml`), which accepts the values of `host` or `path`. - - Adding the `--pgbackrest-s3-uri-style` flag to `pgo create cluster`, which accepts values of `host` or `path`. -- Added support to disable TLS verification when connecting to a pgBackRest repository. This includes: - - Adding the `BackrestS3VerifyTLS ` configuration parameter to the PostgreSQL Operator ConfigMap (`pgo.yaml`). Defaults to `true`. - - Adding the `--pgbackrest-s3-verify-tls` flag to `pgo create cluster`, which accepts values of `true` or `false`. -- Perform a `pg_dump` from a specific database using the `--database` flag when using `pgo backup` with `--backup-type=pgdump`. -- Restore a `pg_dump` to a specific database using the `--pgdump-database` flag using `pgo restore` when `--backup-type=pgdump` is specified. -- Allow for support of authentication parameters in the `pgha-config` (e.g. `sslmode`). See the documentation for words of caution on using these. -- Add the `--client` flag to `pgo version` to output the client version of `pgo`. -- A Helm Chart using Helm v3 is now available. - -## Changes - -- `pgo clone` is now deprecated. For a better cloning experience, please use [`pgo create cluster --restore-from`](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/common-tasks/#clone-a-postgresql-cluster) -- The PostgreSQL cluster scope is now utilized to identify and sync the ConfigMap responsible for the DCS for a PostgreSQL cluster. 
-- The `PGMONITOR_PASSWORD` is now populated by an environmental variable secret. This environmental variable is only set on a primary instance as it is only needed at the time a PostgreSQL cluster is initialized. -- Remove "Operator Start Time" from `pgo status` as it is more convenient and accurate to get this information from `kubectl` and the like, and it was not working due to RBAC privileges. (Reported by @mw-0). -- Removed unused pgcluster attributes `PrimaryHost` and `SecretFrom`. -- `pgo-rmdata` container no longer runs as the `root` user, but as `daemon` (UID 2) -- Remove dependency on the `expenv` binary that was included in the PostgreSQL Operator release. All `expenv` calls were either replaced with the native `envsubst` program or removed. - -## Fixes - -- Add validation to ensure that limits for CPU/memory are greater-than-or-equal-to the requests. This applies to any command that can set a limit/request. -- Ensure PVC capacities are being accurately reported when using `pgo show cluster` -- Ensure WAL archives are pushed to all repositories when pgBackRest is set to use both a local and a S3-based repository -- Silence expected error conditions when a pgBackRest repository is being initialized. -- Deployments with `pgo-deployer` using the default file with `hostpathstorage` will now successfully deploy PostgreSQL clusters without any adjustments. -- Add the `watch` permissions to the `pgo-deployer` ServiceAccount. -- Ensure the PostgreSQL Operator can be uninstalled by adding `list` verb ClusterRole privileges to several Kubernetes objects. -- Ensure `client-setup.sh` executes to completion if existing PostgreSQL Operator credentials exist that were created by a different installation method. -- Ensure `client-setup.sh` works with when there is an existing `pgo` client in the install path. -- Update the documentation to properly name `CCP_IMAGE_PULL_SECRET_MANIFEST` and `PGO_IMAGE_PULL_SECRET_MANIFEST` in the `pgo-deployer` configuration. -- Bring up the correct number of pgBouncer replicas when `pgo update cluster --startup` is issued. -- Fixed issue where `pgo scale` would not work after `pgo update cluster --shutdown` and `pgo update cluster --startup` were run. -- Ensure `pgo scaledown` deletes external WAL volumes from the replica that is removed. -- Fix for PostgreSQL cluster startup logic when performing a restore. -- Several fixes for selecting default storage configurations and sizes when using the `pgo-deployer` container. These include #1, #4, and #8. -- Do not consider non-running Pods as primary Pods when checking for multiple primaries (Reported by @djcooklup). -- Fix race condition that could occur while `pgo upgrade` was running while a HA configuration map attempted to sync. (Reported by Paul Heinen @v3nturetheworld). -- The custom setup example was updated to reflect the current state of bootstrapping the PostgreSQL container. -- Silence "ConfigMap not found" error messages that occurred during PostgreSQL cluster initialization, as these were not real errors. -- Fix an issue with controller processing, which could manifest in PostgreSQL clusters not being deleted. -- Eliminate `gcc` from the `postgres-ha` and `pgadmin4` containers. 
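As an illustrative sketch of the `--restore-from` workflow described under "Create New PostgreSQL Clusters from pgBackRest Repositories" above (the new cluster names and the recovery target are placeholders, and the `--restore-opts` value is passed through to `pgbackrest restore`):

```shell
# create a brand new cluster from the pgBackRest repository of the existing "hippo" cluster
pgo create cluster hippo-dev --restore-from=hippo

# the same, but performing a point-in-time-recovery via pgbackrest restore options
pgo create cluster hippo-pitr --restore-from=hippo --restore-opts="--type=time --target='2020-07-17 09:30:00'"
```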
diff --git a/docs/content/releases/4.4.1.md b/docs/content/releases/4.4.1.md deleted file mode 100644 index 7057ee69ae..0000000000 --- a/docs/content/releases/4.4.1.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -title: "4.4.1" -date: -draft: false -weight: 79 ---- - -Crunchy Data announces the release of the [PostgreSQL Operator](https://www.crunchydata.com/products/crunchy-postgresql-operator/) 4.4.1 on August 17, 2020. - -The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/). - -The PostgreSQL Operator 4.4.1 release includes the following software versions upgrades: - -- The PostgreSQL containers now use versions 12.4, 11.9, 10.14, 9.6.19, and 9.5.23 - -PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+. - -## Fixes - -- The pgBackRest URI style defaults to `host` if it is not set. -- Fix `pgo label` when applying multiple labels at once. -- pgBadger now has a default memory limit of 64Mi, which should help avoid a visit from the OOM killer. -- Fix `pgo create pgorole` so that the expression `--permissions=*` works. diff --git a/docs/content/releases/4.5.0.md b/docs/content/releases/4.5.0.md deleted file mode 100644 index 2e6b6bf5db..0000000000 --- a/docs/content/releases/4.5.0.md +++ /dev/null @@ -1,159 +0,0 @@ ---- -title: "4.5.0" -date: -draft: false -weight: 70 ---- - -Crunchy Data announces the release of the PostgreSQL Operator 4.5.0 on October 2, 2020. - -The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/). - -The PostgreSQL Operator 4.5.0 release includes the following software versions upgrades: - -- Add support for [PostgreSQL 13](https://www.postgresql.org/about/news/2077/). -- [pgBackRest](https://pgbackrest.org/) is now at version 2.29. -- [postgres\_exporter](https://github.com/wrouesnel/postgres_exporter) is now at version 0.8.0 -- [pgMonitor](https://github.com/CrunchyData/pgmonitor) support is now at 4.4 -- [pgnodemx](https://github.com/CrunchyData/pgnodemx) is now at version 1.0.1 -- [wal2json](https://github.com/eulerto/wal2json) is now at version 2.3 -- [Patroni](https://patroni.readthedocs.io/) is now at version 2.0.0 - -Additionally, PostgreSQL Operator 4.5.0 introduces support for the CentOS 8 and UBI 8 base container images. In addition to using the newer operating systems, this enables support for TLS 1.3 when connecting to PostgreSQL. This release also moves to building the containers using [Buildah](https://buildah.io/) 1.14.9. - -The monitoring stack for the PostgreSQL Operator has shifted to use upstream components as opposed to repackaging them. These are specified as part of the [PostgreSQL Operator Installer](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/postgres-operator/). We have tested this release with the following versions of each component: - -- Prometheus: 2.20.0 -- Grafana: 6.7.4 -- Alertmanager: 0.21.0 - -PostgreSQL Operator is tested with Kubernetes 1.15 - 1.19, OpenShift 3.11+, OpenShift 4.4+, Google Kubernetes Engine (GKE), Amazon EKS, and VMware Enterprise PKS 1.3+. 
- -## Major Features - -### PostgreSQL Operator Monitoring - -![PostgreSQL Operator Monitoring](/images/postgresql-monitoring.png) - -This release makes several changes to the PostgreSQL Operator Monitoring solution, notably making it much easier to set up a turnkey PostgreSQL monitoring solution with the PostgreSQL Operator using the open source [pgMonitor](https://github.com/CrunchyData/pgmonitor) project. - -pgMonitor combines insightful queries for PostgreSQL with several proven tools for statistics collection, data visualization, and alerting to allow one to deploy a turnkey monitoring solution for PostgreSQL. The pgMonitor 4.4 release added support for Kubernetes environments, particularly with the [pgnodemx](https://github.com/CrunchyData/pgnodemx) extension, which allows one to get host-like information from the Kubernetes Pod a PostgreSQL instance is deployed within. - -PostgreSQL Operator 4.5 integrates with pgMonitor to take advantage of its Kubernetes support, and provides the following visualized metrics out-of-the-box: - -- Pod metrics (CPU, Memory, Disk activity) -- PostgreSQL utilization (Database activity, database size, WAL size, replication lag) -- Backup information, including last backup and backup size -- Network utilization (traffic, saturation, latency) -- Alerts (uptime et al.) - -More metrics and visualizations will be added in future releases. You can further customize these to meet the needs of your environment. - -PostgreSQL Operator 4.5 uses the upstream packages for Prometheus, Grafana, and Alertmanager. Those using earlier versions of monitoring provided with the PostgreSQL Operator will need to switch to those packages. The tested versions of these packages for PostgreSQL Operator 4.5 include: - -- Prometheus (2.20.0) -- Grafana (6.7.4) -- Alertmanager (0.21.0) - -You can find out how to [install PostgreSQL Operator Monitoring](https://access.crunchydata.com/documentation/postgres-operator/latest/latest/installation/metrics/) in the installation section: - -[https://access.crunchydata.com/documentation/postgres-operator/latest/latest/installation/metrics/](https://access.crunchydata.com/documentation/postgres-operator/latest/latest/installation/metrics/) - -### Customizing pgBackRest via ConfigMap - -[pgBackRest](https://pgbackrest.org/) powers the [disaster recovery](https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/) capabilities of PostgreSQL clusters deployed by the PostgreSQL Operator. While the PostgreSQL Operator provides many toggles to customize a pgBackRest configuration, it can be easier to do so directly using the [pgBackRest configuration file format](https://pgbackrest.org/configuration.html). - -This release adds the ability to specify the pgBackRest configuration from either a ConfigMap or Secret by using the `pgo create cluster --pgbackrest-custom-config` flag, or by setting the `BackrestConfig` attributes in the `pgcluster` CRD. Setting this allows any pgBackRest resource (Pod, Job etc.) to leverage this custom configuration. - -Note that some settings will be overridden by the PostgreSQL Operator regardless of the settings of a customized pgBackRest configuration file due to the nature of how the PostgreSQL instances managed by the Operator access pgBackRest. However, these are typically not the settings that one wants to customize. - -### Apply Custom Annotations to Managed Deployments - -It is now possible to add custom annotations to the Deployments that the PostgreSQL Operator manages.
These include: - -- PostgreSQL instances -- pgBackRest repositories -- pgBouncer instances - -Annotations are applied on a per-cluster basis, and can be set either for all the managed Deployments within a cluster or individual Deployment groups. The annotations can be set as part of the `Annotations` section of the pgcluster specification. - -This also introduces several flags to the [`pgo` client](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/) that help with the management of the annotations. These flags are available on `pgo create cluster` and `pgo update cluster` commands and include: - -- `--annotation` - apply annotations on all managed Deployments -- `--annotation-postgres` - applies annotations on all managed PostgreSQL Deployments -- `--annotation-pgbackrest` - applies annotations on all managed pgBackRest Deployments -- `--annotation-pgbouncer` - applies annotations on all managed pgBouncer Deployments - -These flags work similarly to how one manages annotations and labels from `kubectl`. To add an annotation, one follows the format: - - `--annotation=key=value` - -To remove an annotation, one follows the format: - - `--annotation=key-` - -## Breaking Changes - -- The `crunchy-collect` container, used for metrics collection is renamed to `crunchy-postgres-exporter` -- The `backrest-restore--to-` pgtask has been renamed to `backrest-restore-`. Additionally, the following parameters no longer need to be specified for the pgtask: - - pgbackrest-stanza - - pgbackrest-db-path - - pgbackrest-repo-path - - pgbackrest-repo-host - - backrest-s3-verify-tls -- When a restore job completes, it now emits the message `restored Primary created` instead of `restored PVC created`. -- The `toPVC` parameter has been removed from the restore request endpoint. -- Restore jobs using `pg_restore` no longer have `from-` in their names. -- The `pgo-backrest-restore` container has been retired. -- The `pgo load` command has been removed. This also retires the `pgo-load` container. -- The `crunchy-prometheus` and `crunchy-grafana` containers are now removed. Please use the corresponding upstream containers. - -## Features - -- The metrics collection container now has configurable resources. This can be set as part of the custom resource workflow as well as from the `pgo` client when using the following command-line arguments: - - CPU resource requests: - - `pgo create cluster --exporter-cpu` - - `pgo update cluster --exporter-cpu` - - CPU resource limits: - - `pgo create cluster --exporter-cpu-limit` - - `pgo update cluster --exporter-cpu-limit` - - Memory resource requests: - - `pgo create cluster --exporter-memory` - - `pgo update cluster --exporter-memory` - - Memory resource limits: - - `pgo create cluster --exporter-memory-limit` - - `pgo update cluster --exporter-memory-limit` -- Support for TLS 1.3 connections to PostgreSQL when using the UBI 8 and CentOS 8 containers -- Added support for the [`pgnodemx`](https://github.com/CrunchyData/pgnodemx) extension which makes container-level metrics (CPU, memory, storage utilization) available via a PostgreSQL-based interface. - -## Changes - -- The PostgreSQL Operator now supports the default storage class that is available within a Kubernetes cluster. The installers are updated to use the default storage class by default. 
-- The [`pgo restore`](https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_restore/) methodology is changed to mirror the approach taken by `pgo create cluster --restore-from` that was introduced in the previous release. While `pgo restore` will still perform a ["restore in-place"](https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/#restores), it will now take the following actions: - - Any existing persistent volume claims (PVCs) in a cluster removed. - - New PVCs are initialized and the data from the PostgreSQL cluster is restored based on the parameters specified in `pgo restore`. - - Any customizations for the cluster (e.g. custom PostgreSQL configuration) will be available. - - This also fixes several bugs that were reported with the `pgo restore` functionality, some of which are captured further down in these release notes. -- Connections to pgBouncer can now be passed along to the default `postgres` database. If you have a pre-existing pgBouncer Deployment, the most convenient way to access this functionality is to redeploy pgBouncer for that PostgreSQL cluster (`pgo delete pgbouncer` + `pgo create pgbouncer`). Suggested by (@lgarcia11). -- The [Downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/) is now available to PostgreSQL instances. -- The pgBouncer `pgbouncer.ini` and `pg_hba.conf` have been moved from the pgBouncer Secret to a ConfigMap whose name follows the pattern `-pgbouncer-cm`. These are mounted as part of a project volume in conjunction with the current pgBouncer Secret. -- The `pgo df` command will round values over 1000 up to the next unit type, e.g. `1GiB` instead of `1024MiB`. - -## Fixes - -- Ensure that if a PostgreSQL cluster is recreated from a PVC with existing data that it will apply any custom PostgreSQL configuration settings that are specified. -- Fixed issues with PostgreSQL replica Pods not becoming ready after running `pgo restore`. This fix is a result of the change in methodology for how a restore occurs. -- The `pgo scaledown` now allows for the removal of replicas that are not actively running. -- The `pgo scaledown --query` command now shows replicas that may not be in an active state. -- The pgBackRest URI style defaults to `host` if it is not set. -- pgBackRest commands can now be executed even if there are multiple pgBackRest Pods available in a Deployment, so long as there is only one "running" pgBackRest Pod. Reported by Rubin Simons (@rubin55). -- Ensure pgBackRest S3 Secrets can be upgraded from PostgreSQL Operator 4.3. -- Ensure pgBouncer Port is derived from the cluster's port, not the Operator configuration defaults. -- External WAL PVCs are only removed for the replica they are targeted for on a scaledown. Reported by (@dakine1111). -- When deleting a cluster with the `--keep-backups` flag, ensure that backups that were created via `--backup-type=pgdump` are retained. -- Return an error if a cluster is not found when using `pgo df` instead of timing out. -- pgBadger now has a default memory limit of 64Mi, which should help avoid a visit from the OOM killer. -- The Postgres Exporter now works if it is deployed in a TLS-only environment, i.e. the `--tls-only` flag is set. Reported by (@shuhanfan). -- Fix `pgo label` when applying multiple labels at once. -- Fix `pgo create pgorole` so that the expression `--permissions=*` works. 
-- The `operator` container will no longer panic if all Deployments are scaled to `0` without using the `pgo update cluster --shutdown` command. diff --git a/docs/content/releases/_index.md b/docs/content/releases/_index.md deleted file mode 100644 index d5758c509e..0000000000 --- a/docs/content/releases/_index.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: "Release Notes" -date: -draft: false -weight: 100 ---- diff --git a/docs/content/support/_index.md b/docs/content/support/_index.md deleted file mode 100644 index a1cb419443..0000000000 --- a/docs/content/support/_index.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: "Support" -date: -draft: false -weight: 110 ---- - -There are a few options available for community support of the [PostgreSQL Operator](https://github.com/CrunchyData/postgres-operator): - -- **If you believe you have found a bug** or have a detailed feature request: please open [an issue on GitHub](https://github.com/CrunchyData/postgres-operator/issues/new/choose). The PostgreSQL Operator community and the Crunchy Data team behind the PostgreSQL Operator is generally active in responding to issues. -- **For general questions or community support**: please join the PostgreSQL Operator community mailing list at [postgres-operator@crunchydata.com](mailto:postgres-operator@crunchydata.com), - -In all cases, please be sure to provide as many details as possible in regards to your issue, including: - -- Your Platform (e.g. Kubernetes vX.YY.Z) -- Operator Version (e.g. {{< param centosBase >}}-{{< param operatorVersion >}}) -- A detailed description of the issue, as well as steps you took that lead up to the issue -- Any relevant logs -- Any additional information you can provide that you may find helpful - -For production and commercial support of the PostgreSQL Operator, please -[contact Crunchy Data](https://www.crunchydata.com/contact/) at [info@crunchydata.com](mailto:info@crunchydata.com) for information regarding an [Enterprise Support Subscription](https://www.crunchydata.com/about/value-of-subscription/). diff --git a/docs/content/tutorial/_index.md b/docs/content/tutorial/_index.md deleted file mode 100644 index 2919cc6a25..0000000000 --- a/docs/content/tutorial/_index.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: "Tutorial" -draft: false -weight: 15 ---- - -The PostgreSQL Operator provides functionality that lets you run your own database-as-a-service: from deploying PostgreSQL clusters with [high availability]({{< relref "architecture/high-availability/_index.md" >}}), to a [full stack monitoring]({{< relref "architecture/high-availability/_index.md" >}}) solution, essential [disaster recovery and backup tools]({{< relref "architecture/disaster-recovery.md" >}}), the ability to secure your cluster with TLS, and much more! - -What's more, you can manage your PostgreSQL clusters with the convenient [`pgo` client]({{< relref "pgo-client/_index.md" >}}) or by interfacing directly with the PostgreSQL Operator [custom resources]({{< relref "custom-resources/_index.md" >}}). - -Given the robustness of the PostgreSQL Operator, we think it's helpful to break down the functionality in this step-by-step tutorial. The tutorial covers the essential functions the PostgreSQL Operator can perform and covers many common basic and advanced use cases. - -So what are you waiting for? Let's [get started]({{< relref "tutorial/getting-started.md" >}})! 
diff --git a/docs/content/tutorial/connect-cluster.md b/docs/content/tutorial/connect-cluster.md deleted file mode 100644 index 2cfb55a8f2..0000000000 --- a/docs/content/tutorial/connect-cluster.md +++ /dev/null @@ -1,149 +0,0 @@ ---- -title: "Connect to a Postgres Cluster" -draft: false -weight: 120 ---- - -Naturally, once the [PostgreSQL cluster is created]({{< relref "tutorial/create-cluster.md" >}}), you may want to connect to it. You can get the credentials of the users of the cluster using the [`pgo show user`]({{< relref "pgo-client/reference/pgo_show_user.md" >}}) command, i.e.: - -``` -pgo show user hippo -``` - -yields output similar to: - -``` -CLUSTER USERNAME PASSWORD EXPIRES STATUS ERROR -------- -------- -------------------------------- ------- ------ ----- -hippo testuser securerandomlygeneratedpassword never ok -``` - -If you need to get the password of one of the system or privileged accounts, you will need to use the `--show-system-accounts` flag, i.e.: - -``` -pgo show user hippo --show-system-accounts -``` - -``` -CLUSTER USERNAME PASSWORD EXPIRES STATUS ERROR -------- ----------- -------------------------------- ------- ------ ----- -hippo postgres B>xy}9+7wTVp)gkntf}X|H@N never ok -hippo primaryuser ^zULckQy-\KPws:2UoC+szXl never ok -hippo testuser securerandomlygeneratedpassword never ok -``` - -Let's look at three different ways we can connect to the PostgreSQL cluster. - -## Connecting via `psql` - -Let's see how we can connect to `hippo` using [`psql`](https://www.postgresql.org/docs/current/app-psql.html), the command-line tool for accessing PostgreSQL. Ensure you have [installed the `psql` client](https://www.crunchydata.com/developers/download-postgres/binaries/postgresql12). - -The PostgreSQL Operator creates a service with the same name as the cluster. See for yourself! Get a list of all of the Services available in the `pgo` namespace: - -``` -kubectl -n pgo get svc - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -hippo ClusterIP 10.96.218.63 2022/TCP,5432/TCP 59m -hippo-backrest-shared-repo ClusterIP 10.96.75.175 2022/TCP 59m -postgres-operator ClusterIP 10.96.121.246 8443/TCP,4171/TCP,4150/TCP 71m -``` - -Let's connect the `hippo` cluster. First, in a different console window, set up a port forward to the `hippo` service: - -``` -kubectl -n pgo port-forward svc/hippo 5432:5432 -``` - -You can connect to the database with the following command, substituting `datalake` for your actual password: - -``` -PGPASSWORD=datalake psql -h localhost -p 5432 -U testuser hippo -``` - -You should then be greeted with the PostgreSQL prompt: - -``` -psql ({{< param postgresVersion >}}) -Type "help" for help. - -hippo=> -``` - -## Connecting via [pgAdmin 4]({{< relref "architecture/pgadmin4.md" >}}) - -[pgAdmin 4]({{< relref "architecture/pgadmin4.md" >}}) is a graphical tool that can be used to manage and query a PostgreSQL database from a web browser. The PostgreSQL Operator provides a convenient integration with pgAdmin 4 for managing how users can log into the database. - -To add pgAdmin 4 to `hippo`, you can execute the following command: - -``` -pgo create pgadmin -n pgo hippo -``` - -It will take a few moments to create the pgAdmin 4 instance. The PostgreSQL Operator also creates a pgAdmin 4 service. See for yourself! 
Get a list of all of the Services available in the `pgo` namespace: - -``` -kubectl -n pgo get svc - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -hippo ClusterIP 10.96.218.63 2022/TCP,5432/TCP 59m -hippo-backrest-shared-repo ClusterIP 10.96.75.175 2022/TCP 59m -hippo-pgadmin ClusterIP 10.96.165.27 5050/TCP 5m1s -postgres-operator ClusterIP 10.96.121.246 8443/TCP,4171/TCP,4150/TCP 71m -``` - -Let's connect to our `hippo` cluster via pgAdmin 4! In a different terminal, set up a port forward to pgAdmin 4: - -``` -kubectl -n pgo port-forward svc/hippo-pgadmin 5050:5050 -``` - -Navigate your browser to http://localhost:5050 and use your database username (`testuser`) and password (e.g. `datalake`) to log in. Though the prompt says “email address”, using your PostgreSQL username will work: - -![pgAdmin 4 Login Page](/images/pgadmin4-login2.png) - -(There are occasions where the initial credentials do not properly get set in pgAdmin 4. If you have trouble logging in, try running the command `pgo update user -n pgo hippo --username=testuser --password=datalake`). - -Once logged into pgAdmin 4, you will be automatically connected to your database. Explore pgAdmin 4 and run some queries! - -## Connecting from a Kubernetes Application - -### Within a Kubernetes Cluster - -Connecting a Kubernetes application that is within the same cluster that your PostgreSQL cluster is deployed in is as simple as understanding the default [Kubernetes DNS system](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#what-things-get-dns-names). A cluster created by the PostgreSQL Operator automatically creates a Service of the same name (e.g. `hippo`). - -Following the example we've created, the hostname for our PostgreSQL cluster is `hippo.pgo` (or `hippo.pgo.svc.cluster.local`). To get your exact [DNS resolution rules](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/), you may need to consult with your Kubernetes administrator. - -Knowing this, we can construct a [Postgres URI](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING) that contains all of the connection info: - -`postgres://testuser:securerandomlygeneratedpassword@hippo.jkatz.svc.cluster.local:5432/hippo` - -which breaks down as such: - -- `postgres`: the scheme, i.e. a Postgres URI -- `testuser`: the name of the PostgreSQL user -- `securerandomlygeneratedpassword`: the password for `testuser` -- `hippo.jkatz.svc.cluster.local`: the hostname -- `5432`: the port -- `hippo`: the database you want to connect to - -If your application or connection driver cannot use the Postgres URI, the above should allow for you to break down the connection string into its appropriate components. - -### Outside a Kubernetes Cluster - -To connect to a database from an application that is outside a Kubernetes cluster, you will need to set one of the following: - -- A Service type of [`LoadBalancer`](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) or [`NodePort`](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport) -- An [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/). The PostgreSQL Operator does not provide any management for Ingress types. 
- -To have the PostgreSQL Operator create a Service that is of type `LoadBalancer` or `NodePort`, you can use the `--service-type` flag as part of creating a PostgreSQL cluster, e.g.: - -``` -pgo create cluster hippo --service-type=LoadBalancer -``` - -You can also set the `ServiceType` attribute of the [PostgreSQL Operator configuration]({{< relref "configuration/pgo-yaml-configuration.md" >}}) to provide a default Service type for all PostgreSQL clusters that are created. - -## Next Steps - -We've created a cluster and we've connected to it! Now, let's [learn what customizations we can make as part of the cluster creation process]({{< relref "tutorial/customize-cluster.md" >}}). diff --git a/docs/content/tutorial/create-cluster.md b/docs/content/tutorial/create-cluster.md deleted file mode 100644 index eeb798faf5..0000000000 --- a/docs/content/tutorial/create-cluster.md +++ /dev/null @@ -1,125 +0,0 @@ ---- -title: "Create a Postgres Cluster" -draft: false -weight: 110 ---- - -If you came here through the [quickstart]({{< relref "quickstart/_index.md" >}}), you may have already [created a cluster]({{< relref "quickstart/_index.md" >}}#create-a-postgresql-cluster), in which case, feel free to skip ahead, or read onward for a more in depth look into cluster creation! - -## Create a PostgreSQL Cluster - -Creating a cluster is simple with the [`pgo create cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}) command: - -``` -pgo create cluster hippo -``` - -with output similar to: - -``` -created cluster: hippo -workflow id: 25c870a0-5d27-42c2-be00-92f0ba8768e7 -database name: hippo -users: - username: testuser password: securerandomlygeneratedpassword -``` - -This creates a new PostgreSQL cluster named `hippo` with a database in it named `hippo`. This operation may take a few moments to complete. Note the name of the database user (`testuser`) and password (`securerandomlygeneratedpassword`) for when we connect to the PostgreSQL cluster. - -To make it easier to copy and paste statements used throughout this tutorial, you can set the password of `testuser` as part of creating the PostgreSQL cluster: - -``` -pgo create cluster hippo --password=securerandomlygeneratedpassword -``` - -You can check on the status of the cluster creation using the [`pgo test`]({{< relref "pgo-client/reference/pgo_test.md" >}}) command. The `pgo test` command checks to see if the Kubernetes Services and the Pods that comprise the PostgreSQL cluster are available to receive connections. This includes: - -- Testing that the Kubernetes Endpoints are available and able to route requests to healthy Pods. -- Testing that each PostgreSQL instance is available and ready to accept client connections by performing a connectivity check similar to the one performed by [`pg_isready`](https://www.postgresql.org/docs/current/app-pg-isready.html). - -For example, when the `hippo` cluster is ready, - -``` -pgo test hippo -``` - -will yield output similar to: - -``` -cluster : hippo - Services - primary (10.96.179.126:5432): UP - Instances - primary (hippo-57675d4f8f-wwx64): UP -``` - - -### The Create Cluster Process - -So what just happened? Let's break down what occurs during the create cluster process. - -1. First, `pgo` client creates an entry in the PostgreSQL Operator [pgcluster custom resource definition]({{< relref "custom-resources/_index.md" >}}) with the attributes desired to create the cluster. 
In the case above, this fills in the name of the cluster (`hippo`) and leverages a lot of defaults from the [PostgreSQL Operator configuration]({{< relref "configuration/pgo-yaml-configuration.md" >}}). We'll discuss more about the PostgreSQL Operator configuration later in the tutorial. - -2. Once the custom resource is added, the PostgreSQL Operator begins provisioning the PostgreSQL instance and a pgBackRest repository, which is used to store backups. The following actions occur as part of this process: - - - Creating [persistent volume claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) (PVCs) for the PostgreSQL instance and the pgBackRest repository. - - Creating [services](https://kubernetes.io/docs/concepts/services-networking/service/) that provide a stable network interface for connecting to the PostgreSQL instance and pgBackRest repository. - - Creating [deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) that house each PostgreSQL instance and pgBackRest repository. Each of these is responsible for one Pod. - - The PostgreSQL Pod, when it is started, provisions a PostgreSQL database and performs other bootstrapping functions, such as creating `testuser`. - - The pgBackRest Pod, when it is started, initializes a pgBackRest repository. Note that the pgBackRest repository is not yet ready to start taking backups, but will be after the next step! - -3. When the PostgreSQL Operator detects that the PostgreSQL and pgBackRest deployments are up and running, it creates a Kubernetes Job to create a pgBackRest stanza. This is necessary as part of initializing the pgBackRest repository to accept backups from our PostgreSQL cluster. - -4. When the PostgreSQL Operator detects that the stanza creation is completed, it will take an initial backup of the cluster. - -In order for a PostgreSQL cluster to be considered successfully created, all of these steps need to succeed. You can connect to the PostgreSQL cluster after step two completes, but note that for the cluster to be considered "healthy", you need pgBackRest to finish initializing. - -You may ask yourself, "wait, why do I need the pgBackRest repository to be initialized for a cluster to be successfully created?" That is a good question! The reason is that pgBackRest plays a fundamental role in both the [disaster recovery]({{< relref "architecture/disaster-recovery.md" >}}) AND [high availability]({{< relref "architecture/high-availability/_index.md" >}}) system with the PostgreSQL Operator, particularly around self-healing. - -### What Is Created? - -There are several Kubernetes objects that are created as part of the `pgo create cluster` command, including: - -- A Deployment representing the primary PostgreSQL instance - - A PVC that persists the data of this instance - - A Service that can connect to this instance -- A Deployment representing the pgBackRest repository - - A PVC that persists the data of this repository - - A Service that can connect to this repository -- [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) representing the following three user accounts: - - `postgres`: the database superuser for the PostgreSQL cluster. This is in a secret called `hippo-postgres-secret`. - - `primaryuser`: the replication user. This is used for copying data between PostgreSQL instances. You should not need to log in as this user. This is in a secret called `hippo-primaryuser-secret`. - - `testuser`: the regular user account.
This user has access to log into the `hippo` database that is created. This is the account you want to give out to your user / application. In a later section, we will see how we can change the default user that is created. This is in a secret called `hippo-testuser-secret`, where `testuser` can be substituted for the name of the user account. -- [ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/), including: - - `hippo-pgha-config`, which allows you to [customize the configuration of your PostgreSQL cluster]({{< relref "advanced/custom-configuration.md">}}). We will cover more about this topic in later sections. - - `hippo-config` and `hippo-leader`, which are used by the high availability system. You should not modify these ConfigMaps. - -Each deployment contains a single Pod. **Do not scale the deployments!**: further into the tutorial, we will cover some commands that let you scale your PostgreSQL cluster. - -Some Job artifacts may be left around after the cluster creation process completes, including the stanza creation job (`hippo-stanza-create`) and initial backup job (`backrest-backup-hippo`). If the jobs completed successfully, you can safely delete these objects. - -## Create a PostgreSQL Cluster With Monitoring - -The [PostgreSQL Operator Monitoring]({{< relref "architecture/monitoring.md" >}}) stack provides a convenient way to gain insights into the availability and performance of your PostgreSQL clusters. In order to collect metrics from your PostgreSQL clusters, you have to enable the `crunchy-postgres-exporter` sidecar alongside your PostgreSQL cluster. You can do this with the `--metrics` flag on [`pgo create cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}): - -``` -pgo create cluster hippo --metrics -``` - -Note that the `--metrics` flag just enables a sidecar that can be scraped. You will need to install the [monitoring stack]({{< relref "installation/metrics/_index.md" >}}) separately, or tie it into your existing monitoring infrastructure. - -## Troubleshooting - -### PostgreSQL / pgBackRest Pods Stuck in `Pending` Phase - -The most common occurrence of this is due to PVCs not being bound. Ensure that you have configured your [storage options]({{< relref "installation/configuration.md" >}}#storage-settings) correctly for your Kubernetes environment, if for some reason you cannot use your default storage class or it is unavailable. - -Also ensure that you have enough persistent volumes available: your Kubernetes administrator may need to provision more. - -### `stanza-create` Job Never Finishes - -The most common occurrence of this is due to the Kubernetes network blocking SSH connections between Pods. Ensure that your Kubernetes networking layer allows for SSH connections over port 2022 in the Namespace that you are deploying your PostgreSQL clusters into. - -## Next Steps - -Once your cluster is created, the next step is to [connect to your PostgreSQL cluster]({{< relref "tutorial/connect-cluster.md" >}}). You can also [learn how to customize your PostgreSQL cluster]({{< relref "tutorial/customize-cluster.md" >}})!
diff --git a/docs/content/tutorial/customize-cluster.md b/docs/content/tutorial/customize-cluster.md deleted file mode 100644 index e9be31c268..0000000000 --- a/docs/content/tutorial/customize-cluster.md +++ /dev/null @@ -1,191 +0,0 @@ ---- -title: "Customize a Postgres Cluster" -draft: false -weight: 130 ---- - -The PostgreSQL Operator makes it very easy and quick to [create a cluster]({{< relref "tutorial/create-cluster.md" >}}), but there are possibly more customizations you want to make to your cluster. These include: - -- Resource allocations (e.g. Memory, CPU, PVC size) -- Sidecars (e.g. [Monitoring]({{< relref "architecture/monitoring.md" >}}), pgBouncer, [pgAdmin 4]({{< relref "architecture/pgadmin4.md" >}})) -- High Availability (e.g. adding replicas) -- Specifying specific PostgreSQL images (e.g. one with PostGIS) -- Specifying a [Pod anti-affinity and Node affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) -- Enable and/or require TLS for all connections -- [Custom PostgreSQL configurations]({{< relref "advanced/custom-configuration.md" >}}) - -and more. - -There are an abundance of ways to customize your PostgreSQL clusters with the PostgreSQL Operator. You can read about all of these options in the [`pgo create cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}) reference. - -The goal of this section is to present a few of the common actions that can be taken to help create the PostgreSQL cluster of your choice. Later sections of the tutorial will cover other topics, such as creating a cluster with TLS or tablespaces. - -## Create a PostgreSQL Cluster With Monitoring - -The [PostgreSQL Operator Monitoring]({{< relref "architecture/monitoring.md" >}}) stack provides a convenient way to gain insights into the availabilty and performance of your PostgreSQL clusters. In order to collect metrics from your PostgreSQL clusters, you have to enable the `crunchy-postgres-exporter` sidecar alongside your PostgreSQL cluster. You can do this with the `--metrics` flag on [`pgo create cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}): - -``` -pgo create cluster hippo --metrics -``` - -Note that the `--metrics` flag just enables a sidecar that can be scraped. You will need to install the [monitoring stack]({{< relref "installation/metrics/_index.md" >}}) separately, or tie it into your existing monitoring infrastructure. - -## Customize PVC Size - -Databases come in all different sizes, and those sizes can certainly change over time. As such, it is helpful to be able to specify what size PVC you want to store your PostgreSQL data. - -### Customize PVC Size for PostgreSQL - -The PostgreSQL Operator lets you choose the size of your "PostgreSQL data directory" (aka "PGDATA" directory) using the `--pvc-size` flag. The PVC size should be selected using standard [Kubernetes resource units](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes), e.g. `20Gi`. - -For example, to create a PostgreSQL cluster that has a data directory that is `20Gi` in size: - -``` -pgo create cluster hippo --pvc-size=20Gi -``` - -### Customize PVC Size for pgBackRest - -You can also specify the PVC size for the [pgBackRest repository]({{< relref "architecture/disaster-recovery.md" >}}) with the `--pgbackrest-pvc-size`. [pgBackRest](https://pgbackrest.org/) is used to store all of your backups, so you want to size it so that you can meet your backup retention policy. 
- -For example, to create a pgBackRest repository that has a PVC that is `100Gi` in size: - -``` -pgo create cluster hippo --pgbackrest-pvc-size=100Gi -``` - -## Customize CPU / Memory - -Databases have different CPU and memory requirements, which is often dictated by the amount of data in your working set (i.e. actively accessed data). Kubernetes provides several ways for Pods to [manage CPU and memory resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/): - -- CPU & Memory Requests -- CPU & Memory Limits - -A CPU or Memory Request tells Kubernetes to ensure that there is _at least_ that amount of resource available on the Node to schedule a Pod to. - -A CPU Limit tells Kubernetes not to let a Pod use more than that maximum amount of CPU. Similarly, a Memory Limit tells Kubernetes not to let a Pod exceed a certain amount of Memory. In this case, if Kubernetes detects that a Pod has exceeded a Memory Limit, it will try to terminate any processes that are causing the limit to be exceeded. We mention this because, prior to cgroups v2, Memory limits can potentially affect PostgreSQL availability, so we advise using them carefully. - -The sections below describe how you can customize the CPU and memory resources that are made available to the core deployment Pods of your PostgreSQL cluster. Customizing CPU and memory does add more resources to your PostgreSQL cluster, but to fully take advantage of additional resources, you will need to [customize your PostgreSQL configuration](#customize-postgresql-configuration) and tune parameters such as `shared_buffers` and others. - -### Customize CPU / Memory for PostgreSQL - -The PostgreSQL Operator provides several flags for [`pgo create cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}) to help manage resources for a PostgreSQL instance: - -- `--cpu`: Specify the CPU Request for a PostgreSQL instance -- `--cpu-limit`: Specify the CPU Limit for a PostgreSQL instance -- `--memory`: Specify the Memory Request for a PostgreSQL instance -- `--memory-limit`: Specify the Memory Limit for a PostgreSQL instance - -For example, to create a PostgreSQL cluster that makes a CPU Request of 2.0 with a CPU Limit of 4.0 and a Memory Request of 4Gi with a Memory Limit of 6Gi: - -``` -pgo create cluster hippo \ - --cpu=2.0 --cpu-limit=4.0 \ - --memory=4Gi --memory-limit=6Gi -``` - -### Customize CPU / Memory for Crunchy PostgreSQL Exporter Sidecar - -If you deploy your [PostgreSQL cluster with monitoring](#create-a-postgresql-cluster-with-monitoring), you may want to adjust the resources of the `crunchy-postgres-exporter` sidecar that runs next to each PostgreSQL instance.
You can do this with the following flags: - -- `--exporter-cpu`: Specify the CPU Request for a `crunchy-postgres-exporter` sidecar -- `--exporter-cpu-limit`: Specify the CPU Limit for a `crunchy-postgres-exporter` sidecar -- `--exporter-memory`: Specify the Memory Request for a `crunchy-postgres-exporter` sidecar -- `--exporter-memory-limit`: Specify the Memory Limit for a `crunchy-postgres-exporter` sidecar - -For example, to create a PostgreSQL cluster with a metrics sidecar with custom CPU and memory requests + limits, you could do the following: - -``` -pgo create cluster hippo --metrics \ - --exporter-cpu=0.5 --exporter-cpu-limit=1.0 \ - --exporter-memory=256Mi --exporter-memory-limit=1Gi -``` - -### Customize CPU / Memory for pgBackRest - -You can also customize the CPU and memory requests and limits for pgBackRest with the following flags: - -- `--pgbackrest-cpu`: Specify the CPU Request for pgBackRest -- `--pgbackrest-cpu-limit`: Specify the CPU Limit for pgBackRest -- `--pgbackrest-memory`: Specify the Memory Request for pgBackRest -- `--pgbackrest-memory-limit`: Specify the Memory Limit for pgBackRest - -For example, to create a PostgreSQL cluster with custom CPU and memory requests + limits for pgBackRest, you could do the following: - -``` -pgo create cluster hippo \ - --pgbackrest-cpu=0.5 --pgbackrest-cpu-limit=1.0 \ - --pgbackrest-memory=256Mi --pgbackrest-memory-limit=1Gi -``` - -## Create a High Availability PostgreSQL Cluster - -[High availability]({{< relref "architecture/high-availability/_index.md" >}}) allows you to deploy PostgreSQL clusters with redundancy that allows them to be accessible by your applications even if there is a downtime event to your primary instance. The PostgreSQL clusters use the distributed consensus storage system that comes with Kubernetes so that availability is tied to that of your Kubenretes clusters. For an in-depth discussion of the topic, please read the [high availability]({{< relref "architecture/high-availability/_index.md" >}}) section of the documentation. - -To create a high availability PostgreSQL cluster with one replica, you can run the following command: - -``` -pgo create cluster hippo --replica-count=1 -``` - -You can scale up and down your PostgreSQL cluster with the [`pgo scale`]({{< relref "pgo-client/reference/pgo_scale.md" >}}) and [`pgo scaledown`]({{< relref "pgo-client/reference/pgo_scaledown.md" >}}) commands. - -## Customize PostgreSQL Configuration - -PostgreSQL provides a lot of different knobs that can be used to fine tune the [configuration](https://www.postgresql.org/docs/current/runtime-config.html) for your workload. While you can [customize your PostgreSQL configuration]({{< relref "advanced/custom-configuration.md" >}}) after your cluster has been deployed, you may also want to load in your custom configuration during initialization. - -The PostgreSQL Operator uses [Patroni](https://patroni.readthedocs.io/) to help manage cluster initialization and high availability. To understand how to build out a configuration file to be used to customize your PostgreSQL cluster, please review the [Patroni documentation](https://patroni.readthedocs.io/en/latest/SETTINGS.html). - -For example, let's say we want to create a PostgreSQL cluster with `shared_buffers` set to `2GB`, `max_connections` set to `30` and `password_encryption` set to `scram-sha-256`. 
We would create a configuration file that looks similar to: - -``` ---- -bootstrap: - dcs: - postgresql: - parameters: - max_connections: 30 - shared_buffers: 2GB - password_encryption: scram-sha-256 -``` - -Save this configuration in a file called `postgres-ha.yaml`. - -Next, create a [`ConfigMap`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) called `hippo-custom-config` like so: - -``` -kubectl -n pgo create configmap hippo-custom-config --from-file=postgres-ha.yaml -``` - -You can then have you new PostgreSQL cluster use `hippo-custom-config` as part of its cluster initialization by using the `--custom-config` flag of `pgo create cluster`: - -``` -pgo create cluster hippo --custom-config=hippo-custom-config -``` - -After your cluster is initialized, [connect to your cluster]({{< relref "tutorial/connect-cluster.md" >}}) and confirm that your settings have been applied: - -``` -SHOW shared_buffers; - - shared_buffers ----------------- - 2GB -``` - -## Troubleshooting - -### PostgreSQL Pod Can't Be Scheduled - -There are many reasons why a PostgreSQL Pod may not be scheduled: - -- **Resources are unavailable**. Ensure that you have a Kubernetes [Node](https://kubernetes.io/docs/concepts/architecture/nodes/) with enough resources to satisfy your memory or CPU Request. -- **PVC cannot be provisioned**. Ensure that you request a PVC size that is available, or that your PVC storage class is set up correctly. -- **Node affinity rules cannot be satisfied**. If you assigned a node label, ensure that the Nodes with that label are available for scheduling. If they are, ensure that there are enough resources available. -- **Pod anti-affinity rules cannot be satisfied**. This most likely happens when [pod anti-affinity]({{< relref "architecture/high-availability/_index.md" >}}#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity) is set to `required` and there are not enough Nodes available for scheduling. Consider adding more Nodes or relaxing your anti-affinity rules. - -## Next Steps - -As mentioned at the beginning, there are a lot more customizations that you can make to your PostgreSQL cluster, and we will cover those as the tutorial progresses! This section was to get you familiar with some of the most common customizations, and to explore how many options `pgo create cluster` has! - -Now you have your PostgreSQL cluster up and running and using the resources as you see fit. What if you want to make changes to the cluster? We'll explore some of the commands that can be used to update your PostgreSQL cluster! diff --git a/docs/content/tutorial/delete-cluster.md b/docs/content/tutorial/delete-cluster.md deleted file mode 100644 index 4b35fa882c..0000000000 --- a/docs/content/tutorial/delete-cluster.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -title: "Delete a Postgres Cluster" -draft: false -weight: 150 ---- - -There are many reasons you may want to delete a PostgreSQL cluster, and a few different questions to consider, such as do you want to permanently delete the data or save it for later use? - -The PostgreSQL Operator offers several different workflows for deleting a cluster, from wiping all assets, to keeping PVCs of your data directory, your backup repository, or both. - -## Delete Everything - -Deleting everything in a PostgreSQL cluster is a simple as using the [`pgo delete cluster`]({{< relref "pgo-client/reference/pgo_delete_cluster.md" >}}) command. 
For example, to delete the `hippo` cluster: - -``` -pgo delete cluster hippo -``` - -This command launches a [Job](https://kubernetes.io/docs/concepts/workloads/controllers/job/) that uses the `pgo-rmdata` container to delete all of the Kubernetes objects associated with this PostgreSQL cluster. Once the `pgo-rmdata` Job finishes executing, all of your data, configurations, etc. will be removed. - -## Keep Backups - -If you want to keep your backups, which can be used to [restore your PostgreSQL cluster at a later time]({{< relref "architecture/disaster-recovery.md">}}#restore-to-a-new-cluster) (a popular method for cloning and having sample data for your development team to use!), use the `--keep-backups` flag! For example, to delete the `hippo` PostgreSQL cluster but keep all of its backups: - -``` -pgo delete cluster hippo --keep-backups -``` - -This keeps the pgBackRest PVC, which follows the pattern `<clusterName>-pgbr-repo` (e.g. `hippo-pgbr-repo`), and any PVCs that were created using the `pgdump` method of [`pgo backup`]({{< relref "pgo-client/reference/pgo_backup.md">}}). - -## Keep the PostgreSQL Data Directory - -You may also want to keep your PostgreSQL data directory, which is the core of your database, while removing any actively running Pods. This can be accomplished with the `--keep-data` flag. For example, to keep the data directory of the `hippo` cluster: - -``` -pgo delete cluster hippo --keep-data -``` - -Once the `pgo-rmdata` Job completes, your data PVC for `hippo` will still remain, but you will be unable to access it unless you attach it to a new PostgreSQL instance. The easiest way to access your data again is to create a PostgreSQL cluster with the same name: - -``` -pgo create cluster hippo -``` - -and the PostgreSQL Operator will re-attach your PVC to the newly running cluster. - -## Next Steps - -We've covered the fundamental lifecycle elements of the PostgreSQL Operator, but there is much more to learn! If you're curious about how things work in the PostgreSQL Operator and how to perform daily tasks, we suggest you continue with the following sections: - -- [Architecture]({{< relref "architecture/_index.md" >}}) -- [Common `pgo` Client Tasks]({{< relref "pgo-client/common-tasks.md" >}}) - -The tutorial will now go into some more advanced topics. Up next, learn how to [secure connections to your PostgreSQL clusters with TLS]({{< relref "tutorial/tls.md" >}}). diff --git a/docs/content/tutorial/disaster-recovery.md b/docs/content/tutorial/disaster-recovery.md deleted file mode 100644 index ca05361674..0000000000 --- a/docs/content/tutorial/disaster-recovery.md +++ /dev/null @@ -1,188 +0,0 @@ ---- -title: "Disaster Recovery" -draft: false -weight: 190 ---- - -When using the PostgreSQL Operator, the answer to the question "do you take backups of your database" is automatically "yes!" - -The PostgreSQL Operator leverages a pgBackRest repository to facilitate the use of pgBackRest features in a PostgreSQL cluster. When a new PostgreSQL cluster is created, it simultaneously creates a pgBackRest repository as described in the [creating a PostgreSQL cluster]({{< relref "tutorial/create-cluster.md" >}}) section. - -For more information on how disaster recovery in the PostgreSQL Operator works, please see the [disaster recovery architecture]({{< relref "architecture/disaster-recovery.md">}}) section.
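If you want to confirm that this repository exists for the `hippo` cluster, you can look for its Service, which appears as `hippo-backrest-shared-repo` in the Service listings shown elsewhere in this tutorial. This is a read-only check using the tutorial's `pgo` namespace:

```
kubectl -n pgo get svc hippo-backrest-shared-repo
```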
- -## Creating a Backup - -The PostgreSQL Operator uses the open source [pgBackRest](https://www.pgbackrest.org) backup and recovery utility for managing backups and PostgreSQL archives. pgBackRest has several types of backups that you can take: - -- Full: Back up the entire database -- Differential: Create a backup of everything since the last full backup was taken -- Incremental: Back up everything since the last backup was taken, whether it was full, differential, or incremental - -When a new PostgreSQL cluster is provisioned by the PostgreSQL Operator, a full pgBackRest backup is taken by default. - -To create a backup, you can run the following command: - -``` -pgo backup hippo -``` - -which, by default, will create an incremental pgBackRest backup. The reason for this is that the PostgreSQL Operator initially creates a pgBackRest full backup when the cluster is initially provisioned, and pgBackRest will take incremental backups for each subsequent backup until a different backup type is specified. - -Most [pgBackRest options](https://pgbackrest.org/command.html#command-backup) are supported and can be passed in by the PostgreSQL Operator via the `--backup-opts` flag. - -### Creating a Full Backup - -You can create a full backup using the following command: - -``` -pgo backup hippo --backup-opts="--type=full" -``` - -### Creating a Differential Backup - -You can create a differential backup using the following command: - -``` -pgo backup hippo --backup-opts="--type=diff" -``` - -### Creating an Incremental Backup - -You can create an incremental backup using the following command: - -``` -pgo backup hippo --backup-opts="--type=incr" -``` - -If you do not specify a backup type, an incremental backup is created once a full or differential backup has been taken. - -### Creating Backups in S3 - -The PostgreSQL Operator supports creating backups in S3 or any object storage system that uses the S3 protocol. For more information, please read the section on [PostgreSQL Operator Backups with S3]({{< relref "architecture/disaster-recovery.md">}}#using-s3) in the architecture section. - -## Set Backup Retention - -By default, pgBackRest will allow you to keep on creating backups until you run out of disk space. As such, it may be helpful to manage how many backups are retained. - -pgBackRest comes with several flags for managing how backups can be retained: - -- `--repo1-retention-full`: how many full backups to retain -- `--repo1-retention-diff`: how many differential backups to retain -- `--repo1-retention-archive`: how many sets of WAL archives to retain alongside the full and differential backups that are retained - -For example, to create a full backup and retain the previous 7 full backups, you would execute the following command: - -``` -pgo backup hippo --backup-opts="--type=full --repo1-retention-full=7" -``` - -pgBackRest also supports time-based retention. Please [review the pgBackRest documentation for more information](https://pgbackrest.org/command.html#command-backup). - -## Schedule Backups - -It is good practice to take backups regularly. The PostgreSQL Operator allows you to schedule backups to occur automatically. - -The PostgreSQL Operator comes with a scheduler that is essentially a [cron](https://en.wikipedia.org/wiki/Cron) server that runs the jobs it is given. Schedule commands use the cron syntax to set up scheduled tasks.
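As a refresher on how the fields of a cron schedule map to a time specification (this is standard cron syntax, not anything specific to the PostgreSQL Operator):

```
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
# │ │ │ │ │
  0 1 * * *   # every day at 1:00am
```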
- -![PostgreSQL Operator Schedule Backups](/images/postgresql-cluster-dr-schedule.png) - -For example, to schedule a full backup once a day at 1am, the following command can be used: - -``` -pgo create schedule hippo --schedule="0 1 * * *" \ - --schedule-type=pgbackrest --pgbackrest-backup-type=full -``` - -To schedule an incremental backup once every 3 hours: - -``` -pgo create schedule hippo --schedule="0 */3 * * *" \ - --schedule-type=pgbackrest --pgbackrest-backup-type=incr -``` - -You can also add the backup retention settings to these commands. - -## View Backups - -You can view all of the available backups in your pgBackRest repository with the `pgo show backup` command: - -``` -pgo show backup hippo -``` - -## Restores - -The PostgreSQL Operator supports the ability to perform a full restore on a PostgreSQL cluster (i.e. a "clone" or "copy") as well as a point-in-time-recovery. There are two types of ways to restore a cluster: - -- Restore to a new cluster using the `--restore-from` flag in the [`pgo create cluster`]({{< relref "/pgo-client/reference/pgo_create_cluster.md" >}}) command. This is effectively a [clone](#clone-a-postgresql-cluster) or a copy. -- Restore in-place using the [`pgo restore`]({{< relref "/pgo-client/reference/pgo_restore.md" >}}) command. Note that this is **destructive**. - -It is typically better to perform a restore to a new cluster, particularly when performing a point-in-time-recovery, as it can allow you to more effectively manage your downtime and avoid making undesired changes to your production data. - -Additionally, the "restore to a new cluster" technique works so long as you have a pgBackRest repository available: the pgBackRest repository does not need to be attached to an active cluster! For example, if a cluster named `hippo` was deleted as such: - -``` -pgo delete cluster hippo --keep-backups -``` - -you can create a new cluster from the backups like so: - -``` -pgo create cluster datalake --restore-from=hippo -``` - -Below provides guidance on how to perform a restore to a new PostgreSQL cluster both as a full copy and to a specific point in time. Additionally, it also shows how to restore in place to a specific point in time. - -### Restore to a New Cluster (aka "copy" or "clone") - -Restoring to a new PostgreSQL cluster allows one to take a backup and create a new PostgreSQL cluster that can run alongside an existing PostgreSQL cluster. There are several scenarios where using this technique is helpful: - -- Creating a copy of a PostgreSQL cluster that can be used for other purposes. Another way of putting this is "creating a clone." -- Restore to a point-in-time and inspect the state of the data without affecting the current cluster - -and more. - -#### Restore Everything - -To create a new PostgreSQL cluster from a backup and restore it fully, you can -execute the following command: - -``` -pgo create cluster datalake --restore-from=hippo -``` - -#### Partial Restore / Point-in-time-Recovery (PITR) - -To create a new PostgreSQL cluster and restore it to specific point-in-time (e.g. before a key table was dropped), you can use the following command, substituting the time that you wish to restore to: - -``` -pgo create cluster datalake \ - --restore-from hippo \ - --restore-opts "--type=time --target='2019-12-31 11:59:59.999999+00'" -``` - -When the restore is complete, the cluster is immediately available for reads and writes. 
To inspect the data before allowing connections, add pgBackRest's `--target-action=pause` option to the `--restore-opts` parameter. - -The PostgreSQL Operator supports the full set of pgBackRest restore options, which can be passed into the `--backup-opts` parameter. For more information, please review the [pgBackRest restore options](https://pgbackrest.org/command.html#command-restore) - -### Restore in-place - -Restoring a PostgreSQL cluster in-place is a **destructive** action that will perform a recovery on your existing data directory. This is accomplished using the [`pgo restore`]({{< relref "/pgo-client/reference/pgo_restore.md" >}}) -command. The most common scenario is to restore the database to a specific point in time. - -#### Point-in-time-Recovery (PITR) - -The more likely scenario when performing a PostgreSQL cluster restore is to recover to a particular point-in-time (e.g. before a key table was dropped). For example, to restore a cluster to December 31, 2019 at 11:59pm: - -``` -pgo restore hippo --pitr-target="2019-12-31 11:59:59.999999+00" \ - --backup-opts="--type=time" -``` - -When the restore is complete, the cluster is immediately available for reads and writes. To inspect the data before allowing connections, add pgBackRest's `--target-action=pause` option to the `--backup-opts` parameter. - -The PostgreSQL Operator supports the full set of pgBackRest restore options, which can be passed into the `--backup-opts` parameter. For more information, please review the [pgBackRest restore options](https://pgbackrest.org/command.html#command-restore) - -## Next Steps - -There are cases where you may want to take [logical backups]({{< relref "tutorial/pgdump.md" >}}), aka `pg_dump` / `pg_dumpall`. Let's learn how to do that with the PostgreSQL Operator! diff --git a/docs/content/tutorial/getting-started.md b/docs/content/tutorial/getting-started.md deleted file mode 100644 index 8422487ed1..0000000000 --- a/docs/content/tutorial/getting-started.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -title: "Getting Started" -draft: false -weight: 100 ---- - -## Installation - -If you have not installed the PostgreSQL Operator yet, we recommend you take a look at our [quickstart]({{< relref "quickstart/_index.md" >}}) or the [installation]({{< relref "installation/_index.md" >}}) sections. - -### Customizing an Installation - -How to customize a PostgreSQL Operator installation is a lengthy topic. The details are covered in the [installation]({{< relref "installation/postgres-operator.md" >}}) section, as well as a list of all the [configuration variables]({{< relref "installation/configuration.md" >}}) available. - -## Setup the `pgo` Client - -This tutorial will be using the [`pgo` client]({{< relref "pgo-client/_index.md" >}}) to interact with the PostgreSQL Operator. Please follow the instructions in the [quickstart]({{< relref "quickstart/_index.md" >}}) or the [installation]({{< relref "installation/pgo-client.md" >}}) sections for how to configure the `pgo` client. - -The PostgreSQL Operator and `pgo` client are designed to work in a [multi-namespace deployment environment]({{< relref "architecture/namespace.md" >}}) and many `pgo` commands require that the namespace flag (`-n`) are passed into it. You can use the `PGO_NAMESPACE` environmental variable to set which namespace a `pgo` command can use. For example: - -``` -export PGO_NAMESPACE=pgo -pgo show cluster --all -``` - -would show all of the PostgreSQL clusters deployed to the `pgo` namespace. 
This is equivalent to: - -``` -pgo show cluster -n pgo --all -``` - -(Note: `-n` takes precedence over `PGO_NAMESPACE`.) - -For convenience, we will use the `pgo` namespace created as part of the [quickstart]({{< relref "quickstart/_index.md" >}}) in this tutorial. In the shell that you will be executing the `pgo` commands in, run the following command: - -``` -export PGO_NAMESPACE=pgo -``` - -## Next Steps - -Before proceeding, please make sure that your `pgo` client setup can communicate with your PostgreSQL Operator. In a separate terminal window, set up a port forward to your PostgreSQL Operator: - -``` -kubectl port-forward -n pgo svc/postgres-operator 8443:8443 -``` - -The [`pgo version`]({{< relref "pgo-client/reference/pgo_version.md" >}}) command is a great way to check connectivity with the PostgreSQL Operator, as it is a very simple, safe operation. Try it out: - -``` -pgo version -``` - -If it is working, you should see results similar to: - -``` -pgo client version {{< param operatorVersion >}} -pgo-apiserver version {{< param operatorVersion >}} -``` - -Note that the version of the `pgo` client **must** match that of the PostgreSQL Operator. - -You can also use the `pgo version` command to check the version specifically for the `pgo` client. This command only runs locally, i.e. it does not make any requests to the PostgreSQL Operator. For example: - -``` -pgo version --client -``` - -which yields results similar to: - -``` -pgo client version {{< param operatorVersion >}} -``` - -Alright, we're now ready to start our journey with the PostgreSQL Operator! diff --git a/docs/content/tutorial/high-availability.md b/docs/content/tutorial/high-availability.md deleted file mode 100644 index a3c2a12bea..0000000000 --- a/docs/content/tutorial/high-availability.md +++ /dev/null @@ -1,101 +0,0 @@ ---- -title: "High Availability" -draft: false -weight: 180 ---- - -One of the great things about PostgreSQL is its reliability: it is very stable and typically "just works." However, there are certain things that can happen in the environment that PostgreSQL is deployed in that can affect its uptime, including: - -- The database storage disk fails or some other hardware failure occurs -- The network on which the database resides becomes unreachable -- The host operating system becomes unstable and crashes -- A key database file becomes corrupted -- A data center is lost - -There may also be downtime events that are due to the normal case of operations, such as performing a minor upgrade, security patching of operating system, hardware upgrade, or other maintenance. - -Fortunately, the Crunchy PostgreSQL Operator is prepared for this. - -![PostgreSQL Operator High-Availability Overview](/images/postgresql-ha-overview.png) - -The Crunchy PostgreSQL Operator supports a distributed-consensus based high-availability (HA) system that keeps its managed PostgreSQL clusters up and running, even if the PostgreSQL Operator disappears. Additionally, it leverages Kubernetes specific features such as [Pod Anti-Affinity](#how-the-crunchy-postgresql-operator-uses-pod-anti-affinity) to limit the surface area that could lead to a PostgreSQL cluster becoming unavailable. The PostgreSQL Operator also supports automatic healing of failed primaries and leverages the efficient pgBackRest "delta restore" method, which eliminates the need to fully reprovision a failed cluster! - -This tutorial will cover the "howtos" of high availbility. 
For more information on the topic, please review the detailed [high availability architecture]({{< relref "architecture/high-availability/_index.md" >}}) section. - -## Create a HA PostgreSQL Cluster - -High availability is enabled in the PostgreSQL Operator by default so long as you have more than one replica. To create a high availability PostgreSQL cluster, you can execute the following command: - -``` -pgo create cluster hippo --replica-count=1 -``` - -## Scale a PostgreSQL Cluster - -You can scale an existing PostgreSQL cluster to add HA to it by using the [`pgo scale`]({{< relref "pgo-client/reference/pgo_scale.md">}}) command: - -``` -pgo scale hippo -``` - -## Scale Down a PostgreSQL Cluster - -To scale down a PostgreSQL cluster, you will have to provide the target instance that you want to scale down. You can do this with the [`pgo scaledown`]({{< relref "pgo-client/reference/pgo_scaledown.md">}}) command: - -``` -pgo scaledown hippo --query -``` - -which will yield something similar to: - -``` -Cluster: hippo -REPLICA STATUS NODE REPLICATION LAG PENDING RESTART -hippo-ojnd running node01 0 MB false -``` - -Once you have determined which instance you want to scale down, you can run the following command: - -``` -pgo scaledown hippo --target=hippo-ojnd -``` - -## Manual Failover - -Each PostgreSQL cluster will manage its own availability. If you wish to manually fail over, you will need to use the [`pgo failover`]({{< relref "pgo-client/reference/pgo_failover.md">}}) command. First, determine which instance you want to fail over to: - -``` -pgo failover hippo --query -``` - -which will yield something similar to: - -``` -Cluster: hippo -REPLICA STATUS NODE REPLICATION LAG PENDING RESTART -hippo-ojnd running node01 0 MB false -``` - -Once you have determined your failover target, you can run the following command: - -``` -pgo failover hippo --target=hippo-ojnd -``` - -## Synchronous Replication - -If you have a [write sensitive workload and wish to use synchronous replication]({{< relref "architecture/high-availability/_index.md" >}}#synchronous-replication-guarding-against-transactions-loss), you can create your PostgreSQL cluster with synchronous replication turned on: - -``` -pgo create cluster hippo --sync-replication -``` - -Please understand the tradeoffs of synchronous replication before using it. - -## Pod Anti-Affinity and Node Affinity - -To learn how to use pod anti-affinity and node affinity, please refer to the [high availability architecture documentation]({{< relref "architecture/high-availability/_index.md" >}}). - -## Next Steps - -Backups, restores, point-in-time-recoveries: [disaster recovery]({{< relref "architecture/disaster-recovery.md" >}}) is a big topic! We'll learn about how you can [perform disaster recovery]({{< relref "tutorial/disaster-recovery.md" >}}) and more in the PostgreSQL Operator. diff --git a/docs/content/tutorial/pgbouncer.md b/docs/content/tutorial/pgbouncer.md deleted file mode 100644 index 89ba8ce993..0000000000 --- a/docs/content/tutorial/pgbouncer.md +++ /dev/null @@ -1,214 +0,0 @@ ---- -title: "pgBouncer" -draft: false -weight: 170 ---- - -[pgBouncer](https://www.pgbouncer.org/) is a lightweight connection pooler and state manager that provides an efficient gateway to metering connections to PostgreSQL. The PostgreSQL Operator provides an integration with pgBouncer that allows you to deploy it alongside your PostgreSQL cluster.
- -This tutorial covers how you can set up pgBouncer, functionality that the PostgreSQL Operator provides to manage it, and more. - -## Setup pgBouncer - -pgBouncer lives as an independent Deployment next to your PostgreSQL cluster but, thanks to the PostgreSQL Operator, is synchronized with various aspects of your environment. - -There are two ways you can set up pgBouncer for your cluster. You can add pgBouncer when you create your cluster, e.g.: - -``` -pgo create cluster hippo --pgbouncer -``` - -or after your PostgreSQL cluster has been provisioned with the [`pgo create pgbouncer`]({{< relref "pgo-client/reference/pgo_create_pgbouncer.md" >}}): - -``` -pgo create pgbouncer hippo -``` - -There are several managed objects that are created alongside the pgBouncer Deployment, these include: - -- The pgBouncer Deployment itself - - One or more pgBouncer Pods -- A pgBouncer ConfigMap, e.g. `hippo-pgbouncer-cm` which has two entries: - - `pgbouncer.ini`, which is the configuration for the pgBouncer instances - - `pg_hba.conf`, which controls how clients can connect to `pgBouncer` -- A pgBouncer Secret e.g. `hippo-pgbouncer-secret`, that contains the following values: - - `password`: the password for the `pgbouncer` user. The `pgbouncer` user is described in more detail further down. - - `users.txt`: the description for how the `pgbouncer` user and only the `pgbouncer` user can explicitly connect to a pgBouncer instance. - -### The `pgbouncer` user - -The `pgbouncer` user is a special type of PostgreSQL user that is solely for the administration of pgBouncer. It performs several roles, including: - -- Securely load PostgreSQL user credentials into pgBouncer so pgBouncer can perform authentication and connection forwarding -- The ability to log into `pgBouncer` itself for administration, introspection, and looking at statistics - -The pgBouncer user **is not meant to be used to log into PostgreSQL directly**: the account is given permissions for ad hoc tasks. More information on how to connect to pgBouncer is provided in the next section. - -## Connect to a Postgres Cluster Through pgBouncer - -Connecting to a PostgreSQL cluster through pgBouncer is similar to how you [connect to PostgreSQL directly]({{< relref "tutorial/connect-cluster.md">}}), but you are connecting through a different service. First, note the types of users that can connect to PostgreSQL through `pgBouncer`: - -- Any regular user that's created through [`pgo create user`]({{< relref "pgo-client/reference/pgo_create_user.md" >}}) or a user that is not a system account. -- The `postgres` superuser - -The following example will follow similar steps for how you would connect to a [Postgres Cluster via `psql`]({{< relref "tutorial/connect-cluster.md">}}#connection-via-psql), but applies to all other connection methods. - -First, get a list of Services that are available in your namespace: - -``` -kubectl -n pgo get svc -``` - -You should see a list similar to: - -``` -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -hippo ClusterIP 10.96.104.207 2022/TCP,5432/TCP 12m -hippo-backrest-shared-repo ClusterIP 10.96.134.253 2022/TCP 12m -hippo-pgbouncer ClusterIP 10.96.85.35 5432/TCP 11m -``` - -We are going to want to create a port forward to the `hippo-pgbouncer` service. 
In a separate terminal window, run the following command: - -``` -kubectl -n pgo port-forward svc/hippo-pgbouncer 5432:5432 -``` - -Recall in the [earlier part of the tutorial]({{< relref "tutorial/connect-cluster.md">}}) that we created a user called `testuser` with a password of `securerandomlygeneratedpassword`. We can then connect to PostgreSQL via pgBouncer by executing the following command: - -``` -PGPASSWORD=securerandomlygeneratedpassword psql -h localhost -p 5432 -U testuser hippo -``` - -You should then be greeted with the PostgreSQL prompt: - -``` -psql ({{< param postgresVersion >}}) -Type "help" for help. - -hippo=> -``` - -### Validation: Did this actually work? - -This looks just like how we connected to PostgreSQL before, so how do we know that we are connected to PostgreSQL via pgBouncer? Let's log into pgBouncer as the `pgbouncer` user and demonstrate this. - -In another terminal window, get the credential for the pgBouncer user. This can be done with the [`pgo show pgbouncer`]({{< relref "pgo-client/reference/pgo_show_pgbouncer.md" >}}) command: - -``` -pgo show pgbouncer hippo -``` - -which yields something that looks like: - -``` -CLUSTER SERVICE USERNAME PASSWORD CLUSTER IP EXTERNAL IP -------- --------------- --------- ------------------------ ----------- ----------- -hippo hippo-pgbouncer pgbouncer randompassword 10.96.85.35 -``` - -Copy the actual password and log into pgBouncer with the following command: - -``` -PGPASSWORD=randompassword psql -h localhost -p 5432 -U pgbouncer pgbouncer -``` - -You should see something similar to this: - -``` -psql (12.4, server 1.14.0/bouncer) -Type "help" for help. - -pgbouncer=# -``` - -In the `pgbouncer` terminal, run the following command. This will show you the overall connection statistics for pgBouncer: - -``` -SHOW stats; -``` - -Success, you have connected to pgBouncer!
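Beyond `SHOW stats;`, the pgBouncer admin console supports several other introspection commands that can be useful while you are logged in as the `pgbouncer` user. These are standard pgBouncer admin commands, shown here purely as an illustration:

```
SHOW pools;    -- per database/user pool state, including waiting clients
SHOW clients;  -- client connections currently routed through pgBouncer
SHOW servers;  -- server connections pgBouncer holds open to PostgreSQL
```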
- -## Customize CPU / Memory for pgBouncer - -### Provisioning - -The PostgreSQL Operator provides several flags for [`pgo create cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}) to help manage resources for pgBouncer: - -- `--pgbouncer-cpu`: Specify the CPU Request for pgBouncer -- `--pgbouncer-cpu-limit`: Specify the CPU Limit for pgBouncer -- `--pgbouncer-memory`: Specify the Memory Request for pgBouncer -- `--pgbouncer-memory-limit`: Specify the Memory Limit for pgBouncer - -Additional, the PostgreSQL Operator provides several flags for [`pgo create pgbouncer`]({{< relref "pgo-client/reference/pgo_create_pgbouncer.md" >}}) to help manage resources for pgBouncer: - -- `--cpu`: Specify the CPU Request for pgBouncer -- `--cpu-limit`: Specify the CPU Limit for pgBouncer -- `--memory`: Specify the Memory Request for pgBouncer -- `--memory-limit`: Specify the Memory Limit for pgBouncer - -To create a pgBouncer Deployment that makes a CPU Request of 1.0 with a CPU Limit of 2.0 and a Memory Request of 64Mi with a Memory Limit of 256Mi: - -``` -pgo create pgbouncer hippo \ - --cpu=1.0 --cpu-limit=2.0 \ - --memory=64Mi --memory-limit=256Mi -``` - -### Updating - -You can also add more memory and CPU resources to pgBouncer with the [`pgo update pgbouncer`]({{< relref "pgo-client/reference/pgo_update_pgbouncer.md" >}}) command, including: - -- `--cpu`: Specify the CPU Request for pgBouncer -- `--cpu-limit`: Specify the CPU Limit for pgBouncer -- `--memory`: Specify the Memory Request for pgBouncer -- `--memory-limit`: Specify the Memory Limit for pgBouncer - -For example, to update a pgBouncer to a CPU Request of 2.0 with a CPU Limit of 3.0 and a Memory Request of 128Mi with a Memory Limit of 512Mi: - -``` -pgo update pgbouncer hippo \ - --cpu=2.0 --cpu-limit=3.0 \ - --memory=128Mi --memory-limit=512Mi -``` - -## Scaling pgBouncer - -You can add more pgBouncer instances when provisioning pgBouncer and to an existing pgBouncer Deployment. - -### Provisioning - -To add pgBouncer instances when creating a PostgreSQL cluster, use the `--pgbouncer-replicas` flag on `pgo create cluster`. For example, to add 2 replicas: - -``` -pgo create cluster hippo --pgbouncer --pgbouncer-replicas=2 -``` - -If adding a pgBouncer to an already provisioned PostgreSQL cluster, use the `--replicas` flag on `pgo create pgbouncer`. For example, to add a pgBouncer instance with 2 replicas: - -``` -pgo create pgbouncer hippo --replicas=2 -``` - -### Updating - -To update pgBouncer instances to scale the replicas, use the `pgo update pgbouncer` command with the `--replicas` flag. This flag can scale pgBouncer up and down. For example, to run 3 pgBouncer replicas: - -``` -pgo update pgbouncer hippo --replicas=3 -``` - -## Rotate pgBouncer Password - -If you wish to rotate the pgBouncer password, you can use the `--rotate-password` flag on `pgo update pgbouncer`: - -``` -pgo update pgbouncer hippo --rotate-password -``` - -This will change the pgBouncer password and synchronize the change across all pgBouncer instances. - -## Next Steps - -Now that you have connection pooling set up, let's create a [high availability PostgreSQL cluster]({{< relref "tutorial/high-availability.md" >}})! 
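Relating to the scaling commands above, you can confirm how many pgBouncer Pods are actually running by inspecting the Deployment. The Deployment name below assumes it follows the same `hippo-pgbouncer` naming as the Service shown earlier in this tutorial:

```
kubectl -n pgo get deployment hippo-pgbouncer
kubectl -n pgo get pods | grep pgbouncer
```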
diff --git a/docs/content/tutorial/pgdump.md b/docs/content/tutorial/pgdump.md deleted file mode 100644 index deac829f9c..0000000000 --- a/docs/content/tutorial/pgdump.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: "Logical Backups (pg_dump)" -draft: false -weight: 200 ---- - -The PostgreSQL Operator supports taking logical backups with `pg_dump` and `pg_dumpall`. While they do not provide the same performance and storage optimizations as the physical backups provided by pgBackRest, logical backups are helpful when one wants to upgrade between major PostgreSQL versions, or provide only a subset of a database, such as a table. - -### Create a Logical Backup - -To create a logical backup of the `postgres` database, you can run the following command: - -``` -pgo backup hippo --backup-type=pgdump -``` - -To create a logical backup of a specific database, you can use the `--database` flag, as in the following command: - -``` -pgo backup hippo --backup-type=pgdump --database=hippo -``` - -You can pass in specific options to `--backup-opts`, which can accept most of the options that the [`pg_dump`](https://www.postgresql.org/docs/current/app-pgdump.html) command accepts. For example, to only dump the data from a specific table called `users`: - -``` -pgo backup hippo --backup-type=pgdump --backup-opts="-t users" -``` - -To use `pg_dumpall` to create a logical backup of all the data in a PostgreSQL cluster, you must pass the `--dump-all` flag in `--backup-opts`, i.e.: - -``` -pgo backup hippo --backup-type=pgdump --backup-opts="--dump-all" -``` - -### Viewing Logical Backups - -To view an available list of logical backups, you can use the `pgo show backup` -command with the `--backup-type=pgdump` flag: - -``` -pgo show backup --backup-type=pgdump hippo -``` - -This provides information about the PVC that the logical backups are stored on as well as the timestamps required to perform a restore from a logical backup. - -### Restore from a Logical Backup - -To restore from a logical backup, you need to reference the PVC that the logical backup is stored to, as well as the timestamp that was created by the logical backup. - -You can get the timestamp from the `pgo show backup --backup-type=pgdump` command. - -You can restore a logical backup using the following command: - -``` -pgo restore hippo --backup-type=pgdump --backup-pvc=hippo-pgdump-pvc \ - --pitr-target="2019-01-15-00-03-25" -n pgouser1 -``` - -To restore to a specific database, add the `--pgdump-database` flag to the command from above: - -``` -pgo restore hippo --backup-type=pgdump --backup-pvc=hippo-pgdump-pvc \ - --pgdump-database=mydb --pitr-target="2019-01-15-00-03-25" -n pgouser1 -``` diff --git a/docs/content/tutorial/tls.md b/docs/content/tutorial/tls.md deleted file mode 100644 index 72ccb07b5f..0000000000 --- a/docs/content/tutorial/tls.md +++ /dev/null @@ -1,136 +0,0 @@ ---- -title: "Setup TLS" -draft: false -weight: 160 ---- - -TLS allows secure TCP connections to PostgreSQL, and the PostgreSQL Operator makes it easy to enable this PostgreSQL feature. The TLS support in the PostgreSQL Operator does not make an opinion about your PKI, but rather loads in your TLS key pair that you wish to use for the PostgreSQL server as well as its corresponding certificate authority (CA) certificate. Both of these Secrets are -required to enable TLS support for your PostgreSQL cluster when using the PostgreSQL Operator, but it in turn allows seamless TLS support. 
- -## Prerequisites - -There are three items that are required to enable TLS in your PostgreSQL clusters: - -- A CA certificate -- A TLS private key -- A TLS certificate - -There are a variety of methods available to generate these items: in fact, Kubernetes comes with its own [certificate management system](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/)! It is up to you to decide how you want to manage this for your cluster. The PostgreSQL documentation also provides an example for how to [generate a TLS certificate](https://www.postgresql.org/docs/current/ssl-tcp.html#SSL-CERTIFICATE-CREATION) as well. - -To set up TLS for your PostgreSQL cluster, you have to create two [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/): one that contains the CA certificate, and the other that contains the server TLS key pair. - -First, create the Secret that contains your CA certificate. Create the Secret as a generic Secret, and note that the following requirements **must** be met: - -- The Secret must be created in the same Namespace as where you are deploying your PostgreSQL cluster -- The `name` of the key that is holding the CA **must** be `ca.crt` - -There are optional settings for setting up the CA secret: - -- You can pass in a certificate revocation list (CRL) for the CA secret by passing in the CRL using the `ca.crl` key name in the Secret. - -For example, to create a CA Secret with the trusted CA to use for the PostgreSQL clusters, you could execute the following command: - -``` -kubectl create secret generic postgresql-ca -n pgo --from-file=ca.crt=/path/to/ca.crt -``` - -To create a CA Secret that includes a CRL, you could execute the following command: - -``` -kubectl create secret generic postgresql-ca -n pgo \ - --from-file=ca.crt=/path/to/ca.crt \ - --from-file=ca.crl=/path/to/ca.crl -``` - -Note that you can reuse this CA Secret for other PostgreSQL clusters deployed by the PostgreSQL Operator. - -Next, create the Secret that contains your TLS key pair. Create the Secret as a a TLS Secret, and note the following requirement must be met: - -- The Secret must be created in the same Namespace as where you are deploying your PostgreSQL cluster - -``` -kubectl create secret tls hippo-tls-keypair -n pgo \ - --cert=/path/to/server.crt \ - --key=/path/to/server.key -``` - -Now you can create a TLS-enabled PostgreSQL cluster! - -## Create a Postgres Cluster with TLS - -Using the above example, to create a TLS-enabled PostgreSQL cluster that can accept both TLS and non-TLS connections, execute the following command: - -``` -pgo create cluster hippo \ - --server-ca-secret=postgresql-ca \ - --server-tls-secret=hippo-tls-keypair -``` - -Including the `--server-ca-secret` and `--server-tls-secret` flags automatically enable TLS connections in the PostgreSQL cluster that is deployed. These flags should reference the CA Secret and the TLS key pair Secret, respectively. - -If deployed successfully, when you connect to the PostgreSQL cluster, assuming your `PGSSLMODE` is set to `prefer` or higher, you will see something like this in your `psql` terminal: - -``` -SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off) -``` - -## Force TLS For All Connections - -There are many environments where you want to force all remote connections to occur over TLS, for example, if you deploy your PostgreSQL cluster's in a public cloud or on an untrusted network. 
The PostgreSQL Operator lets you force all remote connections to occur over TLS by using the `--tls-only` flag. - -For example, using the setup above, you can force TLS in a PostgreSQL cluster by executing the following command: - -``` -pgo create cluster hippo \ - --tls-only \ - --server-ca-secret=postgresql-ca --server-tls-secret=hippo-tls-keypair -``` - -If deployed successfully, when you connect to the PostgreSQL cluster, assuming your `PGSSLMODE` is set to `prefer` or higher, you will see something like this in your `psql` terminal: - -``` -SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off) -``` - -If you try to connect to a PostgreSQL cluster that is deployed using the `--tls-only` with TLS disabled (i.e. `PGSSLMODE=disable`), you will receive an error that connections without TLS are unsupported. - -### TLS Authentication for Replicas - -PostgreSQL supports [certificate-based authentication](https://www.postgresql.org/docs/current/auth-cert.html), which allows for PostgreSQL to authenticate users based on the common name (CN) in a certificate. Using this feature, the PostgreSQL Operator allows you to configure PostgreSQL replicas in a cluster to authenticate using a certificate instead of a password. - -To use this feature, first you will need to set up a Kubernetes TLS Secret that has a CN of `primaryuser`. If you do not wish to have this as your CN, you will need to map the CN of this certificate to the value of `primaryuser` using a [pg_ident](https://www.postgresql.org/docs/current/auth-username-maps.html) username map, which you can configure as part of a [custom PostgreSQL configuration]({{< relref "/advanced/custom-configuration.md" >}}). - -You also need to ensure that the certificate is verifiable by the certificate authority (CA) chain that you have provided for your PostgreSQL cluster. The CA is provided as part of the `--server-ca-secret` flag in the [`pgo create cluster`]({{< relref "/pgo-client/reference/pgo_create_cluster.md" >}}) command. - -To create a PostgreSQL cluster that uses TLS authentication for replication, first create Kubernetes Secrets for the server and the CA. For the purposes of this example, we will use the ones that were created earlier: `postgresql-ca` and `hippo-tls-keypair`. After generating a certificate that has a CN of `primaryuser`, create a Kubernetes Secret that references this TLS keypair called `hippo-tls-replication-keypair`: - -``` -kubectl create secret tls hippo-tls-replication-keypair -n pgo \ - --cert=/path/to/replication.crt \ - --key=/path/to/replication.key -``` - -We can now create a PostgreSQL cluster and allow for it to use TLS authentication for its replicas! Let's create a PostgreSQL cluster with two replicas that also requires TLS for any connection: - -``` -pgo create cluster hippo \ - --tls-only \ - --server-ca-secret=postgresql-ca \ - --server-tls-secret=hippo-tls-keypair \ - --replication-tls-secret=hippo-tls-replication-keypair \ - --replica-count=2 -``` - -By default, the PostgreSQL Operator has each replica connect to PostgreSQL using a [PostgreSQL TLS mode](https://www.postgresql.org/docs/current/libpq-ssl.html#LIBPQ-SSL-SSLMODE-STATEMENTS) of `verify-ca`. If you wish to perform TLS mutual authentication between PostgreSQL instances (i.e. certificate-based authentication with SSL mode of `verify-full`), you will need to create a [PostgreSQL custom configuration]({{< relref "/advanced/custom-configuration.md" >}}). 
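One way to check the behavior described in this section from the client side is to be explicit about the TLS mode when connecting with `psql`. The sketch below uses the standard libpq environment variables `PGSSLMODE` and `PGSSLROOTCERT`, together with the port-forward and `testuser` credentials used earlier in the tutorial; the certificate path and password are illustrative:

```
PGSSLMODE=verify-ca PGSSLROOTCERT=/path/to/ca.crt \
PGPASSWORD=securerandomlygeneratedpassword \
psql -h localhost -p 5432 -U testuser hippo
```

Conversely, attempting to connect to a `--tls-only` cluster with `PGSSLMODE=disable` should fail, which is a quick way to confirm that TLS is actually being enforced.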
- -## Troubleshooting - -### Replicas Cannot Connect to Primary - -If your primary is forcing all connections over TLS, ensure that your replicas are connecting with a `sslmode` of `prefer` or higher. - -If using TLS authentication with your replicas, ensure that the common name (`CN`) for the replicas is `primaryuser` or that you have set up an entry in `pg_ident` that provides a mapping from your `CN` to `primaryuser`. - -## Next Steps - -You've now secured connections to your database. However, how do you scale and pool your PostgreSQL connections? Learn how to [set up and configure pgBouncer]({{< relref "tutorial/pgbouncer.md" >}})! diff --git a/docs/content/tutorial/update-cluster.md b/docs/content/tutorial/update-cluster.md deleted file mode 100644 index e2d50ac3b9..0000000000 --- a/docs/content/tutorial/update-cluster.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: "Update a Postgres Cluster" -draft: false -weight: 140 ---- - -You've done it: your application is a huge success! It's so successful that you database needs more resources to keep up with the demand. How do you add more resources to your PostgreSQL cluster? - -The PostgreSQL Operator provides several options to [update a cluster's]({{< relref "pgo-client/reference/pgo_update_cluster.md" >}}) resource utilization, including: - -- Resource allocations (e.g. Memory, CPU, PVC size) -- Tablespaces -- Annotations -- Availability options -- [Configuration]({{< relref "advanced/custom-configuration.md" >}}) - -and more. There are additional actions that can be taken as well outside of the update process, including [scaling a cluster]({{< relref "architecture/high-availability/_index.md" >}}), adding a pgBouncer or [pgAdmin 4]({{< relref "architecture/pgadmin4.md" >}}) Deployment, and more. - -The goal of this section is to present a few of the common actions that can be taken to update your PostgreSQL cluster so it has the resources and configuration that you require. - -## Update CPU / Memory - -You can update the CPU and memory resources available to the Pods in your PostgreSQL cluster by using the [`pgo update cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}) command. By using this method, the PostgreSQL instances are safely shut down and the new resources are applied in a rolling fashion (though we caution that a brief downtime may still occur). - -Customizing CPU and memory does add more resources to your PostgreSQL cluster, but to fully take advantage of additional resources, you will need to [customize your PostgreSQL configuration]({{< relref "advanced/custom-configuration.md" >}}) and tune parameters such as `shared_buffers` and others. 
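As a rough, worked example of that tuning (a common rule of thumb, not a requirement of the PostgreSQL Operator): `shared_buffers` is often started at around 25% of the memory available to the instance, so a Memory Request of `4Gi` suggests a starting point of roughly `1GB`:

```
4Gi Memory Request x 0.25 = 1Gi  ->  shared_buffers = 1GB (adjust for your workload)
```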
- -### Customize CPU / Memory for PostgreSQL - -The PostgreSQL Operator provides several flags for [`pgo update cluster`]({{< relref "pgo-client/reference/pgo_create_cluster.md" >}}) to help manage resources for a PostgreSQL instance: - -- `--cpu`: Specify the CPU Request for a PostgreSQL instance -- `--cpu-limit`: Specify the CPU Limit for a PostgreSQL instance -- `--memory`: Specify the Memory Request for a PostgreSQL instance -- `--memory-limit`: Specify the Memory Limit for a PostgreSQL instance - -For example, to update a PostgreSQL cluster that makes a CPU Request of 2.0 with a CPU Limit of 4.0 and a Memory Request of 4Gi with a Memory Limit of 6Gi: - -``` -pgo update cluster hippo \ - --cpu=2.0 --cpu-limit=4.0 \ - --memory=4Gi --memory-limit=6Gi -``` - -### Customize CPU / Memory for Crunchy PostgreSQL Exporter Sidecar - -If your [PostgreSQL cluster has monitoring](#create-a-postgresql-cluster-with-monitoring), you may want to adjust the resources of the `crunchy-postgres-exporter` sidecar that runs next to each PostgreSQL instnace. You can do this with the following flags: - -- `--exporter-cpu`: Specify the CPU Request for a `crunchy-postgres-exporter` sidecar -- `--exporter-cpu-limit`: Specify the CPU Limit for a `crunchy-postgres-exporter` sidecar -- `--exporter-memory`: Specify the Memory Request for a `crunchy-postgres-exporter` sidecar -- `--exporter-memory-limit`: Specify the Memory Limit for a `crunchy-postgres-exporter` sidecar - -For example, to update a PostgreSQL cluster with a metrics sidecar with custom CPU and memory requests + limits, you could do the following: - -``` -pgo update cluster hippo \ - --exporter-cpu=0.5 --exporter-cpu-limit=1.0 \ - --exporter-memory=256Mi --exporter-memory-limit=1Gi -``` - -### Customize CPU / Memory for pgBackRest - -You can also customize the CPU and memory requests and limits for pgBackRest with the following flags: - -- `--pgbackrest-cpu`: Specify the CPU Request for pgBackRest -- `--pgbackrest-cpu-limit`: Specify the CPU Limit for pgBackRest -- `--pgbackrest-memory`: Specify the Memory Request for pgBackRest -- `--pgbackrest-memory-limit`: Specify the Memory Limit for pgBackRest - -For example, to update a PostgreSQL cluster with custom CPU and memory requests + limits for pgBackRest, you could do the following: - -``` -pgo update cluster hippo \ - --pgbackrest-cpu=0.5 --pgbackrest-cpu-limit=1.0 \ - --pgbackrest-memory=256Mi --pgbackrest-memory-limit=1Gi -``` - -## Customize PostgreSQL Configuration - -PostgreSQL provides a lot of different knobs that can be used to fine tune the [configuration](https://www.postgresql.org/docs/current/runtime-config.html) for your workload. While you can [customize your PostgreSQL configuration]({{< relref "advanced/custom-configuration.md" >}}) after your cluster has been deployed, you may also want to load in your custom configuration during initialization. - -The configuration can be customized by editing the `-pgha-config` ConfigMap. For example, with the `hippo` cluster: - -``` -kubectl -n pgo edit configmap hippo-pgha-config -``` - -We recommend that you read the section on how to [customize your PostgreSQL configuration]({{< relref "advanced/custom-configuration.md" >}}) to find out how to customize your configuration. - -## Troubleshooting - -### Configuration Did Not Update - -Any updates to a ConfigMap may take a few moments to propagate to all of your Pods. Once it is propagated, the PostgreSQL Operator will attempt to reload the new configuration on each Pod. 
- -If the information has propagated but the Pods have not been reloaded, you can force an explicit reload with the [`pgo reload`]({{< relref "pgo-client/reference/pgo_reload.md" >}}) command: - -``` -pgo reload hippo -``` - -Some customized configuration settings can only be applied to your PostgreSQL cluster after it is restarted. For example, to restart the `hippo` cluster, you can use the [`pgo restart`]({{< relref "pgo-client/reference/pgo_restart.md" >}}) command: - -``` -pgo restart hippo -``` - -## Next Steps - -We've seen how to create, customize, and update a PostgreSQL cluster with the PostgreSQL Operator. What about [deleting a PostgreSQL cluster]({{< relref "tutorial/delete-cluster.md" >}})? diff --git a/docs/data/pgmonitor/general/queries_backrest.yml b/docs/data/pgmonitor/general/queries_backrest.yml deleted file mode 120000 index 419d0daf1f..0000000000 --- a/docs/data/pgmonitor/general/queries_backrest.yml +++ /dev/null @@ -1 +0,0 @@ -../../../../tools/pgmonitor/exporter/postgres/queries_backrest.yml \ No newline at end of file diff --git a/docs/data/pgmonitor/general/queries_common.yml b/docs/data/pgmonitor/general/queries_common.yml deleted file mode 120000 index d9d38acacb..0000000000 --- a/docs/data/pgmonitor/general/queries_common.yml +++ /dev/null @@ -1 +0,0 @@ -../../../../tools/pgmonitor/exporter/postgres/queries_common.yml \ No newline at end of file diff --git a/docs/data/pgmonitor/general/queries_per_db.yml b/docs/data/pgmonitor/general/queries_per_db.yml deleted file mode 120000 index d31dda81e8..0000000000 --- a/docs/data/pgmonitor/general/queries_per_db.yml +++ /dev/null @@ -1 +0,0 @@ -../../../../tools/pgmonitor/exporter/postgres/queries_per_db.yml \ No newline at end of file diff --git a/docs/data/pgmonitor/general/queries_pg10.yml b/docs/data/pgmonitor/general/queries_pg10.yml deleted file mode 120000 index d1d2e89575..0000000000 --- a/docs/data/pgmonitor/general/queries_pg10.yml +++ /dev/null @@ -1 +0,0 @@ -../../../../tools/pgmonitor/exporter/postgres/queries_pg10.yml \ No newline at end of file diff --git a/docs/data/pgmonitor/general/queries_pg11.yml b/docs/data/pgmonitor/general/queries_pg11.yml deleted file mode 120000 index 615349e830..0000000000 --- a/docs/data/pgmonitor/general/queries_pg11.yml +++ /dev/null @@ -1 +0,0 @@ -../../../../tools/pgmonitor/exporter/postgres/queries_pg11.yml \ No newline at end of file diff --git a/docs/data/pgmonitor/general/queries_pg12.yml b/docs/data/pgmonitor/general/queries_pg12.yml deleted file mode 120000 index 3d41df9a36..0000000000 --- a/docs/data/pgmonitor/general/queries_pg12.yml +++ /dev/null @@ -1 +0,0 @@ -../../../../tools/pgmonitor/exporter/postgres/queries_pg12.yml \ No newline at end of file diff --git a/docs/data/pgmonitor/general/queries_pg95.yml b/docs/data/pgmonitor/general/queries_pg95.yml deleted file mode 120000 index a63db4ea9e..0000000000 --- a/docs/data/pgmonitor/general/queries_pg95.yml +++ /dev/null @@ -1 +0,0 @@ -../../../../tools/pgmonitor/exporter/postgres/queries_pg95.yml \ No newline at end of file diff --git a/docs/data/pgmonitor/general/queries_pg96.yml b/docs/data/pgmonitor/general/queries_pg96.yml deleted file mode 120000 index cc65dd1b9c..0000000000 --- a/docs/data/pgmonitor/general/queries_pg96.yml +++ /dev/null @@ -1 +0,0 @@ -../../../../tools/pgmonitor/exporter/postgres/queries_pg96.yml \ No newline at end of file diff --git a/docs/data/pgmonitor/pgnodemx/queries_nodemx.yml b/docs/data/pgmonitor/pgnodemx/queries_nodemx.yml deleted file mode 120000 index 69d79c2559..0000000000 --- 
a/docs/data/pgmonitor/pgnodemx/queries_nodemx.yml +++ /dev/null @@ -1 +0,0 @@ -../../../../tools/pgmonitor/exporter/postgres/queries_nodemx.yml \ No newline at end of file diff --git a/docs/layouts/shortcodes/exporter_metrics.html b/docs/layouts/shortcodes/exporter_metrics.html deleted file mode 100644 index a69cd351a0..0000000000 --- a/docs/layouts/shortcodes/exporter_metrics.html +++ /dev/null @@ -1,17 +0,0 @@ -{{ range $metricsfile, $value0 := .Site.Data.pgmonitor.general }} -

{{ $metricsfile }}
-{{ range $query, $value1 := $value0 }}
-{{ $query }}
-SQL Query:
-{{ $value1.query }}
-Metrics:
-{{ range $key2, $value2 := $value1.metrics }}
-{{ range $metric, $value3 := $value2 }}
-{{ $metric }}
-{{ $value3.description }} -{{end}} -{{end}} -{{end}} -{{end}} diff --git a/docs/layouts/shortcodes/pgnodemx_metrics.html b/docs/layouts/shortcodes/pgnodemx_metrics.html deleted file mode 100644 index 919aadd428..0000000000 --- a/docs/layouts/shortcodes/pgnodemx_metrics.html +++ /dev/null @@ -1,17 +0,0 @@ -{{ range $metricsfile, $value0 := .Site.Data.pgmonitor.pgnodemx }} -

{{ $metricsfile }}
-{{ range $query, $value1 := $value0 }}
-{{ $query }}
-SQL Query:
-{{ $value1.query }}
-Metrics:
-{{ range $key2, $value2 := $value1.metrics }}
-{{ range $metric, $value3 := $value2 }}
-{{ $metric }}
-{{ $value3.description }} -{{end}} -{{end}} -{{end}} -{{end}} diff --git a/docs/static/Operator-Architecture-wCRDs.png b/docs/static/Operator-Architecture-wCRDs.png deleted file mode 100644 index 291cbefef3..0000000000 Binary files a/docs/static/Operator-Architecture-wCRDs.png and /dev/null differ diff --git a/docs/static/Operator-Architecture.png b/docs/static/Operator-Architecture.png deleted file mode 100644 index aa8a43a134..0000000000 Binary files a/docs/static/Operator-Architecture.png and /dev/null differ diff --git a/docs/static/Operator-DR-Storage.png b/docs/static/Operator-DR-Storage.png deleted file mode 100644 index 7bab1bc27c..0000000000 Binary files a/docs/static/Operator-DR-Storage.png and /dev/null differ diff --git a/docs/static/OperatorReferenceDiagram.1.png b/docs/static/OperatorReferenceDiagram.1.png deleted file mode 100644 index ed2b7164e6..0000000000 Binary files a/docs/static/OperatorReferenceDiagram.1.png and /dev/null differ diff --git a/docs/static/OperatorReferenceDiagram.png b/docs/static/OperatorReferenceDiagram.png deleted file mode 100644 index ed2b7164e6..0000000000 Binary files a/docs/static/OperatorReferenceDiagram.png and /dev/null differ diff --git a/docs/static/crunchy-logo.jpg b/docs/static/crunchy-logo.jpg deleted file mode 100644 index 01f9c9b1a4..0000000000 Binary files a/docs/static/crunchy-logo.jpg and /dev/null differ diff --git a/docs/static/crunchy_logo.png b/docs/static/crunchy_logo.png deleted file mode 100644 index 2fbf3352c1..0000000000 Binary files a/docs/static/crunchy_logo.png and /dev/null differ diff --git a/docs/static/favicon.ico b/docs/static/favicon.ico deleted file mode 100644 index b30f559497..0000000000 Binary files a/docs/static/favicon.ico and /dev/null differ diff --git a/docs/static/favicon.png b/docs/static/favicon.png deleted file mode 100644 index 66ce2072e9..0000000000 Binary files a/docs/static/favicon.png and /dev/null differ diff --git a/docs/static/images/namespace-multi.png b/docs/static/images/namespace-multi.png deleted file mode 100644 index 8bb0c3bb1a..0000000000 Binary files a/docs/static/images/namespace-multi.png and /dev/null differ diff --git a/docs/static/images/namespace-own.png b/docs/static/images/namespace-own.png deleted file mode 100644 index d1f9bde948..0000000000 Binary files a/docs/static/images/namespace-own.png and /dev/null differ diff --git a/docs/static/images/namespace-single.png b/docs/static/images/namespace-single.png deleted file mode 100644 index a32d628388..0000000000 Binary files a/docs/static/images/namespace-single.png and /dev/null differ diff --git a/docs/static/images/pgadmin4-login.png b/docs/static/images/pgadmin4-login.png deleted file mode 100644 index 84c72ef692..0000000000 Binary files a/docs/static/images/pgadmin4-login.png and /dev/null differ diff --git a/docs/static/images/pgadmin4-login2.png b/docs/static/images/pgadmin4-login2.png deleted file mode 100644 index a75f990bfd..0000000000 Binary files a/docs/static/images/pgadmin4-login2.png and /dev/null differ diff --git a/docs/static/images/pgadmin4-query.png b/docs/static/images/pgadmin4-query.png deleted file mode 100644 index 5c0d306016..0000000000 Binary files a/docs/static/images/pgadmin4-query.png and /dev/null differ diff --git a/docs/static/images/postgresql-cluster-dr-base.png b/docs/static/images/postgresql-cluster-dr-base.png deleted file mode 100644 index 515e597500..0000000000 Binary files a/docs/static/images/postgresql-cluster-dr-base.png and /dev/null differ diff --git 
a/docs/static/images/postgresql-cluster-dr-schedule.png b/docs/static/images/postgresql-cluster-dr-schedule.png deleted file mode 100644 index 098c5e5658..0000000000 Binary files a/docs/static/images/postgresql-cluster-dr-schedule.png and /dev/null differ diff --git a/docs/static/images/postgresql-cluster-ha-s3.png b/docs/static/images/postgresql-cluster-ha-s3.png deleted file mode 100644 index 6922772d1a..0000000000 Binary files a/docs/static/images/postgresql-cluster-ha-s3.png and /dev/null differ diff --git a/docs/static/images/postgresql-cluster-restore-step-1.png b/docs/static/images/postgresql-cluster-restore-step-1.png deleted file mode 100644 index d8d2439fbd..0000000000 Binary files a/docs/static/images/postgresql-cluster-restore-step-1.png and /dev/null differ diff --git a/docs/static/images/postgresql-cluster-restore-step-2.png b/docs/static/images/postgresql-cluster-restore-step-2.png deleted file mode 100644 index cf6c653d54..0000000000 Binary files a/docs/static/images/postgresql-cluster-restore-step-2.png and /dev/null differ diff --git a/docs/static/images/postgresql-ha-multi-data-center.png b/docs/static/images/postgresql-ha-multi-data-center.png deleted file mode 100644 index bb3b18cf51..0000000000 Binary files a/docs/static/images/postgresql-ha-multi-data-center.png and /dev/null differ diff --git a/docs/static/images/postgresql-ha-overview.png b/docs/static/images/postgresql-ha-overview.png deleted file mode 100644 index bb74de6739..0000000000 Binary files a/docs/static/images/postgresql-ha-overview.png and /dev/null differ diff --git a/docs/static/images/postgresql-monitoring-alerts.png b/docs/static/images/postgresql-monitoring-alerts.png deleted file mode 100644 index 13f49f3fe1..0000000000 Binary files a/docs/static/images/postgresql-monitoring-alerts.png and /dev/null differ diff --git a/docs/static/images/postgresql-monitoring-cluster.png b/docs/static/images/postgresql-monitoring-cluster.png deleted file mode 100644 index ea83ce4270..0000000000 Binary files a/docs/static/images/postgresql-monitoring-cluster.png and /dev/null differ diff --git a/docs/static/images/postgresql-monitoring-overview.png b/docs/static/images/postgresql-monitoring-overview.png deleted file mode 100644 index 8d623aa0f8..0000000000 Binary files a/docs/static/images/postgresql-monitoring-overview.png and /dev/null differ diff --git a/docs/static/images/postgresql-monitoring-pod.png b/docs/static/images/postgresql-monitoring-pod.png deleted file mode 100644 index 30e8183f54..0000000000 Binary files a/docs/static/images/postgresql-monitoring-pod.png and /dev/null differ diff --git a/docs/static/images/postgresql-monitoring-service.png b/docs/static/images/postgresql-monitoring-service.png deleted file mode 100644 index a24baf56a2..0000000000 Binary files a/docs/static/images/postgresql-monitoring-service.png and /dev/null differ diff --git a/docs/static/images/postgresql-monitoring.png b/docs/static/images/postgresql-monitoring.png deleted file mode 100644 index 96ed4017fc..0000000000 Binary files a/docs/static/images/postgresql-monitoring.png and /dev/null differ diff --git a/docs/static/logos/TRADEMARKS.md b/docs/static/logos/TRADEMARKS.md new file mode 100644 index 0000000000..8e3e1dcffa --- /dev/null +++ b/docs/static/logos/TRADEMARKS.md @@ -0,0 +1,143 @@ +# PGO Trademark Guidelines + +## 1. Introduction + +This document - the "Policy" - outlines the policy of The PGO Project (the "Project") for the use of our trademarks. 
+ +A trademark’s role is to assure consumers about the quality of the associated products or services. Because an open source license allows you to modify the copyrighted software, we cannot be sure your modified software will not mislead recipients if it is distributed under our trademarks. So, this Policy describes when you may or may not use our trademarks. + +In this Policy, we are not trying to limit the lawful use of our trademarks, but rather describe what we consider lawful use. Trademark law can be ambiguous, so we hope to clarify whether we will consider your use permitted or non-infringing. + +The following sections describe the trademarks this Policy covers, as well as trademark uses we permit. If you want to use our trademarks in ways this Policy doesn’t address, please see "Where to get further information" below for contact information. Any use that does not comply with this Policy, or for which we have not separately provided written permission, is not a use we have approved. + +## 2. We are committed to open source principles + +We want to encourage and facilitate community use of our trademarks in a way that ensures the trademarks are meaningful source and quality indicators for our software and the associated goods and services and continue to embody the high reputation of the software and its associated community. This Policy therefore balances our need to ensure our trademarks remain reliable quality indicators and our community members’ desire to be full Project participants. + +## 3. Trademarks subject to the Policy + +Our trademarks + +This Policy covers: + +### 3.1 Our word trademarks and service marks (the "Word Marks"): + +PGO + +### 3.2. Our logo (the "Logo"): + +PGO: The Postgres Operator from Crunchy Data + +### 3.3 And the unique visual styling of our website (the "Trade Dress"). + +This Policy encompasses all Project trademarks and service marks, whether Word Marks, Logos or Trade Dress, which we collectively call the “Marks." We might not have registered some Marks, but this Policy covers our Marks regardless. + +## 4. Universal considerations for all uses + +Whenever you use a Mark, you must not mislead anyone, either directly or by omission, about what they are getting and from whom. The law reflects this requirement in two major ways described below: it prohibits creating a "likelihood of confusion," but allows for "nominative use." + +For example, you cannot say you are distributing PGO software when you're distributing a modified version of it, because you likely would confuse people, since they are not getting the same features and functionality they would get if they downloaded the software from us. You also cannot use our Logo on your website to suggest your website is an official website or we endorse your website. + +You can, though, say, for example, you like the PGO software, you are a PGO community participant, you are providing unmodified PGO software, or you wrote a book describing how to use the PGO software. + +This fundamental requirement - that it is always clear to people what they are getting and from whom - is reflected throughout this Policy. It should guide you if you are unsure about how you are using the Marks. + +In addition: + +You may not use the Marks in association with software use or distribution if you don’t comply with the license for the software. + +You may not use or register the Marks as part of your own trademark, service mark, domain name, company name, trade name, product name or service name. 
+ +Trademark law does not allow you to use names or trademarks that are too similar to ours. You therefore may not use an obvious Mark variant or phonetic equivalent, foreign language equivalent, takeoff, or abbreviation for a similar or compatible product or service. + +You will not acquire rights in the Marks, and any goodwill you generate using the Marks inures solely to our benefit. +## 5. Use for software + +See universal considerations for all uses, above, which also apply. + +### 5.1 Uses we consider non-infringing + +#### 5.1.1 Distributing unmodified source code or unmodified executable code we have compiled + +When you redistribute our unmodified software, you are not changing its quality or nature. Therefore, you may retain the Word Marks and Logos we have placed on the software, to identify your redistributed software whether you redistribute by optical media, memory stick or download of unmodified source and executable code. This only applies if you are redistributing official software from this Project that you have not changed. You can find the Logo files [here](./). + +#### 5.1.2 Distributing executable code you have compiled, or modified code + +You may use the Word Marks, but not the Logos, to describe the software’s origin, that is, that the code you are distributing is a modification of our software. You may say, for example, "this software is derived from the source code from the PGO Project." +Of course, you can place your own trademarks or logos on software to which you have made substantive modifications, because by modifying the software, you have become the origin of the modified software. + +#### 5.1.3 Statements about compatibility, interoperability or derivation + +You may use the Word Marks, but not the Logos, to describe the relationship between your software and ours. You should use Our Mark after a verb or preposition that describes that relationship. So, you may say, for example, "Bob's plug-in for PGO," but may not say "Bob's PGO plug-in." + +#### 5.1.4 Using trademarks to show community affiliation + +This section discusses using our Marks for application themes, skins and personas. We discuss using our Marks on websites below. +You may use the Word Marks and the Logos in themes, personas, or skins to show your Project support, provided the use is non-commercial and clearly decorative, as contrasted with a use that appears to be the branding for a website or application. + +### 5.2 Permitted uses + +#### 5.2.1 Distributing unmodified software + +You may use the Word Marks and Logos to distribute executable code if you make the code from official Project source code using the procedure for creating an executable found at [https://access.crunchydata.com/documentation/postgres-operator/latest/installation/](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/). + +#### 5.3 Unpermitted uses we consider infringing + +We will likely consider it an infringement to use the Marks in software that combines our software with another software program. In addition to creating a single executable for both software programs, we would consider your software "combined" with ours if installing our software automatically installs yours. We would not consider your software "combined" with ours if it is on the same media but requires separate, independent action to install. + +## 6. Use for non-software goods and services + +See universal considerations for all uses, above, which also apply. 
+ +### 6.1 Uses we consider non-infringing + +#### 6.1.1 Websites + +You may use the Word Marks and Logos on your webpage to show your Project support if: + +- Your own branding or naming is more prominent than any Project Marks; +- The Logos hyperlink to the Project website: [https://github.com/CrunchyData/postgres-operator](https://github.com/CrunchyData/postgres-operator); +- The site does not mislead customers into thinking your website, service, or product is our website, service, or product; and +- The site clearly states the Project does not affiliate with or endorse you. + +#### 6.1.2 Publishing and presenting + +You can use the Word Marks in book and article titles, and the Logo in illustrations within a document, if the use does not suggest we published, endorse, or agree with your work. + +#### 6.1.3 Events + +You can use the Logo to promote the software and Project at events. + +### 6.2 Permitted uses + +#### 6.2.1 Meetups and user groups + +You can use the Word Marks as part of your meetup or user group name if: + +- The group’s main focus is the software; +- Any software or services the group provides are without cost; +- The group does not make a profit; +- Any charge to attend meetings is only to cover the cost of the venue, food and drink. + +The universal considerations for all uses, above, still apply: specifically, you may not use or register the Marks as part of your own trademark, service mark, domain name, company name, trade name, product name or service name. + +### 6.3 Unpermitted uses we consider infringing + +We will likely consider it an infringement to use the Marks as part of a domain name or subdomain. +We also would likely consider it an infringement to use the Marks on for-sale promotional goods. + +## 7 General Information + +### 7.1 Trademark legends + +If you are using our Marks in a way described in the sections entitled "Permitted uses," put the following notice at the foot of the page where you have used the Mark (or, if in a book, on the credits page), on packaging or labeling, and on advertising or marketing materials: "The PGO Project is a trademark of Crunchy Data Solutions, Inc., used with permission." + +### 7.2 What to do when you see abuse + +If you are aware of a confusing use or misuse of the Marks, we would appreciate you bringing it to our attention. Please contact us at [trademarks@crunchydata.com](mailto:trademarks@crunchydata.com) so we can investigate it further. + +### 7.3 Where to get further information + +If you have questions, wish to speak about using our Marks in ways the Policy doesn’t address, or see abuse of our Marks, please send an email to [trademarks@crunchydata.com](mailto:trademarks@crunchydata.com). + +We based these guidelines on the Model Trademark Guidelines, available at [http://www.modeltrademarkguidelines.org](http://www.modeltrademarkguidelines.org), used under a Creative Commons Attribution 3.0 Unported license: [https://creativecommons.org/licenses/by/3.0/deed.en_US](https://creativecommons.org/licenses/by/3.0/deed.en_US). 
diff --git a/docs/static/logos/pgo.png b/docs/static/logos/pgo.png new file mode 100644 index 0000000000..9d38c8f859 Binary files /dev/null and b/docs/static/logos/pgo.png differ diff --git a/docs/static/logos/pgo.svg b/docs/static/logos/pgo.svg new file mode 100644 index 0000000000..d72f9d7810 --- /dev/null +++ b/docs/static/logos/pgo.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/static/operator-backrest-integration.png b/docs/static/operator-backrest-integration.png deleted file mode 100644 index 7d1f64b500..0000000000 Binary files a/docs/static/operator-backrest-integration.png and /dev/null differ diff --git a/docs/static/operator-backrest-integration.xml b/docs/static/operator-backrest-integration.xml deleted file mode 100644 index 7b27e5c83e..0000000000 --- a/docs/static/operator-backrest-integration.xml +++ /dev/null @@ -1 +0,0 @@ -7Vxbd6I6FP41PtrFXXysvc6szqqnnXU6c15mIURkioRCaGt//UkgQUKC2jG2Tqt9qNkJIdmXb18I9syT+fNF5qWzbzAAcc/QgueeedozDF1zNfyPUBYVZWBRQphFAR20JNxGL4BdSalFFICcG4ggjFGU8kQfJgnwEUfzsgw+8cOmMObvmnohEAi3vheL1LsoQDNK1Z3hsuMSROGM3to1BlXHxPPvwwwWCb1fzzCn5afqnntsLrrRfOYF8KlBMs965kkGIaq+zZ9PQEx4y9hWXXfe0VuvOwMJ2uQCo7rg0YsLuvXjzJ9FCDO1yDDlGHeOYY7CDIvD0K5TkHkIZvhrv2c43jztmaMQ32lE7hqS3eOBCDe+JAiEeGwEE7pTtGDcnUZxfAJjPA1pmtbpsX5u4zlylMF7wHoSmABCZBzSya0yL4jw1lpjpjBBVIl0C7e9OAoT3IjBlKwtTz0/SsKrsnVqa/SKxhKojMzRDM1jei+RlZS7jyBD4LlBoqy9AHAOULbAQ1gv0xdqBswKnpY65Zp01llDndg4j6pxWM+8FCX+QqUpl6ytuQLjQYC1nDZhhmYwhIkXny2po1J1AZlC47kRexMQj2rtbgsJeRk6JoaHaZMY+vffZ1HCOs6jmE0DkkA2DJMbg34DhBZUnl6BICYtV3sFYUrH8fpCNMlwrXOz7mGma7RURKIAw/LTUB0fSx3gzlHg5bOaIXiddBKLba7Rxpdkix9k6JHNmj/Zxp8j1OjCrZ90zgQ2BLBa83JYZD6V5a8ne2j51pd/ovHpt+LrjzNwPe7bOhU6XlkIENMEejkR/0q1zUCMLfaRB0GZEtJLxzBK0FLdTZtXd9O1+SmqVdGrWqpcL2Mj7f5+/t/Vv/Au9a+t88FLPrl4uUn7NkPVBpxxoET+QQxrbbvA5ox4dccjoxdvUg4gQkrJmstd2KOefSpTFAILEXYhx7RjHgVBaVMdptOEHVHitfkKaFN7Pbo8znPIUKivHelYwznRUNz/U4GzIXA6zcG2srSGtkRssPSOGCkqp1qkgtQE8w/KDzFu3sHUbOZ8CY8ROgd+pOVNchgXCGCHyICDUOuWwUSL/WNU+jmJMly1BtRKsVZ9JhAhOBe81yrsamIcWd0TceS3+HJCecKR2nK2EZ293IVap1eLk6qabrtHtuD3Bpro9lxze7dnDR1BmQTN4TCdt6aW22tp0sAmfytClZTMMX8OSVh8FPqpcRR7KSIea4SZijq5vLSBjfmsSbhqS7hqmtvZOkMRoyXWWqpKwaB2VqvktweW/9p4goaiik2NWRFzuPpQohT6UKIVjpIQU0wf5gs/LnIMaX3md/sZSKE6E8QhTjPG28QEJ1H4UACigU0jbGN3VjFnc+imIu30+6204xXALQsHNleORTc+WJYEH4ZK8IFHh/5SE9XCg24KqvQ2WU2TRnMV7cMnNDRpYQlMmbKUCYzapIW67LdIWrCwvEVjAA3suRpJS7NdvZ3Ft7Cz2qGQ1ogmoq2ZaNf5kS7GRlx+1BFo72N6xHBg+/QIZ0euafGxzXaIqBjxRKGRApwKGWEoyhsBzmq/VkWx1Kl9LzGrb70CiBTEO26rnmaaA8HFGZJgR1cQ7KyGrpXRqiwmCWCBRXNTFq9F3N82nlXA63btsiO2NF1JbOnujN0DCbudGFG1K42fMct5KGBVmTa18tMkOWXROg1hfwl/1Tx4YdVUDq1rv2Pgiv2xfx9kGIKyvzh2XWM5nzugXV3G/XiwUtdnWAxkOe8OKsOdgUq/rjgfwOVNwaWreH4Al75tmBJ9378KmwKwcVr1Md3WRbCRBYwqqmMdzLcE5o8vcPukqpq9KyL4MSyCX/lD/OHwwOjKFg94gJljC1p3OCjwJnU1/jBAtW16hKBu7KjmxiziXQ4K2Hb7ucVmhbA/KN4ZA5e7lWG14siO4p0661pTc/PIeatH0E+LfCbY4R5W3lZjiJLTCoZr8kKzlGCstQNEzYPLCTq9Qde/Jrcv068Pg9u7cBl+qkHUxrEiS3N4XBhaQ0YYgyzCKyciL2fYFBylJv+nqDIQUWVV+r8FsmwZhomFpOrIyeGsyd901qR+tM2AwhVPWO7qpEmHYm1Qudm/4ydrjOSTn0npAPmhSpB/D7Q2OzzxllJxLN4ocVjZFou6B52rdiaB964ToAd830t8H7gtfHfeG9/Zcf2PgO+1lXxyfL98iu15MNYj1539Di9/38weHncWxEsS+9dh/xqo79iM5IjN6sKAuhrApgbXsXKlbrZLDI5lqMmjlEnn7Q9AyW1w4LRqNbbbFOprx+Mv1QoUG6usykIcvp8BD9XP3yYZe/Tms3p7u+MQGvydoYGpyZzTroKDDi2UHU/Z++BgjUV98uCg400zS6VXOjwzUfVypdlrnlTWDPlzEzKi7eY3ct+1U2566vrdnV0/NtEt/uyMoW2WVosvarawU3jere6hSIf5yN74azwUiZKp+MrIHj4MWQ0OH+7VTdxc/jRBNXz5+w/m2f8= \ No newline at end of file diff --git a/docs/static/operator-crd-architecture.png b/docs/static/operator-crd-architecture.png deleted file mode 100644 index 
da86fa51b2..0000000000 Binary files a/docs/static/operator-crd-architecture.png and /dev/null differ diff --git a/docs/static/operator-crd-architecture.xml b/docs/static/operator-crd-architecture.xml deleted file mode 100644 index 0c57bd52f4..0000000000 --- a/docs/static/operator-crd-architecture.xml +++ /dev/null @@ -1 +0,0 @@ -7Vtbd9o4EP41fgzHN7D9GELSnrPZ3TTtXrovewwWRo2xvLJooL9+R7ZkW5ZNSAgph0AfikZXz/fNjGZwDOdquf5Aw2zxK4lQYthmtDaciWHbljky4T8u2ZQSz7JLQUxxJAbVgs/4B5IzhXSFI5QrAxkhCcOZKpyRNEUzpshCSsmjOmxOEnXXLIyRJvg8CxNd+heO2EJIrVFQd3xEOF6IrX3bKzum4ewhpmSViv0M25kXn7J7Gcq1xIPmizAijw2Rc204V5QQVn5brq9QwnUr1VbOu+nprc5NUcp2meCPhuWU72GyEg9/R3IWU/T50y3If88QDRmh8PXqfpKLU7ON1FTxrIivZhnOOJzmJFkxdElnAtRCWrXg+cYLtkxER84oeUBXJIH1nUlKUhgzjmkYYTh+S1xpyoRGFOaLYlPemJOUyd1s0ZaTQf/BNf/Hj5HgOAVZguagm/F3RBkGyC+FmJGMb5OFM5zGfC2zbn7hfZMLl6+Ok6Sx+rV1M7zhq+uaF2DwfdC6IRJIfEBkiRjdwBDRa1uCXsJqbFOw5LHmoOsJ2aJBP1fSKRS8j6u1a+zhi4C/hwqWvzMVAN8FZmB5KwojLzk5VjkjS/hyj3KyojMEXydojlPMMEl13rTU6E4uQZF9lKiwt/r50WSBq6MtgLwtWpOhqfNE2KlC0FcAdaRi6uqQBkEXpK+B6MjTENWAUBUOeoiKj870Sj8KGuV06SS55PkeoQldBzBm8WlAOgM0EH11pAAJBSpHBq0GVlaX+XmvgpVufYY9SpjQRxHspEpG/614gBjXymmIRjH/P4tnCZgkqEksAgco1yn7pXhK2xKxW00RuTTvuMgLnLjFW1a21vedCZeRc5+gnqDeqnYM/cfQztvirRICxD0gnBa9pkqNFo+Hlu/eOFtcTcbXWK5jfrUZxLPMHsDyLMQpov+iNMbFOGAaE+sn4RQl8NjlAzkTWjKjCjC3rf4ljiJ+zmcFJOm4OuJb8emIg2rs8juNpHYRT5tJMwq5uhU4wnooSkKGv6s3qS7TEDvcEVwQTmw+VGzQU6eT+TxHTLOr6pC7mZrvaGxCEdz/RBMlU/J4XQsaHo0zq+iG7xIulWwFGcbVLbDNLga0ucFJfQGCtsCMBy2URpf8+grNaUJmD18WOC3FjUnQakz5hhjbiHa4YgREhLIFiUkK1CMF+Ppdi4dcW7ED6cLbDrnjRtXvkBWj7HXJ4oKgEhA0ESPJSk+M5LBs5eTzyAa6DTeNARmnXt7PRVceTl7IPKfFvXLF3WY7jq2er9SDmLUXpX/7J74yqfP1Iv/28fGXe+sP/OnPCz+wtZBiDgapRn4wfKayGPw3/tHwpkJRMHo4NoaTLujb3qvycj0W0bxR9LigKr8TJ6myGdWFlebcG9bNgW/69n6+6VX9Tw9YjgaW9c6QujAHgeX5itFYRwSc77vnwPEGgQOtMfubLzEwzZFof+V9A8ezRHuybgyebBqNO0QxgMqX3TsMOW8chuo6zVOxZPTWscR8H7HEfSKWeJYkhUDiYnREHqoHO0vD7vRCy3bgILTAEDW0uEcEXHXj3Vqq6cpNI7ICxdwXVXDdV+9bzDmS0owbuAp0EIQMvTQj4FVKM34/njuXZjz9Hr1faYaF+cPOdZklzmdA0RRhUsSn/CHXB4VpxLVB6MM8ATg7qirn8olxgPKJtzuVT6Z84umZytlV9boqx9cr/gd0Ve4ruyqKsgSsamdv1Sj/3qtTu8q/HZ7s7KkO5KmcnZl8Kp6q5zbcZSKn5L46rtCaR9v5R7Ch9Ybuqwcw/RWF/XxaRsAvYZS/xKkVczf6kOjs097Epz1B7m0m/+4c3Ujj3blaebhqpTkYGo1apbG1TpkCtuUk17WkgM+zBubQk4J6ctFSZneXOVuVyifrnttDpFIMLSuQb/0TnCNfhJFFm6HZMpZtP8E5busyHrSiVE/ZVF/IatWOZJiUC5WqOmD9VX+V5wTrr9sc2akVZfX3fc6ACvSOvFjb81DBOa/YvSzy8/OKwNR/0tovr+Avf6+yl2QV1cxwya/N6TTPOqfARYqtzgnG8SUYwTtMMAJT/xXxnGAcMsHwGwkGJAq+fDli95chGonHsJl21ElGd8rxsuyi862K51nQgV/wc1rZRfBTsgu5bRUc3daDHji7CEz73V5GhRc7rewiME//bcIXoXlkqQU067/1K4fXf1DpXP8P \ No newline at end of file diff --git a/docs/static/operator-diagram-cluster.png b/docs/static/operator-diagram-cluster.png deleted file mode 100644 index 201a18a5ed..0000000000 Binary files a/docs/static/operator-diagram-cluster.png and /dev/null differ diff --git a/docs/static/operator-diagram-database.png b/docs/static/operator-diagram-database.png deleted file mode 100644 index 6cfb3959d0..0000000000 Binary files a/docs/static/operator-diagram-database.png and /dev/null differ diff --git a/docs/static/operator-diagram.png b/docs/static/operator-diagram.png deleted file mode 100644 index a37c738ffe..0000000000 Binary files a/docs/static/operator-diagram.png and /dev/null differ diff --git a/docs/static/pdf/postgres-operator.pdf b/docs/static/pdf/postgres-operator.pdf deleted file mode 100644 index c12671896e..0000000000 Binary files a/docs/static/pdf/postgres-operator.pdf and /dev/null differ diff --git a/docs/static/tty.gif b/docs/static/tty.gif deleted file mode 100644 index 
160bf4a7c1..0000000000 Binary files a/docs/static/tty.gif and /dev/null differ diff --git a/docs/themes/crunchy-hugo-theme b/docs/themes/crunchy-hugo-theme deleted file mode 160000 index cda8fd1e16..0000000000 --- a/docs/themes/crunchy-hugo-theme +++ /dev/null @@ -1 +0,0 @@ -Subproject commit cda8fd1e169ee0a62583b88685c4b55b340bbd1d diff --git a/examples/create-by-resource/README.md b/examples/create-by-resource/README.md deleted file mode 100644 index f58cd61be6..0000000000 --- a/examples/create-by-resource/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Example of Creating a Postgres Cluster With Resources - -This has been moved to the documentation. Please see [Using Custom Resources](https://access.crunchydata.com/documentation/postgres-operator/latest/custom-resources/) in the [PostgreSQL Operator documentation](https://access.crunchydata.com/documentation/postgres-operator/). diff --git a/examples/create-by-resource/fromcrd.json b/examples/create-by-resource/fromcrd.json deleted file mode 100644 index 987ec53d55..0000000000 --- a/examples/create-by-resource/fromcrd.json +++ /dev/null @@ -1,104 +0,0 @@ -{ - "apiVersion": "crunchydata.com/v1", - "kind": "Pgcluster", - "metadata": { - "annotations": { - "current-primary": "fromcrd" - }, - "labels": { - "autofail": "true", - "crunchy-pgbadger": "false", - "crunchy-pgha-scope": "fromcrd", - "crunchy-postgres-exporter": "false", - "current-primary": "fromcrd", - "deployment-name": "fromcrd", - "name": "fromcrd", - "pg-cluster": "fromcrd", - "pg-pod-anti-affinity": "", - "pgo-backrest": "true", - "pgo-version": "4.5.0", - "pgouser": "pgoadmin", - "primary": "true" - }, - "name": "fromcrd", - "namespace": "pgouser1" - }, - "spec": { - "ArchiveStorage": { - "accessmode": "", - "matchLabels": "", - "name": "", - "size": "", - "storageclass": "", - "storagetype": "", - "supplementalgroups": "" - }, - "BackrestStorage": { - "accessmode": "ReadWriteOnce", - "matchLabels": "", - "name": "", - "size": "300M", - "storageclass": "fast", - "storagetype": "dynamic", - "supplementalgroups": "" - }, - "PrimaryStorage": { - "accessmode": "ReadWriteOnce", - "matchLabels": "", - "name": "on2today", - "size": "300M", - "storageclass": "fast", - "storagetype": "dynamic", - "supplementalgroups": "" - }, - "ReplicaStorage": { - "accessmode": "ReadWriteOnce", - "matchLabels": "", - "name": "", - "size": "300M", - "storageclass": "fast", - "storagetype": "dynamic", - "supplementalgroups": "" - }, - "backrestResources": {}, - "ccpimage": "crunchy-postgres-ha", - "ccpimagetag": "centos7-12.4-4.5.0", - "clustername": "fromcrd", - "customconfig": "", - "database": "userdb", - "exporterport": "9187", - "name": "fromcrd", - "namespace": "pgouser1", - "pgBouncer": { - "replicas": 0, - "resources": {} - }, - "pgbadgerport": "10000", - "podPodAntiAffinity": { - "default": "preferred", - "pgBackRest": "preferred", - "pgBouncer": "preferred" - }, - "policies": "", - "port": "5432", - "primarysecretname": "fromcrd-primaryuser-secret", - "replicas": "0", - "rootsecretname": "fromcrd-postgres-secret", - "secretfrom": "", - "shutdown": false, - "standby": false, - "status": "", - "syncReplication": null, - "tablespaceMounts": {}, - "tls": {}, - "user": "testuser", - "userlabels": { - "crunchy-postgres-exporter": "false", - "pg-pod-anti-affinity": "", - "pgo-version": "4.5.0", - "pgouser": "pgoadmin", - "pgo-backrest": "true" - }, - "usersecretname": "fromcrd-testuser-secret" - } -} diff --git a/examples/create-by-resource/postgres-secret.yaml 
b/examples/create-by-resource/postgres-secret.yaml deleted file mode 100644 index 8769508fb1..0000000000 --- a/examples/create-by-resource/postgres-secret.yaml +++ /dev/null @@ -1,11 +0,0 @@ -apiVersion: v1 -data: - password: M3pBeXpmMThxQg== - username: cG9zdGdyZXM= -kind: Secret -metadata: - labels: - pg-cluster: fromcrd - name: fromcrd-postgres-secret - namespace: pgouser1 -type: Opaque diff --git a/examples/create-by-resource/primaryuser-secret.yaml b/examples/create-by-resource/primaryuser-secret.yaml deleted file mode 100644 index 15ee8ad665..0000000000 --- a/examples/create-by-resource/primaryuser-secret.yaml +++ /dev/null @@ -1,11 +0,0 @@ -apiVersion: v1 -data: - password: d0ZvYWlRZFhPTQ== - username: cHJpbWFyeXVzZXI= -kind: Secret -metadata: - labels: - pg-cluster: fromcrd - name: fromcrd-primaryuser-secret - namespace: pgouser1 -type: Opaque diff --git a/examples/create-by-resource/run.sh b/examples/create-by-resource/run.sh deleted file mode 100755 index 1cdefdda77..0000000000 --- a/examples/create-by-resource/run.sh +++ /dev/null @@ -1,89 +0,0 @@ -#!/bin/bash - -# Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -########## -# SETUP # -######### - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -# A namespace that exists in NAMESPACE env var - see examples/envs.sh -export NS=pgouser1 - -########### -# CLEANUP # -########## - -# remove any existing resources from a previous run -$PGO_CMD delete secret -n $NS \ - fromcrd-postgres-secret \ - fromcrd-primaryuser-secret \ - fromcrd-testuser-secret \ - fromcrd-backrest-repo-config > /dev/null -$PGO_CMD delete pgcluster fromcrd -n $NS -$PGO_CMD delete pvc fromcrd fromcrd-pgbr-repo -n $NS -# remove the public/private keypair from the previous run -rm $DIR/fromcrd-key $DIR/fromcrd-key.pub - -############### -# EXAMPLE RUN # -############### - -# generate a SSH public/private keypair for use by pgBackRest -ssh-keygen -t ed25519 -N '' -f $DIR/fromcrd-key - -# base64 encoded the keys for the generation of the Kube secret, and place -# them into variables temporarily -PUBLIC_KEY_TEMP=$(cat $DIR/fromcrd-key.pub | base64) -PRIVATE_KEY_TEMP=$(cat $DIR/fromcrd-key | base64) - -export PUBLIC_KEY="${PUBLIC_KEY_TEMP//[$'\n']}" -export PRIVATE_KEY="${PRIVATE_KEY_TEMP//[$'\n']}" - -unset PUBLIC_KEY_TEMP -unset PRIVATE_KEY_TEMP - -# create the backrest-repo-config example file and substitute in the newly -# created keys -cat <<-EOF > $DIR/backrest-repo-config.yaml -apiVersion: v1 -data: - authorized_keys: ${PUBLIC_KEY} - id_ed25519: ${PRIVATE_KEY} - ssh_host_ed25519_key: ${PRIVATE_KEY} - config: SG9zdCAqClN0cmljdEhvc3RLZXlDaGVja2luZyBubwpJZGVudGl0eUZpbGUgL3RtcC9pZF9lZDI1NTE5ClBvcnQgMjAyMgpVc2VyIHBnYmFja3Jlc3QK - sshd_config: 
IwkkT3BlbkJTRDogc3NoZF9jb25maWcsdiAxLjEwMCAyMDE2LzA4LzE1IDEyOjMyOjA0IG5hZGR5IEV4cCAkCgojIFRoaXMgaXMgdGhlIHNzaGQgc2VydmVyIHN5c3RlbS13aWRlIGNvbmZpZ3VyYXRpb24gZmlsZS4gIFNlZQojIHNzaGRfY29uZmlnKDUpIGZvciBtb3JlIGluZm9ybWF0aW9uLgoKIyBUaGlzIHNzaGQgd2FzIGNvbXBpbGVkIHdpdGggUEFUSD0vdXNyL2xvY2FsL2JpbjovdXNyL2JpbgoKIyBUaGUgc3RyYXRlZ3kgdXNlZCBmb3Igb3B0aW9ucyBpbiB0aGUgZGVmYXVsdCBzc2hkX2NvbmZpZyBzaGlwcGVkIHdpdGgKIyBPcGVuU1NIIGlzIHRvIHNwZWNpZnkgb3B0aW9ucyB3aXRoIHRoZWlyIGRlZmF1bHQgdmFsdWUgd2hlcmUKIyBwb3NzaWJsZSwgYnV0IGxlYXZlIHRoZW0gY29tbWVudGVkLiAgVW5jb21tZW50ZWQgb3B0aW9ucyBvdmVycmlkZSB0aGUKIyBkZWZhdWx0IHZhbHVlLgoKIyBJZiB5b3Ugd2FudCB0byBjaGFuZ2UgdGhlIHBvcnQgb24gYSBTRUxpbnV4IHN5c3RlbSwgeW91IGhhdmUgdG8gdGVsbAojIFNFTGludXggYWJvdXQgdGhpcyBjaGFuZ2UuCiMgc2VtYW5hZ2UgcG9ydCAtYSAtdCBzc2hfcG9ydF90IC1wIHRjcCAjUE9SVE5VTUJFUgojClBvcnQgMjAyMgojQWRkcmVzc0ZhbWlseSBhbnkKI0xpc3RlbkFkZHJlc3MgMC4wLjAuMAojTGlzdGVuQWRkcmVzcyA6OgoKSG9zdEtleSAvc3NoZC9zc2hfaG9zdF9lZDI1NTE5X2tleQoKIyBDaXBoZXJzIGFuZCBrZXlpbmcKI1Jla2V5TGltaXQgZGVmYXVsdCBub25lCgojIExvZ2dpbmcKI1N5c2xvZ0ZhY2lsaXR5IEFVVEgKU3lzbG9nRmFjaWxpdHkgQVVUSFBSSVYKI0xvZ0xldmVsIElORk8KCiMgQXV0aGVudGljYXRpb246CgojTG9naW5HcmFjZVRpbWUgMm0KUGVybWl0Um9vdExvZ2luIG5vClN0cmljdE1vZGVzIG5vCiNNYXhBdXRoVHJpZXMgNgojTWF4U2Vzc2lvbnMgMTAKClB1YmtleUF1dGhlbnRpY2F0aW9uIHllcwoKIyBUaGUgZGVmYXVsdCBpcyB0byBjaGVjayBib3RoIC5zc2gvYXV0aG9yaXplZF9rZXlzIGFuZCAuc3NoL2F1dGhvcml6ZWRfa2V5czIKIyBidXQgdGhpcyBpcyBvdmVycmlkZGVuIHNvIGluc3RhbGxhdGlvbnMgd2lsbCBvbmx5IGNoZWNrIC5zc2gvYXV0aG9yaXplZF9rZXlzCkF1dGhvcml6ZWRLZXlzRmlsZQkvc3NoZC9hdXRob3JpemVkX2tleXMKCiNBdXRob3JpemVkUHJpbmNpcGFsc0ZpbGUgbm9uZQoKI0F1dGhvcml6ZWRLZXlzQ29tbWFuZCBub25lCiNBdXRob3JpemVkS2V5c0NvbW1hbmRVc2VyIG5vYm9keQoKIyBGb3IgdGhpcyB0byB3b3JrIHlvdSB3aWxsIGFsc28gbmVlZCBob3N0IGtleXMgaW4gL2V0Yy9zc2gvc3NoX2tub3duX2hvc3RzCiNIb3N0YmFzZWRBdXRoZW50aWNhdGlvbiBubwojIENoYW5nZSB0byB5ZXMgaWYgeW91IGRvbid0IHRydXN0IH4vLnNzaC9rbm93bl9ob3N0cyBmb3IKIyBIb3N0YmFzZWRBdXRoZW50aWNhdGlvbgojSWdub3JlVXNlcktub3duSG9zdHMgbm8KIyBEb24ndCByZWFkIHRoZSB1c2VyJ3Mgfi8ucmhvc3RzIGFuZCB+Ly5zaG9zdHMgZmlsZXMKI0lnbm9yZVJob3N0cyB5ZXMKCiMgVG8gZGlzYWJsZSB0dW5uZWxlZCBjbGVhciB0ZXh0IHBhc3N3b3JkcywgY2hhbmdlIHRvIG5vIGhlcmUhCiNQYXNzd29yZEF1dGhlbnRpY2F0aW9uIHllcwojUGVybWl0RW1wdHlQYXNzd29yZHMgbm8KUGFzc3dvcmRBdXRoZW50aWNhdGlvbiBubwoKIyBDaGFuZ2UgdG8gbm8gdG8gZGlzYWJsZSBzL2tleSBwYXNzd29yZHMKQ2hhbGxlbmdlUmVzcG9uc2VBdXRoZW50aWNhdGlvbiB5ZXMKI0NoYWxsZW5nZVJlc3BvbnNlQXV0aGVudGljYXRpb24gbm8KCiMgS2VyYmVyb3Mgb3B0aW9ucwojS2VyYmVyb3NBdXRoZW50aWNhdGlvbiBubwojS2VyYmVyb3NPckxvY2FsUGFzc3dkIHllcwojS2VyYmVyb3NUaWNrZXRDbGVhbnVwIHllcwojS2VyYmVyb3NHZXRBRlNUb2tlbiBubwojS2VyYmVyb3NVc2VLdXNlcm9rIHllcwoKIyBHU1NBUEkgb3B0aW9ucwojR1NTQVBJQXV0aGVudGljYXRpb24geWVzCiNHU1NBUElDbGVhbnVwQ3JlZGVudGlhbHMgbm8KI0dTU0FQSVN0cmljdEFjY2VwdG9yQ2hlY2sgeWVzCiNHU1NBUElLZXlFeGNoYW5nZSBubwojR1NTQVBJRW5hYmxlazV1c2VycyBubwoKIyBTZXQgdGhpcyB0byAneWVzJyB0byBlbmFibGUgUEFNIGF1dGhlbnRpY2F0aW9uLCBhY2NvdW50IHByb2Nlc3NpbmcsCiMgYW5kIHNlc3Npb24gcHJvY2Vzc2luZy4gSWYgdGhpcyBpcyBlbmFibGVkLCBQQU0gYXV0aGVudGljYXRpb24gd2lsbAojIGJlIGFsbG93ZWQgdGhyb3VnaCB0aGUgQ2hhbGxlbmdlUmVzcG9uc2VBdXRoZW50aWNhdGlvbiBhbmQKIyBQYXNzd29yZEF1dGhlbnRpY2F0aW9uLiAgRGVwZW5kaW5nIG9uIHlvdXIgUEFNIGNvbmZpZ3VyYXRpb24sCiMgUEFNIGF1dGhlbnRpY2F0aW9uIHZpYSBDaGFsbGVuZ2VSZXNwb25zZUF1dGhlbnRpY2F0aW9uIG1heSBieXBhc3MKIyB0aGUgc2V0dGluZyBvZiAiUGVybWl0Um9vdExvZ2luIHdpdGhvdXQtcGFzc3dvcmQiLgojIElmIHlvdSBqdXN0IHdhbnQgdGhlIFBBTSBhY2NvdW50IGFuZCBzZXNzaW9uIGNoZWNrcyB0byBydW4gd2l0aG91dAojIFBBTSBhdXRoZW50aWNhdGlvbiwgdGhlbiBlbmFibGUgdGhpcyBidXQgc2V0IFBhc3N3b3JkQXV0aGVudGljYXRpb24KIyBhbmQgQ2hhbGxlbmdlUmVzcG9uc2VBdXRoZW50aWNhdGlvbiB0byA
nbm8nLgojIFdBUk5JTkc6ICdVc2VQQU0gbm8nIGlzIG5vdCBzdXBwb3J0ZWQgaW4gUmVkIEhhdCBFbnRlcnByaXNlIExpbnV4IGFuZCBtYXkgY2F1c2Ugc2V2ZXJhbAojIHByb2JsZW1zLgpVc2VQQU0geWVzIAoKI0FsbG93QWdlbnRGb3J3YXJkaW5nIHllcwojQWxsb3dUY3BGb3J3YXJkaW5nIHllcwojR2F0ZXdheVBvcnRzIG5vClgxMUZvcndhcmRpbmcgeWVzCiNYMTFEaXNwbGF5T2Zmc2V0IDEwCiNYMTFVc2VMb2NhbGhvc3QgeWVzCiNQZXJtaXRUVFkgeWVzCiNQcmludE1vdGQgeWVzCiNQcmludExhc3RMb2cgeWVzCiNUQ1BLZWVwQWxpdmUgeWVzCiNVc2VMb2dpbiBubwpVc2VQcml2aWxlZ2VTZXBhcmF0aW9uIG5vCiNQZXJtaXRVc2VyRW52aXJvbm1lbnQgbm8KI0NvbXByZXNzaW9uIGRlbGF5ZWQKI0NsaWVudEFsaXZlSW50ZXJ2YWwgMAojQ2xpZW50QWxpdmVDb3VudE1heCAzCiNTaG93UGF0Y2hMZXZlbCBubwojVXNlRE5TIHllcwojUGlkRmlsZSAvdmFyL3J1bi9zc2hkLnBpZAojTWF4U3RhcnR1cHMgMTA6MzA6MTAwCiNQZXJtaXRUdW5uZWwgbm8KI0Nocm9vdERpcmVjdG9yeSBub25lCiNWZXJzaW9uQWRkZW5kdW0gbm9uZQoKIyBubyBkZWZhdWx0IGJhbm5lciBwYXRoCiNCYW5uZXIgbm9uZQoKIyBBY2NlcHQgbG9jYWxlLXJlbGF0ZWQgZW52aXJvbm1lbnQgdmFyaWFibGVzCkFjY2VwdEVudiBMQU5HIExDX0NUWVBFIExDX05VTUVSSUMgTENfVElNRSBMQ19DT0xMQVRFIExDX01PTkVUQVJZIExDX01FU1NBR0VTCkFjY2VwdEVudiBMQ19QQVBFUiBMQ19OQU1FIExDX0FERFJFU1MgTENfVEVMRVBIT05FIExDX01FQVNVUkVNRU5UCkFjY2VwdEVudiBMQ19JREVOVElGSUNBVElPTiBMQ19BTEwgTEFOR1VBR0UKQWNjZXB0RW52IFhNT0RJRklFUlMKCiMgb3ZlcnJpZGUgZGVmYXVsdCBvZiBubyBzdWJzeXN0ZW1zClN1YnN5c3RlbQlzZnRwCS91c3IvbGliZXhlYy9vcGVuc3NoL3NmdHAtc2VydmVyCgojIEV4YW1wbGUgb2Ygb3ZlcnJpZGluZyBzZXR0aW5ncyBvbiBhIHBlci11c2VyIGJhc2lzCiNNYXRjaCBVc2VyIGFub25jdnMKIwlYMTFGb3J3YXJkaW5nIG5vCiMJQWxsb3dUY3BGb3J3YXJkaW5nIG5vCiMJUGVybWl0VFRZIG5vCiMJRm9yY2VDb21tYW5kIGN2cyBzZXJ2ZXI= -kind: Secret -metadata: - labels: - pg-cluster: fromcrd - pgo-backrest-repo: "true" - name: fromcrd-backrest-repo-config - namespace: ${NS} -type: Opaque -EOF - -# unset the *_KEY environmental variables -unset PUBLIC_KEY -unset PRIVATE_KEY - -# create the required postgres credentials for the fromcrd cluster -$PGO_CMD -n $NS create -f $DIR/postgres-secret.yaml -$PGO_CMD -n $NS create -f $DIR/primaryuser-secret.yaml -$PGO_CMD -n $NS create -f $DIR/testuser-secret.yaml -$PGO_CMD -n $NS create -f $DIR/backrest-repo-config.yaml - -# create the pgcluster CRD for the fromcrd cluster -$PGO_CMD -n $NS create -f $DIR/fromcrd.json diff --git a/examples/create-by-resource/testuser-secret.yaml b/examples/create-by-resource/testuser-secret.yaml deleted file mode 100644 index ae457e8d75..0000000000 --- a/examples/create-by-resource/testuser-secret.yaml +++ /dev/null @@ -1,12 +0,0 @@ -apiVersion: v1 -data: - password: UE5xOEVFVTBxTQ== - username: dGVzdHVzZXI= -kind: Secret -metadata: - labels: - pg-cluster: fromcrd - name: fromcrd-testuser-secret - namespace: pgouser1 - resourceVersion: "143163" -type: Opaque diff --git a/examples/custom-config/create.sh b/examples/custom-config/create.sh deleted file mode 100755 index b0599f1b37..0000000000 --- a/examples/custom-config/create.sh +++ /dev/null @@ -1,52 +0,0 @@ -#!/bin/bash - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -RED="\033[0;31m" -GREEN="\033[0;32m" -RESET="\033[0m" - -function echo_err() { - echo -e "${RED?}$(date) ERROR: ${1?}${RESET?}" -} - -function echo_info() { - echo -e "${GREEN?}$(date) INFO: ${1?}${RESET?}" -} - - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -#Error if PGO_CMD not set -if [[ -z ${PGO_CMD} ]] -then - echo_err "PGO_CMD is not set." -fi - -#Error is PGO_NAMESPACE not set -if [[ -z ${PGO_NAMESPACE} ]] -then - echo_err "PGO_NAMESPACE is not set." -fi - -# If both PGO_CMD and PGO_NAMESPACE are set, config map can be created. -if [[ ! -z ${PGO_CMD} ]] && [[ ! -z ${PGO_NAMESPACE} ]] -then - - echo_info "PGO_NAMESPACE=${PGO_NAMESPACE}" - - $PGO_CMD delete configmap pgo-custom-pg-config -n ${PGO_NAMESPACE} - - $PGO_CMD create configmap pgo-custom-pg-config --from-file=$DIR -n ${PGO_NAMESPACE} -fi diff --git a/examples/custom-config/postgres-ha.yaml b/examples/custom-config/postgres-ha.yaml deleted file mode 100644 index 0f4cd6fbab..0000000000 --- a/examples/custom-config/postgres-ha.yaml +++ /dev/null @@ -1,23 +0,0 @@ ---- -bootstrap: - dcs: - postgresql: - parameters: - logging_collector: on - log_directory: pglogs - log_min_duration_statement: 0 - log_statement: none - max_wal_senders: 6 - shared_preload_libraries: pgaudit.so - shared_buffers: 256MB - temp_buffers: 10MB - work_mem: 5MB -postgresql: - pg_hba: - - local all postgres peer - - local all crunchyadm peer - - host replication primaryuser 0.0.0.0/0 md5 - - host all primaryuser 0.0.0.0/0 reject - - host all postgres 0.0.0.0/0 md5 - - host all testuser1 0.0.0.0/0 md5 - - host all testuser2 0.0.0.0/0 md5 diff --git a/examples/custom-config/setup.sql b/examples/custom-config/setup.sql deleted file mode 100644 index 206005eb8a..0000000000 --- a/examples/custom-config/setup.sql +++ /dev/null @@ -1,52 +0,0 @@ -/* - * Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - ---- System Setup -SET application_name="container_setup"; - -CREATE EXTENSION IF NOT EXISTS pg_stat_statements; -CREATE EXTENSION IF NOT EXISTS pgaudit; - -CREATE USER "${PGHA_USER}" LOGIN; -ALTER USER "${PGHA_USER}" PASSWORD $$${PGHA_USER_PASSWORD}$$; - -CREATE USER testuser2 LOGIN; -ALTER USER testuser2 PASSWORD 'customconfpass'; - -CREATE DATABASE "${PGHA_DATABASE}"; -GRANT ALL PRIVILEGES ON DATABASE "${PGHA_DATABASE}" TO testuser2; - ---- PGHA_DATABASE Setup - -\c "${PGHA_DATABASE}" - -CREATE EXTENSION IF NOT EXISTS pg_stat_statements; -CREATE EXTENSION IF NOT EXISTS pgaudit; - -CREATE SCHEMA IF NOT EXISTS AUTHORIZATION "${PGHA_USER}"; - -/* The following has been customized for the custom-config example */ - -SET SESSION AUTHORIZATION testuser2; - -CREATE TABLE custom_config_table ( - KEY VARCHAR(30) PRIMARY KEY, - VALUE VARCHAR(50) NOT NULL, - UPDATEDT TIMESTAMP NOT NULL -); - -INSERT INTO custom_config_table (KEY, VALUE, UPDATEDT) VALUES ('CPU', '256', now()); - -GRANT ALL ON custom_config_table TO testuser2; diff --git a/examples/envs.sh b/examples/envs.sh deleted file mode 100644 index 86bde3cde3..0000000000 --- a/examples/envs.sh +++ /dev/null @@ -1,77 +0,0 @@ -export GOPATH=$HOME/odev -export GOBIN=$GOPATH/bin -export PATH=$PATH:$GOBIN -# NAMESPACE is the list of namespaces the Operator will watch -export NAMESPACE=pgouser1,pgouser2 - -# PGO_INSTALLATION_NAME is the unique name given to this Operator install -# this supports multi-deployments of the Operator on the same Kube cluster -export PGO_INSTALLATION_NAME=devtest - -# PGO_OPERATOR_NAMESPACE is the namespace the Operator is deployed into -export PGO_OPERATOR_NAMESPACE=pgo - -# PGO_CMD values are either kubectl or oc, use oc if Openshift -export PGO_CMD=kubectl - -# the directory location of the Operator scripts -export PGOROOT=$GOPATH/src/github.com/crunchydata/postgres-operator - -# the directory location of the Json Config Templates -export PGO_CONF_DIR=$PGOROOT/installers/ansible/roles/pgo-operator/files - -# the version of the Operator you run is set by these vars -export PGO_IMAGE_PREFIX=registry.developers.crunchydata.com/crunchydata -export PGO_BASEOS=centos7 -export PGO_VERSION=4.5.0 -export PGO_IMAGE_TAG=$PGO_BASEOS-$PGO_VERSION - -# for setting the pgo apiserver port, disabling TLS or not verifying TLS -# if TLS is disabled, ensure setip() function port is updated and http is used in place of https -export PGO_APISERVER_PORT=8443 # Defaults: 8443 for TLS enabled, 8080 for TLS disabled -export DISABLE_TLS=false -export TLS_NO_VERIFY=false -export TLS_CA_TRUST="" -export ADD_OS_TRUSTSTORE=false -export NOAUTH_ROUTES="" - -# Disable default inclusion of OS trust in PGO clients -export EXCLUDE_OS_TRUST=false - -# for disabling the Operator eventing -export DISABLE_EVENTING=false - -# for the pgo CLI to authenticate with using TLS -export PGO_CA_CERT=$PGOROOT/conf/postgres-operator/server.crt -export PGO_CLIENT_CERT=$PGOROOT/conf/postgres-operator/server.crt -export PGO_CLIENT_KEY=$PGOROOT/conf/postgres-operator/server.key - -# During a Bash install determines which namespace permissions are assigned to the PostgreSQL -# Operator using a ClusterRole. Options: `dynamic`, `readonly`, and `disabled` -export PGO_NAMESPACE_MODE=dynamic - -# During a Bash install determines whether or not the PostgreSQL Operator will granted the -# permissions needed to reconcile RBAC within targeted namespaces. 
-export PGO_RECONCILE_RBAC=true - -# common bash functions for working with the Operator -setip() -{ - export PGO_APISERVER_URL=https://`$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" get service postgres-operator -o=jsonpath="{.spec.clusterIP}"`:8443 -} - -alog() { -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" logs `$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" get pod --selector=name=postgres-operator -o jsonpath="{.items[0].metadata.name}"` -c apiserver -} - -olog () { -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" logs `$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" get pod --selector=name=postgres-operator -o jsonpath="{.items[0].metadata.name}"` -c operator -} - -slog () { -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" logs `$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" get pod --selector=name=postgres-operator -o jsonpath="{.items[0].metadata.name}"` -c scheduler -} - -elog () { -$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" logs `$PGO_CMD -n "$PGO_OPERATOR_NAMESPACE" get pod --selector=name=postgres-operator -o jsonpath="{.items[0].metadata.name}"` -c event -} diff --git a/examples/pgadmin/kustomization.yaml b/examples/pgadmin/kustomization.yaml new file mode 100644 index 0000000000..600eb8b82d --- /dev/null +++ b/examples/pgadmin/kustomization.yaml @@ -0,0 +1,7 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namespace: postgres-operator + +resources: +- pgadmin.yaml diff --git a/examples/pgadmin/pgadmin.yaml b/examples/pgadmin/pgadmin.yaml new file mode 100644 index 0000000000..b87856aa86 --- /dev/null +++ b/examples/pgadmin/pgadmin.yaml @@ -0,0 +1,19 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: rhino +spec: + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + serverGroups: + - name: supply + # An empty selector selects all postgresclusters in the Namespace + postgresClusterSelector: {} + - name: demand + postgresClusterSelector: + matchLabels: + postgres-operator.crunchydata.com/cluster: hippo diff --git a/examples/pgo-bash-completion b/examples/pgo-bash-completion deleted file mode 100644 index 70271ccf0c..0000000000 --- a/examples/pgo-bash-completion +++ /dev/null @@ -1,2150 +0,0 @@ -# bash completion for pgo -*- shell-script -*- - -__pgo_debug() -{ - if [[ -n ${BASH_COMP_DEBUG_FILE} ]]; then - echo "$*" >> "${BASH_COMP_DEBUG_FILE}" - fi -} - -# Homebrew on Macs have version 1.3 of bash-completion which doesn't include -# _init_completion. This is a very minimal version of that function. 
-__pgo_init_completion() -{ - COMPREPLY=() - _get_comp_words_by_ref "$@" cur prev words cword -} - -__pgo_index_of_word() -{ - local w word=$1 - shift - index=0 - for w in "$@"; do - [[ $w = "$word" ]] && return - index=$((index+1)) - done - index=-1 -} - -__pgo_contains_word() -{ - local w word=$1; shift - for w in "$@"; do - [[ $w = "$word" ]] && return - done - return 1 -} - -__pgo_handle_reply() -{ - __pgo_debug "${FUNCNAME[0]}" - case $cur in - -*) - if [[ $(type -t compopt) = "builtin" ]]; then - compopt -o nospace - fi - local allflags - if [ ${#must_have_one_flag[@]} -ne 0 ]; then - allflags=("${must_have_one_flag[@]}") - else - allflags=("${flags[*]} ${two_word_flags[*]}") - fi - COMPREPLY=( $(compgen -W "${allflags[*]}" -- "$cur") ) - if [[ $(type -t compopt) = "builtin" ]]; then - [[ "${COMPREPLY[0]}" == *= ]] || compopt +o nospace - fi - - # complete after --flag=abc - if [[ $cur == *=* ]]; then - if [[ $(type -t compopt) = "builtin" ]]; then - compopt +o nospace - fi - - local index flag - flag="${cur%=*}" - __pgo_index_of_word "${flag}" "${flags_with_completion[@]}" - COMPREPLY=() - if [[ ${index} -ge 0 ]]; then - PREFIX="" - cur="${cur#*=}" - ${flags_completion[${index}]} - if [ -n "${ZSH_VERSION}" ]; then - # zsh completion needs --flag= prefix - eval "COMPREPLY=( \"\${COMPREPLY[@]/#/${flag}=}\" )" - fi - fi - fi - return 0; - ;; - esac - - # check if we are handling a flag with special work handling - local index - __pgo_index_of_word "${prev}" "${flags_with_completion[@]}" - if [[ ${index} -ge 0 ]]; then - ${flags_completion[${index}]} - return - fi - - # we are parsing a flag and don't have a special handler, no completion - if [[ ${cur} != "${words[cword]}" ]]; then - return - fi - - local completions - completions=("${commands[@]}") - if [[ ${#must_have_one_noun[@]} -ne 0 ]]; then - completions=("${must_have_one_noun[@]}") - fi - if [[ ${#must_have_one_flag[@]} -ne 0 ]]; then - completions+=("${must_have_one_flag[@]}") - fi - COMPREPLY=( $(compgen -W "${completions[*]}" -- "$cur") ) - - if [[ ${#COMPREPLY[@]} -eq 0 && ${#noun_aliases[@]} -gt 0 && ${#must_have_one_noun[@]} -ne 0 ]]; then - COMPREPLY=( $(compgen -W "${noun_aliases[*]}" -- "$cur") ) - fi - - if [[ ${#COMPREPLY[@]} -eq 0 ]]; then - declare -F __custom_func >/dev/null && __custom_func - fi - - # available in bash-completion >= 2, not always present on macOS - if declare -F __ltrim_colon_completions >/dev/null; then - __ltrim_colon_completions "$cur" - fi - - # If there is only 1 completion and it is a flag with an = it will be completed - # but we don't want a space after the = - if [[ "${#COMPREPLY[@]}" -eq "1" ]] && [[ $(type -t compopt) = "builtin" ]] && [[ "${COMPREPLY[0]}" == --*= ]]; then - compopt -o nospace - fi -} - -# The arguments should be in the form "ext1|ext2|extn" -__pgo_handle_filename_extension_flag() -{ - local ext="$1" - _filedir "@(${ext})" -} - -__pgo_handle_subdirs_in_dir_flag() -{ - local dir="$1" - pushd "${dir}" >/dev/null 2>&1 && _filedir -d && popd >/dev/null 2>&1 -} - -__pgo_handle_flag() -{ - __pgo_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}" - - # if a command required a flag, and we found it, unset must_have_one_flag() - local flagname=${words[c]} - local flagvalue - # if the word contained an = - if [[ ${words[c]} == *"="* ]]; then - flagvalue=${flagname#*=} # take in as flagvalue after the = - flagname=${flagname%=*} # strip everything after the = - flagname="${flagname}=" # but put the = back - fi - __pgo_debug "${FUNCNAME[0]}: looking for ${flagname}" - if 
__pgo_contains_word "${flagname}" "${must_have_one_flag[@]}"; then - must_have_one_flag=() - fi - - # if you set a flag which only applies to this command, don't show subcommands - if __pgo_contains_word "${flagname}" "${local_nonpersistent_flags[@]}"; then - commands=() - fi - - # keep flag value with flagname as flaghash - # flaghash variable is an associative array which is only supported in bash > 3. - if [[ -z "${BASH_VERSION}" || "${BASH_VERSINFO[0]}" -gt 3 ]]; then - if [ -n "${flagvalue}" ] ; then - flaghash[${flagname}]=${flagvalue} - elif [ -n "${words[ $((c+1)) ]}" ] ; then - flaghash[${flagname}]=${words[ $((c+1)) ]} - else - flaghash[${flagname}]="true" # pad "true" for bool flag - fi - fi - - # skip the argument to a two word flag - if __pgo_contains_word "${words[c]}" "${two_word_flags[@]}"; then - c=$((c+1)) - # if we are looking for a flags value, don't show commands - if [[ $c -eq $cword ]]; then - commands=() - fi - fi - - c=$((c+1)) - -} - -__pgo_handle_noun() -{ - __pgo_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}" - - if __pgo_contains_word "${words[c]}" "${must_have_one_noun[@]}"; then - must_have_one_noun=() - elif __pgo_contains_word "${words[c]}" "${noun_aliases[@]}"; then - must_have_one_noun=() - fi - - nouns+=("${words[c]}") - c=$((c+1)) -} - -__pgo_handle_command() -{ - __pgo_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}" - - local next_command - if [[ -n ${last_command} ]]; then - next_command="_${last_command}_${words[c]//:/__}" - else - if [[ $c -eq 0 ]]; then - next_command="_pgo_root_command" - else - next_command="_${words[c]//:/__}" - fi - fi - c=$((c+1)) - __pgo_debug "${FUNCNAME[0]}: looking for ${next_command}" - declare -F "$next_command" >/dev/null && $next_command -} - -__pgo_handle_word() -{ - if [[ $c -ge $cword ]]; then - __pgo_handle_reply - return - fi - __pgo_debug "${FUNCNAME[0]}: c is $c words[c] is ${words[c]}" - if [[ "${words[c]}" == -* ]]; then - __pgo_handle_flag - elif __pgo_contains_word "${words[c]}" "${commands[@]}"; then - __pgo_handle_command - elif [[ $c -eq 0 ]]; then - __pgo_handle_command - elif __pgo_contains_word "${words[c]}" "${command_aliases[@]}"; then - # aliashash variable is an associative array which is only supported in bash > 3. 
- if [[ -z "${BASH_VERSION}" || "${BASH_VERSINFO[0]}" -gt 3 ]]; then - words[c]=${aliashash[${words[c]}]} - __pgo_handle_command - else - __pgo_handle_noun - fi - else - __pgo_handle_noun - fi - __pgo_handle_word -} - -_pgo_apply() -{ - last_command="pgo_apply" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--dry-run") - local_nonpersistent_flags+=("--dry-run") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_backup() -{ - last_command="pgo_backup" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--backup-opts=") - local_nonpersistent_flags+=("--backup-opts=") - flags+=("--backup-type=") - local_nonpersistent_flags+=("--backup-type=") - flags+=("--pgbackrest-storage-type=") - local_nonpersistent_flags+=("--pgbackrest-storage-type=") - flags+=("--pvc-name=") - local_nonpersistent_flags+=("--pvc-name=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--storage-config=") - local_nonpersistent_flags+=("--storage-config=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_cat() -{ - last_command="pgo_cat" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_create_cluster() -{ - last_command="pgo_create_cluster" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--autofail") - local_nonpersistent_flags+=("--autofail") - flags+=("--ccp-image=") - local_nonpersistent_flags+=("--ccp-image=") - flags+=("--ccp-image-tag=") - two_word_flags+=("-c") - local_nonpersistent_flags+=("--ccp-image-tag=") - flags+=("--custom-config=") - local_nonpersistent_flags+=("--custom-config=") - flags+=("--labels=") - two_word_flags+=("-l") - local_nonpersistent_flags+=("--labels=") - flags+=("--metrics") - local_nonpersistent_flags+=("--metrics") - flags+=("--node-label=") - local_nonpersistent_flags+=("--node-label=") - flags+=("--password=") - two_word_flags+=("-w") - local_nonpersistent_flags+=("--password=") - flags+=("--pgbackrest=") - local_nonpersistent_flags+=("--pgbackrest=") - flags+=("--pgbackrest-storage-type=") - local_nonpersistent_flags+=("--pgbackrest-storage-type=") - flags+=("--pgbadger") - local_nonpersistent_flags+=("--pgbadger") - flags+=("--pgbouncer") - local_nonpersistent_flags+=("--pgbouncer") - flags+=("--pgbouncer-pass=") - local_nonpersistent_flags+=("--pgbouncer-pass=") - flags+=("--policies=") 
- two_word_flags+=("-z") - local_nonpersistent_flags+=("--policies=") - flags+=("--replica-count=") - local_nonpersistent_flags+=("--replica-count=") - flags+=("--replica-storage-config=") - local_nonpersistent_flags+=("--replica-storage-config=") - flags+=("--secret-from=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--secret-from=") - two_word_flags+=("-e") - flags+=("--service-type=") - local_nonpersistent_flags+=("--service-type=") - flags+=("--storage-config=") - local_nonpersistent_flags+=("--storage-config=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_create_namespace() -{ - last_command="pgo_create_namespace" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_create_pgbouncer() -{ - last_command="pgo_create_pgbouncer" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--pgbouncer-pass=") - local_nonpersistent_flags+=("--pgbouncer-pass=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_create_pgorole() -{ - last_command="pgo_create_pgorole" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--permissions=") - local_nonpersistent_flags+=("--permissions=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_create_pgouser() -{ - last_command="pgo_create_pgouser" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all-namespaces") - local_nonpersistent_flags+=("--all-namespaces") - flags+=("--pgouser-namespaces=") - local_nonpersistent_flags+=("--pgouser-namespaces=") - flags+=("--pgouser-password=") - local_nonpersistent_flags+=("--pgouser-password=") - flags+=("--pgouser-roles=") - local_nonpersistent_flags+=("--pgouser-roles=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_create_policy() -{ - last_command="pgo_create_policy" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - 
flags+=("--in-file=") - two_word_flags+=("-i") - local_nonpersistent_flags+=("--in-file=") - flags+=("--url=") - two_word_flags+=("-u") - local_nonpersistent_flags+=("--url=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_create_schedule() -{ - last_command="pgo_create_schedule" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--ccp-image-tag=") - two_word_flags+=("-c") - local_nonpersistent_flags+=("--ccp-image-tag=") - flags+=("--database=") - local_nonpersistent_flags+=("--database=") - flags+=("--pgbackrest-backup-type=") - local_nonpersistent_flags+=("--pgbackrest-backup-type=") - flags+=("--pgbackrest-storage-type=") - local_nonpersistent_flags+=("--pgbackrest-storage-type=") - flags+=("--policy=") - local_nonpersistent_flags+=("--policy=") - flags+=("--pvc-name=") - local_nonpersistent_flags+=("--pvc-name=") - flags+=("--schedule=") - local_nonpersistent_flags+=("--schedule=") - flags+=("--schedule-opts=") - local_nonpersistent_flags+=("--schedule-opts=") - flags+=("--schedule-type=") - local_nonpersistent_flags+=("--schedule-type=") - flags+=("--secret=") - local_nonpersistent_flags+=("--secret=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_create_user() -{ - last_command="pgo_create_user" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--managed") - local_nonpersistent_flags+=("--managed") - flags+=("--password=") - local_nonpersistent_flags+=("--password=") - flags+=("--password-length=") - local_nonpersistent_flags+=("--password-length=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--username=") - local_nonpersistent_flags+=("--username=") - flags+=("--valid-days=") - local_nonpersistent_flags+=("--valid-days=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_create() -{ - last_command="pgo_create" - - command_aliases=() - - commands=() - commands+=("cluster") - commands+=("namespace") - commands+=("pgbouncer") - commands+=("pgorole") - commands+=("pgouser") - commands+=("policy") - commands+=("schedule") - commands+=("user") - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete_backup() -{ - last_command="pgo_delete_backup" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - 
local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete_cluster() -{ - last_command="pgo_delete_cluster" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--delete-backups") - flags+=("-b") - local_nonpersistent_flags+=("--delete-backups") - flags+=("--delete-data") - flags+=("-d") - local_nonpersistent_flags+=("--delete-data") - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete_label() -{ - last_command="pgo_delete_label" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--label=") - local_nonpersistent_flags+=("--label=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete_namespace() -{ - last_command="pgo_delete_namespace" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete_pgbouncer() -{ - last_command="pgo_delete_pgbouncer" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete_pgorole() -{ - last_command="pgo_delete_pgorole" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - 
must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete_pgouser() -{ - last_command="pgo_delete_pgouser" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete_policy() -{ - last_command="pgo_delete_policy" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete_schedule() -{ - last_command="pgo_delete_schedule" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--schedule-name=") - local_nonpersistent_flags+=("--schedule-name=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete_user() -{ - last_command="pgo_delete_user" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--username=") - local_nonpersistent_flags+=("--username=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_delete() -{ - last_command="pgo_delete" - - command_aliases=() - - commands=() - commands+=("backup") - commands+=("cluster") - commands+=("label") - commands+=("namespace") - commands+=("pgbouncer") - commands+=("pgorole") - commands+=("pgouser") - commands+=("policy") - commands+=("schedule") - commands+=("user") - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_df() -{ - last_command="pgo_df" - - command_aliases=() - - 
commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_failover() -{ - last_command="pgo_failover" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--autofail-replace-replica=") - local_nonpersistent_flags+=("--autofail-replace-replica=") - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--query") - local_nonpersistent_flags+=("--query") - flags+=("--target=") - local_nonpersistent_flags+=("--target=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_label() -{ - last_command="pgo_label" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--dry-run") - local_nonpersistent_flags+=("--dry-run") - flags+=("--label=") - local_nonpersistent_flags+=("--label=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_ls() -{ - last_command="pgo_ls" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_reload() -{ - last_command="pgo_reload" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_restore() -{ - last_command="pgo_restore" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--backup-opts=") - local_nonpersistent_flags+=("--backup-opts=") - flags+=("--backup-path=") - local_nonpersistent_flags+=("--backup-path=") - flags+=("--backup-pvc=") - local_nonpersistent_flags+=("--backup-pvc=") - flags+=("--backup-type=") - local_nonpersistent_flags+=("--backup-type=") - 
flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--node-label=") - local_nonpersistent_flags+=("--node-label=") - flags+=("--pgbackrest-storage-type=") - local_nonpersistent_flags+=("--pgbackrest-storage-type=") - flags+=("--pitr-target=") - local_nonpersistent_flags+=("--pitr-target=") - flags+=("--restore-to-pvc=") - local_nonpersistent_flags+=("--restore-to-pvc=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_scale() -{ - last_command="pgo_scale" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--ccp-image-tag=") - local_nonpersistent_flags+=("--ccp-image-tag=") - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--node-label=") - local_nonpersistent_flags+=("--node-label=") - flags+=("--replica-count=") - local_nonpersistent_flags+=("--replica-count=") - flags+=("--service-type=") - local_nonpersistent_flags+=("--service-type=") - flags+=("--storage-config=") - local_nonpersistent_flags+=("--storage-config=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_scaledown() -{ - last_command="pgo_scaledown" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--delete-data") - flags+=("-d") - local_nonpersistent_flags+=("--delete-data") - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--query") - local_nonpersistent_flags+=("--query") - flags+=("--target=") - local_nonpersistent_flags+=("--target=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_backup() -{ - last_command="pgo_show_backup" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--backup-type=") - local_nonpersistent_flags+=("--backup-type=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_cluster() -{ - last_command="pgo_show_cluster" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--ccp-image-tag=") - local_nonpersistent_flags+=("--ccp-image-tag=") - flags+=("--output=") - two_word_flags+=("-o") - local_nonpersistent_flags+=("--output=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - 
flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_config() -{ - last_command="pgo_show_config" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_namespace() -{ - last_command="pgo_show_namespace" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_pgorole() -{ - last_command="pgo_show_pgorole" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_pgouser() -{ - last_command="pgo_show_pgouser" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_policy() -{ - last_command="pgo_show_policy" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_pvc() -{ - last_command="pgo_show_pvc" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--node-label=") - local_nonpersistent_flags+=("--node-label=") - flags+=("--pvc-root=") - local_nonpersistent_flags+=("--pvc-root=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_schedule() -{ - last_command="pgo_show_schedule" - - command_aliases=() - - commands=() 
- - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--schedule-name=") - local_nonpersistent_flags+=("--schedule-name=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_user() -{ - last_command="pgo_show_user" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--expired=") - local_nonpersistent_flags+=("--expired=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show_workflow() -{ - last_command="pgo_show_workflow" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_show() -{ - last_command="pgo_show" - - command_aliases=() - - commands=() - commands+=("backup") - commands+=("cluster") - commands+=("config") - commands+=("namespace") - commands+=("pgorole") - commands+=("pgouser") - commands+=("policy") - commands+=("pvc") - commands+=("schedule") - commands+=("user") - commands+=("workflow") - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_status() -{ - last_command="pgo_status" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--output=") - two_word_flags+=("-o") - local_nonpersistent_flags+=("--output=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_test() -{ - last_command="pgo_test" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--output=") - two_word_flags+=("-o") - local_nonpersistent_flags+=("--output=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - 
flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_update_cluster() -{ - last_command="pgo_update_cluster" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--autofail") - local_nonpersistent_flags+=("--autofail") - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_update_namespace() -{ - last_command="pgo_update_namespace" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_update_pgorole() -{ - last_command="pgo_update_pgorole" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--permissions=") - local_nonpersistent_flags+=("--permissions=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_update_pgouser() -{ - last_command="pgo_update_pgouser" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all-namespaces") - local_nonpersistent_flags+=("--all-namespaces") - flags+=("--no-prompt") - local_nonpersistent_flags+=("--no-prompt") - flags+=("--pgouser-namespaces=") - local_nonpersistent_flags+=("--pgouser-namespaces=") - flags+=("--pgouser-password=") - local_nonpersistent_flags+=("--pgouser-password=") - flags+=("--pgouser-roles=") - local_nonpersistent_flags+=("--pgouser-roles=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_update_user() -{ - last_command="pgo_update_user" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--all") - local_nonpersistent_flags+=("--all") - flags+=("--expire-user") - local_nonpersistent_flags+=("--expire-user") - flags+=("--expired=") - local_nonpersistent_flags+=("--expired=") - flags+=("--password=") - local_nonpersistent_flags+=("--password=") - flags+=("--password-length=") - 
local_nonpersistent_flags+=("--password-length=") - flags+=("--selector=") - two_word_flags+=("-s") - local_nonpersistent_flags+=("--selector=") - flags+=("--username=") - local_nonpersistent_flags+=("--username=") - flags+=("--valid-days=") - local_nonpersistent_flags+=("--valid-days=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_update() -{ - last_command="pgo_update" - - command_aliases=() - - commands=() - commands+=("cluster") - commands+=("namespace") - commands+=("pgorole") - commands+=("pgouser") - commands+=("user") - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_upgrade() -{ - last_command="pgo_upgrade" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--ccp-image-tag=") - local_nonpersistent_flags+=("--ccp-image-tag=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_version() -{ - last_command="pgo_version" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--help") - flags+=("-h") - local_nonpersistent_flags+=("--help") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_watch() -{ - last_command="pgo_watch" - - command_aliases=() - - commands=() - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--pgo-event-address=") - two_word_flags+=("-a") - local_nonpersistent_flags+=("--pgo-event-address=") - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") - two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -_pgo_root_command() -{ - last_command="pgo" - - command_aliases=() - - commands=() - commands+=("apply") - commands+=("backup") - commands+=("cat") - commands+=("create") - commands+=("delete") - commands+=("df") - commands+=("failover") - commands+=("label") - commands+=("load") - commands+=("ls") - commands+=("reload") - commands+=("restore") - commands+=("scale") - commands+=("scaledown") - commands+=("show") - commands+=("status") - commands+=("test") - commands+=("update") - commands+=("upgrade") - commands+=("version") - commands+=("watch") - - flags=() - two_word_flags=() - local_nonpersistent_flags=() - flags_with_completion=() - flags_completion=() - - flags+=("--apiserver-url=") - flags+=("--debug") - flags+=("--namespace=") 
- two_word_flags+=("-n") - flags+=("--pgo-ca-cert=") - flags+=("--pgo-client-cert=") - flags+=("--pgo-client-key=") - - must_have_one_flag=() - must_have_one_noun=() - noun_aliases=() -} - -__start_pgo() -{ - local cur prev words cword - declare -A flaghash 2>/dev/null || : - declare -A aliashash 2>/dev/null || : - if declare -F _init_completion >/dev/null 2>&1; then - _init_completion -s || return - else - __pgo_init_completion -n "=" || return - fi - - local c=0 - local flags=() - local two_word_flags=() - local local_nonpersistent_flags=() - local flags_with_completion=() - local flags_completion=() - local commands=("pgo") - local must_have_one_flag=() - local must_have_one_noun=() - local last_command - local nouns=() - - __pgo_handle_word -} - -if [[ $(type -t compopt) = "builtin" ]]; then - complete -o default -F __start_pgo pgo -else - complete -o default -o nospace -F __start_pgo pgo -fi - -# ex: ts=4 sw=4 et filetype=sh diff --git a/examples/pgo-scc.yaml b/examples/pgo-scc.yaml deleted file mode 100644 index e9643cf5ff..0000000000 --- a/examples/pgo-scc.yaml +++ /dev/null @@ -1,45 +0,0 @@ -allowHostDirVolumePlugin: false -allowHostIPC: false -allowHostNetwork: false -allowHostPID: false -allowHostPorts: false -allowPrivilegeEscalation: true -allowPrivilegedContainer: false -allowedCapabilities: null -apiVersion: security.openshift.io/v1 -defaultAddCapabilities: null -fsGroup: - type: MustRunAs - ranges: - - max: 26 - min: 26 - - max: 2 - min: 2 -groups: -- system:authenticated -kind: SecurityContextConstraints -metadata: - annotations: - kubernetes.io/description: scc for postgres - name: pgo -priority: null -readOnlyRootFilesystem: false -requiredDropCapabilities: -- KILL -- MKNOD -- SETUID -- SETGID -runAsUser: - type: MustRunAsRange -seLinuxContext: - type: RunAsAny -supplementalGroups: - type: RunAsAny -users: [] -volumes: -- configMap -- downwardAPI -- emptyDir -- persistentVolumeClaim -- projected -- secret diff --git a/examples/policy/badpolicy.sql b/examples/policy/badpolicy.sql deleted file mode 100644 index 0173b2ad11..0000000000 --- a/examples/policy/badpolicy.sql +++ /dev/null @@ -1,2 +0,0 @@ -\c userdb; -CREATE adfad tablesadhhdhhht1 (a int); diff --git a/examples/policy/gitpolicy.sql b/examples/policy/gitpolicy.sql deleted file mode 100644 index de7e525ad1..0000000000 --- a/examples/policy/gitpolicy.sql +++ /dev/null @@ -1 +0,0 @@ -create table gitpolicy (id int); diff --git a/examples/policy/jsonload.sql b/examples/policy/jsonload.sql deleted file mode 100644 index f2de26d3cd..0000000000 --- a/examples/policy/jsonload.sql +++ /dev/null @@ -1,8 +0,0 @@ -\c userdb; -create table json_collection -( - json_imported jsonb -) -WITH (OIDS=FALSE); - -grant all on json_collection to testuser; diff --git a/examples/policy/policy1-insert.sql b/examples/policy/policy1-insert.sql deleted file mode 100644 index 7e0aafef80..0000000000 --- a/examples/policy/policy1-insert.sql +++ /dev/null @@ -1 +0,0 @@ -insert into policy1 (select now()); diff --git a/examples/policy/policy1.sql b/examples/policy/policy1.sql deleted file mode 100644 index 208ef17331..0000000000 --- a/examples/policy/policy1.sql +++ /dev/null @@ -1,3 +0,0 @@ -\c userdb; -create table policy1 (id text); -grant all on policy1 to primaryuser; diff --git a/examples/policy/rlspolicy.sql b/examples/policy/rlspolicy.sql deleted file mode 100644 index b61bd6df0d..0000000000 --- a/examples/policy/rlspolicy.sql +++ /dev/null @@ -1,5 +0,0 @@ -\c userdb; -CREATE table t1 (a int); -CREATE table t2 (a int); -CREATE POLICY p1 ON 
t1 FOR ALL TO PUBLIC USING (a % 2 = 0); -- be even number -CREATE POLICY p2 ON t2 FOR ALL TO PUBLIC USING (a % 2 = 1); -- be odd number diff --git a/examples/policy/xrayapp.sql b/examples/policy/xrayapp.sql deleted file mode 100644 index da6c0cfe02..0000000000 --- a/examples/policy/xrayapp.sql +++ /dev/null @@ -1,5 +0,0 @@ -\c userdb; -create table xrayapp (id int, key varchar(40), value varchar(40)); -create table a (id int); -create table xraycsvtable (name varchar(40), state varchar(40), zip varchar(40)); - diff --git a/examples/postgrescluster/kustomization.yaml b/examples/postgrescluster/kustomization.yaml new file mode 100644 index 0000000000..7035765b87 --- /dev/null +++ b/examples/postgrescluster/kustomization.yaml @@ -0,0 +1,7 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namespace: postgres-operator + +resources: +- postgrescluster.yaml diff --git a/examples/postgrescluster/postgrescluster.yaml b/examples/postgrescluster/postgrescluster.yaml new file mode 100644 index 0000000000..75756af94e --- /dev/null +++ b/examples/postgrescluster/postgrescluster.yaml @@ -0,0 +1,35 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: hippo +spec: + postgresVersion: 16 + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + - name: repo2 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + proxy: + pgBouncer: {} diff --git a/examples/sample-ingest-config.json b/examples/sample-ingest-config.json deleted file mode 100644 index 2699d22951..0000000000 --- a/examples/sample-ingest-config.json +++ /dev/null @@ -1,12 +0,0 @@ - { - "WatchDir": "/", - "DBHost": "cluster7", - "DBPort": "5432", - "DBName": "userdb", - "DBSecret": "cluster7-postgres-secret", - "DBTable": "json_collection", - "DBColumn": "json_imported", - "MaxJobs": 2, - "PVCName": "pgo-ingest-watch-pvc", - "SecurityContext": "" - } diff --git a/go.mod b/go.mod new file mode 100644 index 0000000000..d268d66018 --- /dev/null +++ b/go.mod @@ -0,0 +1,96 @@ +module github.com/crunchydata/postgres-operator + +go 1.22.0 + +require ( + github.com/go-logr/logr v1.4.2 + github.com/golang-jwt/jwt/v5 v5.2.1 + github.com/google/go-cmp v0.6.0 + github.com/google/uuid v1.6.0 + github.com/kubernetes-csi/external-snapshotter/client/v8 v8.0.0 + github.com/onsi/ginkgo/v2 v2.17.2 + github.com/onsi/gomega v1.33.1 + github.com/pganalyze/pg_query_go/v5 v5.1.0 + github.com/pkg/errors v0.9.1 + github.com/sirupsen/logrus v1.9.3 + github.com/xdg-go/stringprep v1.0.2 + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 + go.opentelemetry.io/otel v1.27.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0 + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.27.0 + go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.2.0 + go.opentelemetry.io/otel/sdk v1.27.0 + go.opentelemetry.io/otel/trace v1.27.0 + golang.org/x/crypto v0.27.0 + gotest.tools/v3 v3.1.0 + k8s.io/api v0.30.2 + k8s.io/apimachinery v0.30.2 + k8s.io/client-go v0.30.2 + k8s.io/component-base v0.30.2 + sigs.k8s.io/controller-runtime v0.18.4 + sigs.k8s.io/yaml v1.4.0 +) + +require ( + github.com/beorn7/perks v1.0.1 // indirect + github.com/blang/semver/v4 v4.0.0 // indirect + github.com/cenkalti/backoff/v4 
v4.3.0 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect + github.com/davecgh/go-spew v1.1.1 // indirect + github.com/emicklei/go-restful/v3 v3.12.1 // indirect + github.com/evanphx/json-patch v5.6.0+incompatible // indirect + github.com/evanphx/json-patch/v5 v5.9.0 // indirect + github.com/felixge/httpsnoop v1.0.4 // indirect + github.com/fsnotify/fsnotify v1.7.0 // indirect + github.com/go-logr/stdr v1.2.2 // indirect + github.com/go-openapi/jsonpointer v0.21.0 // indirect + github.com/go-openapi/jsonreference v0.21.0 // indirect + github.com/go-openapi/swag v0.23.0 // indirect + github.com/go-task/slim-sprig/v3 v3.0.0 // indirect + github.com/gogo/protobuf v1.3.2 // indirect + github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect + github.com/golang/protobuf v1.5.4 // indirect + github.com/google/gnostic-models v0.6.8 // indirect + github.com/google/gofuzz v1.2.0 // indirect + github.com/google/pprof v0.0.0-20240424215950-a892ee059fd6 // indirect + github.com/gorilla/websocket v1.5.0 // indirect + github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect + github.com/imdario/mergo v0.3.16 // indirect + github.com/josharian/intern v1.0.0 // indirect + github.com/json-iterator/go v1.1.12 // indirect + github.com/mailru/easyjson v0.7.7 // indirect + github.com/moby/spdystream v0.2.0 // indirect + github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.2 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect + github.com/prometheus/client_golang v1.19.1 // indirect + github.com/prometheus/client_model v0.6.1 // indirect + github.com/prometheus/common v0.54.0 // indirect + github.com/prometheus/procfs v0.15.1 // indirect + github.com/spf13/pflag v1.0.5 // indirect + go.opentelemetry.io/otel/metric v1.27.0 // indirect + go.opentelemetry.io/proto/otlp v1.3.1 // indirect + golang.org/x/exp v0.0.0-20240604190554-fc45aab8b7f8 // indirect + golang.org/x/net v0.29.0 // indirect + golang.org/x/oauth2 v0.21.0 // indirect + golang.org/x/sys v0.25.0 // indirect + golang.org/x/term v0.24.0 // indirect + golang.org/x/text v0.18.0 // indirect + golang.org/x/time v0.5.0 // indirect + golang.org/x/tools v0.22.0 // indirect + gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20240610135401-a8a62080eff3 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 // indirect + google.golang.org/grpc v1.66.2 // indirect + google.golang.org/protobuf v1.34.2 // indirect + gopkg.in/inf.v0 v0.9.1 // indirect + gopkg.in/yaml.v2 v2.4.0 // indirect + gopkg.in/yaml.v3 v3.0.1 // indirect + k8s.io/apiextensions-apiserver v0.30.2 // indirect + k8s.io/klog/v2 v2.120.1 // indirect + k8s.io/kube-openapi v0.0.0-20240521193020-835d969ad83a // indirect + k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0 // indirect + sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect + sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect +) diff --git a/go.sum b/go.sum new file mode 100644 index 0000000000..aed2056f6f --- /dev/null +++ b/go.sum @@ -0,0 +1,251 @@ +github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio= +github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= +github.com/beorn7/perks v1.0.1 
h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM= +github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ= +github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= +github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= +github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/emicklei/go-restful/v3 v3.12.1 h1:PJMDIM/ak7btuL8Ex0iYET9hxM3CI2sjZtzpL63nKAU= +github.com/emicklei/go-restful/v3 v3.12.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/evanphx/json-patch v5.6.0+incompatible h1:jBYDEEiFBPxA0v50tFdvOzQQTCvpL6mnFh5mB2/l16U= +github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= +github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg= +github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ= +github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg= +github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= +github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= +github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= +github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= +github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= +github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= +github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= +github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ= +github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg= +github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ= +github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY= +github.com/go-openapi/jsonreference v0.21.0 h1:Rs+Y7hSXT83Jacb7kFyjn4ijOuVGSvOdF2+tg1TRrwQ= +github.com/go-openapi/jsonreference v0.21.0/go.mod h1:LmZmgsrTkVg9LG4EaHeY8cBDslNPMo06cago5JNLkm4= +github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE= +github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= +github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= +github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk= +github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk= 
+github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE= +github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= +github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= +github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= +github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= +github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I= +github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U= +github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= +github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= +github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/pprof v0.0.0-20240424215950-a892ee059fd6 h1:k7nVchz72niMH6YLQNvHSdIE7iqsQxK1P41mySCvssg= +github.com/google/pprof v0.0.0-20240424215950-a892ee059fd6/go.mod h1:kf6iHlnVGwgKolg33glAes7Yg/8iWP8ukqeldJSO7jw= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= +github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc= +github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0= +github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k= +github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4= +github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY= +github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= +github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= +github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= +github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kubernetes-csi/external-snapshotter/client/v8 v8.0.0 h1:mjQG0Vakr2h246kEDR85U8y8ZhPgT3bguTCajRa/jaw= +github.com/kubernetes-csi/external-snapshotter/client/v8 v8.0.0/go.mod h1:E3vdYxHj2C2q6qo8/Da4g7P+IcwqRZyy3gJBzYybV9Y= +github.com/mailru/easyjson v0.7.7 
h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0= +github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= +github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8= +github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= +github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= +github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus= +github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw= +github.com/onsi/ginkgo/v2 v2.17.2 h1:7eMhcy3GimbsA3hEnVKdw/PQM9XN9krpKVXsZdph0/g= +github.com/onsi/ginkgo/v2 v2.17.2/go.mod h1:nP2DPOQoNsQmsVyv5rDA8JkXQoCs6goXIvr/PRJ1eCc= +github.com/onsi/gomega v1.33.1 h1:dsYjIxxSR755MDmKVsaFQTE22ChNBcuuTWgkUDSubOk= +github.com/onsi/gomega v1.33.1/go.mod h1:U4R44UsT+9eLIaYRB2a5qajjtQYn0hauxvRm16AVYg0= +github.com/pganalyze/pg_query_go/v5 v5.1.0 h1:MlxQqHZnvA3cbRQYyIrjxEjzo560P6MyTgtlaf3pmXg= +github.com/pganalyze/pg_query_go/v5 v5.1.0/go.mod h1:FsglvxidZsVN+Ltw3Ai6nTgPVcK2BPukH3jCDEqc1Ug= +github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= +github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= +github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/prometheus/client_golang v1.19.1 h1:wZWJDwK+NameRJuPGDhlnFgx8e8HN3XHQeLaYJFJBOE= +github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho= +github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E= +github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY= +github.com/prometheus/common v0.54.0 h1:ZlZy0BgJhTwVZUn7dLOkwCZHUkrAqd3WYtcFCWnM1D8= +github.com/prometheus/common v0.54.0/go.mod h1:/TQgMJP5CuVYveyT7n/0Ix8yLNNXy9yRSkhnLTHPDIQ= +github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc= +github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk= +github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8= +github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4= +github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= +github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= +github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4= +github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= +github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= 
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= +github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= +github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= +github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/xdg-go/stringprep v1.0.2 h1:6iq84/ryjjeRmMJwxutI51F2GIPlP5BfTvXHeYjyhBc= +github.com/xdg-go/stringprep v1.0.2/go.mod h1:8F9zXuvzgwmyT5DUm4GUfZGDdT3W+LCvS6+da4O5kxM= +github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw= +go.opentelemetry.io/otel v1.2.0/go.mod h1:aT17Fk0Z1Nor9e0uisf98LrntPGMnk4frBO9+dkf69I= +go.opentelemetry.io/otel v1.27.0 h1:9BZoF3yMK/O1AafMiQTVu0YDj5Ea4hPhxCs7sGva+cg= +go.opentelemetry.io/otel v1.27.0/go.mod h1:DMpAK8fzYRzs+bi3rS5REupisuqTheUlSZJ1WnZaPAQ= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0 h1:R9DE4kQ4k+YtfLI2ULwX82VtNQ2J8yZmA7ZIF/D+7Mc= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.27.0/go.mod h1:OQFyQVrDlbe+R7xrEyDr/2Wr67Ol0hRUgsfA+V5A95s= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.27.0 h1:QY7/0NeRPKlzusf40ZE4t1VlMKbqSNT7cJRYzWuja0s= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.27.0/go.mod h1:HVkSiDhTM9BoUJU8qE6j2eSWLLXvi1USXjyd2BXT8PY= +go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.2.0 h1:OiYdrCq1Ctwnovp6EofSPwlp5aGy4LgKNbkg7PtEUw8= +go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.2.0/go.mod h1:DUFCmFkXr0VtAHl5Zq2JRx24G6ze5CAq8YfdD36RdX8= +go.opentelemetry.io/otel/metric v1.27.0 h1:hvj3vdEKyeCi4YaYfNjv2NUje8FqKqUY8IlF0FxV/ik= +go.opentelemetry.io/otel/metric v1.27.0/go.mod h1:mVFgmRlhljgBiuk/MP/oKylr4hs85GZAylncepAX/ak= +go.opentelemetry.io/otel/sdk v1.2.0/go.mod h1:jNN8QtpvbsKhgaC6V5lHiejMoKD+V8uadoSafgHPx1U= +go.opentelemetry.io/otel/sdk v1.27.0 h1:mlk+/Y1gLPLn84U4tI8d3GNJmGT/eXe3ZuOXN9kTWmI= +go.opentelemetry.io/otel/sdk v1.27.0/go.mod h1:Ha9vbLwJE6W86YstIywK2xFfPjbWlCuwPtMkKdz/Y4A= +go.opentelemetry.io/otel/trace v1.2.0/go.mod h1:N5FLswTubnxKxOJHM7XZC074qpeEdLy3CgAVsdMucK0= +go.opentelemetry.io/otel/trace v1.27.0 h1:IqYb813p7cmbHk0a5y6pD5JPakbVfftRXABGt5/Rscw= +go.opentelemetry.io/otel/trace v1.27.0/go.mod h1:6RiD1hkAprV4/q+yd2ln1HG9GoPx39SuvvstaLBl+l4= +go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0= +go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/zap v1.26.0 h1:sI7k6L95XOKS281NhVKOFCUNIvv9e0w4BF8N3u+tCRo= +go.uber.org/zap v1.26.0/go.mod h1:dtElttAiwGvoJ/vj4IwHBS/gXsEu/pZ50mUIRWuG0so= +golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= +golang.org/x/crypto 
v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.27.0 h1:GXm2NjJrPaiv/h1tb2UH8QfgC/hOf/+z0p6PT8o1w7A= +golang.org/x/crypto v0.27.0/go.mod h1:1Xngt8kV6Dvbssa53Ziq6Eqn0HqbZi5Z6R0ZpwQzt70= +golang.org/x/exp v0.0.0-20240604190554-fc45aab8b7f8 h1:LoYXNGAShUG3m/ehNk4iFctuhGX/+R1ZpfJ4/ia80JM= +golang.org/x/exp v0.0.0-20240604190554-fc45aab8b7f8/go.mod h1:jj3sYF3dwk5D+ghuXyeI3r5MFf+NT2An6/9dOA95KSI= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= +golang.org/x/net v0.29.0 h1:5ORfpBpCs4HzDYoodCDBbwHzdR5UrLBZ3sOnUJmFoHo= +golang.org/x/net v0.29.0/go.mod h1:gLkgy8jTGERgjzMic6DS9+SP0ajcu6Xu3Orq/SpETg0= +golang.org/x/oauth2 v0.21.0 h1:tsimM75w1tF/uws5rbeHzIWxEqElMehnc+iW793zsZs= +golang.org/x/oauth2 v0.21.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI= +golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= +golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.25.0 h1:r+8e+loiHxRqhXVl6ML1nO3l1+oFoWbnlu2Ehimmi34= +golang.org/x/sys v0.25.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/term v0.24.0 h1:Mh5cbb+Zk2hqqXNO7S1iTjEphVL+jb8ZWaqh/g+JWkM= +golang.org/x/term v0.24.0/go.mod h1:lOBK/LVxemqiMij05LGJ0tzNr8xlmwBRJ81PX6wVLH8= +golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= +golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= +golang.org/x/text v0.18.0 h1:XvMDiNzPAl0jr17s6W9lcaIhGUfUORdGCNsuLmPG224= +golang.org/x/text v0.18.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY= +golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk= +golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= +golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= +golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= 
+golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0= +golang.org/x/tools v0.22.0 h1:gqSGLZqv+AI9lIQzniJ0nZDRG5GBPsSi+DRNHWNz6yA= +golang.org/x/tools v0.22.0/go.mod h1:aCwcsjqvq7Yqt6TNyX7QMU2enbQ/Gt0bo6krSeEri+c= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw= +gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY= +google.golang.org/genproto/googleapis/api v0.0.0-20240610135401-a8a62080eff3 h1:QW9+G6Fir4VcRXVH8x3LilNAb6cxBGLa6+GM4hRwexE= +google.golang.org/genproto/googleapis/api v0.0.0-20240610135401-a8a62080eff3/go.mod h1:kdrSS/OiLkPrNUpzD4aHgCq2rVuC/YRxok32HXZ4vRE= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 h1:pPJltXNxVzT4pK9yD8vR9X75DaWYYmLGMsEvBfFQZzQ= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU= +google.golang.org/grpc v1.66.2 h1:3QdXkuq3Bkh7w+ywLdLvM56cmGvQHUMZpiCzt6Rqaoo= +google.golang.org/grpc v1.66.2/go.mod h1:s3/l6xSSCURdVfAnL+TqCNMyTDAGN6+lZeVxnZR128Y= +google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= +google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= +google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg= +google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= +gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= +gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= +gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= +gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= +gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +gotest.tools/v3 v3.1.0 h1:rVV8Tcg/8jHUkPUorwjaMTtemIMVXfIPKiOqnhEhakk= +gotest.tools/v3 v3.1.0/go.mod h1:fHy7eyTmJFO5bQbUsEGQ1v4m2J3Jz9eWL54TP2/ZuYQ= +k8s.io/api v0.30.2 h1:+ZhRj+28QT4UOH+BKznu4CBgPWgkXO7XAvMcMl0qKvI= +k8s.io/api v0.30.2/go.mod h1:ULg5g9JvOev2dG0u2hig4Z7tQ2hHIuS+m8MNZ+X6EmI= +k8s.io/apiextensions-apiserver v0.30.2 h1:l7Eue2t6QiLHErfn2vwK4KgF4NeDgjQkCXtEbOocKIE= +k8s.io/apiextensions-apiserver v0.30.2/go.mod 
h1:lsJFLYyK40iguuinsb3nt+Sj6CmodSI4ACDLep1rgjw= +k8s.io/apimachinery v0.30.2 h1:fEMcnBj6qkzzPGSVsAZtQThU62SmQ4ZymlXRC5yFSCg= +k8s.io/apimachinery v0.30.2/go.mod h1:iexa2somDaxdnj7bha06bhb43Zpa6eWH8N8dbqVjTUc= +k8s.io/client-go v0.30.2 h1:sBIVJdojUNPDU/jObC+18tXWcTJVcwyqS9diGdWHk50= +k8s.io/client-go v0.30.2/go.mod h1:JglKSWULm9xlJLx4KCkfLLQ7XwtlbflV6uFFSHTMgVs= +k8s.io/component-base v0.30.2 h1:pqGBczYoW1sno8q9ObExUqrYSKhtE5rW3y6gX88GZII= +k8s.io/component-base v0.30.2/go.mod h1:yQLkQDrkK8J6NtP+MGJOws+/PPeEXNpwFixsUI7h/OE= +k8s.io/klog/v2 v2.120.1 h1:QXU6cPEOIslTGvZaXvFWiP9VKyeet3sawzTOvdXb4Vw= +k8s.io/klog/v2 v2.120.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20240521193020-835d969ad83a h1:zD1uj3Jf+mD4zmA7W+goE5TxDkI7OGJjBNBzq5fJtLA= +k8s.io/kube-openapi v0.0.0-20240521193020-835d969ad83a/go.mod h1:UxDHUPsUwTOOxSU+oXURfFBcAS6JwiRXTYqYwfuGowc= +k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0 h1:jgGTlFYnhF1PM1Ax/lAlxUPE+KfCIXHaathvJg1C3ak= +k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +sigs.k8s.io/controller-runtime v0.18.4 h1:87+guW1zhvuPLh1PHybKdYFLU0YJp4FhJRmiHvm5BZw= +sigs.k8s.io/controller-runtime v0.18.4/go.mod h1:TVoGrfdpbA9VRFaRnKgk9P5/atA0pMwq+f+msb9M8Sg= +sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo= +sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= +sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4= +sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08= +sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E= +sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= diff --git a/hack/.gitignore b/hack/.gitignore new file mode 100644 index 0000000000..84fa8c510f --- /dev/null +++ b/hack/.gitignore @@ -0,0 +1 @@ +.kube \ No newline at end of file diff --git a/hack/api-template.tmpl b/hack/api-template.tmpl new file mode 100644 index 0000000000..06361cb9f6 --- /dev/null +++ b/hack/api-template.tmpl @@ -0,0 +1,84 @@ +--- +title: {{or .Metadata.Title "CRD Reference"}} +draft: false +weight: {{or .Metadata.Weight 100 }} +{{- if .Metadata.Description}} +description: {{.Metadata.Description}} +{{- end}} +--- + +Packages: +{{range .Groups}} +- [{{.Group}}/{{.Version}}](#{{ anchorize (printf "%s/%s" .Group .Version) }}) +{{- end -}}{{/* range .Groups */}} + +{{- range .Groups }} +{{- $group := . }} + +
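+{{/* For each group/version: a heading, the list of resource types, and one table per type. */}}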

+<h2 id="{{ anchorize (printf "%s/%s" .Group .Version) }}">{{.Group}}/{{.Version}}</h2>
+
+Resource Types:
+{{range .Kinds}}
+- [{{.Name}}](#{{ anchorize .Name }})
+{{end}}{{/* range .Kinds */}}
+
+{{range .Kinds}}
+{{$kind := .}}
+<h3 id="{{ anchorize .Name }}">{{.Name}}</h3>
+
+{{range .Types}}
+
+{{if not .IsTopLevel}}
+<h3 id="{{ anchorize .Name }}">
+  {{.Name}}
+  {{if .ParentKey}}<a href="#{{.ParentKey}}">↩ Parent</a>{{end}}
+</h3>
+{{end}}
+
+{{.Description}}
+
+<table>
+  <thead>
+    <tr>
+      <th>Name</th>
+      <th>Type</th>
+      <th>Description</th>
+      <th>Required</th>
+    </tr>
+  </thead>
+  <tbody>
+  {{- if .IsTopLevel -}}
+    <tr>
+      <td>apiVersion</td>
+      <td>string</td>
+      <td>{{$group.Group}}/{{$group.Version}}</td>
+      <td>true</td>
+    </tr>
+    <tr>
+      <td>kind</td>
+      <td>string</td>
+      <td>{{$kind.Name}}</td>
+      <td>true</td>
+    </tr>
+    <tr>
+      <td>metadata</td>
+      <td>object</td>
+      <td>Refer to the Kubernetes API documentation for the fields of the `metadata` field.</td>
+      <td>true</td>
+    </tr>
+  {{- end -}}
+  {{- range .Fields -}}
+    <tr>
+      <td>{{if .TypeKey}}<a href="#{{.TypeKey}}">{{.Name}}</a>{{else}}{{.Name}}{{end}}</td>
+      <td>{{.Type}}</td>
+      <td>{{.Description}}</td>
+      <td>{{.Required}}</td>
+    </tr>
+  {{- end -}}
+  </tbody>
+</table>
+ +{{- end}}{{/* range .Types */}} +{{- end}}{{/* range .Kinds */}} +{{- end}}{{/* range .Groups */}} diff --git a/hack/boilerplate.go.txt b/hack/boilerplate.go.txt index 8aabc9a12b..7fc3d63c10 100644 --- a/hack/boilerplate.go.txt +++ b/hack/boilerplate.go.txt @@ -1,15 +1,3 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 diff --git a/hack/config_sync.sh b/hack/config_sync.sh deleted file mode 100755 index cab45b023b..0000000000 --- a/hack/config_sync.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash - -# Copyright 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -test="${PGOROOT:?Need to set PGOROOT env variable}" - -# sync a master config file with kubectl and helm installers -sync_config() { - - KUBECTL_SPEC_PREFIX=$1 - INSTALLER_ROOT=$2 - MASTER_CONFIG=$3 - - yq write --inplace --doc 2 "$INSTALLER_ROOT/kubectl/$KUBECTL_SPEC_PREFIX.yml" 'data"values.yaml"' -- "$(cat $MASTER_CONFIG)" - yq write --inplace --doc 2 "$INSTALLER_ROOT/kubectl/$KUBECTL_SPEC_PREFIX-ocp311.yml" 'data"values.yaml"' -- "$(cat $MASTER_CONFIG)" - - cat "$INSTALLER_ROOT/helm/helm_template.yaml" "$MASTER_CONFIG" > "$INSTALLER_ROOT/helm/values.yaml" -} - -# sync operator configuration -sync_config "postgres-operator" "$PGOROOT/installers" "$PGOROOT/installers/ansible/values.yaml" - -# sync metrics configuration -sync_config "postgres-operator-metrics" "$PGOROOT/installers/metrics" "$PGOROOT/installers/metrics/ansible/values.yaml" - -echo "Configuration sync complete" diff --git a/hack/create-kubeconfig.sh b/hack/create-kubeconfig.sh new file mode 100755 index 0000000000..3bebcd194e --- /dev/null +++ b/hack/create-kubeconfig.sh @@ -0,0 +1,70 @@ +#!/usr/bin/env bash + +# Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
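+
+# Usage: create-kubeconfig.sh <namespace> <service-account>
+#
+# Copies the active kubeconfig, fetches (or creates) a service account token
+# Secret for the given account, and writes a minimal kubeconfig for that
+# account under hack/.kube/<namespace>/<service-account>.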
+ +set -eu + +directory=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd ) + +declare -r namespace="$1" account="$2" +declare -r directory="${directory}/.kube" + +[[ -z "${KUBECONFIG:-}" ]] && KUBECONFIG="${HOME}/.kube/config" +if [[ ! -f "${KUBECONFIG}" ]]; then + echo "unable to find kubeconfig" + exit 1 +fi +echo "using KUBECONFIG=${KUBECONFIG} as base for ${namespace}/${account}" + +# copy the current KUBECONFIG +kubeconfig="${directory}/${namespace}/${account}" +mkdir -p "${directory}/${namespace}" +kubectl config view --minify --raw > "${kubeconfig}" + +# Grab the service account token. If one has not already been generated, +# create a secret to do so. See the LegacyServiceAccountTokenNoAutoGeneration +# feature gate. +for i in 1 2 3 4; do + token=$(kubectl get secret -n "${namespace}" -o go-template=' +{{- range .items }} + {{- if and (eq (or .type "") "kubernetes.io/service-account-token") .metadata.annotations .data }} + {{- if (eq (or (index .metadata.annotations "kubernetes.io/service-account.name") "") "'"${account}"'") }} + {{- if (ne (or (index .metadata.annotations "kubernetes.io/created-by") "") "openshift.io/create-dockercfg-secrets") }} + {{- .data.token | base64decode }} + {{- end }} + {{- end }} + {{- end }} +{{- end }}') + + [[ -n "${token}" ]] && break + + kubectl apply -n "${namespace}" --server-side --filename=- <<< " +apiVersion: v1 +kind: Secret +type: kubernetes.io/service-account-token +metadata: { + name: ${account}-token, + annotations: { kubernetes.io/service-account.name: ${account} } +}" + # If we are on our third or fourth loop, try sleeping to give kube time to create the token + if [ $i -gt 2 ]; then + sleep $(($i-2)) + fi +done +kubectl config --kubeconfig="${kubeconfig}" set-credentials "${account}" --token="${token}" + +# remove any namespace setting, replace the username, and minify once more +kubectl config --kubeconfig="${kubeconfig}" set-context --current --namespace= --user="${account}" +minimal=$(kubectl config --kubeconfig="${kubeconfig}" view --minify --raw) +cat <<< "${minimal}" > "${kubeconfig}" diff --git a/hack/tools/.gitignore b/hack/tools/.gitignore new file mode 100644 index 0000000000..72e8ffc0db --- /dev/null +++ b/hack/tools/.gitignore @@ -0,0 +1 @@ +* diff --git a/hack/update-codegen.sh b/hack/update-codegen.sh deleted file mode 100755 index c9795398ae..0000000000 --- a/hack/update-codegen.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/usr/bin/env bash - -# Copyright 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -set -o errexit -set -o nounset -set -o pipefail - -SCRIPT_ROOT=$(dirname "${BASH_SOURCE[0]}")/.. 
-CODEGEN_PKG=${CODEGEN_PKG:-$(cd "${SCRIPT_ROOT}"; ls -d -1 ./vendor/k8s.io/code-generator 2>/dev/null || echo ../code-generator)} - -bash "${SCRIPT_ROOT}/${CODEGEN_PKG}"/generate-groups.sh all \ - github.com/crunchydata/postgres-operator/pkg/generated github.com/crunchydata/postgres-operator/pkg/apis \ - crunchydata.com:v1 \ - --go-header-file "${SCRIPT_ROOT}"/hack/boilerplate.go.txt diff --git a/hack/update-pgmonitor-installer.sh b/hack/update-pgmonitor-installer.sh new file mode 100755 index 0000000000..148a4761c9 --- /dev/null +++ b/hack/update-pgmonitor-installer.sh @@ -0,0 +1,76 @@ +#!/usr/bin/env bash + +# Copyright 2022 - 2024 Crunchy Data Solutions, Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This script updates the Kustomize installer for monitoring with the latest Grafana, +# Prometheus and Alert Manager configuration per the pgMonitor tag specified + +directory=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd ) + +# The pgMonitor tag to use to refresh the current monitoring installer +pgmonitor_tag=v4.8.1 + +# Set the directory for the monitoring Kustomize installer +pgo_examples_monitoring_dir="${directory}/../../postgres-operator-examples/kustomize/monitoring" + +# Create a tmp directory for checking out the pgMonitor tag +tmp_dir="${directory}/pgmonitor_tmp/" +mkdir -p "${tmp_dir}" + +# Clone the pgMonitor repo and checkout the tag provided +git -C "${tmp_dir}" clone https://github.com/CrunchyData/pgmonitor.git +cd "${tmp_dir}/pgmonitor" +git checkout "${pgmonitor_tag}" + +# Deviation from pgMonitor default! +# Update "${DS_PROMETHEUS}" to "PROMETHEUS" in all containers dashboards +find "grafana/containers" -type f -exec \ + sed -i 's/${DS_PROMETHEUS}/PROMETHEUS/' {} \; +# Copy Grafana dashboards for containers +cp -r "grafana/containers/." "${pgo_examples_monitoring_dir}/config/grafana/dashboards" + +# Deviation from pgMonitor default! +# Update the dashboard location to the default for the Grafana container. +sed -i 's#/etc/grafana/crunchy_dashboards#/etc/grafana/provisioning/dashboards#' \ + "grafana/linux/crunchy_grafana_dashboards.yml" +cp "grafana/linux/crunchy_grafana_dashboards.yml" "${pgo_examples_monitoring_dir}/config/grafana" + +# Deviation from pgMonitor default! +# Update the URL for the Grafana data source configuration to use env vars for the Prometheus host +# and port. +sed -i 's#localhost:9090#$PROM_HOST:$PROM_PORT#' \ + "grafana/common/crunchy_grafana_datasource.yml" +cp "grafana/common/crunchy_grafana_datasource.yml" "${pgo_examples_monitoring_dir}/config/grafana" + +# Deviation from pgMonitor default! +# Update the URL for the Grafana data source configuration to use env vars for the Prometheus host +# and port. 
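+# The commands below copy the container Prometheus configuration and append the
+# crunchy-alertmanager alerting target before placing it in the monitoring installer.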
+cp "prometheus/containers/crunchy-prometheus.yml.containers" "prometheus/containers/crunchy-prometheus.yml" +cat << EOF >> prometheus/containers/crunchy-prometheus.yml +alerting: + alertmanagers: + - scheme: http + static_configs: + - targets: + - "crunchy-alertmanager:9093" +EOF +cp "prometheus/containers/crunchy-prometheus.yml" "${pgo_examples_monitoring_dir}/config/prometheus" + +# Copy the default Alert Manager configuration +cp "alertmanager/common/crunchy-alertmanager.yml" "${pgo_examples_monitoring_dir}/config/alertmanager" +cp "prometheus/containers/alert-rules.d/crunchy-alert-rules-pg.yml.containers.example" \ + "${pgo_examples_monitoring_dir}/config/alertmanager/crunchy-alert-rules-pg.yml" + +# Cleanup any temporary resources +rm -rf "${tmp_dir}" diff --git a/hack/verify-codegen.sh b/hack/verify-codegen.sh deleted file mode 100755 index c096654ca9..0000000000 --- a/hack/verify-codegen.sh +++ /dev/null @@ -1,47 +0,0 @@ -#!/usr/bin/env bash - -# Copyright 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -set -o errexit -set -o nounset -set -o pipefail - -SCRIPT_ROOT=$(dirname "${BASH_SOURCE[0]}")/.. - -DIFFROOT="${SCRIPT_ROOT}/pkg" -TMP_DIFFROOT="${SCRIPT_ROOT}/_tmp/pkg" -_tmp="${SCRIPT_ROOT}/_tmp" - -cleanup() { - rm -rf "${_tmp}" -} -trap "cleanup" EXIT SIGINT - -cleanup - -mkdir -p "${TMP_DIFFROOT}" -cp -a "${DIFFROOT}"/* "${TMP_DIFFROOT}" - -"${SCRIPT_ROOT}/hack/update-codegen.sh" -echo "diffing ${DIFFROOT} against freshly generated codegen" -ret=0 -diff -Naupr "${DIFFROOT}" "${TMP_DIFFROOT}" || ret=$? -cp -a "${TMP_DIFFROOT}"/* "${DIFFROOT}" -if [[ $ret -eq 0 ]] -then - echo "${DIFFROOT} up to date." -else - echo "${DIFFROOT} is out of date. Please run hack/update-codegen.sh" - exit 1 -fi diff --git a/img/CrunchyDataPrimaryIcon.png b/img/CrunchyDataPrimaryIcon.png new file mode 100644 index 0000000000..e238a688dd Binary files /dev/null and b/img/CrunchyDataPrimaryIcon.png differ diff --git a/installers/ansible/README.md b/installers/ansible/README.md deleted file mode 100644 index a9f0babd16..0000000000 --- a/installers/ansible/README.md +++ /dev/null @@ -1,15 +0,0 @@ -# Crunchy Data PostgreSQL Operator Playbook - -

- Crunchy Data -

- -Latest Release: 4.5.0 - -## General - -This repository contains Ansible Roles for deploying the Crunchy PostgreSQL Operator -for Kubernetes and OpenShift. - -See the [official documentation for more information](https://crunchydata.github.io/postgres-operator/stable/) -on installing Crunchy PostgreSQL Operator. diff --git a/installers/ansible/ansible.cfg b/installers/ansible/ansible.cfg deleted file mode 100644 index 670b29222b..0000000000 --- a/installers/ansible/ansible.cfg +++ /dev/null @@ -1,6 +0,0 @@ -[defaults] -retry_files_enabled = False -remote_tmp=/tmp - -[ssh_connection] -ssh_args = -o ControlMaster=no diff --git a/installers/ansible/inventory.yaml b/installers/ansible/inventory.yaml deleted file mode 100644 index 7cb421029a..0000000000 --- a/installers/ansible/inventory.yaml +++ /dev/null @@ -1,30 +0,0 @@ ---- - all: - hosts: - localhost: - vars: - ansible_connection: local - config_path: "{{ playbook_dir }}/values.yaml" - # ================== - # Installation Methods - # One of the following blocks must be updated: - # - Deploy into Kubernetes - # - Deploy into Openshift - - # Deploy into Kubernetes - # ================== - # Note: Context name can be found using: - # kubectl config current-context - # ================== - # kubernetes_context: '' - - # Deploy into Openshift - # ================== - # Note: openshift_host can use the format https://URL:PORT - # Note: openshift_token can be used for token authentication - # ================== - # openshift_host: '' - # openshift_skip_tls_verify: true - # openshift_user: '' - # openshift_password: '' - # openshift_token: '' diff --git a/installers/ansible/main.yml b/installers/ansible/main.yml deleted file mode 100644 index 3141a041eb..0000000000 --- a/installers/ansible/main.yml +++ /dev/null @@ -1,10 +0,0 @@ ---- -- name: Deploy Crunchy PostgreSQL Operator - hosts: all - vars: - max_storage_configs: 50 # the max num of storage configs that can be defined in the inventory - max_resource_configs: 50 # the max num of resource configs that can be defined in the inventory - gather_facts: true - roles: - - pgo-preflight - - pgo-operator diff --git a/installers/ansible/roles/pgo-operator/defaults/main.yml b/installers/ansible/roles/pgo-operator/defaults/main.yml deleted file mode 100644 index 39fb88c679..0000000000 --- a/installers/ansible/roles/pgo-operator/defaults/main.yml +++ /dev/null @@ -1,52 +0,0 @@ ---- -kubernetes_context: "" -kubernetes_in_cluster: "false" -openshift_host: "" - -backrest_aws_s3_key: "" -backrest_aws_s3_secret: "" -backrest_aws_s3_bucket: "" -backrest_aws_s3_endpoint: "" -backrest_aws_s3_region: "" -backrest_aws_s3_uri_style: "" -backrest_aws_s3_verify_tls: "true" -backrest_port: "2022" -service_type: "ClusterIP" - -cleanup: "false" -common_name: "crunchydata" -crunchy_debug: "false" -enable_crunchyadm: "false" -disable_replica_start_fail_reinit: "false" -disable_fsgroup: "false" - -default_instance_memory: "" -default_pgbackrest_memory: "" -default_pgbouncer_memory: "" -default_exporter_memory: "" - -pgo_client_install: "true" -pgo_client_container_install: "false" -pgo_cluster_admin: "false" -pgo_disable_tls: "false" -pgo_tls_no_verify: "false" -pgo_disable_eventing: "false" -pgo_apiserver_port: 8443 -pgo_tls_ca_store: "" -pgo_add_os_ca_store: "false" -pgo_noauth_routes: "" -pgo_apiserver_url: "https://postgres-operator" -pgo_client_cert_secret: "pgo.tls" -pgo_image_pull_secret: "" -pgo_image_pull_secret_manifest: "" -pod_anti_affinity: "preferred" -pod_anti_affinity_pgbackrest: "" 
-pod_anti_affinity_pgbouncer: "" - -namespace: "" -namespace_mode: "dynamic" -reconcile_rbac: "true" - -delete_operator_namespace: "false" -delete_watched_namespaces: "false" -preserve_pg_clusters: "false" diff --git a/installers/ansible/roles/pgo-operator/files/crds/pgclusters-crd.yaml b/installers/ansible/roles/pgo-operator/files/crds/pgclusters-crd.yaml deleted file mode 100644 index bea777b436..0000000000 --- a/installers/ansible/roles/pgo-operator/files/crds/pgclusters-crd.yaml +++ /dev/null @@ -1,36 +0,0 @@ ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: pgclusters.crunchydata.com -spec: - group: crunchydata.com - names: - kind: Pgcluster - listKind: PgclusterList - plural: pgclusters - singular: pgcluster - scope: Namespaced - version: v1 - validation: - openAPIV3Schema: - properties: - spec: - properties: - clustername: { type: string } - ccpimage: { type: string } - ccpimagetag: { type: string } - database: { type: string } - exporterport: { type: string } - name: { type: string } - pgbadgerport: { type: string } - primarysecretname: { type: string } - PrimaryStorage: { type: object } - port: { type: string } - rootsecretname: { type: string } - userlabels: { type: object } - usersecretname: { type: string } - status: - properties: - state: { type: string } - message: { type: string } diff --git a/installers/ansible/roles/pgo-operator/files/crds/pgpolicies-crd.yaml b/installers/ansible/roles/pgo-operator/files/crds/pgpolicies-crd.yaml deleted file mode 100644 index 32e0d2014c..0000000000 --- a/installers/ansible/roles/pgo-operator/files/crds/pgpolicies-crd.yaml +++ /dev/null @@ -1,21 +0,0 @@ ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: pgpolicies.crunchydata.com -spec: - group: crunchydata.com - names: - kind: Pgpolicy - listKind: PgpolicyList - plural: pgpolicies - singular: pgpolicy - scope: Namespaced - version: v1 - validation: - openAPIV3Schema: - properties: - status: - properties: - state: { type: string } - message: { type: string } diff --git a/installers/ansible/roles/pgo-operator/files/crds/pgreplicas-crd.yaml b/installers/ansible/roles/pgo-operator/files/crds/pgreplicas-crd.yaml deleted file mode 100644 index 303f77f1ce..0000000000 --- a/installers/ansible/roles/pgo-operator/files/crds/pgreplicas-crd.yaml +++ /dev/null @@ -1,21 +0,0 @@ ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: pgreplicas.crunchydata.com -spec: - group: crunchydata.com - names: - kind: Pgreplica - listKind: PgreplicaList - plural: pgreplicas - singular: pgreplica - scope: Namespaced - version: v1 - validation: - openAPIV3Schema: - properties: - status: - properties: - state: { type: string } - message: { type: string } diff --git a/installers/ansible/roles/pgo-operator/files/crds/pgtasks-crd.yaml b/installers/ansible/roles/pgo-operator/files/crds/pgtasks-crd.yaml deleted file mode 100644 index 20fce21e7a..0000000000 --- a/installers/ansible/roles/pgo-operator/files/crds/pgtasks-crd.yaml +++ /dev/null @@ -1,21 +0,0 @@ ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: pgtasks.crunchydata.com -spec: - group: crunchydata.com - names: - kind: Pgtask - listKind: PgtaskList - plural: pgtasks - singular: pgtask - scope: Namespaced - version: v1 - validation: - openAPIV3Schema: - properties: - status: - properties: - state: { type: string } - message: { type: string } diff --git 
a/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt b/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt deleted file mode 100644 index 519028c63b..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt +++ /dev/null @@ -1,21 +0,0 @@ ------BEGIN CERTIFICATE----- -MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJ -RTESMBAGA1UEChMJQmFsdGltb3JlMRMwEQYDVQQLEwpDeWJlclRydXN0MSIwIAYD -VQQDExlCYWx0aW1vcmUgQ3liZXJUcnVzdCBSb290MB4XDTAwMDUxMjE4NDYwMFoX -DTI1MDUxMjIzNTkwMFowWjELMAkGA1UEBhMCSUUxEjAQBgNVBAoTCUJhbHRpbW9y -ZTETMBEGA1UECxMKQ3liZXJUcnVzdDEiMCAGA1UEAxMZQmFsdGltb3JlIEN5YmVy -VHJ1c3QgUm9vdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKMEuyKr -mD1X6CZymrV51Cni4eiVgLGw41uOKymaZN+hXe2wCQVt2yguzmKiYv60iNoS6zjr -IZ3AQSsBUnuId9Mcj8e6uYi1agnnc+gRQKfRzMpijS3ljwumUNKoUMMo6vWrJYeK -mpYcqWe4PwzV9/lSEy/CG9VwcPCPwBLKBsua4dnKM3p31vjsufFoREJIE9LAwqSu -XmD+tqYF/LTdB1kC1FkYmGP1pWPgkAx9XbIGevOF6uvUA65ehD5f/xXtabz5OTZy -dc93Uk3zyZAsuT3lySNTPx8kmCFcB5kpvcY67Oduhjprl3RjM71oGDHweI12v/ye -jl0qhqdNkNwnGjkCAwEAAaNFMEMwHQYDVR0OBBYEFOWdWTCCR1jMrPoIVDaGezq1 -BE3wMBIGA1UdEwEB/wQIMAYBAf8CAQMwDgYDVR0PAQH/BAQDAgEGMA0GCSqGSIb3 -DQEBBQUAA4IBAQCFDF2O5G9RaEIFoN27TyclhAO992T9Ldcw46QQF+vaKSm2eT92 -9hkTI7gQCvlYpNRhcL0EYWoSihfVCr3FvDB81ukMJY2GQE/szKN+OMY3EU/t3Wgx -jkzSswF07r51XgdIGn9w/xZchMB5hbgF/X++ZRGjD8ACtPhSNzkE1akxehi/oCr0 -Epn3o0WC4zxe9Z2etciefC7IpJ5OCBRLbf1wbWsaY71k5h+3zvDyny67G7fyUIhz -ksLi4xaNmjICq44Y3ekQEe5+NauQrz4wlHrQMz2nZQ/1/I6eYs9HRCwBXbsdtTLS -R9I4LtD+gdwyah617jzV/OeBHRnDJELqYzmp ------END CERTIFICATE----- diff --git a/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config b/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config deleted file mode 100644 index d4af269efb..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config +++ /dev/null @@ -1,5 +0,0 @@ -Host * - StrictHostKeyChecking no - IdentityFile /tmp/id_ed25519 - Port 2022 - User pgbackrest diff --git a/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config b/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config deleted file mode 100644 index 3a96f209da..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config +++ /dev/null @@ -1,136 +0,0 @@ -# $OpenBSD: sshd_config,v 1.100 2016/08/15 12:32:04 naddy Exp $ - -# This is the sshd server system-wide configuration file. See -# sshd_config(5) for more information. - -# This sshd was compiled with PATH=/usr/local/bin:/usr/bin - -# The strategy used for options in the default sshd_config shipped with -# OpenSSH is to specify options with their default value where -# possible, but leave them commented. Uncommented options override the -# default value. - -# If you want to change the port on a SELinux system, you have to tell -# SELinux about this change. 
-# semanage port -a -t ssh_port_t -p tcp #PORTNUMBER -# -Port 2022 -#AddressFamily any -#ListenAddress 0.0.0.0 -#ListenAddress :: - -HostKey /sshd/ssh_host_ed25519_key - -# Ciphers and keying -#RekeyLimit default none - -# Logging -#SyslogFacility AUTH -SyslogFacility AUTHPRIV -#LogLevel INFO - -# Authentication: - -#LoginGraceTime 2m -PermitRootLogin no -StrictModes no -#MaxAuthTries 6 -#MaxSessions 10 - -PubkeyAuthentication yes - -# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2 -# but this is overridden so installations will only check .ssh/authorized_keys -#AuthorizedKeysFile /pgconf/authorized_keys -AuthorizedKeysFile /sshd/authorized_keys - -#AuthorizedPrincipalsFile none - -#AuthorizedKeysCommand none -#AuthorizedKeysCommandUser nobody - -# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts -#HostbasedAuthentication no -# Change to yes if you don't trust ~/.ssh/known_hosts for -# HostbasedAuthentication -#IgnoreUserKnownHosts no -# Don't read the user's ~/.rhosts and ~/.shosts files -#IgnoreRhosts yes - -# To disable tunneled clear text passwords, change to no here! -#PasswordAuthentication yes -#PermitEmptyPasswords no -PasswordAuthentication no - -# Change to no to disable s/key passwords -ChallengeResponseAuthentication yes -#ChallengeResponseAuthentication no - -# Kerberos options -#KerberosAuthentication no -#KerberosOrLocalPasswd yes -#KerberosTicketCleanup yes -#KerberosGetAFSToken no -#KerberosUseKuserok yes - -# GSSAPI options -#GSSAPIAuthentication yes -#GSSAPICleanupCredentials no -#GSSAPIStrictAcceptorCheck yes -#GSSAPIKeyExchange no -#GSSAPIEnablek5users no - -# Set this to 'yes' to enable PAM authentication, account processing, -# and session processing. If this is enabled, PAM authentication will -# be allowed through the ChallengeResponseAuthentication and -# PasswordAuthentication. Depending on your PAM configuration, -# PAM authentication via ChallengeResponseAuthentication may bypass -# the setting of "PermitRootLogin without-password". -# If you just want the PAM account and session checks to run without -# PAM authentication, then enable this but set PasswordAuthentication -# and ChallengeResponseAuthentication to 'no'. -# WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several -# problems. 
-UsePAM yes - -#AllowAgentForwarding yes -#AllowTcpForwarding yes -#GatewayPorts no -X11Forwarding yes -#X11DisplayOffset 10 -#X11UseLocalhost yes -#PermitTTY yes -#PrintMotd yes -#PrintLastLog yes -#TCPKeepAlive yes -#UseLogin no -#PermitUserEnvironment no -#Compression delayed -#ClientAliveInterval 0 -#ClientAliveCountMax 3 -#ShowPatchLevel no -#UseDNS yes -#PidFile /var/run/sshd.pid -#MaxStartups 10:30:100 -#PermitTunnel no -#ChrootDirectory none -#VersionAddendum none - -# no default banner path -#Banner none - -# Accept locale-related environment variables -AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES -AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT -AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE -AcceptEnv XMODIFIERS - -# override default of no subsystems -Subsystem sftp /usr/libexec/openssh/sftp-server - -# Example of overriding settings on a per-user basis -#Match User anoncvs -# X11Forwarding no -# AllowTcpForwarding no -# PermitTTY no -# ForceCommand cvs server diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/README.txt b/installers/ansible/roles/pgo-operator/files/pgo-configs/README.txt deleted file mode 100644 index e4e595a0f8..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/README.txt +++ /dev/null @@ -1,2 +0,0 @@ -JSON templates are stored in this directory, the postgres-operator -will read these templates and use them for creating various Kube kinds diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/affinity.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/affinity.json deleted file mode 100644 index a247bd9bb4..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/affinity.json +++ /dev/null @@ -1,14 +0,0 @@ - "nodeAffinity": { - "preferredDuringSchedulingIgnoredDuringExecution": [{ - "weight": 10, - "preference": { - "matchExpressions": [{ - "key": "{{.NodeLabelKey}}", - "operator": "{{.OperatorValue}}", - "values": [ - "{{.NodeLabelValue}}" - ] - }] - } - }] - } diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json deleted file mode 100644 index 82b326c7cf..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-job.json +++ /dev/null @@ -1,84 +0,0 @@ -{ - "apiVersion": "batch/v1", - "kind": "Job", - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": "crunchydata", - "pgo-backrest": "true", - "pgo-backrest-job": "true", - "backrest-command": "{{.Command}}", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - "template": { - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": "crunchydata", - "pgo-backrest": "true", - "pgo-backrest-job": "true", - "backrest-command": "{{.Command}}", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - "volumes": [ - {{.PgbackrestRestoreVolumes}} - ], - "securityContext": {{.SecurityContext}}, - "serviceAccountName": "pgo-backrest", - "containers": [{ - "name": "backrest", - "image": "{{.PGOImagePrefix}}/pgo-backrest:{{.PGOImageTag}}", - "volumeMounts": [ - {{.PgbackrestRestoreVolumeMounts}} - ], - "env": [{ - "name": "COMMAND", - "value": "{{.Command}}" - }, { - "name": "COMMAND_OPTS", - "value": "{{.CommandOpts}}" - }, { - "name": "PITR_TARGET", - "value": "{{.PITRTarget}}" - }, { - "name": "PODNAME", - "value": "{{.PodName}}" - }, { - "name": "PGBACKREST_STANZA", - "value": "{{.PgbackrestStanza}}" - }, { - "name": 
"PGBACKREST_DB_PATH", - "value": "{{.PgbackrestDBPath}}" - }, { - "name": "PGBACKREST_REPO_PATH", - "value": "{{.PgbackrestRepoPath}}" - }, { - "name": "PGBACKREST_REPO_TYPE", - "value": "{{.PgbackrestRepoType}}" - },{ - "name": "PGHA_PGBACKREST_LOCAL_S3_STORAGE", - "value": "{{.BackrestLocalAndS3Storage}}" - },{ - "name": "PGHA_PGBACKREST_S3_VERIFY_TLS", - "value": "{{.PgbackrestS3VerifyTLS}}" - },{ - "name": "PGBACKREST_LOG_PATH", - "value": "/tmp" - }, { - "name": "NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }] - }], - "restartPolicy": "Never" - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-restore-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-restore-job.json deleted file mode 100644 index ba7a9bd19e..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/backrest-restore-job.json +++ /dev/null @@ -1,102 +0,0 @@ -{ - "apiVersion": "batch/v1", - "kind": "Job", - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": "crunchydata", - "pgo-backrest-restore": "true", - "pg-cluster": "{{.ClusterName}}", - "backrest-restore-to-pvc": "{{.ToClusterPVCName}}", - "workflowid": "{{.WorkflowID}}" - } - }, - "spec": { - "template": { - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": "crunchydata", - "pgo-backrest-restore": "true", - "backrest-restore-to-pvc": "{{.ToClusterPVCName}}", - "pg-cluster": "{{.ClusterName}}", - "service-name": "{{.ClusterName}}" - } - }, - "spec": { - "volumes": [ - { - "name": "pgdata", - "persistentVolumeClaim": { - "claimName": "{{.ToClusterPVCName}}" - } - }, - { - "name": "sshd", - "secret": { - "secretName": "{{.ClusterName}}-backrest-repo-config", - "defaultMode": 511 - } - } - {{.TablespaceVolumes}} - ], - "securityContext": {{.SecurityContext}}, - "serviceAccountName": "pgo-backrest", - "containers": [{ - "name": "backrest", - "image": "{{.PGOImagePrefix}}/pgo-backrest-restore:{{.PGOImageTag}}", - "volumeMounts": [ - { - "mountPath": "/pgdata", - "name": "pgdata", - "readOnly": false - }, - { - "mountPath": "/sshd", - "name": "sshd", - "readOnly": true - } - {{.TablespaceVolumeMounts}} - ], - "env": [ - {{.PgbackrestS3EnvVars}} - { - "name": "COMMAND_OPTS", - "value": "{{.CommandOpts}}" - }, - { - "name": "PITR_TARGET", - "value": "{{.PITRTarget}}" - }, - { - "name": "PGBACKREST_STANZA", - "value": "{{.PgbackrestStanza}}" - }, { - "name": "PGBACKREST_DB_PATH", - "value": "{{.PgbackrestDBPath}}" - }, { - "name": "PGBACKREST_REPO1_PATH", - "value": "{{.PgbackrestRepo1Path}}" - }, { - "name": "PGBACKREST_REPO1_HOST", - "value": "{{.PgbackrestRepo1Host}}" - }, { - "name": "PGBACKREST_LOG_PATH", - "value": "/tmp" - }, { - "name": "NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }] - }], - "affinity": { - {{.NodeSelector}} - }, - "restartPolicy": "Never" - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json deleted file mode 100644 index ecd2cf735a..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-bootstrap-job.json +++ /dev/null @@ -1,237 +0,0 @@ -{ - "apiVersion": "batch/v1", - "kind": "Job", - "metadata": { - "name": "{{.Name}}-bootstrap", - "labels": { - "vendor": "crunchydata", - "pgo-backrest-job": "true", - "pgha-bootstrap": "{{.Name}}", - {{.DeploymentLabels}} - } - }, - "spec": { - 
"template": { - "metadata": { - "labels": { - "name": "{{.Name}}-bootstrap", - "vendor": "crunchydata", - "pgha-bootstrap": "{{.Name}}", - {{.PodLabels}} - } - }, - "spec": { - "securityContext": {{.SecurityContext}}, - "serviceAccountName": "pgo-pg", - "containers": [{ - "name": "database", - "image": "{{.CCPImagePrefix}}/{{.CCPImage}}:{{.CCPImageTag}}", - {{.ContainerResources}} - "env": [{ - "name": "PGHA_PG_PORT", - "value": "{{.Port}}" - }, { - "name": "PGHA_USER", - "value": "postgres" - }, - { - "name": "PGHA_INIT", - "value": "true" - }, - { - "name": "PGHA_BOOTSTRAP_METHOD", - "value": "pgbackrest_init" - }, - {{if .Tablespaces}} - { - "name": "PGHA_TABLESPACES", - "value": "{{ .Tablespaces }}" - }, - {{ end }} - { - "name": "PATRONI_POSTGRESQL_DATA_DIR", - "value": "/pgdata/{{.Name}}" - }, - {{.PgbackrestS3EnvVars}} - {{.PgbackrestEnvVars}} - { - "name": "PGHA_DATABASE", - "value": "{{.Database}}" - }, { - "name": "PGHA_CRUNCHYADM", - "value": "true" - }, { - "name": "PGHA_REPLICA_REINIT_ON_START_FAIL", - "value": "{{.ReplicaReinitOnStartFail}}" - }, { - "name": "PGHA_SYNC_REPLICATION", - "value": "{{.SyncReplication}}" - }, { - "name": "PGHA_TLS_ENABLED", - "value": "{{.TLSEnabled}}" - }, { - "name": "PGHA_TLS_ONLY", - "value": "{{.TLSOnly}}" - }, { - "name": "PGHA_STANDBY", - "value": "{{.Standby}}" - }, { - "name": "PATRONI_KUBERNETES_NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }, { - "name": "PATRONI_KUBERNETES_SCOPE_LABEL", - "value": "{{.ScopeLabel}}" - }, { - "name": "PATRONI_SCOPE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.labels['{{.ScopeLabel}}']" - } - } - }, { - "name": "PATRONI_KUBERNETES_LABELS", - "value": "{vendor: \"crunchydata\"}" - }, { - "name": "PATRONI_LOG_LEVEL", - "value": "INFO" - }, { - "name": "PGHOST", - "value": "/tmp" - }, { - "name": "RESTORE_OPTS", - "value": "{{.RestoreOpts}}" - }], - "volumeMounts": [{ - "mountPath": "/pgdata", - "name": "pgdata" - }, { - "mountPath": "/pgconf/pguser", - "name": "user-volume" - }, { - "mountPath": "/pgconf/pgreplicator", - "name": "primary-volume" - }, { - "mountPath": "/pgconf/pgsuper", - "name": "root-volume" - }, - {{if .TLSEnabled}} - { - "mountPath": "/pgconf/tls", - "name": "tls-server" - }, - {{ end }} - { - "mountPath": "/sshd", - "name": "sshd", - "readOnly": true - }, { - "mountPath": "/pgconf", - "name": "pgconf-volume" - }, { - "mountPath": "/dev/shm", - "name": "dshm" - }, { - "mountPath": "/etc/pgbackrest/conf.d", - "name": "pgbackrest-config" - }, { - "mountPath": "/crunchyadm", - "name": "crunchyadm" - } - {{.TablespaceVolumeMounts}} - ], - "imagePullPolicy": "IfNotPresent" - }], - "volumes": [{ - "name": "pgdata", - {{.PVCName}} - }, { - "name": "user-volume", - "secret": { - "secretName": "{{.UserSecretName}}" - } - }, { - "name": "primary-volume", - "secret": { - "secretName": "{{.PrimarySecretName}}" - } - }, { - "name": "root-volume", - "secret": { - "secretName": "{{.RootSecretName}}" - } - }, { - "name": "sshd", - "secret": { - "secretName": "{{.RestoreFrom}}-backrest-repo-config", - "defaultMode": 511 - } - }, - {{if .TLSEnabled}} - { - "name": "tls-server", - "projected": { - "defaultMode": 288, - "sources": [ - { - "secret": { - "name": "{{.TLSSecret}}" - } - }, - { - "secret": { - "name": "{{.CASecret}}" - } - } - ] - } - }, - {{ end }} - { - "name": "crunchyadm", - "emptyDir": {} - }, - { - "name": "dshm", - "emptyDir": { - "medium": "Memory" - } - }, - { - "name": "pgbackrest-config", - "projected": { "sources": [] } - }, 
- { - "name": "pgconf-volume", - "projected": { - "sources": [ - {{if .ConfVolume}} - { - "configMap": { - "name": {{.ConfVolume}} - } - }, - {{end}} - { - "configMap": { - "name": "{{.ClusterName}}-pgha-config", - "optional": true - } - } - ] - } - } - {{.TablespaceVolumes}}], - "affinity": { - {{.NodeSelector}} - {{if and .NodeSelector .PodAntiAffinity}},{{end}} - {{.PodAntiAffinity}} - }, - "restartPolicy": "Never" - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json deleted file mode 100644 index 4a44785b27..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-deployment.json +++ /dev/null @@ -1,421 +0,0 @@ -{ - "kind": "Deployment", - "apiVersion": "apps/v1", - "metadata": { - "name": "{{.Name}}", - "labels": { - "vendor": "crunchydata", - "pgo-pg-database": "true", - {{.DeploymentLabels }} - } - }, - "spec": { - "replicas": {{.Replicas}}, - "selector": { - "matchLabels": { - "vendor": "crunchydata", - {{.DeploymentLabels }} - } - }, - "template": { - "metadata": { - {{ if .PodAnnotations }} - "annotations": {{ .PodAnnotations }}, - {{ end }} - "labels": { - "name": "{{.Name}}", - "vendor": "crunchydata", - "pgo-pg-database": "true", - {{.PodLabels }} - } - }, - "spec": { - "securityContext": {{.SecurityContext}}, - "serviceAccountName": "pgo-pg", - "containers": [ - { - "name": "database", - "image": "{{.CCPImagePrefix}}/{{.CCPImage}}:{{.CCPImageTag}}", - "readinessProbe": { - "exec": { - "command": [ - "/opt/cpm/bin/health/pgha-readiness.sh" - ] - }, - "initialDelaySeconds": 15 - }, - "livenessProbe": { - "exec": { - "command": [ - "/opt/cpm/bin/health/pgha-liveness.sh" - ] - }, - "initialDelaySeconds": 30, - "periodSeconds": 15, - "timeoutSeconds": 10 - }, - {{.ContainerResources }} - "env": [{ - "name": "PGHA_PG_PORT", - "value": "{{.Port}}" - }, { - "name": "PGHA_USER", - "value": "postgres" - }, - {{if .IsInit}} - { - "name": "PGHA_INIT", - "valueFrom": { - "configMapKeyRef": { - "name": "{{.ClusterName}}-pgha-config", - "key": "init" - } - } - }, - {{ end }} - {{if .Tablespaces}} - { - "name": "PGHA_TABLESPACES", - "value": "{{ .Tablespaces }}" - }, - {{ end }} - { - "name": "PATRONI_POSTGRESQL_DATA_DIR", - "value": "/pgdata/{{.Name}}" - }, - {{.PgbackrestS3EnvVars}} - {{.PgbackrestEnvVars}} - {{.PgmonitorEnvVars}} - { - "name": "PGHA_DATABASE", - "value": "{{.Database}}" - }, { - "name": "PGHA_CRUNCHYADM", - "value": "true" - }, { - "name": "PGHA_REPLICA_REINIT_ON_START_FAIL", - "value": "{{.ReplicaReinitOnStartFail}}" - }, { - "name": "PGHA_SYNC_REPLICATION", - "value": "{{.SyncReplication}}" - }, { - "name": "PGHA_TLS_ENABLED", - "value": "{{.TLSEnabled}}" - }, { - "name": "PGHA_TLS_ONLY", - "value": "{{.TLSOnly}}" - }, { - "name": "PGHA_STANDBY", - "value": "{{.Standby}}" - }, { - "name": "PATRONI_KUBERNETES_NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }, { - "name": "PATRONI_KUBERNETES_SCOPE_LABEL", - "value": "{{.ScopeLabel}}" - }, { - "name": "PATRONI_SCOPE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.labels['{{.ScopeLabel}}']" - } - } - }, { - "name": "PATRONI_KUBERNETES_LABELS", - "value": "{vendor: \"crunchydata\"}" - }, { - "name": "PATRONI_LOG_LEVEL", - "value": "INFO" - }, { - "name": "PGHOST", - "value": "/tmp" - }], - - - "volumeMounts": [{ - "mountPath": "/pgdata", - "name": "pgdata", - "readOnly": false - }, { - "mountPath": 
"/pgconf/pguser", - "name": "user-volume" - }, { - "mountPath": "/pgconf/pgreplicator", - "name": "primary-volume" - }, { - "mountPath": "/pgconf/pgsuper", - "name": "root-volume" - }, - {{if .TLSEnabled}} - { - "mountPath": "/pgconf/tls", - "name": "tls-server" - }, - {{if .ReplicationTLSSecret}} - { - "mountPath": "/pgconf/tls-replication", - "name": "tls-replication" - }, - {{ end }} - {{ end }} - { - "mountPath": "/sshd", - "name": "sshd", - "readOnly": true - }, { - "mountPath": "/pgconf", - "name": "pgconf-volume" - }, { - "mountPath": "/recover", - "name": "recover-volume" - }, - { - "mountPath": "/dev/shm", - "name": "dshm" - }, - { - "mountPath": "/etc/pgbackrest/conf.d", - "name": "pgbackrest-config" - }, - { - "mountPath": "/crunchyadm", - "name": "crunchyadm" - }, - { - "mountPath": "/etc/podinfo", - "name": "podinfo" - } - {{.TablespaceVolumeMounts}} - ], - - "ports": [{ - "containerPort": 5432, - "protocol": "TCP" - }, { - "containerPort": 8009, - "protocol": "TCP" - }], - "imagePullPolicy": "IfNotPresent" - }{{if .EnableCrunchyadm}}, - { - "name": "crunchyadm", - "image": "{{.CCPImagePrefix}}/crunchy-admin:{{.CCPImageTag}}", - "securityContext": { - "runAsUser": 17 - }, - "readinessProbe": { - "exec": { - "command": [ - "/opt/cpm/bin/crunchyadm-readiness.sh" - ] - }, - "initialDelaySeconds": 30, - "timeoutSeconds": 10 - }, - "env": [ - { - "name": "PGHOST", - "value": "/crunchyadm" - } - ], - "volumeMounts": [ - { - "mountPath": "/crunchyadm", - "name": "crunchyadm" - } - ], - "imagePullPolicy": "IfNotPresent" - }{{ end }} - - {{.ExporterAddon }} - - {{.BadgerAddon }} - - ], - "volumes": [{ - "name": "pgdata", - {{.PVCName}} - }, { - "name": "user-volume", - "secret": { - "secretName": "{{.UserSecretName}}" - } - }, { - "name": "primary-volume", - "secret": { - "secretName": "{{.PrimarySecretName}}" - } - }, { - "name": "sshd", - "secret": { - "secretName": "{{.ClusterName}}-backrest-repo-config", - "defaultMode": 511 - } - }, { - "name": "root-volume", - "secret": { - "secretName": "{{.RootSecretName}}" - } - }, - {{if .TLSEnabled}} - { - "name": "tls-server", - "projected": { - "defaultMode": 288, - "sources": [ - { - "secret": { - "name": "{{.TLSSecret}}" - } - }, - {{if .ReplicationTLSSecret}} - { - "secret": { - "name": "{{.ReplicationTLSSecret}}", - "items": [ - { - "key": "tls.key", - "path": "tls-replication.key" - }, - { - "key": "tls.crt", - "path": "tls-replication.crt" - } - ] - } - }, - {{ end }} - { - "secret": { - "name": "{{.CASecret}}" - } - } - ] - } - }, - {{if .ReplicationTLSSecret}} - { - "name": "tls-replication", - "emptyDir": { - "medium": "Memory", - "sizeLimit": "2Mi" - } - }, - {{ end }} - {{ end }} - { - "name": "recover-volume", - "emptyDir": { "medium": "Memory" } - }, { - "name": "report", - "emptyDir": { "medium": "Memory" } - }, { - "name": "crunchyadm", - "emptyDir": {} - }, - { - "name": "dshm", - "emptyDir": { - "medium": "Memory" - } - }, - { - "name": "pgbackrest-config", - "projected": { "sources": [] } - }, - { - "name": "pgconf-volume", - "projected": { - "sources": [ - {{if .ConfVolume}} - { - "configMap": { - "name": {{.ConfVolume}} - } - }, - {{end}} - { - "configMap": { - "name": "{{.ClusterName}}-pgha-config", - "optional": true - } - } - ] - } - }, - { - "name": "podinfo", - "downwardAPI": { - "defaultMode": 420, - "items": [ - { - "path": "cpu_limit", - "resourceFieldRef": { - "containerName": "database", - "divisor": "1m", - "resource": "limits.cpu" - } - }, - { - "path": "cpu_request", - "resourceFieldRef": { - 
"containerName": "database", - "divisor": "1m", - "resource": "requests.cpu" - } - }, - { - "path": "mem_limit", - "resourceFieldRef": { - "containerName": "database", - "resource": "limits.memory" - } - }, - { - "path": "mem_request", - "resourceFieldRef": { - "containerName": "database", - "resource": "requests.memory" - } - }, - { - "fieldRef": { - "apiVersion": "v1", - "fieldPath": "metadata.labels" - }, - "path": "labels" - }, - { - "fieldRef": { - "apiVersion": "v1", - "fieldPath": "metadata.annotations" - }, - "path": "annotations" - } - ] - } - } - {{.TablespaceVolumes}} - ], - "affinity": { - {{.NodeSelector}} - {{if and .NodeSelector .PodAntiAffinity}},{{end}} - {{.PodAntiAffinity}} - }, - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - }, - "strategy": { - "type": "RollingUpdate", - "rollingUpdate": { - "maxUnavailable": 1, - "maxSurge": 1 - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-service.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-service.json deleted file mode 100644 index a76abf6bad..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/cluster-service.json +++ /dev/null @@ -1,64 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "{{.Name}}", - "labels": { - "vendor": "crunchydata", - "pg-cluster": "{{.ClusterName}}", - "name": "{{.Name}}" - } - }, - "spec": { - "ports": [ - {{ if .PGBadgerPort }} - { - "name": "pgbadger", - "protocol": "TCP", - "port": {{.PGBadgerPort}}, - "targetPort": {{.PGBadgerPort}}, - "nodePort": 0 - }, - {{ end }} - {{ if .ExporterPort }} - { - "name": "postgres-exporter", - "protocol": "TCP", - "port": {{.ExporterPort}}, - "targetPort": {{.ExporterPort}}, - "nodePort": 0 - }, - {{ end }} - {{ if or (eq .Name .ClusterName) (eq .Name (printf "%s%s" .ClusterName "-replica")) }} - { - "name": "sshd", - "protocol": "TCP", - "port": 2022, - "targetPort": 2022, - "nodePort": 0 - }, - {{ end }} - { - "name": "postgres", - "protocol": "TCP", - "port": {{.Port}}, - "targetPort": {{.Port}}, - "nodePort": 0 - } - ], - "selector": { - {{ if or (eq .Name .ClusterName) (eq .Name (printf "%s%s" .ClusterName "-replica")) }} - "pg-cluster": "{{.ClusterName}}", - {{ if eq .Name (printf "%s%s" .ClusterName "-replica") }} - "role": "replica" - {{else}} - "role": "master" - {{end}} - {{else}} - "service-name": "{{.ServiceName}}" - {{end}} - }, - "type": "{{.ServiceType}}", - "sessionAffinity": "None" - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/container-resources.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/container-resources.json deleted file mode 100644 index 005052405f..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/container-resources.json +++ /dev/null @@ -1,24 +0,0 @@ -{{ if or .RequestsMemory .RequestsCPU .LimitsMemory .LimitsCPU }} -"resources": { - {{ if or .LimitsMemory .LimitsCPU }} - "limits": { - {{ if .LimitsCPU }} - "cpu": "{{.LimitsCPU}}"{{ if .LimitsMemory }},{{ end }} - {{ end }} - {{ if .LimitsMemory }} - "memory": "{{.LimitsMemory}}" - {{ end }} - }{{ if or .RequestsMemory .RequestsCPU }},{{ end }} - {{ end }} - {{ if or .RequestsMemory .RequestsCPU }} - "requests": { - {{ if .RequestsCPU }} - "cpu": "{{.RequestsCPU}}"{{ if .RequestsMemory }},{{ end }} - {{ end }} - {{ if .RequestsMemory }} - "memory": "{{.RequestsMemory}}" - {{ end }} - } - {{ end }} -}, -{{ end }} diff --git 
a/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json deleted file mode 100644 index c40a26e5ef..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/exporter.json +++ /dev/null @@ -1,53 +0,0 @@ -,{ - "name": "exporter", - "image": "{{.PGOImagePrefix}}/crunchy-postgres-exporter:{{.PGOImageTag}}", - "ports": [{ - "containerPort": {{.ExporterPort}}, - "protocol": "TCP" - }], - {{.ContainerResources }} - "env": [ - { - "name": "EXPORTER_PG_HOST", - "value": "127.0.0.1" - }, - { - "name": "EXPORTER_PG_PORT", - "value": "{{.PgPort}}" - }, - { - "name": "EXPORTER_PG_DATABASE", - "value": "postgres" - }, - { - "name": "EXPORTER_PG_PARAMS", - "value": {{ if .TLSOnly }}"sslmode=require"{{ else }}"sslmode=disable"{{ end }} - }, - { - "name": "JOB_NAME", - "value": "{{.JobName}}" - }, - { - "name": "POSTGRES_EXPORTER_PORT", - "value": "{{.ExporterPort}}" - }, - { - "name": "EXPORTER_PG_USER", - "valueFrom": { - "secretKeyRef": { - "name": "{{.CollectSecretName}}", - "key": "username" - } - } - }, - { - "name": "EXPORTER_PG_PASSWORD", - "valueFrom": { - "secretKeyRef": { - "name": "{{.CollectSecretName}}", - "key": "password" - } - } - } - ] -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgadmin-service-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgadmin-service-template.json deleted file mode 100644 index b2be1de8eb..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgadmin-service-template.json +++ /dev/null @@ -1,26 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "{{.Name}}", - "labels": { - "vendor": "crunchydata", - "name": "{{.Name}}", - "pgadmin": "true", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - "ports": [{ - "protocol": "TCP", - "port": {{.Port}}, - "targetPort": {{.Port}}, - "nodePort": 0 - }], - "selector": { - "name": "{{.Name}}" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgadmin-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgadmin-template.json deleted file mode 100644 index 5ea1d44249..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgadmin-template.json +++ /dev/null @@ -1,80 +0,0 @@ -{ - "kind": "Deployment", - "apiVersion": "apps/v1", - "metadata": { - "name": "{{.Name}}", - "labels": { - "name": "{{.Name}}", - "crunchy-pgadmin": "true", - "pg-cluster": "{{.ClusterName}}", - "service-name": "{{.Name}}", - "vendor": "crunchydata" - } - }, - "spec": { - "replicas": 1, - "selector": { - "matchLabels": { - "name": "{{.Name}}", - "crunchy-pgadmin": "true", - "pg-cluster": "{{.ClusterName}}", - "service-name": "{{.Name}}", - "vendor": "crunchydata" - } - }, - "template": { - "metadata": { - "labels": { - "name": "{{.Name}}", - "crunchy-pgadmin": "true", - "pg-cluster": "{{.ClusterName}}", - "service-name": "{{.Name}}", - "vendor": "crunchydata" - } - }, - "spec": { - "serviceAccountName": "pgo-default", - {{ if not .DisableFSGroup }} - "securityContext": { - "fsGroup": 2 - }, - {{ end }} - "containers": [{ - "name": "pgadminweb", - "image": "{{.CCPImagePrefix}}/crunchy-pgadmin4:{{.CCPImageTag}}", - "ports": [{ - "containerPort": {{.Port}}, - "protocol": "TCP" - }], - "env": [{ - "name": "PGADMIN_SETUP_EMAIL", - "value": "{{.InitUser}}" - },{ - "name": "PGADMIN_SETUP_PASSWORD", - "value": "{{.InitPass}}" - }], - 
"volumeMounts": [{ - "name": "pgadmin-datadir", - "mountPath": "/var/lib/pgadmin", - "readOnly": false - }] - }], - "volumes": [{ - "name": "pgadmin-datadir", - "persistentVolumeClaim": { - "claimName": "{{.PVCName}}" - } - }], - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - }, - "strategy": { - "type": "RollingUpdate", - "rollingUpdate": { - "maxUnavailable": 1, - "maxSurge": 1 - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbackrest-env-vars.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbackrest-env-vars.json deleted file mode 100644 index fcf64b9679..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbackrest-env-vars.json +++ /dev/null @@ -1,48 +0,0 @@ -{ - "name": "PGBACKREST_STANZA", - "value": "{{.PgbackrestStanza}}" -}, -{ - "name": "PGBACKREST_REPO1_HOST", - "value": "{{.PgbackrestRepo1Host}}" -}, -{ - "name": "BACKREST_SKIP_CREATE_STANZA", - "value": "true" -}, -{ - "name": "PGHA_PGBACKREST", - "value": "true" -}, -{ - "name": "PGBACKREST_REPO1_PATH", - "value": "{{.PgbackrestRepo1Path}}" -}, -{ - "name": "PGBACKREST_DB_PATH", - "value": "{{.PgbackrestDBPath}}" -}, -{ - "name": "ENABLE_SSHD", - "value": "true" -}, -{ - "name": "PGBACKREST_LOG_PATH", - "value": "/tmp" -}, -{ - "name": "PGBACKREST_PG1_SOCKET_PATH", - "value": "/tmp" -}, -{ - "name": "PGBACKREST_PG1_PORT", - "value": "{{.PgbackrestPGPort}}" -}, -{ - "name": "PGBACKREST_REPO_TYPE", - "value": "{{.PgbackrestRepo1Type}}" -}, -{ - "name": "PGHA_PGBACKREST_LOCAL_S3_STORAGE", - "value": "{{.PgbackrestLocalAndS3Storage}}" -}, diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbackrest-s3-env-vars.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbackrest-s3-env-vars.json deleted file mode 100644 index 6dd7afdea8..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbackrest-s3-env-vars.json +++ /dev/null @@ -1,45 +0,0 @@ -{ - "name": "PGBACKREST_REPO1_S3_BUCKET", - "value": "{{.PgbackrestS3Bucket}}" -}, -{ - "name": "PGBACKREST_REPO1_S3_ENDPOINT", - "value": "{{.PgbackrestS3Endpoint}}" -}, -{ - "name": "PGBACKREST_REPO1_S3_REGION", - "value": "{{.PgbackrestS3Region}}" -}, -{ - "name": "PGBACKREST_REPO1_S3_KEY", - "valueFrom": { - "secretKeyRef": { - "name": "{{.PgbackrestS3SecretName}}", - "key": "{{.PgbackrestS3Key}}" - } - } -}, -{ - "name": "PGBACKREST_REPO1_S3_KEY_SECRET", - "valueFrom": { - "secretKeyRef": { - "name": "{{.PgbackrestS3SecretName}}", - "key": "{{.PgbackrestS3KeySecret}}" - } - } -}, -{ - "name": "PGBACKREST_REPO1_S3_CA_FILE", - "value": "/sshd/aws-s3-ca.crt" -}, -{ - "name": "PGBACKREST_REPO1_HOST_CMD", - "value": "/usr/local/bin/archive-push-s3.sh" -}, -{ - "name": "PGBACKREST_REPO1_S3_URI_STYLE", - "value": "{{.PgbackrestS3URIStyle}}" -},{ - "name": "PGHA_PGBACKREST_S3_VERIFY_TLS", - "value": "{{.PgbackrestS3VerifyTLS}}" -}, diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbadger.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbadger.json deleted file mode 100644 index d9b04daa73..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbadger.json +++ /dev/null @@ -1,41 +0,0 @@ - ,{ - "name": "pgbadger", - "image": "{{.CCPImagePrefix}}/crunchy-pgbadger:{{.CCPImageTag}}", - "ports": [ { - "containerPort": {{.PGBadgerPort}}, - "protocol": "TCP" - } - ], - "readinessProbe": { - "tcpSocket": { - "port": {{.PGBadgerPort}} - }, - "initialDelaySeconds": 20, - "periodSeconds": 10 - }, - "env": [ { - 
"name": "BADGER_TARGET", - "value": "{{.BadgerTarget}}" - }, { - "name": "PGBADGER_SERVICE_PORT", - "value": "{{.PGBadgerPort}}" - } ], - "resources": { - "limits": { - "cpu": "500m", - "memory": "64Mi" - } - }, - "volumeMounts": [ - { - "mountPath": "/pgdata", - "name": "pgdata", - "readOnly": true - }, - { - "mountPath": "/report", - "name": "report", - "readOnly": false - } - ] - } diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer-template.json deleted file mode 100644 index 38202a7464..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer-template.json +++ /dev/null @@ -1,103 +0,0 @@ -{ - "kind": "Deployment", - "apiVersion": "apps/v1", - "metadata": { - "name": "{{.Name}}", - "labels": { - "name": "{{.Name}}", - "crunchy-pgbouncer": "true", - "pg-cluster": "{{.ClusterName}}", - "service-name": "{{.Name}}", - "vendor": "crunchydata" - } - }, - "spec": { - "replicas": {{.Replicas}}, - "selector": { - "matchLabels": { - "name": "{{.Name}}", - "crunchy-pgbouncer": "true", - "pg-cluster": "{{.ClusterName}}", - "service-name": "{{.Name}}", - "{{.PodAntiAffinityLabelName}}": "{{.PodAntiAffinityLabelValue}}", - "vendor": "crunchydata" - } - }, - "template": { - "metadata": { - {{ if .PodAnnotations }} - "annotations": {{ .PodAnnotations }}, - {{ end }} - "labels": { - "name": "{{.Name}}", - "crunchy-pgbouncer": "true", - "pg-cluster": "{{.ClusterName}}", - "service-name": "{{.Name}}", - "{{.PodAntiAffinityLabelName}}": "{{.PodAntiAffinityLabelValue}}", - "vendor": "crunchydata" - } - }, - "spec": { - "serviceAccountName": "pgo-default", - "containers": [{ - "name": "pgbouncer", - "image": "{{.CCPImagePrefix}}/crunchy-pgbouncer:{{.CCPImageTag}}", - "ports": [{ - "containerPort": {{.Port}}, - "protocol": "TCP" - }], - {{.ContainerResources }} - "env": [{ - "name": "PG_PASSWORD", - "valueFrom": { - "secretKeyRef": { - "name": "{{.PGBouncerSecret}}", - "key": "password" - } - } - }, { - "name": "PG_PRIMARY_SERVICE_NAME", - "value": "{{.PrimaryServiceName}}" - }], - "volumeMounts": [{ - "name": "pgbouncer-conf", - "mountPath": "/pgconf/", - "readOnly": false - }] - }], - "volumes": [ - { - "name": "pgbouncer-conf", - "projected": { - "sources": [ - { - "configMap": { - "name": "{{.PGBouncerConfigMap}}" - } - }, - { - "secret": { - "name": "{{.PGBouncerSecret}}", - "defaultMode": 511 - } - } - ] - } - } - ], - "affinity": { - {{.PodAntiAffinity}} - }, - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - }, - "strategy": { - "type": "RollingUpdate", - "rollingUpdate": { - "maxUnavailable": 1, - "maxSurge": 1 - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer.ini b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer.ini deleted file mode 100644 index 157f9a96e1..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer.ini +++ /dev/null @@ -1,22 +0,0 @@ -[databases] -* = host={{.PG_PRIMARY_SERVICE_NAME}} port={{.PG_PORT}} auth_user=pgbouncer - -[pgbouncer] -listen_port = 5432 -listen_addr = * -auth_type = md5 -auth_file = /pgconf/users.txt -auth_query = SELECT username, password from pgbouncer.get_auth($1) -pidfile = /tmp/pgbouncer.pid -logfile = /dev/stdout -admin_users = pgbouncer -stats_users = pgbouncer -default_pool_size = 20 -max_client_conn = 100 -max_db_connections = 0 -min_pool_size = 0 -pool_mode = session -reserve_pool_size = 0 -reserve_pool_timeout = 5 
-query_timeout = 0 -ignore_startup_parameters = extra_float_digits diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer_hba.conf b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer_hba.conf deleted file mode 100644 index 824c82705e..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgbouncer_hba.conf +++ /dev/null @@ -1 +0,0 @@ -host all all 0.0.0.0/0 md5 diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json deleted file mode 100644 index 3b827ecaac..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgdump-job.json +++ /dev/null @@ -1,94 +0,0 @@ -{ - "apiVersion": "batch/v1", - "kind": "Job", - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": "crunchydata", - "pgdump": "true", - "pg-cluster": "{{.ClusterName}}", - "pg-task": "{{.TaskName}}" - } - }, - "spec": { - "template": { - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor":"crunchydata", - "pgdump":"true", - "pg-cluster":"{{.ClusterName}}" - } - }, - "spec": { - "volumes": [ - { - "name": "pgdata", - "persistentVolumeClaim": { - "claimName": "{{.PgDumpPVC}}" - } - } - ], - "securityContext": {{.SecurityContext}}, - "serviceAccountName": "pgo-default", - "containers": [{ - "name": "pgdump", - "image": "{{.CCPImagePrefix}}/crunchy-pgdump:{{.CCPImageTag}}", - "volumeMounts": [ - { - "mountPath": "/pgdata", - "name": "pgdata", - "readOnly": false - } - ], - "env": [ - { - "name": "PGDUMP_HOST", - "value": "{{.PgDumpHost}}" - }, - { - "name": "PGDUMP_USER", - "valueFrom": { - "secretKeyRef": { - "name": "{{.PgDumpUserSecret}}", - "key": "username" - } - } - }, - { - "name": "PGDUMP_PASS", - "valueFrom": { - "secretKeyRef": { - "name": "{{.PgDumpUserSecret}}", - "key": "password" - } - } - }, - { - "name": "PGDUMP_DB", - "value": "{{.PgDumpDB}}" - }, - { - "name": "PGDUMP_PORT", - "value": "{{.PgDumpPort}}" - }, - { - "name": "PGDUMP_CUSTOM_OPTS", - "value": "{{.PgDumpOpts}}" - }, - { - "name": "PGDUMP_FILENAME", - "value": "{{.PgDumpFilename}}" - }, - { - "name": "PGDUMP_ALL", - "value": "{{.PgDumpAll}}" - } - ] - } - ], - "restartPolicy": "Never" - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgmonitor-env-vars.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgmonitor-env-vars.json deleted file mode 100644 index a9135c184d..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgmonitor-env-vars.json +++ /dev/null @@ -1,9 +0,0 @@ -{ - "name": "PGMONITOR_PASSWORD", - "valueFrom": { - "secretKeyRef": { - "name": "{{.ExporterSecret}}", - "key": "password" - } - } -}, diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-service-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-service-template.json deleted file mode 100644 index 04a73c79a6..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-service-template.json +++ /dev/null @@ -1,26 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "{{.Name}}", - "labels": { - "vendor": "crunchydata", - "name": "{{.Name}}", - "pgo-backrest-repo": "true", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - "ports": [{ - "protocol": "TCP", - "port": {{.Port}}, - "targetPort": {{.Port}}, - "nodePort": 0 - }], - "selector": { - "name": "{{.Name}}" - }, - "type": 
"ClusterIP", - "sessionAffinity": "None" - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json deleted file mode 100644 index 5f9e5d5049..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-repo-template.json +++ /dev/null @@ -1,129 +0,0 @@ -{ - "kind": "Deployment", - "apiVersion": "apps/v1", - "metadata": { - "name": "{{.Name}}", - "labels": { - {{if .BootstrapCluster}} - "pgha-bootstrap": "{{.BootstrapCluster}}", - {{ end }} - "name": "{{.Name}}", - "pg-cluster": "{{.ClusterName}}", - "service-name": "{{.Name}}", - "vendor": "crunchydata", - "pgo-backrest-repo": "true" - } - }, - "spec": { - "replicas": {{.Replicas}}, - "selector": { - "matchLabels": { - "name": "{{.Name}}", - "pg-cluster": "{{.ClusterName}}", - "service-name": "{{.Name}}", - "vendor": "crunchydata", - "{{.PodAntiAffinityLabelName}}": "{{.PodAntiAffinityLabelValue}}", - "pgo-backrest-repo": "true" - } - }, - "template": { - "metadata": { - {{ if .PodAnnotations }} - "annotations": {{ .PodAnnotations }}, - {{ end }} - "labels": { - {{if .BootstrapCluster}} - "pgha-bootstrap": "{{.BootstrapCluster}}", - {{ end }} - "name": "{{.Name}}", - "pg-cluster": "{{.ClusterName}}", - "service-name": "{{.Name}}", - "vendor": "crunchydata", - "{{.PodAntiAffinityLabelName}}": "{{.PodAntiAffinityLabelValue}}", - "pgo-backrest-repo": "true" - } - }, - "spec": { - "securityContext": {{.SecurityContext}}, - "serviceAccountName": "pgo-default", - "containers": [{ - "name": "database", - "image": "{{.PGOImagePrefix}}/pgo-backrest-repo:{{.PGOImageTag}}", - "ports": [{ - "containerPort": {{.SshdPort}}, - "protocol": "TCP" - }], - {{.ContainerResources }} - "env": [ - {{.PgbackrestS3EnvVars}} - { - "name": "PGBACKREST_STANZA", - "value": "{{.PgbackrestStanza}}" - }, - { - "name": "SSHD_PORT", - "value": "{{.SshdPort}}" - }, - { - "name": "PGBACKREST_DB_PATH", - "value": "{{.PgbackrestDBPath}}" - }, - { - "name": "PGBACKREST_REPO_PATH", - "value": "{{.PgbackrestRepoPath}}" - }, - { - "name": "PGBACKREST_PG1_PORT", - "value": "{{.PgbackrestPGPort}}" - }, - { - "name": "PGBACKREST_LOG_PATH", - "value": "/tmp" - }, - { - "name": "PGBACKREST_PG1_SOCKET_PATH", - "value": "/tmp" - }, - { - "name": "PGBACKREST_DB_HOST", - "value": "{{.PGbackrestDBHost}}" - } - ], - "volumeMounts": [{ - "name": "sshd", - "mountPath": "/sshd", - "readOnly": true - }, { - "name": "backrestrepo", - "mountPath": "/backrestrepo", - "readOnly": false - }] - }], - "volumes": [{ - "name": "sshd", - "secret": { - "secretName": "{{.SshdSecretsName}}", - "defaultMode": 511 - } - }, { - "name": "backrestrepo", - "persistentVolumeClaim": { - "claimName": "{{.BackrestRepoClaimName}}" - } - }], - "affinity": { - {{.PodAntiAffinity}} - }, - "restartPolicy": "Always", - "dnsPolicy": "ClusterFirst" - } - }, - "strategy": { - "type": "RollingUpdate", - "rollingUpdate": { - "maxUnavailable": 1, - "maxSurge": 1 - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-role-binding.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-role-binding.json deleted file mode 100644 index 84f1c031fc..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-role-binding.json +++ /dev/null @@ -1,20 +0,0 @@ -{ - "apiVersion": "rbac.authorization.k8s.io/v1", - "kind": "RoleBinding", - "metadata": { - "name": "pgo-backrest-role-binding", - 
"namespace": "{{.TargetNamespace}}" - }, - "roleRef": { - "apiGroup": "rbac.authorization.k8s.io", - "kind": "Role", - "name": "pgo-backrest-role" - }, - "subjects": [ - { - "kind": "ServiceAccount", - "name": "pgo-backrest", - "namespace": "{{.TargetNamespace}}" - } - ] -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-role.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-role.json deleted file mode 100644 index ca1c5b4e0b..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-role.json +++ /dev/null @@ -1,33 +0,0 @@ -{ - "apiVersion": "rbac.authorization.k8s.io/v1", - "kind": "Role", - "metadata": { - "name": "pgo-backrest-role", - "namespace": "{{.TargetNamespace}}" - }, - "rules": [ - { - "apiGroups": [ - "" - ], - "resources": [ - "pods" - ], - "verbs": [ - "get", - "list" - ] - }, - { - "apiGroups": [ - "" - ], - "resources": [ - "pods/exec" - ], - "verbs": [ - "create" - ] - } - ] -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-sa.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-sa.json deleted file mode 100644 index d3d8d19c4b..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-backrest-sa.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "apiVersion": "v1", - "kind": "ServiceAccount", - "metadata": { - "name": "pgo-backrest", - "namespace": "{{.TargetNamespace}}" - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-client.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-client.json deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-default-sa.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-default-sa.json deleted file mode 100644 index 5a8a52865c..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-default-sa.json +++ /dev/null @@ -1,9 +0,0 @@ -{ - "apiVersion": "v1", - "kind": "ServiceAccount", - "metadata": { - "name": "pgo-default", - "namespace": "{{.TargetNamespace}}" - }, - "automountServiceAccountToken": false -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-pg-role-binding.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-pg-role-binding.json deleted file mode 100644 index b9d8209723..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-pg-role-binding.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "apiVersion":"rbac.authorization.k8s.io/v1", - "kind":"RoleBinding", - "metadata":{ - "name":"pgo-pg-role-binding", - "namespace":"{{.TargetNamespace}}", - "labels":{ - "vendor":"crunchydata" - } - }, - "roleRef":{ - "apiGroup":"rbac.authorization.k8s.io", - "kind":"Role", - "name":"pgo-pg-role" - }, - "subjects":[ - { - "kind":"ServiceAccount", - "name":"pgo-pg" - } - ] -} \ No newline at end of file diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-pg-role.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-pg-role.json deleted file mode 100644 index ffae6828b7..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-pg-role.json +++ /dev/null @@ -1,46 +0,0 @@ -{ - "apiVersion":"rbac.authorization.k8s.io/v1", - "kind":"Role", - "metadata":{ - "name":"pgo-pg-role", - "namespace":"{{.TargetNamespace}}", - "labels":{ - "vendor":"crunchydata" - } - }, - "rules":[ - { - "apiGroups":[ - "" - ], - "resources":[ - "configmaps" - ], - 
"verbs":[ - "create", - "get", - "list", - "patch", - "update", - "watch", - "delete", - "deletecollection" - ] - }, - { - "apiGroups":[ - "" - ], - "resources":[ - "pods" - ], - "verbs":[ - "get", - "list", - "patch", - "update", - "watch" - ] - } - ] -} \ No newline at end of file diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-pg-sa.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-pg-sa.json deleted file mode 100644 index d722cfc27f..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-pg-sa.json +++ /dev/null @@ -1,11 +0,0 @@ -{ - "apiVersion":"v1", - "kind":"ServiceAccount", - "metadata":{ - "name":"pgo-pg", - "namespace":"{{.TargetNamespace}}", - "labels":{ - "vendor":"crunchydata" - } - } -} \ No newline at end of file diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-role-binding.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-role-binding.json deleted file mode 100644 index df279ee347..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-role-binding.json +++ /dev/null @@ -1,25 +0,0 @@ -{ - "apiVersion": "rbac.authorization.k8s.io/v1", - "kind": "RoleBinding", - "metadata": { - "name": "pgo-target-role-binding", - "namespace": "{{.TargetNamespace}}" - }, - "roleRef": { - "apiGroup": "rbac.authorization.k8s.io", - "kind": "Role", - "name": "pgo-target-role" - }, - "subjects": [ - { - "kind": "ServiceAccount", - "name": "postgres-operator", - "namespace": "{{.OperatorNamespace}}" - }, - { - "kind": "ServiceAccount", - "name": "pgo-target", - "namespace": "{{.TargetNamespace}}" - } - ] -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-role.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-role.json deleted file mode 100644 index 1cb6a31cc5..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-role.json +++ /dev/null @@ -1,93 +0,0 @@ -{ - "apiVersion": "rbac.authorization.k8s.io/v1", - "kind": "Role", - "metadata": { - "name": "pgo-target-role", - "namespace": "{{.TargetNamespace}}" - }, - "rules": [ - { - "apiGroups": [ - "" - ], - "resources": [ - "configmaps", - "endpoints", - "pods", - "pods/exec", - "pods/log", - "replicasets", - "secrets", - "services", - "persistentvolumeclaims" - ], - "verbs":[ - "get", - "list", - "watch", - "create", - "patch", - "update", - "delete", - "deletecollection" - ] - }, - { - "apiGroups": [ - "apps" - ], - "resources": [ - "deployments" - ], - "verbs":[ - "get", - "list", - "watch", - "create", - "patch", - "update", - "delete", - "deletecollection" - ] - }, - { - "apiGroups": [ - "batch" - ], - "resources": [ - "jobs" - ], - "verbs":[ - "get", - "list", - "watch", - "create", - "patch", - "update", - "delete", - "deletecollection" - ] - }, - { - "apiGroups": [ - "crunchydata.com" - ], - "resources": [ - "pgclusters", - "pgpolicies", - "pgtasks", - "pgreplicas" - ], - "verbs":[ - "get", - "list", - "watch", - "create", - "patch", - "update", - "delete", - "deletecollection" - ] - } - ] -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-sa.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-sa.json deleted file mode 100644 index 5d31bd4441..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo-target-sa.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "apiVersion": "v1", - "kind": "ServiceAccount", - "metadata": { - "name": "pgo-target", - 
"namespace": "{{.TargetNamespace}}" - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json deleted file mode 100644 index 56dbf8b035..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgo.sqlrunner-template.json +++ /dev/null @@ -1,81 +0,0 @@ -{ - "apiVersion": "batch/v1", - "kind": "Job", - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": "crunchydata", - "pgo-sqlrunner": "true", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - "template": { - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": "crunchydata", - "pgo-sqlrunner": "true", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - "serviceAccountName": "pgo-default", - "containers": [ - { - "name": "sqlrunner", - "image": "{{.PGOImagePrefix}}/pgo-sqlrunner:{{.PGOImageTag}}", - "env": [ - { - "name": "PG_HOST", - "value": "{{.PGHost}}" - }, - { - "name": "PG_PORT", - "value": "{{.PGPort}}" - }, - { - "name": "PG_DATABASE", - "value": "{{.PGDatabase}}" - }, - { - "name": "PG_USER", - "valueFrom": { - "secretKeyRef": { - "name": "{{.PGUserSecret}}", - "key": "username" - } - } - }, - { - "name": "PG_PASSWORD", - "valueFrom": { - "secretKeyRef": { - "name": "{{.PGUserSecret}}", - "key": "password" - } - } - } - ], - "volumeMounts": [ - { - "mountPath": "/pgconf", - "name": "pgconf", - "readOnly": true - } - ] - } - ], - "volumes": [ - { - "name": "pgconf", - "configMap": { - "name": "{{.PGSQLConfigMap}}" - } - } - ], - "restartPolicy": "Never" - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json deleted file mode 100644 index 4dae8fda14..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pgrestore-job.json +++ /dev/null @@ -1,92 +0,0 @@ -{ - "apiVersion": "batch/v1", - "kind": "Job", - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": "crunchydata", - "pgrestore": "true", - "pg-cluster": "{{.ClusterName}}", - "pg-task": "{{.TaskName}}" - } - }, - "spec": { - "template": { - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": "crunchydata", - "pgrestore": "true", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - "volumes": [ - { - "name": "pgdata", - "persistentVolumeClaim": { - "claimName": "{{.FromClusterPVCName}}" - } - } - ], - "securityContext": {{.SecurityContext}}, - "serviceAccountName": "pgo-default", - "containers": [ - { - "name": "pgrestore", - "image": "{{.CCPImagePrefix}}/crunchy-pgrestore:{{.CCPImageTag}}", - "volumeMounts": [ - { - "mountPath": "/pgdata", - "name": "pgdata", - "readOnly": true - } - ], - "env": [ - { - "name": "PGRESTORE_USER", - "valueFrom": { - "secretKeyRef": { - "name": "{{.PgRestoreUserSecret}}", - "key": "username" - } - } - }, - { - "name": "PGRESTORE_PASS", - "valueFrom": { - "secretKeyRef": { - "name": "{{.PgRestoreUserSecret}}", - "key": "password" - } - } - }, - { - "name": "PGRESTORE_HOST", - "value": "{{.PgRestoreHost}}" - }, - { - "name": "PGRESTORE_DB", - "value": "{{.PgRestoreDB}}" - }, - { - "name": "PG_PRIMARY_PORT", - "value": "5432" - }, - { - "name": "PGRESTORE_CUSTOM_OPTS", - "value": "{{.PGRestoreOpts}}" - }, - { - "name": "PGRESTORE_BACKUP_TIMESTAMP", - "value": "{{.PITRTarget}}" - } - ] - } - ], - {{.NodeSelector}} - "restartPolicy": "Never" - } - } - } -} diff --git 
a/installers/ansible/roles/pgo-operator/files/pgo-configs/pod-anti-affinity.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pod-anti-affinity.json deleted file mode 100644 index 3f05f2a8f3..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pod-anti-affinity.json +++ /dev/null @@ -1,44 +0,0 @@ - "podAntiAffinity": { - "{{.AffinityType}}": [ - { - {{if eq .AffinityType "preferredDuringSchedulingIgnoredDuringExecution"}} - "weight": 1, - "podAffinityTerm": { - {{end}} - "labelSelector": { - "matchExpressions": [ - { - "key": "{{.VendorLabelKey}}", - "operator": "In", - "values": [ - "{{.VendorLabelValue}}" - ] - }, - { - "key": "{{.PodAntiAffinityLabelKey}}", - {{if eq .AffinityType "requiredDuringSchedulingIgnoredDuringExecution"}} - "operator": "In", - "values": [ - "required", - "require" - ] - {{else}} - "operator": "Exists" - {{end}} - }, - { - "key": "pg-cluster", - "operator": "In", - "values": [ - "{{.ClusterName}}" - ] - } - ] - }, - "topologyKey": "kubernetes.io/hostname" - {{if eq .AffinityType "preferredDuringSchedulingIgnoredDuringExecution"}} - } - {{end}} - } - ] - } diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pvc-matchlabels.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pvc-matchlabels.json deleted file mode 100644 index ef7f86f465..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pvc-matchlabels.json +++ /dev/null @@ -1 +0,0 @@ -"selector": { "matchLabels": { "{{.Key}}": "{{.Value}}" } }, diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pvc-storageclass.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pvc-storageclass.json deleted file mode 100644 index 688ddd55f0..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pvc-storageclass.json +++ /dev/null @@ -1,23 +0,0 @@ -{ - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "{{.Name}}", - "labels": { - "vendor": "crunchydata", - "pgremove": "true", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - "accessModes": [ - "{{.AccessMode}}" - ], - {{ if .StorageClass }}"storageClassName": "{{.StorageClass}}",{{ end }} - "resources": { - "requests": { - "storage": "{{.Size}}" - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/pvc.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/pvc.json deleted file mode 100644 index d57b24cd8a..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/pvc.json +++ /dev/null @@ -1,25 +0,0 @@ -{ - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "{{.Name}}", - "labels": { - "vendor": "crunchydata", - "pgremove": "true", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - - {{.MatchLabels}} - - "accessModes": [ - "{{.AccessMode}}" - ], - "resources": { - "requests": { - "storage": "{{.Size}}" - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/rmdata-job.json b/installers/ansible/roles/pgo-operator/files/pgo-configs/rmdata-job.json deleted file mode 100644 index b5f169fa4a..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/rmdata-job.json +++ /dev/null @@ -1,61 +0,0 @@ -{ - "apiVersion": "batch/v1", - "kind": "Job", - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": "crunchydata", - "pgrmdata": "true", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - "template": { - "metadata": { - "name": "{{.JobName}}", - "labels": { - "vendor": 
"crunchydata", - "pgrmdata": "true", - "pg-cluster": "{{.ClusterName}}" - } - }, - "spec": { - "serviceAccountName": "pgo-target", - "containers": [{ - "name": "rmdata", - "image": "{{.PGOImagePrefix}}/pgo-rmdata:{{.PGOImageTag}}", - "env": [{ - "name": "PG_CLUSTER", - "value": "{{.ClusterName}}" - }, { - "name": "PGHA_SCOPE", - "value": "{{.ClusterPGHAScope}}" - }, { - "name": "REPLICA_NAME", - "value": "{{.ReplicaName}}" - }, { - "name": "REMOVE_DATA", - "value": "{{.RemoveData}}" - }, { - "name": "REMOVE_BACKUP", - "value": "{{.RemoveBackup}}" - }, { - "name": "IS_BACKUP", - "value": "{{.IsBackup}}" - }, { - "name": "IS_REPLICA", - "value": "{{.IsReplica}}" - }, { - "name": "NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }] - }], - "restartPolicy": "Never" - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/files/pgo-configs/users.txt b/installers/ansible/roles/pgo-operator/files/pgo-configs/users.txt deleted file mode 100644 index 345272d141..0000000000 --- a/installers/ansible/roles/pgo-operator/files/pgo-configs/users.txt +++ /dev/null @@ -1,2 +0,0 @@ -{{range $key, $value := .}}"{{.Username}}" "{{.Password}}" -{{end}} diff --git a/installers/ansible/roles/pgo-operator/tasks/certs.yml b/installers/ansible/roles/pgo-operator/tasks/certs.yml deleted file mode 100644 index 4c66e89892..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/certs.yml +++ /dev/null @@ -1,54 +0,0 @@ ---- -- name: Ensure directory exists for local self-signed TLS certs. - file: - path: '{{ output_dir }}' - state: directory - tags: - - install - -- name: Generate RSA Key - command: openssl genrsa -out "{{ output_dir }}/server.key" 2048 - args: - creates: "{{ output_dir }}/server.key" - tags: - - install - -- name: Generate CSR - command: openssl req \ - -new \ - -subj '/C=US/ST=SC/L=Charleston/O=CrunchyData/CN=pg-operator' \ - -key "{{ output_dir }}/server.key" \ - -out "{{ output_dir }}/server.csr" - args: - creates: "{{ output_dir }}/server.csr" - tags: - - install - -- name: Generate Self-signed Certificate - command: openssl req \ - -x509 \ - -days 1825 \ - -key "{{ output_dir }}/server.key" \ - -in "{{ output_dir }}/server.csr" \ - -out "{{ output_dir }}/server.crt" - args: - creates: "{{ output_dir }}/server.crt" - tags: - - install - -- name: Ensure {{ pgo_keys_dir }} Directory Exists - file: - path: '{{ pgo_keys_dir }}' - state: directory - tags: - - install - -- name: Copy certificates to {{ pgo_keys_dir }} - command: "cp {{ output_dir }}/server.crt {{ pgo_keys_dir }}/client.crt" - tags: - - install - -- name: Copy keys to {{ pgo_keys_dir }} - command: "cp {{ output_dir }}/server.key {{ pgo_keys_dir }}/client.key" - tags: - - install diff --git a/installers/ansible/roles/pgo-operator/tasks/cleanup.yml b/installers/ansible/roles/pgo-operator/tasks/cleanup.yml deleted file mode 100644 index ffe9626c56..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/cleanup.yml +++ /dev/null @@ -1,291 +0,0 @@ ---- -- name: Use kubectl or oc - set_fact: - kubectl_or_oc: "{{ openshift_oc_bin if openshift_oc_bin is defined else 'kubectl' }}" - tags: - - uninstall - - update - -- name: Find watched namespaces - shell: | - {{ kubectl_or_oc }} get namespaces -o json --selector=vendor=crunchydata,pgo-installation-name={{ pgo_installation_name }} - register: ns_result - tags: - - uninstall - - update - -- name: Set Watched Namespaces - set_fact: - watched_namespaces: "{{ ns_result.stdout | from_json | json_query('items[*].metadata.name') }}" - tags: - 
- uninstall - - update - -- name: Delete PG Cluster Deployments - shell: | - {{ kubectl_or_oc }} delete deployment --selector=vendor=crunchydata -n {{ item }} - with_items: - - "{{ watched_namespaces }}" - when: not preserve_pg_clusters|bool - ignore_errors: yes - no_log: false - tags: - - uninstall - -- name: Delete PG Cluster Jobs - shell: | - {{ kubectl_or_oc }} delete jobs --selector=vendor=crunchydata -n {{ item }} - with_items: - - "{{ watched_namespaces }}" - when: not preserve_pg_clusters|bool - ignore_errors: yes - no_log: false - tags: - - uninstall - -- name: Delete PG Cluster Secrets - shell: | - {{ kubectl_or_oc }} delete configmap,secrets --selector=vendor=crunchydata -n {{ item }} - with_items: - - "{{ watched_namespaces }}" - when: not preserve_pg_clusters|bool - ignore_errors: yes - no_log: false - tags: - - uninstall - -- name: Delete PG Cluster Services - shell: | - {{ kubectl_or_oc }} delete services --selector=vendor=crunchydata -n {{ item }} - with_items: - - "{{ watched_namespaces }}" - when: not preserve_pg_clusters|bool - ignore_errors: yes - no_log: false - tags: - - uninstall - -- name: Delete PG PVC's - shell: | - {{ kubectl_or_oc }} delete pvc --selector=vendor=crunchydata -n {{ item }} - with_items: - - "{{ watched_namespaces }}" - when: not preserve_pg_clusters|bool - ignore_errors: yes - no_log: false - tags: - - uninstall - -- name: Delete Operator Deployment - shell: | - {{ kubectl_or_oc }} delete deployment postgres-operator -n {{ pgo_operator_namespace }} - ignore_errors: yes - no_log: false - tags: - - uninstall - - update - -- name: Delete existing ConfigMaps - shell: | - {{ kubectl_or_oc }} delete configmap pgo-config -n {{ pgo_operator_namespace }} - ignore_errors: yes - no_log: false - tags: - - uninstall - - update - -- name: Delete existing secrets (Operator Namespace) - shell: | - {{ kubectl_or_oc }} delete secret pgo-backrest-repo-config pgorole-{{ pgo_admin_role_name }} \ - pgouser-{{ pgo_admin_username }} -n {{ pgo_operator_namespace }} - ignore_errors: yes - no_log: false - tags: - - uninstall - - update - -- name: Delete pgo.tls secret - shell: | - {{ kubectl_or_oc }} delete secret pgo.tls -n {{ pgo_operator_namespace }} - ignore_errors: yes - no_log: false - tags: - - uninstall - -- name: Delete existing Services - shell: | - {{ kubectl_or_oc }} delete service postgres-operator -n {{ pgo_operator_namespace }} - ignore_errors: yes - no_log: false - tags: - - uninstall - - update - -- name: Delete existing Service Account (Operator Namespace) - shell: | - {{ kubectl_or_oc }} delete serviceaccount postgres-operator -n {{ pgo_operator_namespace }} - ignore_errors: yes - no_log: false - tags: - - uninstall - - update - when: create_rbac|bool - -- name: Delete existing Service Accounts (Watched Namespaces) - shell: | - {{ kubectl_or_oc }} delete serviceaccount pgo-backrest pgo-default pgo-pg pgo-target -n {{ item }} - with_items: - - "{{ watched_namespaces }}" - ignore_errors: yes - no_log: false - tags: - - uninstall - when: create_rbac|bool - -- name: Delete existing Cluster Role Bindings - shell: | - {{ kubectl_or_oc }} delete clusterrolebinding {{ item }} - with_items: - - "pgo-cluster-role" - ignore_errors: yes - no_log: false - tags: - - uninstall - - update - when: create_rbac|bool - -- name: Delete cluster-admin Cluster Role Binding for PGO Service Account - command: "{{ kubectl_or_oc }} delete clusterrolebinding pgo-cluster-admin" - ignore_errors: yes - no_log: false - tags: - - uninstall - - update - when: create_rbac|bool - -- name: 
Delete existing Cluster Roles - shell: | - {{ kubectl_or_oc }} delete clusterrole {{ item }} - with_items: - - "pgo-cluster-role" - ignore_errors: yes - no_log: false - tags: - - uninstall - - update - when: create_rbac|bool - -- name: Delete existing PGO Role Bindings (Watched Namespaces) - shell: | - {{ kubectl_or_oc }} delete rolebinding pgo-backrest-role-binding pgo-pg-role-binding \ - pgo-target-role-binding pgo-local-ns -n {{ item }} - with_items: - - "{{ watched_namespaces }}" - ignore_errors: yes - no_log: false - tags: - - uninstall - when: create_rbac|bool - -- name: Delete existing PGO Role Binding (Operator Namespace) - shell: | - {{ kubectl_or_oc }} delete rolebinding pgo-role -n {{ pgo_operator_namespace }} - ignore_errors: yes - no_log: false - tags: - - uninstall - - update - when: create_rbac|bool - -- name: Delete existing PGO Roles (Watched Namespaces) - shell: | - {{ kubectl_or_oc }} delete role pgo-backrest-role pgo-pg-role pgo-target-role pgo-local-ns -n {{ item }} - with_items: - - "{{ watched_namespaces }}" - ignore_errors: yes - no_log: false - tags: - - uninstall - when: create_rbac|bool - -- name: Delete existing PGO Role (Operator Namespace) - shell: | - {{ kubectl_or_oc }} delete role pgo-role -n {{ pgo_operator_namespace }} - ignore_errors: yes - no_log: false - tags: - - uninstall - - update - when: create_rbac|bool - -- name: Delete existing Custom Objects - shell: | - {{ kubectl_or_oc }} delete pgclusters,pgpolicies,pgreplicas,pgtasks -n {{ item }} --all - with_items: - - "{{ watched_namespaces }}" - ignore_errors: yes - no_log: false - tags: uninstall - -- name: Delete Custom Resource Definitions - shell: | - {{ kubectl_or_oc }} delete crds pgclusters.crunchydata.com \ - pgpolicies.crunchydata.com pgreplicas.crunchydata.com pgtasks.crunchydata.com - ignore_errors: yes - no_log: false - tags: uninstall - -- name: Remove Labels from Watched Namespaces - shell: | - {{ kubectl_or_oc }} label namespace {{ item }} vendor- pgo-created-by- pgo-installation-name- - ignore_errors: yes - when: not (delete_watched_namespaces|bool) - with_items: - - "{{ watched_namespaces }}" - no_log: false - tags: - - uninstall - -- name: Check for output directory - stat: - path: "{{ output_dir }}" - register: out_dir - ignore_errors: yes - no_log: false - tags: uninstall - -- name: Delete local output directory - file: - state: absent - path: "{{ output_dir }}/" - when: out_dir.stat.exists - ignore_errors: yes - no_log: false - tags: uninstall - -- name: Ensure output directory exists - file: - path: "{{ output_dir }}" - state: directory - mode: 0700 - tags: always - -- name: Delete PGO client - become: yes - become_method: sudo - file: - state: absent - path: "/usr/local/bin/pgo" - when: pgo_client_install == "true" - ignore_errors: yes - no_log: false - tags: uninstall - -- name: Delete PGO client container - shell: | - {{ kubectl_or_oc }} delete deployment pgo-client -n {{ pgo_operator_namespace }} - ignore_errors: yes - no_log: false - tags: - - uninstall - - update diff --git a/installers/ansible/roles/pgo-operator/tasks/crds.yml b/installers/ansible/roles/pgo-operator/tasks/crds.yml deleted file mode 100644 index 5989904b96..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/crds.yml +++ /dev/null @@ -1,60 +0,0 @@ ---- -- name: PGCluster CRD - tags: - - install - block: - - name: Check if PGCluster CRD Is Installed - shell: "{{ kubectl_or_oc }} get crd pgclusters.crunchydata.com" - register: crds_result - failed_when: false - - - name: Create PGClusters CRD - 
command: "{{ kubectl_or_oc }} create -f {{ role_path }}/files/crds/pgclusters-crd.yaml" - when: crds_result.rc == 1 - ignore_errors: no - no_log: false - -- name: PGPolicies CRD - tags: - - install - block: - - name: Check if PGPolicies CRD Is Installed - shell: "{{ kubectl_or_oc }} get crd pgpolicies.crunchydata.com" - register: crds_result - failed_when: false - - - name: Create PGPolicies CRD - command: "{{ kubectl_or_oc }} create -f {{ role_path }}/files/crds/pgpolicies-crd.yaml" - when: crds_result.rc == 1 - ignore_errors: no - no_log: false - -- name: PGReplicas CRD - tags: - - install - block: - - name: Check if PGReplicas CRD Is Installed - shell: "{{ kubectl_or_oc }} get crd pgreplicas.crunchydata.com" - register: crds_result - failed_when: false - - - name: Create PGReplicas CRD - command: "{{ kubectl_or_oc }} create -f {{ role_path }}/files/crds/pgreplicas-crd.yaml" - when: crds_result.rc == 1 - ignore_errors: no - no_log: false - -- name: PGTasks CRD - tags: - - install - block: - - name: Check if PGTasks CRD Is Installed - shell: "{{ kubectl_or_oc }} get crd pgtasks.crunchydata.com" - register: crds_result - failed_when: false - - - name: Create PGTasks CRD - command: "{{ kubectl_or_oc }} create -f {{ role_path }}/files/crds/pgtasks-crd.yaml" - when: crds_result.rc == 1 - ignore_errors: no - no_log: false diff --git a/installers/ansible/roles/pgo-operator/tasks/kubernetes.yml b/installers/ansible/roles/pgo-operator/tasks/kubernetes.yml deleted file mode 100644 index 6563d1cc62..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/kubernetes.yml +++ /dev/null @@ -1,15 +0,0 @@ ---- -- name: Get Namespace Details - shell: "kubectl get namespace {{ pgo_operator_namespace }}" - register: namespace_details - ignore_errors: yes - tags: - - install - - update - -- name: Create PGO Namespace - shell: "kubectl create namespace {{ pgo_operator_namespace }}" - when: namespace_details.rc != 0 - tags: - - install - - update diff --git a/installers/ansible/roles/pgo-operator/tasks/kubernetes_auth.yml b/installers/ansible/roles/pgo-operator/tasks/kubernetes_auth.yml deleted file mode 100644 index 882897ce6e..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/kubernetes_auth.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -- name: Set the Kubernetes Context - shell: "kubectl config use-context {{ kubernetes_context }}" - when: not (kubernetes_in_cluster | bool) - tags: always diff --git a/installers/ansible/roles/pgo-operator/tasks/kubernetes_cleanup.yml b/installers/ansible/roles/pgo-operator/tasks/kubernetes_cleanup.yml deleted file mode 100644 index 9a4e9fd353..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/kubernetes_cleanup.yml +++ /dev/null @@ -1,20 +0,0 @@ ---- -- name: Delete Watched Namespaces (Kubernetes) - shell: | - kubectl delete namespace {{ item }} - when: delete_watched_namespaces|bool - ignore_errors: yes - with_items: - - "{{ watched_namespaces }}" - no_log: false - tags: - - uninstall - -- name: Delete Operator Namespace (Kubernetes) - shell: | - kubectl delete namespace {{ pgo_operator_namespace }} - when: delete_operator_namespace|bool - ignore_errors: yes - no_log: false - tags: - - uninstall diff --git a/installers/ansible/roles/pgo-operator/tasks/main.yml b/installers/ansible/roles/pgo-operator/tasks/main.yml deleted file mode 100644 index c9fc36e6a0..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/main.yml +++ /dev/null @@ -1,410 +0,0 @@ ---- -- name: Set output directory fact - set_fact: - output_dir: "{{ ansible_env.HOME }}/.pgo/{{ 
pgo_operator_namespace }}/output" - tags: always - -- name: Ensure output directory exists - file: - path: "{{ output_dir }}" - state: directory - mode: 0700 - tags: always - -- include_tasks: "{{ tasks }}" - with_items: - - openshift_auth.yml - - openshift.yml - loop_control: - loop_var: tasks - when: openshift_host != '' - tags: always - -- include_tasks: "{{ tasks }}" - with_items: - - kubernetes_auth.yml - - kubernetes.yml - loop_control: - loop_var: tasks - when: kubernetes_context != '' or kubernetes_in_cluster | bool - tags: always - -- include_tasks: cleanup.yml - tags: - - uninstall - - update - -- include_tasks: kubernetes_cleanup.yml - when: kubernetes_context != '' or kubernetes_in_cluster | bool - tags: - - uninstall - -- include_tasks: openshift_cleanup.yml - when: openshift_host != '' - tags: - - uninstall - -- include_tasks: certs.yml - tags: - - install - -- name: Use kubectl or oc - set_fact: - kubectl_or_oc: "{{ openshift_oc_bin if openshift_oc_bin is defined else 'kubectl' }}" - tags: - - always - -- name: Deploy PostgreSQL Operator - block: - - include_tasks: namespace.yml - tags: - - install - - update - - - include_tasks: crds.yml - tags: - - install - - - name: PGO Admin Credentials - tags: - - install - - update - block: - - name: Template PGO Admin Credentials - template: - src: pgouser-admin.yaml.j2 - dest: "{{ output_dir }}/pgouser-admin.yaml" - mode: '0600' - - - name: Check PGO Admin Credentials - shell: "{{ kubectl_or_oc }} get -f {{ output_dir }}/pgouser-admin.yaml" - register: pgoadmin_cerds_result - failed_when: false - - - name: Create PGO Admin Credentials - command: "{{ kubectl_or_oc }} create -f {{ output_dir }}/pgouser-admin.yaml" - when: pgoadmin_cerds_result.rc == 1 - - - name: PGO Admin Role & Permissions - tags: - - install - - update - block: - - name: Template PGO Admin Role & Permissions - template: - src: pgorole-pgoadmin.yaml.j2 - dest: "{{ output_dir }}/pgorole-pgoadmin.yaml" - mode: '0600' - - - name: Check PGO Admin Role & Permissions - shell: "{{ kubectl_or_oc }} get -f {{ output_dir }}/pgorole-pgoadmin.yaml" - register: pgorole_pgoadmin_result - failed_when: false - - - name: Create PGO Admin Role & Permissions - command: "{{ kubectl_or_oc }} create -f {{ output_dir }}/pgorole-pgoadmin.yaml" - when: pgorole_pgoadmin_result.rc == 1 - - - name: PGO Service Account - when: - - create_rbac|bool - tags: - - install - - update - block: - - name: Template PGO Service Account - template: - src: pgo-service-account.yaml.j2 - dest: "{{ output_dir }}/pgo-service-account.yaml" - mode: '0600' - - - name: Check PGO Service Account - shell: "{{ kubectl_or_oc }} get -f {{ output_dir }}/pgo-service-account.yaml" - register: pgo_service_account_result - failed_when: false - - - name: Create PGO Service Account - command: "{{ kubectl_or_oc }} create -f {{ output_dir }}/pgo-service-account.yaml" - when: pgo_service_account_result.rc == 1 - - - name: Cluster RBAC (namespace_mode 'dynamic') - when: - - create_rbac|bool - - namespace_mode == "dynamic" - tags: - - install - - update - block: - - name: Template Cluster RBAC (namespace_mode 'dynamic') - template: - src: cluster-rbac.yaml.j2 - dest: "{{ output_dir }}/cluster-rbac.yaml" - mode: '0600' - - - name: Check Cluster RBAC (namespace_mode 'dynamic') - shell: "{{ kubectl_or_oc }} get -f {{ output_dir }}/cluster-rbac.yaml" - register: cluster_rbac_result - failed_when: false - - - name: Create Cluster RBAC (namespace_mode 'dynamic') - command: "{{ kubectl_or_oc }} create -f {{ output_dir 
}}/cluster-rbac.yaml" - when: cluster_rbac_result.rc == 1 - - - name: Cluster RBAC (namespace_mode 'readonly') - when: - - create_rbac|bool - - namespace_mode == "readonly" - tags: - - install - - update - block: - - name: Template Cluster RBAC (namespace_mode 'readonly') - template: - src: cluster-rbac-readonly.yaml.j2 - dest: "{{ output_dir }}/cluster-rbac-readonly.yaml" - mode: '0600' - - - name: Check Cluster RBAC (namespace_mode 'readonly') - shell: "{{ kubectl_or_oc }} get -f {{ output_dir }}/cluster-rbac-readonly.yaml" - register: cluster_rbac_readonly_result - failed_when: false - - - name: Create Cluster RBAC (namespace_mode 'readonly') - command: "{{ kubectl_or_oc }} create -f {{ output_dir }}/cluster-rbac-readonly.yaml" - when: cluster_rbac_readonly_result.rc == 1 - - - name: Cluster Roles Disabled (namespace_mode 'disabled') - debug: - msg: "Cluster Roles will not be installed because namespace_mode is '{{ namespace_mode }}'" - tags: - - install - - update - when: - - create_rbac|bool - - namespace_mode == "disabled" - - - name: Create CCP Image Pull Secret - shell: > - {{ kubectl_or_oc }} -n {{ pgo_operator_namespace }} get secret/{{ ccp_image_pull_secret }} -o jsonpath='{""}' 2> /dev/null || - {{ kubectl_or_oc }} -n {{ pgo_operator_namespace }} create -f {{ ccp_image_pull_secret_manifest }} - tags: - - install - when: - - create_rbac | bool - - ccp_image_pull_secret_manifest != '' - - - name: Create PGO Image Pull Secret - shell: > - {{ kubectl_or_oc }} -n {{ pgo_operator_namespace }} get secret/{{ pgo_image_pull_secret }} -o jsonpath='{""}' 2> /dev/null || - {{ kubectl_or_oc }} -n {{ pgo_operator_namespace }} create -f {{ pgo_image_pull_secret_manifest }} - tags: - - install - - update - when: - - create_rbac | bool - - pgo_image_pull_secret_manifest != '' - - - name: ClusterRolebinding for PGO Service Account - tags: - - install - - update - when: create_rbac|bool and pgo_cluster_admin|bool - block: - - name: Check cluster-admin Cluster Role Binding for PGO Service Account - shell: "{{ kubectl_or_oc }} get clusterrolebinding pgo-cluster-admin" - register: pgo_cluster_admin_result - failed_when: false - - - name: Create cluster-admin Cluster Role Binding for PGO Service Account - command: | - {{ kubectl_or_oc }} create clusterrolebinding pgo-cluster-admin \ - --clusterrole cluster-admin \ - --serviceaccount "{{ pgo_operator_namespace }}:postgres-operator" - when: pgo_cluster_admin_result.rc == 1 - - - - name: PGO RBAC - tags: - - install - - update - when: create_rbac|bool - block: - - name: Template PGO RBAC - template: - src: pgo-role-rbac.yaml.j2 - dest: "{{ output_dir }}/pgo-role-rbac.yaml" - mode: '0600' - - - name: Check PGO RBAC - shell: "{{ kubectl_or_oc }} get -f {{ output_dir }}/pgo-role-rbac.yaml" - register: pgo_role_rbac_result - failed_when: false - - - name: Create PGO RBAC - command: "{{ kubectl_or_oc }} create -f {{ output_dir }}/pgo-role-rbac.yaml" - when: pgo_role_rbac_result.rc == 1 - - - name: Template Local PGO User - template: - src: pgouser.local.j2 - dest: "{{ pgo_keys_dir }}/pgouser" - mode: '0400' - tags: - - install - - update - - - name: PGO BackRest Repo Secret - tags: - - install - - update - block: - - name: Check PGO BackRest Repo Secret - shell: "{{ kubectl_or_oc }} get secret pgo-backrest-repo-config -n {{ pgo_operator_namespace }}" - register: pgo_backrest_repo_config_result - failed_when: false - - - name: Create PGO BackRest Repo Secret - command: | - {{ kubectl_or_oc }} create secret generic pgo-backrest-repo-config \ - 
--from-file=config='{{ role_path }}/files/pgo-backrest-repo/config' \ - --from-file=sshd_config='{{ role_path }}/files/pgo-backrest-repo/sshd_config' \ - --from-file=aws-s3-ca.crt='{{ role_path }}/files/pgo-backrest-repo/aws-s3-ca.crt' \ - --from-literal=aws-s3-key='{{ backrest_aws_s3_key }}' \ - --from-literal=aws-s3-key-secret='{{ backrest_aws_s3_secret }}' \ - -n {{ pgo_operator_namespace }} - when: pgo_backrest_repo_config_result.rc == 1 - - - name: PGO API Secret - tags: - - install - - update - block: - - name: Check PGO API Secret - shell: "{{ kubectl_or_oc }} get secret pgo.tls -n {{ pgo_operator_namespace }}" - register: pgo_tls_result - failed_when: false - - - name: Create PGO API Secret - command: | - {{ kubectl_or_oc }} create secret tls pgo.tls \ - --cert='{{ output_dir }}/server.crt' \ - --key='{{ output_dir }}/server.key' \ - -n {{ pgo_operator_namespace }} - when: pgo_tls_result.rc == 1 - - - name: PGO ConfigMap - tags: - - install - - update - block: - - name: Template PGO Configuration - template: - src: pgo.yaml.j2 - dest: "{{ output_dir }}/pgo.yaml" - mode: '0600' - - - name: Check PGO ConfigMap - shell: "{{ kubectl_or_oc }} get configmap pgo-config -n {{ pgo_operator_namespace }}" - register: pgo_config_result - failed_when: false - - - name: Create PGO ConfigMap - command: | - {{ kubectl_or_oc }} create configmap pgo-config \ - --from-file=pgo.yaml='{{ output_dir }}/pgo.yaml' \ - --from-file='{{ role_path }}/files/pgo-configs' \ - -n {{ pgo_operator_namespace }} - when: pgo_config_result.rc == 1 - - - name: PGO Service - tags: - - install - - update - block: - - name: Template PGO Service Configuration - template: - src: service.json.j2 - dest: "{{ output_dir }}/service.json" - mode: '0600' - - - name: Check PGO Service Configuration - shell: "{{ kubectl_or_oc }} get -f {{ output_dir }}/service.json -n {{ pgo_operator_namespace }}" - register: service_result - failed_when: false - - - name: Create PGO Service - command: | - {{ kubectl_or_oc }} create --filename='{{ output_dir }}/service.json' -n {{ pgo_operator_namespace }} - when: service_result.rc == 1 - - - name: PGO Deployment - tags: - - install - - update - block: - - name: Template PGO Deployment - template: - src: deployment.json.j2 - dest: "{{ output_dir }}/deployment.json" - mode: '0600' - - - name: Check PGO Deployment - shell: "{{ kubectl_or_oc }} get -f {{ output_dir }}/deployment.json -n {{ pgo_operator_namespace }}" - register: deployment_json_result - failed_when: false - - - name: Deploy PGO - command: | - {{ kubectl_or_oc }} create --filename='{{ output_dir }}/deployment.json' -n {{ pgo_operator_namespace }} - when: deployment_json_result.rc == 1 - - - name: Wait for PGO to finish deploying - command: "{{ kubectl_or_oc }} rollout status deployment/postgres-operator -n {{ pgo_operator_namespace }}" - async: 600 - -- name: PGO Client - tags: - - install - - update - when: pgo_client_install == "true" and kubernetes_in_cluster == "false" - block: - - name: Download PGO Linux Client - become: yes - become_method: sudo - get_url: - url: "{{ pgo_client_url }}/pgo" - dest: "/usr/local/bin/pgo" - mode: 0755 - force: yes - when: uname_result.stdout == "Linux" - - - name: Download PGO macOS Client - become: yes - become_method: sudo - get_url: - url: "{{ pgo_client_url }}/pgo-mac" - dest: "/usr/local/bin/pgo" - mode: 0755 - when: uname_result.stdout == "Darwin" - -- name: Deploy PGO-Client Container - tags: - - install - - update - when: "pgo_client_container_install == 'true'" - block: - - name: Template 
PGO-Client Deployment - template: - src: pgo-client.json.j2 - dest: "{{ output_dir }}/pgo-client.json" - mode: '0600' - - - name: Check PGO-Client Deployment - shell: "{{ kubectl_or_oc }} get -f {{ output_dir }}/pgo-client.json" - register: pgo_client_json_result - failed_when: false - - - name: Create PGO-Client deployment - command: | - {{ kubectl_or_oc }} create --filename='{{ output_dir }}/pgo-client.json' - when: pgo_client_json_result.rc == 1 \ No newline at end of file diff --git a/installers/ansible/roles/pgo-operator/tasks/namespace.yml b/installers/ansible/roles/pgo-operator/tasks/namespace.yml deleted file mode 100644 index bc8e607b00..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/namespace.yml +++ /dev/null @@ -1,103 +0,0 @@ ---- -- name: Namespace List - set_fact: - nslist: "{{ namespace | ternary(namespace, pgo_operator_namespace) }}" - tags: - - install - - update - -- name: Create Watched Namespaces - shell: "{{ target_ns_script }}" - vars: - target_namespace: !unsafe "{{.TargetNamespace}}" - operator_namespace: !unsafe "{{.OperatorNamespace}}" - target_ns_script: "{{ lookup('template', 'add-targeted-namespace.sh.j2') }}" - with_items: "{{ nslist.split(',') | map('trim') | list }}" - when: - - reconcile_rbac == 'false' - tags: - - install - - update - -- name: Create Watched Namespaces (Reconcile RBAC) - shell: "{{ kubectl_or_oc }} create namespace {{ item }}" - register: result - failed_when: - - result.rc != 0 - - "'AlreadyExists' not in result.stderr" - with_items: "{{ nslist.split(',') | map('trim') | list }}" - when: - - namespace_mode != 'dynamic' - - reconcile_rbac == 'true' - tags: - - install - - update - -- name: Label Watched Namespaces (Reconcile RBAC) - shell: | - {{ kubectl_or_oc }} label namespace {{ item }} --overwrite \ - vendor=crunchydata pgo-installation-name={{ pgo_installation_name }} pgo-created-by=add-script - with_items: "{{ nslist.split(',') | map('trim') | list }}" - when: - - namespace_mode != 'dynamic' - - reconcile_rbac == 'true' - tags: - - install - - update - -- name: Cleanup Local Namespace Target RBAC - command: "{{ kubectl_or_oc }} delete role,rolebinding pgo-target-role -n {{ item }}" - with_items: "{{ nslist.split(',') | map('trim') | list }}" - when: - - namespace_mode != 'dynamic' - - reconcile_rbac == 'true' - ignore_errors: yes - tags: - - install - - update - -- name: Create Local Namespace Target RBAC - shell: | - cat {{ role_path }}/files/pgo-configs/pgo-target-role.json |\ - sed 's/{%raw%}{{.TargetNamespace}}{%endraw%}/'"{{ item }}"'/' |\ - {{ kubectl_or_oc }} -n {{ item }} create -f - - with_items: "{{ nslist.split(',') | map('trim') | list }}" - when: - - namespace_mode != 'dynamic' - - reconcile_rbac == 'true' - tags: - - install - - update - -- name: Template Local Namespace RBAC - template: - src: local-namespace-rbac.yaml.j2 - dest: "{{ output_dir }}/local-namespace-rbac.yaml" - mode: '0600' - when: - - namespace_mode != 'dynamic' - - reconcile_rbac == 'true' - tags: - - install - - update - -- name: Cleanup Local Namespace Reconcile RBAC - command: "{{ kubectl_or_oc }} delete -f {{ output_dir }}/local-namespace-rbac.yaml -n {{ item }}" - with_items: "{{ nslist.split(',') | map('trim') | list }}" - when: - - namespace_mode != 'dynamic' - - reconcile_rbac == 'true' - ignore_errors: yes - tags: - - install - - update - -- name: Create Local Namespace Reconcile RBAC - command: "{{ kubectl_or_oc }} create -f {{ output_dir }}/local-namespace-rbac.yaml -n {{ item }}" - with_items: "{{ nslist.split(',') | 
map('trim') | list }}" - when: - - namespace_mode != 'dynamic' - - reconcile_rbac == 'true' - tags: - - install - - update diff --git a/installers/ansible/roles/pgo-operator/tasks/openshift.yml b/installers/ansible/roles/pgo-operator/tasks/openshift.yml deleted file mode 100644 index 0d4d4da029..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/openshift.yml +++ /dev/null @@ -1,15 +0,0 @@ ---- -- name: Get Project Details - shell: "{{ openshift_oc_bin}} get project {{ pgo_operator_namespace }}" - register: namespace_details - ignore_errors: yes - tags: - - install - - update - -- name: Create PGO Namespace - shell: "{{ openshift_oc_bin}} new-project {{ pgo_operator_namespace }}" - when: namespace_details.rc != 0 - tags: - - install - - update diff --git a/installers/ansible/roles/pgo-operator/tasks/openshift_auth.yml b/installers/ansible/roles/pgo-operator/tasks/openshift_auth.yml deleted file mode 100644 index 754bf8553a..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/openshift_auth.yml +++ /dev/null @@ -1,25 +0,0 @@ ---- -- include_vars: openshift.yml - tags: always - -- name: Authenticate with OpenShift via user and password - shell: | - {{ openshift_oc_bin }} login {{ openshift_host }} \ - -u {{ openshift_user }} \ - -p {{ openshift_password }} \ - --insecure-skip-tls-verify={{ openshift_skip_tls_verify | default(false) | bool }} - when: - - openshift_user is defined and openshift_user != '' - - openshift_password is defined and openshift_password != '' - - openshift_token is not defined - no_log: false - tags: always - -- name: Authenticate with OpenShift via token - shell: | - {{ openshift_oc_bin }} login {{ openshift_host }} \ - --token {{ openshift_token }} \ - --insecure-skip-tls-verify={{ openshift_skip_tls_verify | default(false) | bool }} - when: openshift_token is defined and openshift_token != '' - no_log: true - tags: always diff --git a/installers/ansible/roles/pgo-operator/tasks/openshift_cleanup.yml b/installers/ansible/roles/pgo-operator/tasks/openshift_cleanup.yml deleted file mode 100644 index a741f2ada3..0000000000 --- a/installers/ansible/roles/pgo-operator/tasks/openshift_cleanup.yml +++ /dev/null @@ -1,28 +0,0 @@ ---- -- name: Delete Watched Namespaces (Openshift) - shell: | - {{ openshift_oc_bin}} delete project {{ item }} - when: delete_watched_namespaces|bool - ignore_errors: yes - with_items: - - "{{ watched_namespaces }}" - no_log: false - tags: - - uninstall - -- name: Delete Operator Namespace (Openshift) - shell: | - {{ openshift_oc_bin}} delete project {{ pgo_operator_namespace }} - when: delete_operator_namespace|bool - ignore_errors: yes - no_log: false - tags: - - uninstall - -- name: Delete Operator SCC (Openshift) - shell: | - {{ openshift_oc_bin}} delete scc pgo - ignore_errors: yes - no_log: false - tags: - - uninstall diff --git a/installers/ansible/roles/pgo-operator/templates/add-targeted-namespace.sh.j2 b/installers/ansible/roles/pgo-operator/templates/add-targeted-namespace.sh.j2 deleted file mode 100644 index 380a8a80b7..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/add-targeted-namespace.sh.j2 +++ /dev/null @@ -1,83 +0,0 @@ -#!/bin/bash -# Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -if [[ -z "{{ item }}" ]]; then - echo "usage: add-targeted-namespace.sh mynewnamespace" - exit -fi - -PGO_CMD='{{ kubectl_or_oc }}' -PGO_IMAGE_PULL_SECRET='{{ pgo_image_pull_secret }}' -PGO_IMAGE_PULL_SECRET_MANIFEST='{{ pgo_image_pull_secret_manifest }}' -TARGET_NAMESPACE='{{ item }}' - -# the name of the service account utilized by the PG pods -PG_SA="pgo-pg" - -# create the namespace if necessary -{{ kubectl_or_oc }} get ns {{ item }} > /dev/null -if [ $? -eq 0 ]; then - echo "namespace" {{ item }} "already exists" -else - echo "namespace" {{ item }} "is new" - {{ kubectl_or_oc }} create ns {{ item }} -fi - -# set the labels so that this namespace is owned by this installation -{{ kubectl_or_oc }} label namespace/{{ item }} pgo-created-by=add-script -{{ kubectl_or_oc }} label namespace/{{ item }} vendor=crunchydata -{{ kubectl_or_oc }} label namespace/{{ item }} pgo-installation-name={{ pgo_installation_name }} - -# determine if an existing pod is using the 'pgo-pg' service account. if so, do not delete -# and recreate the SA or its associated role and role binding. this is to avoid any undesired -# behavior with existing PG clusters that are actively utilizing the SA. -{{ kubectl_or_oc }} -n {{ item }} get pods -o yaml | grep "serviceAccount: ${PG_SA}" > /dev/null -if [ $? -ne 0 ]; then - {{ kubectl_or_oc }} -n {{ item }} delete --ignore-not-found sa pgo-pg - {{ kubectl_or_oc }} -n {{ item }} delete --ignore-not-found role pgo-pg-role - {{ kubectl_or_oc }} -n {{ item }} delete --ignore-not-found rolebinding pgo-pg-role-binding - - cat {{ role_path }}/files/pgo-configs/pgo-pg-sa.json | sed 's/{{ target_namespace }}/'"{{ item }}"'/' | {{ kubectl_or_oc }} -n {{ item }} create -f - - cat {{ role_path }}/files/pgo-configs/pgo-pg-role.json | sed 's/{{ target_namespace }}/'"{{ item }}"'/' | {{ kubectl_or_oc }} -n {{ item }} create -f - - cat {{ role_path }}/files/pgo-configs/pgo-pg-role-binding.json | sed 's/{{ target_namespace }}/'"{{ item }}"'/' | {{ kubectl_or_oc }} -n {{ item }} create -f - -else - echo "Running pods found using SA '${PG_SA}' in namespace {{ item }}, will not recreate" -fi - -# create RBAC -{{ kubectl_or_oc }} -n {{ item }} delete --ignore-not-found sa pgo-backrest pgo-default pgo-target -{{ kubectl_or_oc }} -n {{ item }} delete --ignore-not-found role pgo-backrest-role pgo-target-role -{{ kubectl_or_oc }} -n {{ item }} delete --ignore-not-found rolebinding pgo-backrest-role-binding pgo-target-role-binding - -cat {{ role_path }}/files/pgo-configs/pgo-default-sa.json | sed 's/{{ target_namespace }}/'"{{ item }}"'/' | {{ kubectl_or_oc }} -n {{ item }} create -f - -cat {{ role_path }}/files/pgo-configs/pgo-target-sa.json | sed 's/{{ target_namespace }}/'"{{ item }}"'/' | {{ kubectl_or_oc }} -n {{ item }} create -f - -cat {{ role_path }}/files/pgo-configs/pgo-target-role.json | sed 's/{{ target_namespace }}/'"{{ item }}"'/' | {{ kubectl_or_oc }} -n {{ item }} create -f - -cat {{ role_path }}/files/pgo-configs/pgo-target-role-binding.json | sed 's/{{ target_namespace }}/'"{{ item }}"'/' | sed 's/{{ operator_namespace }}/'"{{ pgo_operator_namespace }}"'/' | {{ 
kubectl_or_oc }} -n {{ item }} create -f - -cat {{ role_path }}/files/pgo-configs/pgo-backrest-sa.json | sed 's/{{ target_namespace }}/'"{{ item }}"'/' | {{ kubectl_or_oc }} -n {{ item }} create -f - -cat {{ role_path }}/files/pgo-configs/pgo-backrest-role.json | sed 's/{{ target_namespace }}/'"{{ item }}"'/' | {{ kubectl_or_oc }} -n {{ item }} create -f - -cat {{ role_path }}/files/pgo-configs/pgo-backrest-role-binding.json | sed 's/{{ target_namespace }}/'"{{ item }}"'/' | {{ kubectl_or_oc }} -n {{ item }} create -f - - -if [ -r "$PGO_IMAGE_PULL_SECRET_MANIFEST" ]; then - $PGO_CMD -n "$TARGET_NAMESPACE" create -f "$PGO_IMAGE_PULL_SECRET_MANIFEST" -fi - -if [ -n "$PGO_IMAGE_PULL_SECRET" ]; then - patch='{"imagePullSecrets": [{ "name": "'"$PGO_IMAGE_PULL_SECRET"'" }]}' - - $PGO_CMD -n "$TARGET_NAMESPACE" patch --type=strategic --patch="$patch" serviceaccount/pgo-backrest - $PGO_CMD -n "$TARGET_NAMESPACE" patch --type=strategic --patch="$patch" serviceaccount/pgo-default - $PGO_CMD -n "$TARGET_NAMESPACE" patch --type=strategic --patch="$patch" serviceaccount/pgo-pg - $PGO_CMD -n "$TARGET_NAMESPACE" patch --type=strategic --patch="$patch" serviceaccount/pgo-target -fi diff --git a/installers/ansible/roles/pgo-operator/templates/aws-s3-credentials.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/aws-s3-credentials.yaml.j2 deleted file mode 100644 index 9da8675764..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/aws-s3-credentials.yaml.j2 +++ /dev/null @@ -1,3 +0,0 @@ ---- -aws-s3-key: {{ backrest_aws_s3_key }} -aws-s3-key-secret: {{ backrest_aws_s3_secret }} diff --git a/installers/ansible/roles/pgo-operator/templates/cluster-rbac-readonly.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/cluster-rbac-readonly.yaml.j2 deleted file mode 100644 index 3021d4a058..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/cluster-rbac-readonly.yaml.j2 +++ /dev/null @@ -1,27 +0,0 @@ ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-cluster-role -rules: - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - watch ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: pgo-cluster-role -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: pgo-cluster-role -subjects: -- kind: ServiceAccount - name: postgres-operator - namespace: {{ pgo_operator_namespace }} diff --git a/installers/ansible/roles/pgo-operator/templates/cluster-rbac.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/cluster-rbac.yaml.j2 deleted file mode 100644 index 771080042e..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/cluster-rbac.yaml.j2 +++ /dev/null @@ -1,114 +0,0 @@ ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-cluster-role -rules: - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - watch - - create - - update - - delete -{% if reconcile_rbac | bool %} - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - update - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - roles - - rolebindings - verbs: - - get - - create - - update - - delete - - apiGroups: - - '' - resources: - - configmaps - - endpoints - - pods - - pods/exec - - pods/log - - replicasets - - secrets - - services - - persistentvolumeclaims - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - 
apps - resources: - - deployments - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - batch - resources: - - jobs - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - crunchydata.com - resources: - - pgclusters - - pgpolicies - - pgreplicas - - pgtasks - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection -{% endif %} ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: pgo-cluster-role -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: pgo-cluster-role -subjects: -- kind: ServiceAccount - name: postgres-operator - namespace: {{ pgo_operator_namespace }} diff --git a/installers/ansible/roles/pgo-operator/templates/deployment.json.j2 b/installers/ansible/roles/pgo-operator/templates/deployment.json.j2 deleted file mode 100644 index b94ab4fc42..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/deployment.json.j2 +++ /dev/null @@ -1,241 +0,0 @@ -{ - "apiVersion": "apps/v1", - "kind": "Deployment", - "metadata": { - "name": "postgres-operator", - "labels": { - "vendor": "crunchydata" - } - }, - "spec": { - "replicas": 1, - "selector": { - "matchLabels": { - "name": "postgres-operator", - "vendor": "crunchydata" - } - }, - "template": { - "metadata": { - "labels": { - "name": "postgres-operator", - "vendor": "crunchydata" - } - }, - "spec": { - "serviceAccountName": "postgres-operator", - "containers": [ - { - "name": "apiserver", - "image": "{% if pgo_apiserver_image | default('') != '' %}{{ pgo_apiserver_image }} - {%- else %}{{ pgo_image_prefix }}/pgo-apiserver:{{ pgo_image_tag }} - {%- endif %}", - "imagePullPolicy": "IfNotPresent", - "ports": [ - { "containerPort": {{ pgo_apiserver_port }} } - ], - "readinessProbe": { - "httpGet": { - "path": "/healthz", - "port": {{ pgo_apiserver_port }}, - "scheme": {% if pgo_disable_tls == "true" %}"HTTP"{%- else %}"HTTPS"{%- endif %} - }, - "initialDelaySeconds": 15, - "periodSeconds": 5 - }, - "livenessProbe": { - "httpGet": { - "path": "/healthz", - "port": {{ pgo_apiserver_port }}, - "scheme": {% if pgo_disable_tls == "true" %}"HTTP"{%- else %}"HTTPS"{%- endif %} - }, - "initialDelaySeconds": 15, - "periodSeconds": 5 - }, - "env": [ - { - "name": "CRUNCHY_DEBUG", - "value": "{{ crunchy_debug }}" - }, - { - "name": "PORT", - "value": "{{ pgo_apiserver_port }}" - }, - { - "name": "NAMESPACE", - "value": "{{ namespace }}" - }, - { - "name": "PGO_INSTALLATION_NAME", - "value": "{{ pgo_installation_name }}" - }, - { - "name": "PGO_OPERATOR_NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }, - { - "name": "TLS_CA_TRUST", - "value": "{{ pgo_tls_ca_store }}" - }, - { - "name": "TLS_NO_VERIFY", - "value": "{{ pgo_tls_no_verify }}" - }, - { - "name": "DISABLE_TLS", - "value": "{{ pgo_disable_tls }}" - }, - { - "name": "NOAUTH_ROUTES", - "value": "{{ pgo_noauth_routes }}" - }, - { - "name": "ADD_OS_TRUSTSTORE", - "value": "{{ pgo_add_os_ca_store }}" - }, - { - "name": "DISABLE_EVENTING", - "value": "{{ pgo_disable_eventing }}" - }, - { - "name": "EVENT_ADDR", - "value": "localhost:4150" - } - ], - "volumeMounts": [] - }, { - "name": "operator", - "image": "{% if pgo_image | default('') != '' %}{{ pgo_image }} - {%- else %}{{ pgo_image_prefix }}/postgres-operator:{{ pgo_image_tag }} - {%- endif %}", - "imagePullPolicy": "IfNotPresent", - "readinessProbe": { - "exec": 
{ - "command": [ - "ls", - "/tmp" - ] - }, - "initialDelaySeconds": 4, - "periodSeconds": 5 - }, - "env": [ - { - "name": "CRUNCHY_DEBUG", - "value": "{{ crunchy_debug }}" - }, - { - "name": "NAMESPACE", - "value": "{{ namespace }}" - }, - { - "name": "PGO_INSTALLATION_NAME", - "value": "{{ pgo_installation_name }}" - }, - { - "name": "PGO_OPERATOR_NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }, - { - "name": "MY_POD_NAME", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.name" - } - } - }, - { - "name": "DISABLE_EVENTING", - "value": "{{ pgo_disable_eventing }}" - }, - { - "name": "EVENT_ADDR", - "value": "localhost:4150" - } - ], - "volumeMounts": [] - }, { - "name": "scheduler", - "image": "{% if pgo_scheduler_image | default('') != '' %}{{ pgo_scheduler_image }} - {%- else %}{{ pgo_image_prefix }}/pgo-scheduler:{{ pgo_image_tag }} - {%- endif %}", - "livenessProbe": { - "exec": { - "command": [ - "bash", - "-c", - "test -n \"$(find /tmp/scheduler.hb -newermt '61 sec ago')\"" - ] - }, - "failureThreshold": 2, - "initialDelaySeconds": 60, - "periodSeconds": 60 - }, - "env": [ - { - "name": "CRUNCHY_DEBUG", - "value": "{{ crunchy_debug }}" - }, - { - "name": "PGO_OPERATOR_NAMESPACE", - "valueFrom": { - "fieldRef": { - "fieldPath": "metadata.namespace" - } - } - }, - { - "name": "NAMESPACE", - "value": "{{ namespace }}" - }, - { - "name": "PGO_INSTALLATION_NAME", - "value": "{{ pgo_installation_name }}" - }, - { - "name": "TIMEOUT", - "value": "{{ scheduler_timeout }}" - }, - { - "name": "EVENT_ADDR", - "value": "localhost:4150" - } - ], - "volumeMounts": [], - "imagePullPolicy": "IfNotPresent" - }, { - "name": "event", - "image": "{% if pgo_event_image | default('') != '' %}{{ pgo_event_image }} - {%- else %}{{ pgo_image_prefix }}/pgo-event:{{ pgo_image_tag }} - {%- endif %}", - "livenessProbe": { - "httpGet": { - "path": "/ping", - "port": 4151 - }, - "initialDelaySeconds": 15, - "periodSeconds": 5 - }, - "env": [ - { - "name": "TIMEOUT", - "value": "3600" - } - ], - "volumeMounts": [], - "imagePullPolicy": "IfNotPresent" - } - ], - "volumes": [] - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/templates/local-namespace-rbac.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/local-namespace-rbac.yaml.j2 deleted file mode 100644 index 4a878395ae..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/local-namespace-rbac.yaml.j2 +++ /dev/null @@ -1,51 +0,0 @@ ---- -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-local-ns -rules: - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - update - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - roles - - rolebindings - verbs: - - get - - create - - update - - delete ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: pgo-local-ns -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: pgo-local-ns -subjects: -- kind: ServiceAccount - name: postgres-operator - namespace: {{ pgo_operator_namespace }} ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: pgo-target-role-binding -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: pgo-target-role -subjects: -- kind: ServiceAccount - name: postgres-operator - namespace: {{ pgo_operator_namespace }} diff --git a/installers/ansible/roles/pgo-operator/templates/pgo-client.json.j2 
b/installers/ansible/roles/pgo-operator/templates/pgo-client.json.j2 deleted file mode 100644 index 35aa5c1597..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/pgo-client.json.j2 +++ /dev/null @@ -1,104 +0,0 @@ -{ - "apiVersion": "apps/v1", - "kind": "Deployment", - "metadata": { - "name": "pgo-client", - "namespace": "{{ pgo_operator_namespace }}", - "labels": { - "vendor": "crunchydata" - } - }, - "spec": { - "replicas": 1, - "selector": { - "matchLabels": { - "name": "pgo-client", - "vendor": "crunchydata" - } - }, - "template": { - "metadata": { - "labels": { - "name": "pgo-client", - "vendor": "crunchydata" - } - }, - "spec": { - {% if pgo_image_pull_secret %} - "imagePullSecrets": [ - { "name": "{{ pgo_image_pull_secret }}" } - ], - {% endif %} - "containers": [ - { - "name": "pgo", - "image": "{% if pgo_client_image | default('') != '' %}{{ pgo_client_image }} - {%- else %}{{ pgo_image_prefix }}/pgo-client:{{ pgo_image_tag }} - {%- endif %}", - "imagePullPolicy": "IfNotPresent", - "env": [ - { - "name": "PGO_APISERVER_URL", - "value": "{{ pgo_apiserver_url }}:{{ pgo_apiserver_port }}" - }, - { - "name": "PGOUSERNAME", - "valueFrom": { - "secretKeyRef": { - "name": "pgouser-{{ pgo_admin_username }}", - "key": "username" - } - } - }, - { - "name": "PGOUSERPASS", - "valueFrom": { - "secretKeyRef": { - "name": "pgouser-{{ pgo_admin_username }}", - "key": "password" - } - } - }, - { - "name": "PGO_CA_CERT", - "value": "pgo-tls/client.crt" - }, - { - "name": "PGO_CLIENT_CERT", - "value": "pgo-tls/client.crt" - }, - { - "name": "PGO_CLIENT_KEY", - "value": "pgo-tls/client.key" - } - ], - "volumeMounts": [ - { - "name": "pgo-tls-volume", - "mountPath": "pgo-tls" - } - ] - } - ], - "volumes": [ - { - "name": "pgo-tls-volume", - "secret": { - "secretName": "{{ pgo_client_cert_secret }}", - "items": [ - { - "key": "tls.crt", - "path": "client.crt" - }, - { - "key": "tls.key", - "path": "client.key" - } - ] - } - } - ] - } - } - } -} diff --git a/installers/ansible/roles/pgo-operator/templates/pgo-role-rbac.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/pgo-role-rbac.yaml.j2 deleted file mode 100644 index 76af49dbcd..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/pgo-role-rbac.yaml.j2 +++ /dev/null @@ -1,38 +0,0 @@ ---- -kind: Role -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-role - namespace: {{ pgo_operator_namespace }} -rules: - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - apiGroups: - - '' - resources: - - configmaps - - secrets - verbs: - - get - - list - - create - - update - - delete ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - name: pgo-role - namespace: {{ pgo_operator_namespace }} -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: pgo-role -subjects: -- kind: ServiceAccount - name: postgres-operator - namespace: {{ pgo_operator_namespace }} diff --git a/installers/ansible/roles/pgo-operator/templates/pgo-service-account.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/pgo-service-account.yaml.j2 deleted file mode 100644 index b8a8de6a95..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/pgo-service-account.yaml.j2 +++ /dev/null @@ -1,13 +0,0 @@ ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: postgres-operator - namespace: {{ pgo_operator_namespace }} -imagePullSecrets: -{% if ccp_image_pull_secret %} - - name: {{ ccp_image_pull_secret }} -{% endif %} -{% if pgo_image_pull_secret and 
ccp_image_pull_secret != pgo_image_pull_secret %} - - name: {{ pgo_image_pull_secret }} -{% endif %} diff --git a/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2 deleted file mode 100644 index f1b21fbbcb..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/pgo.yaml.j2 +++ /dev/null @@ -1,61 +0,0 @@ -Cluster: - CCPImagePrefix: {{ ccp_image_prefix }} - CCPImageTag: {{ ccp_image_tag }} - DisableAutofail: {{ disable_auto_failover }} - BackrestPort: {{ backrest_port }} - BackrestS3Bucket: {{ backrest_aws_s3_bucket }} - BackrestS3Endpoint: {{ backrest_aws_s3_endpoint }} - BackrestS3Region: {{ backrest_aws_s3_region }} - BackrestS3URIStyle: {{ backrest_aws_s3_uri_style }} - BackrestS3VerifyTLS: "{{ backrest_aws_s3_verify_tls }}" - Metrics: {{ metrics }} - Badger: {{ badger }} - Port: {{ db_port }} - PGBadgerPort: {{ pgbadgerport }} - ExporterPort: {{ exporterport }} - User: {{ db_user}} - Database: {{ db_name }} - PasswordAgeDays: {{ db_password_age_days }} - PasswordLength: {{ db_password_length }} - Replicas: {{ db_replicas }} - ArchiveMode: {{ archive_mode }} - ServiceType: {{ service_type }} - EnableCrunchyadm: {{ enable_crunchyadm }} - DisableReplicaStartFailReinit: {{ disable_replica_start_fail_reinit }} - PodAntiAffinity: {{ pod_anti_affinity }} - PodAntiAffinityPgBackRest: {{ pod_anti_affinity_pgbackrest }} - PodAntiAffinityPgBouncer: {{ pod_anti_affinity_pgbouncer }} - SyncReplication: {{ sync_replication }} - DefaultInstanceMemory: {{ default_instance_memory }} - DefaultBackrestMemory: {{ default_pgbackrest_memory }} - DefaultPgBouncerMemory: {{ default_pgbouncer_memory }} - DefaultExporterMemory: {{ default_exporter_memory }} - DisableFSGroup: {{ disable_fsgroup }} -PrimaryStorage: {{ primary_storage }} -WALStorage: {{ wal_storage }} -BackupStorage: {{ backup_storage }} -ReplicaStorage: {{ replica_storage }} -BackrestStorage: {{ backrest_storage }} -Storage: -{% for i in range(1, max_storage_configs) %} -{% if lookup('vars', 'storage' + i|string + '_name', default='') != '' %} - {{ lookup('vars', 'storage' + i|string + '_name', default='') }}: - AccessMode: {{ lookup('vars', 'storage' + i|string + '_access_mode') }} - Size: {{ lookup('vars', 'storage' + i|string + '_size') }} - StorageType: {{ lookup('vars', 'storage' + i|string + '_type') }} -{% if lookup('vars', 'storage' + i|string + '_match_labels', default='') != '' %} - MatchLabels: {{ lookup('vars', 'storage' + i|string + '_match_labels') }} -{% endif %} -{% if lookup('vars', 'storage' + i|string + '_class', default='') != '' %} - StorageClass: {{ lookup('vars', 'storage' + i|string + '_class') }} -{% endif %} -{% if lookup('vars', 'storage' + i|string + '_supplemental_groups', default='') != '' %} - SupplementalGroups: {{ lookup('vars', 'storage' + i|string + '_supplemental_groups') }} -{% endif %} -{% endif %} -{% endfor %} -Pgo: - Audit: false - DisableReconcileRBAC: {{ not reconcile_rbac | bool }} - PGOImagePrefix: {{ pgo_image_prefix }} - PGOImageTag: {{ pgo_image_tag }} diff --git a/installers/ansible/roles/pgo-operator/templates/pgorole-pgoadmin.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/pgorole-pgoadmin.yaml.j2 deleted file mode 100644 index e16e9f4e00..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/pgorole-pgoadmin.yaml.j2 +++ /dev/null @@ -1,14 +0,0 @@ -apiVersion: v1 -kind: Secret -metadata: - labels: - pgo-created-by: bootstrap - pgo-pgorole: "true" - rolename: {{ pgo_admin_role_name }} - vendor: 
crunchydata - name: pgorole-{{ pgo_admin_role_name }} - namespace: {{ pgo_operator_namespace }} -type: Opaque -data: - permissions: "{{ pgo_admin_perms | b64encode }}" - rolename: {{ pgo_admin_role_name | b64encode }} diff --git a/installers/ansible/roles/pgo-operator/templates/pgouser-admin.yaml.j2 b/installers/ansible/roles/pgo-operator/templates/pgouser-admin.yaml.j2 deleted file mode 100644 index ca6d6eb4ed..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/pgouser-admin.yaml.j2 +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1 -kind: Secret -metadata: - labels: - pgo-created-by: bootstrap - pgo-pgouser: "true" - username: {{ pgo_admin_username }} - vendor: crunchydata - name: pgouser-{{ pgo_admin_username }} - namespace: {{ pgo_operator_namespace }} -type: Opaque -data: - password: {{ pgo_admin_password | b64encode }} - username: {{ pgo_admin_username | b64encode }} - roles: {{ pgo_admin_role_name | b64encode }} diff --git a/installers/ansible/roles/pgo-operator/templates/pgouser.local.j2 b/installers/ansible/roles/pgo-operator/templates/pgouser.local.j2 deleted file mode 100644 index 09b5ddb078..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/pgouser.local.j2 +++ /dev/null @@ -1 +0,0 @@ -{{ pgo_admin_username }}:{{ pgo_admin_password }} diff --git a/installers/ansible/roles/pgo-operator/templates/service.json.j2 b/installers/ansible/roles/pgo-operator/templates/service.json.j2 deleted file mode 100644 index 766a060a72..0000000000 --- a/installers/ansible/roles/pgo-operator/templates/service.json.j2 +++ /dev/null @@ -1,37 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "postgres-operator", - "labels": { - "name": "postgres-operator" - } - }, - "spec": { - "ports": [ - { - "name": "apiserver", - "protocol": "TCP", - "port": {{ pgo_apiserver_port }}, - "targetPort": {{ pgo_apiserver_port }} - }, - { - "name": "nsqadmin", - "protocol": "TCP", - "port": 4171, - "targetPort": 4171 - }, - { - "name": "nsqd", - "protocol": "TCP", - "port": 4150, - "targetPort": 4150 - } - ], - "selector": { - "name": "postgres-operator" - }, - "type": "ClusterIP", - "sessionAffinity": "None" - } -} diff --git a/installers/ansible/roles/pgo-operator/vars/main.yml b/installers/ansible/roles/pgo-operator/vars/main.yml deleted file mode 100644 index c613bb340b..0000000000 --- a/installers/ansible/roles/pgo-operator/vars/main.yml +++ /dev/null @@ -1,3 +0,0 @@ ---- -pgo_client_url: "https://github.com/CrunchyData/postgres-operator/releases/download/v{{ pgo_client_version }}" -pgo_keys_dir: "{{ ansible_env.HOME }}/.pgo/{{ pgo_operator_namespace }}" diff --git a/installers/ansible/roles/pgo-operator/vars/openshift.yml b/installers/ansible/roles/pgo-operator/vars/openshift.yml deleted file mode 100644 index 57b50dd2c3..0000000000 --- a/installers/ansible/roles/pgo-operator/vars/openshift.yml +++ /dev/null @@ -1,2 +0,0 @@ ---- -openshift_oc_bin: "oc" diff --git a/installers/ansible/roles/pgo-preflight/tasks/check_kubernetes.yml b/installers/ansible/roles/pgo-preflight/tasks/check_kubernetes.yml deleted file mode 100644 index 9affc18ad7..0000000000 --- a/installers/ansible/roles/pgo-preflight/tasks/check_kubernetes.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -- name: Check if the kubectl command is installed - shell: which kubectl - register: kubectl_result - ignore_errors: yes - tags: always - -- name: Ensure kubectl is installed - assert: - that: - - kubectl_result.rc == 0 - msg: "Install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/" - tags: 
always diff --git a/installers/ansible/roles/pgo-preflight/tasks/check_openshift.yml b/installers/ansible/roles/pgo-preflight/tasks/check_openshift.yml deleted file mode 100644 index b9705daeb5..0000000000 --- a/installers/ansible/roles/pgo-preflight/tasks/check_openshift.yml +++ /dev/null @@ -1,39 +0,0 @@ -# check_openshift.yml ---- -- name: openshift_token should be defined - assert: - that: - - openshift_token != '' - msg: "Set the value of 'openshift_token' in the inventory file." - when: - - openshift_token is defined - tags: always - -- name: openshift_user should be defined - assert: - that: - - openshift_user is defined and openshift_user != '' - msg: "Set the value of 'openshift_user' in the inventory file." - when: openshift_token is not defined - tags: always - -- name: openshift_password should be defined - assert: - that: - - openshift_password is defined and openshift_password != '' - msg: "Set the value of 'openshift_password' in the inventory file." - when: openshift_token is not defined - tags: always - -- name: Check if the oc command is installed - shell: which oc - register: oc_result - ignore_errors: yes - tags: always - -- name: Ensure OpenShift CLI is installed - assert: - that: - - oc_result.rc == 0 - msg: "Install the OpenShift CLI (oc)" - tags: always diff --git a/installers/ansible/roles/pgo-preflight/tasks/check_vars.yml b/installers/ansible/roles/pgo-preflight/tasks/check_vars.yml deleted file mode 100644 index 3424c43151..0000000000 --- a/installers/ansible/roles/pgo-preflight/tasks/check_vars.yml +++ /dev/null @@ -1,37 +0,0 @@ ---- -- name: Check if mandatory variables are defined - fail: - msg: Please specify a value for variable {{ item }} in your values.yaml - tags: always - when: "lookup('vars', item, default='') == ''" - with_items: - - pgo_operator_namespace - - pgo_installation_name - - pgo_admin_username - - pgo_admin_password - - pgo_admin_role_name - - pgo_admin_perms - - ccp_image_prefix - - ccp_image_tag - - pgo_image_prefix - - pgo_image_tag - - disable_auto_failover - - badger - - metrics - - archive_mode - - archive_timeout - - db_password_length - - create_rbac - - db_port - - db_replicas - - db_user - - backrest_storage - - backup_storage - - primary_storage - - replica_storage - - pgo_client_version - - pgbadgerport - - exporterport - - scheduler_timeout - - namespace_mode - - reconcile_rbac diff --git a/installers/ansible/roles/pgo-preflight/tasks/main.yml b/installers/ansible/roles/pgo-preflight/tasks/main.yml deleted file mode 100644 index a6fda0caec..0000000000 --- a/installers/ansible/roles/pgo-preflight/tasks/main.yml +++ /dev/null @@ -1,58 +0,0 @@ ---- -- include_tasks: vars.yml - tags: always - -- fail: - msg: "Please specify the a tag: install, update or uninstall" - tags: always - when: ansible_run_tags[0] == "all" - -- name: Check Operating System - shell: uname - register: uname_result - tags: - - install - - update - -- assert: - msg: Please specify either OpenShift or Kubernetes variables in inventory - that: - - openshift_host | default('') != '' or - kubernetes_context | default('') != '' or - kubernetes_in_cluster | default(False) | bool - tags: always - -- assert: - msg: Only set one of kubernetes_context, kubernetes_in_cluster, or openshift_host - that: - - kubernetes_context | default('') == '' - - not (kubernetes_in_cluster | default(False) | bool) - when: openshift_host | default('') != '' - tags: always - -- assert: - msg: Only set one of kubernetes_context, kubernetes_in_cluster, or openshift_host - that: - - 
openshift_host | default('') == '' - - not (kubernetes_in_cluster | default(False) | bool) - when: kubernetes_context | default('') != '' - tags: always - -- assert: - msg: Only set one of kubernetes_context, kubernetes_in_cluster, or openshift_host - that: - - openshift_host | default('') == '' - - kubernetes_context | default('') == '' - when: kubernetes_in_cluster | default(False) | bool - tags: always - -- include_tasks: check_openshift.yml - when: openshift_host | default('') != '' - tags: always - -- include_tasks: check_kubernetes.yml - when: kubernetes_context | default('') != '' or kubernetes_in_cluster | default(False) | bool - tags: always - -- include_tasks: check_vars.yml - tags: always diff --git a/installers/ansible/roles/pgo-preflight/tasks/vars.yml b/installers/ansible/roles/pgo-preflight/tasks/vars.yml deleted file mode 100644 index c4db95762f..0000000000 --- a/installers/ansible/roles/pgo-preflight/tasks/vars.yml +++ /dev/null @@ -1,15 +0,0 @@ ---- -- name: Include values.yml - tags: always - block: - - name: Check for "{{ config_path }}" - stat: - path: "{{ config_path }}" - register: conf_path_result - - - fail: - msg: "Please provide a valid path to your values.yaml file. Expected path: {{ config_path }}" - when: - - not conf_path_result.stat.exists - - - include_vars: "{{ config_path }}" diff --git a/installers/ansible/values.yaml b/installers/ansible/values.yaml deleted file mode 100644 index 4eb672bcec..0000000000 --- a/installers/ansible/values.yaml +++ /dev/null @@ -1,120 +0,0 @@ -# ===================== -# Configuration Options -# More info for these options can be found in the docs -# https://access.crunchydata.com/documentation/postgres-operator/latest/installation/configuration/ -# ===================== -archive_mode: "true" -archive_timeout: "60" -backrest_aws_s3_bucket: "" -backrest_aws_s3_endpoint: "" -backrest_aws_s3_key: "" -backrest_aws_s3_region: "" -backrest_aws_s3_secret: "" -backrest_aws_s3_uri_style: "" -backrest_aws_s3_verify_tls: "true" -backrest_port: "2022" -badger: "false" -ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata" -ccp_image_pull_secret: "" -ccp_image_pull_secret_manifest: "" -ccp_image_tag: "centos7-12.4-4.5.0" -create_rbac: "true" -crunchy_debug: "false" -db_name: "" -db_password_age_days: "0" -db_password_length: "24" -db_port: "5432" -db_replicas: "0" -db_user: "testuser" -default_instance_memory: "128Mi" -default_pgbackrest_memory: "48Mi" -default_pgbouncer_memory: "24Mi" -default_exporter_memory: "24Mi" -delete_operator_namespace: "false" -delete_watched_namespaces: "false" -disable_auto_failover: "false" -disable_fsgroup: "false" -reconcile_rbac: "true" -exporterport: "9187" -metrics: "false" -namespace: "pgo" -namespace_mode: "dynamic" -pgbadgerport: "10000" -pgo_add_os_ca_store: "false" -pgo_admin_password: "examplepassword" -pgo_admin_perms: "*" -pgo_admin_role_name: "pgoadmin" -pgo_admin_username: "admin" -pgo_apiserver_port: "8443" -pgo_apiserver_url: "https://postgres-operator" -pgo_client_cert_secret: "pgo.tls" -pgo_client_container_install: "false" -pgo_client_install: "true" -pgo_client_version: "4.5.0" -pgo_cluster_admin: "false" -pgo_disable_eventing: "false" -pgo_disable_tls: "false" -pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata" -pgo_image_pull_secret: "" -pgo_image_pull_secret_manifest: "" -pgo_image_tag: "centos7-4.5.0" -pgo_installation_name: "devtest" -pgo_noauth_routes: "" -pgo_operator_namespace: "pgo" -pgo_tls_ca_store: "" -pgo_tls_no_verify: "false" 
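
The `*_storage` selectors and the numbered `storageN_*` entries defined further down in this file feed the `Storage:` loop of the `pgo.yaml.j2` template removed earlier in this diff. As a rough sketch of that relationship (derived only from the defaults in this file, with indentation approximated), the `storage1_*` and `storage3_*` values would render into the operator's `pgo.yaml` along these lines:

```yaml
# Approximate rendering of the storageN_* defaults through pgo.yaml.j2.
PrimaryStorage: default        # selected by primary_storage
Storage:
  default:                     # from storage1_name
    AccessMode: ReadWriteOnce
    Size: 1G
    StorageType: dynamic
  nfsstorage:                  # from storage3_name
    AccessMode: ReadWriteMany
    Size: 1G
    StorageType: create
    SupplementalGroups: 65534  # optional key, emitted only when set
```
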
-pod_anti_affinity: "preferred" -pod_anti_affinity_pgbackrest: "" -pod_anti_affinity_pgbouncer: "" -scheduler_timeout: "3600" -service_type: "ClusterIP" -sync_replication: "false" -backrest_storage: "default" -backup_storage: "default" -primary_storage: "default" -replica_storage: "default" -wal_storage: "" -storage1_name: "default" -storage1_access_mode: "ReadWriteOnce" -storage1_size: "1G" -storage1_type: "dynamic" -storage2_name: "hostpathstorage" -storage2_access_mode: "ReadWriteMany" -storage2_size: "1G" -storage2_type: "create" -storage3_name: "nfsstorage" -storage3_access_mode: "ReadWriteMany" -storage3_size: "1G" -storage3_type: "create" -storage3_supplemental_groups: "65534" -storage4_name: "nfsstoragered" -storage4_access_mode: "ReadWriteMany" -storage4_size: "1G" -storage4_match_labels: "crunchyzone=red" -storage4_type: "create" -storage4_supplemental_groups: "65534" -storage5_name: "storageos" -storage5_access_mode: "ReadWriteOnce" -storage5_size: "5Gi" -storage5_type: "dynamic" -storage5_class: "fast" -storage6_name: "primarysite" -storage6_access_mode: "ReadWriteOnce" -storage6_size: "4G" -storage6_type: "dynamic" -storage6_class: "primarysite" -storage7_name: "alternatesite" -storage7_access_mode: "ReadWriteOnce" -storage7_size: "4G" -storage7_type: "dynamic" -storage7_class: "alternatesite" -storage8_name: "gce" -storage8_access_mode: "ReadWriteOnce" -storage8_size: "300M" -storage8_type: "dynamic" -storage8_class: "standard" -storage9_name: "rook" -storage9_access_mode: "ReadWriteOnce" -storage9_size: "1Gi" -storage9_type: "dynamic" -storage9_class: "rook-ceph-block" diff --git a/installers/favicon.png b/installers/favicon.png deleted file mode 100644 index 66ce2072e9..0000000000 Binary files a/installers/favicon.png and /dev/null differ diff --git a/installers/gcp-marketplace/Dockerfile b/installers/gcp-marketplace/Dockerfile deleted file mode 100644 index adf85a355a..0000000000 --- a/installers/gcp-marketplace/Dockerfile +++ /dev/null @@ -1,44 +0,0 @@ -ARG MARKETPLACE_VERSION -FROM gcr.io/cloud-marketplace-tools/k8s/deployer_envsubst:${MARKETPLACE_VERSION} AS build - -# Verify Bash (>= 4.3) has `wait -n` -RUN bash -c 'echo -n & wait -n' - - -FROM gcr.io/cloud-marketplace-tools/k8s/deployer_envsubst:${MARKETPLACE_VERSION} - -RUN install -D /bin/create_manifests.sh /opt/postgres-operator/cloud-marketplace-tools/bin/create_manifests.sh - -# https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-debian -RUN if [ -f /etc/os-release ] && [ debian = "$(. /etc/os-release; echo $ID)" ] && [ 10 -ge "$(. 
/etc/os-release; echo $VERSION_ID)" ]; then \ - apt-get update && apt-get install -y --no-install-recommends gnupg && rm -rf /var/lib/apt/lists/* && \ - wget -qO- 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x93C4A3FD7BB9C367' | apt-key add && \ - echo > /etc/apt/sources.list.d/ansible.list deb http://ppa.launchpad.net/ansible/ansible-2.9/ubuntu trusty main ; \ - fi - -RUN apt-get update \ - && apt-get install -y --no-install-recommends ansible=2.9.* openssh-client \ - && rm -rf /var/lib/apt/lists/* - -COPY installers/ansible/* \ - /opt/postgres-operator/ansible/ -COPY installers/favicon.png \ - installers/gcp-marketplace/install-job.yaml \ - installers/gcp-marketplace/install.sh \ - installers/gcp-marketplace/values.yaml \ - /opt/postgres-operator/ - -COPY installers/gcp-marketplace/install-hook.sh \ - /bin/create_manifests.sh -COPY installers/gcp-marketplace/schema.yaml \ - /data/ -COPY installers/gcp-marketplace/application.yaml \ - /data/manifest/ -COPY installers/gcp-marketplace/test-pod.yaml \ - /data-test/manifest/ - -ARG PGO_VERSION -RUN for file in \ - /data/schema.yaml \ - /data/manifest/application.yaml \ - ; do envsubst '$PGO_VERSION' < "$file" > /tmp/sponge && mv /tmp/sponge "$file" ; done diff --git a/installers/gcp-marketplace/Makefile b/installers/gcp-marketplace/Makefile deleted file mode 100644 index 5f4f0c6eb1..0000000000 --- a/installers/gcp-marketplace/Makefile +++ /dev/null @@ -1,55 +0,0 @@ -.DEFAULT_GOAL := help - -DEPLOYER_IMAGE ?= registry.localhost:5000/postgres-operator-gcp-marketplace-deployer:$(PGO_VERSION) -IMAGE_BUILDER ?= buildah -MARKETPLACE_TOOLS ?= gcr.io/cloud-marketplace-tools/k8s/dev:$(MARKETPLACE_VERSION) -MARKETPLACE_VERSION ?= 0.9.4 -KUBECONFIG ?= $(HOME)/.kube/config -PARAMETERS ?= {} -PGO_VERSION ?= 4.5.0 - -IMAGE_BUILD_ARGS = --build-arg MARKETPLACE_VERSION='$(MARKETPLACE_VERSION)' \ - --build-arg PGO_VERSION='$(PGO_VERSION)' - -MARKETPLACE_TOOLS_DEV = docker run --net=host --rm \ - --mount 'type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock,readonly' \ - --mount 'type=bind,source=$(KUBECONFIG),target=/mount/config/.kube/config,readonly' \ - '$(MARKETPLACE_TOOLS)' - -# One does _not_ need to be logged in with gcloud. -.PHONY: doctor -doctor: ## Check development prerequisites - $(MARKETPLACE_TOOLS_DEV) doctor - -.PHONY: doctor-fix -doctor-fix: - @# https://github.com/kubernetes-sigs/application/tree/master/config/crds - kubectl 2>/dev/null get crd/applications.app.k8s.io -o jsonpath='{""}' || \ - kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/marketplace-k8s-app-tools/master/crd/app-crd.yaml - -.PHONY: help -help: ALIGN=14 -help: ## Print this message - @awk -F ': ## ' -- "/^[^':]+: ## /"' { printf "'$$(tput bold)'%-$(ALIGN)s'$$(tput sgr0)' %s\n", $$1, $$2 }' $(MAKEFILE_LIST) - -.PHONY: image -image: image-$(IMAGE_BUILDER) - -.PHONY: image-buildah -image-buildah: ## Build the deployer image with Buildah - sudo buildah bud --file Dockerfile --tag '$(DEPLOYER_IMAGE)' $(IMAGE_BUILD_ARGS) --layers ../.. - sudo buildah push '$(DEPLOYER_IMAGE)' docker-daemon:'$(DEPLOYER_IMAGE)' - -.PHONY: image-docker -image-docker: ## Build the deployer image with Docker - docker build --file Dockerfile --tag '$(DEPLOYER_IMAGE)' $(IMAGE_BUILD_ARGS) ../.. 
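
Putting these targets together, a typical local iteration with this Makefile might look like the sketch below. The password is a placeholder, and the flow assumes the Application CRD and the target namespace already exist (see the `doctor-fix` target and the README in this directory):

```shell
# Build the deployer image with Docker, then exercise it with the Google Cloud
# Marketplace dev tooling against the current kubectl context.
# All parameter values below are examples only.
make image-docker
make install PARAMETERS='{"OPERATOR_NAMESPACE": "pgo", "OPERATOR_NAME": "pgo", "OPERATOR_ADMIN_PASSWORD": "changethis"}'

# Or run the end-to-end verify flow in a throwaway namespace that is cleaned up afterwards:
make verify PARAMETERS='{"OPERATOR_ADMIN_PASSWORD": "changethis"}'
```
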
- -# PARAMETERS='{"OPERATOR_NAMESPACE": "", "OPERATOR_NAME": "", "OPERATOR_ADMIN_PASSWORD": ""}' -.PHONY: install -install: ## Execute the deployer image in an existing Kubernetes namespace - $(MARKETPLACE_TOOLS_DEV) install --deployer='$(DEPLOYER_IMAGE)' --parameters='$(PARAMETERS)' - -# PARAMETERS='{"OPERATOR_ADMIN_PASSWORD": ""}' -.PHONY: verify -verify: ## Execute and test the deployer image in a new (random) Kubernetes namespace then clean up - $(MARKETPLACE_TOOLS_DEV) verify --deployer='$(DEPLOYER_IMAGE)' --parameters='$(PARAMETERS)' diff --git a/installers/gcp-marketplace/README.md b/installers/gcp-marketplace/README.md deleted file mode 100644 index fd686764ad..0000000000 --- a/installers/gcp-marketplace/README.md +++ /dev/null @@ -1,146 +0,0 @@ - -This directory contains the files that are used to install [Crunchy PostgreSQL for GKE][gcp-details], -which uses the PostgreSQL Operator, from the Google Cloud Marketplace. - -The integration centers around a container [image](./Dockerfile) that contains an installation -[schema](./schema.yaml) and an [Application][k8s-app] [manifest](./application.yaml). -Consult the [technical requirements][gcp-k8s-requirements] when making changes. - -[k8s-app]: https://github.com/kubernetes-sigs/application/ -[gcp-k8s]: https://cloud.google.com/marketplace/docs/kubernetes-apps/ -[gcp-k8s-requirements]: https://cloud.google.com/marketplace/docs/partners/kubernetes-solutions/create-app-package -[gcp-k8s-tool-images]: https://console.cloud.google.com/gcr/images/cloud-marketplace-tools -[gcp-k8s-tool-repository]: https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools -[gcp-details]: https://console.cloud.google.com/marketplace/details/crunchydata/crunchy-postgresql-operator - - -# Installation - -## Quick install with Google Cloud Marketplace - -Install [Crunchy PostgreSQL for GKE][gcp-details] to a Google Kubernetes Engine cluster using -Google Cloud Marketplace. - -## Command line instructions - -### Prepare - -1. You'll need the following tools in your development environment. If you are using Cloud Shell, - everything is already installed. - - - envsubst - - [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) - - [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) - -2. Clone this repository. - - ```shell - git clone https://github.com/CrunchyData/postgres-operator.git - ``` - -3. Install the [Application][k8s-app] Custom Resource Definition. - - ```shell - kubectl apply -f 'https://raw.githubusercontent.com/GoogleCloudPlatform/marketplace-k8s-app-tools/master/crd/app-crd.yaml' - ``` - -4. At least one Storage Class is required. Google Kubernetes Engine is preconfigured with a default. - - ```shell - kubectl get storageclasses - ``` - -### Install the PostgreSQL Operator - -1. Configure the installation by setting environment variables. - - 1. Choose a version to install. - - ```shell - IMAGE_REPOSITORY=gcr.io/crunchydata-public/postgres-operator - - export PGO_VERSION=4.5.0 - export INSTALLER_IMAGE=${IMAGE_REPOSITORY}/deployer:${PGO_VERSION} - export OPERATOR_IMAGE=${IMAGE_REPOSITORY}:${PGO_VERSION} - export OPERATOR_IMAGE_API=${IMAGE_REPOSITORY}/pgo-apiserver:${PGO_VERSION} - export OPERATOR_IMAGE_EVENT=${IMAGE_REPOSITORY}/pgo-event:${PGO_VERSION} - export OPERATOR_IMAGE_SCHEDULER=${IMAGE_REPOSITORY}/pgo-scheduler:${PGO_VERSION} - ``` - - 2. Choose a namespace and name for the application. - - ```shell - export OPERATOR_NAMESPACE=pgo OPERATOR_NAME=pgo - ``` - - 2. 
Choose a password for the application admin. - - ```shell - export OPERATOR_ADMIN_PASSWORD=changethis - ``` - - 4. Choose default values for new PostgreSQL clusters. - - ```shell - export POSTGRES_METRICS=false - export POSTGRES_SERVICE_TYPE=ClusterIP - export POSTGRES_CPU=1000 # mCPU - export POSTGRES_MEM=2 # GiB - export POSTGRES_STORAGE_CAPACITY=1 # GiB - export POSTGRES_STORAGE_CLASS=ssd - export PGBACKREST_STORAGE_CAPACITY=2 # GiB - export PGBACKREST_STORAGE_CLASS=ssd - export BACKUP_STORAGE_CAPACITY=1 # GiB - export BACKUP_STORAGE_CLASS=ssd - ``` - -2. Prepare the Kubernetes namespace. - - ```shell - export INSTALLER_SERVICE_ACCOUNT=postgres-operator-installer - - kubectl create namespace "$OPERATOR_NAMESPACE" - kubectl create serviceaccount -n "$OPERATOR_NAMESPACE" "$INSTALLER_SERVICE_ACCOUNT" - kubectl create clusterrolebinding \ - "$OPERATOR_NAMESPACE:$INSTALLER_SERVICE_ACCOUNT:cluster-admin" \ - --serviceaccount="$OPERATOR_NAMESPACE:$INSTALLER_SERVICE_ACCOUNT" \ - --clusterrole=cluster-admin - ``` - -3. Generate and apply Kubernetes manifests. - - ```shell - envsubst < application.yaml > "${OPERATOR_NAME}_application.yaml" - envsubst < install-job.yaml > "${OPERATOR_NAME}_install-job.yaml" - envsubst < inventory.ini > "${OPERATOR_NAME}_inventory.ini" - - kubectl create -n "$OPERATOR_NAMESPACE" secret generic install-postgres-operator \ - --from-file=inventory="${OPERATOR_NAME}_inventory.ini" - - kubectl create -n "$OPERATOR_NAMESPACE" -f "${OPERATOR_NAME}_application.yaml" - kubectl create -n "$OPERATOR_NAMESPACE" -f "${OPERATOR_NAME}_install-job.yaml" - ``` - -The application can be seen in Google Cloud Platform Console at [Kubernetes Applications][]. - -[Kubernetes Applications]: https://console.cloud.google.com/kubernetes/application - - -# Uninstallation - -## Using Google Cloud Platform Console - -1. In the Console, open [Kubernetes Applications][]. -2. From the list of applications, select _Crunchy PostgreSQL Operator_ then click _Delete_. - -## Command line instructions - -Delete the Kubernetes resources created during install. 
- -```shell -export OPERATOR_NAMESPACE=pgo OPERATOR_NAME=pgo - -kubectl delete -n "$OPERATOR_NAMESPACE" job install-postgres-operator -kubectl delete -n "$OPERATOR_NAMESPACE" secret install-postgres-operator -kubectl delete -n "$OPERATOR_NAMESPACE" application "$OPERATOR_NAME" -``` diff --git a/installers/gcp-marketplace/application.yaml b/installers/gcp-marketplace/application.yaml deleted file mode 100644 index 9af6b1642c..0000000000 --- a/installers/gcp-marketplace/application.yaml +++ /dev/null @@ -1,50 +0,0 @@ -apiVersion: app.k8s.io/v1beta1 -kind: Application -metadata: - name: '${OPERATOR_NAME}' - labels: - app.kubernetes.io/name: '${OPERATOR_NAME}' -spec: - selector: - matchLabels: - app.kubernetes.io/name: '${OPERATOR_NAME}' - componentKinds: - - { group: core, kind: ConfigMap } - - { group: core, kind: Secret } - - { group: core, kind: Service } - - { group: apps, kind: Deployment } - - { group: batch, kind: Job } - descriptor: - description: Enterprise PostgreSQL-as-a-Service for Kubernetes - type: Crunchy PostgreSQL Operator - version: '${PGO_VERSION}' - maintainers: - - name: Crunchy Data - url: https://www.crunchydata.com/ - email: info@crunchydata.com - keywords: - - postgres - - postgresql - - database - - sql - - operator - - crunchy data - links: - - description: Crunchy PostgreSQL for Kubernetes - url: https://www.crunchydata.com/products/crunchy-postgresql-for-kubernetes/ - - description: Documentation - url: 'https://access.crunchydata.com/documentation/postgres-operator/${PGO_VERSION}' - - description: GitHub - url: https://github.com/CrunchyData/postgres-operator - - info: - - name: Operator API - value: kubectl port-forward --namespace '${OPERATOR_NAMESPACE}' service/postgres-operator 8443 - - name: Operator Client - value: 'https://github.com/CrunchyData/postgres-operator/releases/tag/v${PGO_VERSION}' - - name: Operator User - type: Reference - valueFrom: { type: SecretKeyRef, secretKeyRef: { name: pgouser-admin, key: username } } - - name: Operator Password - type: Reference - valueFrom: { type: SecretKeyRef, secretKeyRef: { name: pgouser-admin, key: password } } diff --git a/installers/gcp-marketplace/install-hook.sh b/installers/gcp-marketplace/install-hook.sh deleted file mode 100755 index 96688f75ac..0000000000 --- a/installers/gcp-marketplace/install-hook.sh +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/env bash -# vim: set noexpandtab : -set -eu - -kc() { kubectl --namespace="$NAMESPACE" "$@"; } - -application_ownership="$( kc get "applications.app.k8s.io/$NAME" --output=json )" -application_ownership="$( jq <<< "$application_ownership" '{ metadata: { - labels: { "app.kubernetes.io/name": .metadata.name }, - ownerReferences: [{ - apiVersion, kind, name: .metadata.name, uid: .metadata.uid - }] -} }' )" - -existing="$( kc get deployment/postgres-operator --output=json 2> /dev/null || true )" - -if [ -n "$existing" ]; then - >&2 echo ERROR: Crunchy PostgreSQL Operator is already installed in this namespace - exit 1 -fi - -install_values="$( /bin/config_env.py envsubst < /opt/postgres-operator/values.yaml )" -installer="$( /bin/config_env.py envsubst < /opt/postgres-operator/install-job.yaml )" - -kc create --filename=/dev/stdin <<< "$installer" -kc patch job/install-postgres-operator --type=strategic --patch="$application_ownership" - -job_ownership="$( kc get job/install-postgres-operator --output=json )" -job_ownership="$( jq <<< "$job_ownership" '{ metadata: { - labels: { "app.kubernetes.io/name": .metadata.labels["app.kubernetes.io/name"] }, - ownerReferences: 
[{ - apiVersion, kind, name: .metadata.name, uid: .metadata.uid - }] -} }' )" - -kc create secret generic install-postgres-operator --from-file=values.yaml=/dev/stdin <<< "$install_values" -kc patch secret/install-postgres-operator --type=strategic --patch="$job_ownership" - -# Wait for either status condition then terminate the other. -kc wait --for=condition=complete --timeout=5m job/install-postgres-operator & -kc wait --for=condition=failed --timeout=5m job/install-postgres-operator & -wait -n -kill -s INT %% 2> /dev/null || true - -kc logs --selector=job-name=install-postgres-operator --tail=-1 -test 'Complete' = "$( kc get job/install-postgres-operator --output=jsonpath='{.status.conditions[*].type}' )" - -exec /opt/postgres-operator/cloud-marketplace-tools/bin/create_manifests.sh "$@" diff --git a/installers/gcp-marketplace/install-job.yaml b/installers/gcp-marketplace/install-job.yaml deleted file mode 100644 index 574aae7b12..0000000000 --- a/installers/gcp-marketplace/install-job.yaml +++ /dev/null @@ -1,23 +0,0 @@ -apiVersion: batch/v1 -kind: Job -metadata: - name: install-postgres-operator - labels: - app.kubernetes.io/name: '${OPERATOR_NAME}' -spec: - template: - spec: - serviceAccountName: '${INSTALLER_SERVICE_ACCOUNT}' - restartPolicy: Never - containers: - - name: installer - image: '${INSTALLER_IMAGE}' - imagePullPolicy: Always - command: ['/opt/postgres-operator/install.sh'] - env: - - { name: NAMESPACE, value: '${OPERATOR_NAMESPACE}' } - - { name: NAME, value: '${OPERATOR_NAME}' } - volumeMounts: - - { mountPath: /etc/ansible, name: configuration } - volumes: - - { name: configuration, secret: { secretName: install-postgres-operator } } diff --git a/installers/gcp-marketplace/install.sh b/installers/gcp-marketplace/install.sh deleted file mode 100755 index 6dc770b993..0000000000 --- a/installers/gcp-marketplace/install.sh +++ /dev/null @@ -1,52 +0,0 @@ -#!/usr/bin/env bash -# vim: set noexpandtab : -set -eu - -kc() { kubectl --namespace="$NAMESPACE" "$@"; } - -application_ownership="$( kc get "applications.app.k8s.io/$NAME" --output=json )" -application_ownership="$( jq <<< "$application_ownership" '{ metadata: { - labels: { "app.kubernetes.io/name": .metadata.name }, - ownerReferences: [{ - apiVersion, kind, name: .metadata.name, uid: .metadata.uid - }] -} }' )" - -existing="$( kc get clusterrole/pgo-cluster-role --output=json 2> /dev/null || true )" - -if [ -n "$existing" ]; then - >&2 echo ERROR: Crunchy PostgreSQL Operator is already installed in another namespace - exit 1 -fi - -application_icon="$( base64 --wrap=0 /opt/postgres-operator/favicon.png )" -application_metadata="$( jq <<< '{}' --arg icon "$application_icon" '{ metadata: { - annotations: { "kubernetes-engine.cloud.google.com/icon": "data:image/png;base64,\($icon)" } -} }' )" - -kc patch "applications.app.k8s.io/$NAME" --type=merge --patch="$application_metadata" - -/usr/bin/ansible-playbook \ - --extra-vars 'kubernetes_in_cluster=true' \ - --extra-vars 'config_path=/etc/ansible/values.yaml' \ - --inventory /opt/postgres-operator/ansible/inventory.yaml \ - --tags=install /opt/postgres-operator/ansible/main.yml - -resources=( - clusterrole/pgo-cluster-role - clusterrolebinding/pgo-cluster-role - configmap/pgo-config - deployment/postgres-operator - role/pgo-role - rolebinding/pgo-role - secret/pgo.tls - secret/pgo-backrest-repo-config - secret/pgorole-pgoadmin - secret/pgouser-admin - service/postgres-operator - serviceaccount/postgres-operator -) - -for resource in "${resources[@]}"; do - kc patch 
"$resource" --type=strategic --patch="$application_ownership" -done diff --git a/installers/gcp-marketplace/schema.yaml b/installers/gcp-marketplace/schema.yaml deleted file mode 100644 index 6f0ec5320f..0000000000 --- a/installers/gcp-marketplace/schema.yaml +++ /dev/null @@ -1,128 +0,0 @@ -applicationApiVersion: v1beta1 -properties: - BACKUP_STORAGE_CAPACITY: - title: Backup Storage Capacity [GiB] - description: Default gigabytes allocated to new backup PVCs - type: integer - default: 1 - minimum: 1 - - INSTALLER_IMAGE: { type: string, x-google-marketplace: { type: DEPLOYER_IMAGE } } - - INSTALLER_SERVICE_ACCOUNT: # This key appears in the ClusterRoleBinding name. - title: Cluster Admin Service Account - description: >- - Name of a service account in the target namespace that has cluster-admin permissions. - This is used by the operator installer to create Custom Resource Definitions. - type: string - x-google-marketplace: - type: SERVICE_ACCOUNT - serviceAccount: - roles: - - type: ClusterRole - rulesType: PREDEFINED - rulesFromRoleName: cluster-admin - - OPERATOR_ADMIN_PASSWORD: - title: Operator admin password - type: string - pattern: .+ - x-google-marketplace: - type: MASKED_FIELD - - OPERATOR_IMAGE: - type: string - default: gcr.io/crunchydata-public/postgres-operator:${PGO_VERSION} - x-google-marketplace: { type: IMAGE } - - OPERATOR_IMAGE_API: - type: string - default: gcr.io/crunchydata-public/postgres-operator/pgo-apiserver:${PGO_VERSION} - x-google-marketplace: { type: IMAGE } - - OPERATOR_IMAGE_EVENT: - type: string - default: gcr.io/crunchydata-public/postgres-operator/pgo-event:${PGO_VERSION} - x-google-marketplace: { type: IMAGE } - - OPERATOR_IMAGE_SCHEDULER: - type: string - default: gcr.io/crunchydata-public/postgres-operator/pgo-scheduler:${PGO_VERSION} - x-google-marketplace: { type: IMAGE } - - OPERATOR_NAME: { type: string, x-google-marketplace: { type: NAME } } - OPERATOR_NAMESPACE: { type: string, x-google-marketplace: { type: NAMESPACE } } - - PGBACKREST_STORAGE_CAPACITY: - title: pgBackRest Storage Capacity [GiB] - description: Default gigabytes allocated to new pgBackRest repositories - type: integer - default: 2 - minimum: 2 - - POSTGRES_CPU: - title: PostgreSQL CPU [mCPU] - description: Default mCPU allocated to new PostgreSQL clusters (1000 equals one Core) - type: integer - default: 1000 - minimum: 100 - - POSTGRES_MEM: - title: PostgreSQL Memory [GiB] - description: Default gigabytes allocated to new PostgreSQL clusters - type: integer - default: 2 - minimum: 1 - - POSTGRES_METRICS: - title: Always collect PostgreSQL metrics - description: When disabled, collection can be enabled per PostgreSQL cluster - type: boolean - default: false - - POSTGRES_SERVICE_TYPE: - title: PostgreSQL service type - description: Default type of the Service that exposes new PostgreSQL clusters - type: string - enum: [ ClusterIP, LoadBalancer, NodePort ] - default: ClusterIP - - POSTGRES_STORAGE_CAPACITY: - title: PostgreSQL Storage Capacity [GiB] - description: Default gigabytes allocated to new PostgreSQL clusters - type: integer - default: 1 - minimum: 1 - -required: - - INSTALLER_IMAGE - - INSTALLER_SERVICE_ACCOUNT - - - OPERATOR_ADMIN_PASSWORD - - OPERATOR_IMAGE - - OPERATOR_IMAGE_API - - OPERATOR_IMAGE_EVENT - - OPERATOR_IMAGE_SCHEDULER - - OPERATOR_NAME - - OPERATOR_NAMESPACE - - - POSTGRES_SERVICE_TYPE - - POSTGRES_CPU - - POSTGRES_MEM - - POSTGRES_STORAGE_CAPACITY - - POSTGRES_METRICS - - - PGBACKREST_STORAGE_CAPACITY - - - BACKUP_STORAGE_CAPACITY - 
-x-google-marketplace: - clusterConstraints: - istio: { type: UNSUPPORTED } - -form: - - widget: help - description: |- - Only one instance of Crunchy PostgreSQL Operator is necessary per Kubernetes cluster. - - If you have further questions, contact us at info@crunchydata.com. diff --git a/installers/gcp-marketplace/test-pod.yaml b/installers/gcp-marketplace/test-pod.yaml deleted file mode 100644 index e05e926ddb..0000000000 --- a/installers/gcp-marketplace/test-pod.yaml +++ /dev/null @@ -1,37 +0,0 @@ -apiVersion: v1 -kind: Pod -metadata: - name: test-postgres-operator - labels: - app.kubernetes.io/name: '${OPERATOR_NAME}' - annotations: - marketplace.cloud.google.com/verification: test -spec: - dnsPolicy: ClusterFirst - restartPolicy: Never - containers: - - name: tester - image: '${INSTALLER_IMAGE}' - imagePullPolicy: Always - command: ['sh', '-ce'] - args: - - >- - wget --quiet --output-document=- - --no-check-certificate - --http-user="${PGOUSERNAME}" - --http-password="${PGOUSERPASS}" - --private-key="${PGO_CLIENT_KEY}" - --certificate="${PGO_CLIENT_CERT}" - --ca-certificate="${PGO_CA_CERT}" - "${PGO_APISERVER_URL}/version" - env: - - { name: PGO_APISERVER_URL, value: 'https://postgres-operator:8443' } - - { name: PGOUSERNAME, valueFrom: { secretKeyRef: { name: pgouser-admin, key: username } } } - - { name: PGOUSERPASS, valueFrom: { secretKeyRef: { name: pgouser-admin, key: password } } } - - { name: PGO_CA_CERT, value: '/etc/pgo/certificates/tls.crt' } - - { name: PGO_CLIENT_CERT, value: '/etc/pgo/certificates/tls.crt' } - - { name: PGO_CLIENT_KEY, value: '/etc/pgo/certificates/tls.key' } - volumeMounts: - - { mountPath: /etc/pgo/certificates, name: certificates } - volumes: - - { name: certificates, secret: { secretName: pgo.tls } } diff --git a/installers/gcp-marketplace/values.yaml b/installers/gcp-marketplace/values.yaml deleted file mode 100644 index cb0840b35b..0000000000 --- a/installers/gcp-marketplace/values.yaml +++ /dev/null @@ -1,72 +0,0 @@ ---- -pgo_image: '${OPERATOR_IMAGE}' -pgo_event_image: '${OPERATOR_IMAGE_EVENT}' -pgo_apiserver_image: '${OPERATOR_IMAGE_API}' -pgo_scheduler_image: '${OPERATOR_IMAGE_SCHEDULER}' - -archive_mode: "true" -archive_timeout: "60" -badger: "false" -ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata" -ccp_image_pull_secret: "" -ccp_image_pull_secret_manifest: "" -ccp_image_tag: "centos7-12.4-4.5.0" -create_rbac: "true" -db_name: "" -db_password_age_days: "0" -db_password_length: "24" -db_port: "5432" -db_replicas: "0" -db_user: "testuser" -default_instance_memory: "128Mi" -default_pgbackrest_memory: "48Mi" -default_pgbouncer_memory: "24Mi" -default_exporter_memory: "24Mi" -disable_auto_failover: "false" -exporterport: "9187" -metrics: '${POSTGRES_METRICS}' -pgbadgerport: "10000" -pgo_admin_password: '${OPERATOR_ADMIN_PASSWORD}' -pgo_admin_perms: "*" -pgo_admin_role_name: "pgoadmin" -pgo_admin_username: "admin" -pgo_client_container_install: "false" -pgo_client_install: 'false' -pgo_client_version: "4.5.0" -pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata" -pgo_image_tag: "centos7-4.5.0" -pgo_installation_name: '${OPERATOR_NAME}' -pgo_operator_namespace: '${OPERATOR_NAMESPACE}' -scheduler_timeout: "3600" -service_type: '${POSTGRES_SERVICE_TYPE}' -sync_replication: "false" - -backrest_storage: 'pgbackrest-default' -backup_storage: 'backup-default' -primary_storage: 'primary-default' -replica_storage: 'replica-default' -wal_storage: '' - -storage1_name: 'backup-default' -storage1_access_mode: 'ReadWriteOnce' 
-storage1_size: '${BACKUP_STORAGE_CAPACITY}Gi' -storage1_type: 'dynamic' -storage1_class: '' - -storage2_name: 'pgbackrest-default' -storage2_access_mode: 'ReadWriteOnce' -storage2_size: '${PGBACKREST_STORAGE_CAPACITY}Gi' -storage2_type: 'dynamic' -storage2_class: '' - -storage3_name: 'primary-default' -storage3_access_mode: 'ReadWriteOnce' -storage3_size: '${POSTGRES_STORAGE_CAPACITY}Gi' -storage3_type: 'dynamic' -storage3_class: '' - -storage4_name: 'replica-default' -storage4_access_mode: 'ReadWriteOnce' -storage4_size: '${POSTGRES_STORAGE_CAPACITY}Gi' -storage4_type: 'dynamic' -storage4_class: '' diff --git a/installers/helm/.helmignore b/installers/helm/.helmignore deleted file mode 100644 index a92ca61289..0000000000 --- a/installers/helm/.helmignore +++ /dev/null @@ -1 +0,0 @@ -helm_template.yaml diff --git a/installers/helm/Chart.yaml b/installers/helm/Chart.yaml deleted file mode 100644 index 6d7ffeaa30..0000000000 --- a/installers/helm/Chart.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v2 -name: postgres-operator -description: Crunchy PostgreSQL Operator Helm chart for Kubernetes -type: application -version: 0.1.0 -appVersion: 4.5.0 -home: https://github.com/CrunchyData/postgres-operator -icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png -keywords: - - PostgreSQL - - Operator - - Database - - Postgres - - SQL - - NoSQL - - RDBMS \ No newline at end of file diff --git a/installers/helm/README.md b/installers/helm/README.md deleted file mode 100644 index bde966f9f9..0000000000 --- a/installers/helm/README.md +++ /dev/null @@ -1,58 +0,0 @@ -# Crunchy PostgreSQL Operator - -This Helm chart installs the Crunchy PostgreSQL Operator by using its “pgo-deployer” -container. Helm will setup the ServiceAccount, RBAC, and ConfigMap needed to run -the container as a Kubernetes Job. Then a job will be created based on `helm` -`install`, `upgrade`, or `uninstall`. After the job has completed the RBAC will -be cleaned up. - -## Prerequisites - -- Helm v3 -- Kubernetes 1.14+ - -## Getting the chart - -Clone the `postgres-operator` repo: -``` -git clone https://github.com/CrunchyData/postgres-operator.git -``` - -## Installing - -``` -cd postgres-operator/installers/helm -helm install postgres-operator . -n pgo -``` - -## Upgrading - -``` -cd postgres-operator/installers/helm -helm upgrade postgres-operator . -n pgo -``` - -## Uninstalling - -``` -cd postgres-operator/installers/helm -helm uninstall postgres-operator -n pgo -``` - -## Configuraiton - -The following shows the configurable parameters that are relevant to the Helm -Chart. A full list of all Crunchy PostgreSQL Operator configuration options can -be found in the [documentation](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/configuration/). - -| Name | Default | Description | -| ---- | ------- | ----------- | -| fullnameOverride | "" | | -| rbac.create | true | If false RBAC will not be created. RBAC resources will need to be created manually and bound to `serviceAccount.name` | -| rbac.useClusterAdmin | false | If enabled the ServiceAccount will be given cluster-admin privileges. | -| serviceAccount.create | true | If false a ServiceAccount will not be created. A ServiceAccount must be created manually. | -| serviceAccount.name | "" | Use to override the default ServiceAccount name. If serviceAccount.create is false this ServiceAccount will be used. 
| - -{{% notice tip %}} -If installing into an OpenShift 3.11 or Kubernetes 1.11 cluster `rbac.useClusterAdmin` must be enabled. -{{% /notice %}} diff --git a/installers/helm/helm_template.yaml b/installers/helm/helm_template.yaml deleted file mode 100644 index 408861e463..0000000000 --- a/installers/helm/helm_template.yaml +++ /dev/null @@ -1,20 +0,0 @@ ---- -# ====================== -# Installer Controls -# ====================== -fullnameOverride: "" - -# rbac: settings for deployer RBAC creation -rbac: - # rbac.create: if false RBAC resources should be in place - create: true - # rbac.useClusterAdmin: creates a ClusterRoleBinding giving cluster-admin to serviceAccount.name - useClusterAdmin: false - -# serviceAccount: settings for Service Account used by the deployer -serviceAccount: - # serviceAccount.create: Whether to create a Service Account or not - create: true - # serviceAccount.name: The name of the Service Account to create or use - name: "" - diff --git a/installers/helm/templates/NOTES.txt b/installers/helm/templates/NOTES.txt deleted file mode 100644 index feee524f6b..0000000000 --- a/installers/helm/templates/NOTES.txt +++ /dev/null @@ -1,34 +0,0 @@ -Thank you for installing the Crunchy PostgreSQL Operator v{{ .Chart.AppVersion }}! - - (((((((((((((((((((((( - (((((((((((((%%%%%%%((((((((((((((( - (((((((((((%%% %%%%(((((((((((( - (((((((((((%%( (((( ( %%%((((((((((( - (((((((((((((%% (( ,(( %%%((((((((((( - (((((((((((((((%% *%%/ %%%%%%%(((((((((( - (((((((((((((((((((%%(( %%%%%%%%%%#(((((%%%%%%%%%%#(((((((((((( - ((((((((((((((((((%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%(((((((((((((( - *((((((((((((((((((((%%%%%% /%%%%%%%%%%%%%%%%%%%(((((((((((((((( - (((((((((((((((((((((((%%%/ .%, %%%((((((((((((((((((, - ((((((((((((((((((((((% %#((((((((((((((((( -(((((((((((((((%%%%%% #%((((((((((((((((( -((((((((((((((%% %%(((((((((((((((, -((((((((((((%%%#% % %%((((((((((((((( -((((((((((((%. % % #(((((((((((((( -(((((((((((%% % %%* %((((((((((((( -#(###(###(#%% %%% %% %%% #%%#(###(###(# -###########%%%%% /%%%%%%%%%%%%% %% %%%%% ,%%####### -###############%% %%%%%% %%% %%%%%%%% %%##### - ################%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %%## - ################%% %%%%%%%%%%%%%%%%% %%%% % - ##############%# %% (%%%%%%% %%%%%% - #############% %%%%% %%%%%%%%%%% - ###########% %%%%%%%%%%% %%%%%%%%% - #########%% %% %%%%%%%%%%%%%%%# - ########%% %% %%%%%%%%% - ######%% %% %%%%%% - ####%%% %%%%% % - %% %%%% - -More information on using the postgres-operator can be found in the docs: -https://access.crunchydata.com/documentation/postgres-operator/ diff --git a/installers/helm/templates/_deployer_job_spec.yaml b/installers/helm/templates/_deployer_job_spec.yaml deleted file mode 100644 index 9abc655d0d..0000000000 --- a/installers/helm/templates/_deployer_job_spec.yaml +++ /dev/null @@ -1,26 +0,0 @@ -{{- define "deployerJob.spec" }} -spec: - backoffLimit: 0 - template: - metadata: - name: pgo-deploy - labels: -{{ include "postgres-operator.labels" . | indent 8 }} - spec: - serviceAccountName: {{ include "postgres-operator.serviceAccountName" . }} - restartPolicy: Never - containers: - - name: pgo-deploy - image: {{ .Values.pgo_image_prefix }}/pgo-deployer:{{ .Values.pgo_image_tag }} - imagePullPolicy: IfNotPresent - env: - - name: DEPLOY_ACTION - value: "{{ .deployAction }}" - volumeMounts: - - name: deployer-conf - mountPath: "/conf" - volumes: - - name: deployer-conf - configMap: - name: {{ template "postgres-operator.fullname" . 
}}-cm -{{- end }} \ No newline at end of file diff --git a/installers/helm/templates/_helpers.tpl b/installers/helm/templates/_helpers.tpl deleted file mode 100644 index a1027ba8d7..0000000000 --- a/installers/helm/templates/_helpers.tpl +++ /dev/null @@ -1,97 +0,0 @@ -{{/* vim: set filetype=mustache: */}} -{{/* -Expand the name of the chart. -*/}} -{{- define "postgres-operator.name" -}} -{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} -{{- end }} - -{{/* -Create a default fully qualified app name. -We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). -If release name contains chart name it will be used as a full name. -*/}} -{{- define "postgres-operator.fullname" -}} -{{- if .Values.fullnameOverride }} -{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} -{{- else }} -{{- $name := default .Chart.Name .Values.nameOverride }} -{{- if contains $name .Release.Name }} -{{- .Release.Name | trunc 63 | trimSuffix "-" }} -{{- else }} -{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} -{{- end }} -{{- end }} -{{- end }} - -{{/* -Create chart name and version as used by the chart label. -*/}} -{{- define "postgres-operator.chart" -}} -{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} -{{- end }} - -{{/* -Common labels -*/}} -{{- define "postgres-operator.labels" -}} -helm.sh/chart: {{ include "postgres-operator.chart" . }} -{{ include "postgres-operator.selectorLabels" . }} -{{- if .Chart.AppVersion }} -app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} -{{- end }} -app.kubernetes.io/managed-by: {{ .Release.Service }} -meta.helm.sh/release-name: {{ .Release.Name }} -meta.helm.sh/release-namespace: {{ .Release.Namespace }} -{{- end }} - -{{/* -Selector labels -*/}} -{{- define "postgres-operator.selectorLabels" -}} -app.kubernetes.io/name: {{ include "postgres-operator.name" . }} -app.kubernetes.io/instance: {{ .Release.Name }} -{{- end }} - -{{/* -Create the name of the service account to use -*/}} -{{- define "postgres-operator.serviceAccountName" -}} -{{- if .Values.serviceAccount.create }} -{{- default "pgo-deployer-sa" .Values.serviceAccount.name }} -{{- else }} -{{- default "default" .Values.serviceAccount.name }} -{{- end }} -{{- end }} - -{{/* -Create the template for image pull secrets -*/}} -{{- define "postgres-operator.imagePullSecret" -}} -{{- if ne .Values.pgo_image_pull_secret "" }} -imagePullSecrets: -- name: "{{ .Values.pgo_image_pull_secret }}" -{{ end }} -{{ end }} - -{{/* -Create the template for clusterroleName based on values.yaml parameters -*/}} -{{- define "postgres-operator.clusterroleName" -}} -{{- if .Values.rbac.useClusterAdmin -}} -cluster-admin -{{- else -}} -{{ include "postgres-operator.fullname" . }}-cr -{{- end }} -{{- end }} - -{{/* -Generate Configmap based on Values defined in values.yaml -*/}} -{{- define "postgres-operator.values" -}} -values.yaml: | - --- -{{- range $index, $value := .Values }} -{{ $index | indent 2 }}: {{ $value | quote }} -{{- end }} -{{- end }} diff --git a/installers/helm/templates/postgres-operator-install.yaml b/installers/helm/templates/postgres-operator-install.yaml deleted file mode 100644 index 43b0604c3b..0000000000 --- a/installers/helm/templates/postgres-operator-install.yaml +++ /dev/null @@ -1,13 +0,0 @@ -{{ $_ := set . 
"deployAction" "install" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: pgo-deploy - namespace: {{ .Release.Namespace }} - labels: -{{ include "postgres-operator.labels" . | indent 4 }} - annotations: - helm.sh/hook: post-install - helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation -{{ template "deployerJob.spec" . }} diff --git a/installers/helm/templates/postgres-operator-uninstall.yaml b/installers/helm/templates/postgres-operator-uninstall.yaml deleted file mode 100644 index 0b7553b0e7..0000000000 --- a/installers/helm/templates/postgres-operator-uninstall.yaml +++ /dev/null @@ -1,13 +0,0 @@ -{{ $_ := set . "deployAction" "uninstall" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: pgo-deploy - namespace: {{ .Release.Namespace }} - labels: -{{ include "postgres-operator.labels" . | indent 4 }} - annotations: - helm.sh/hook: pre-delete - helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation -{{ template "deployerJob.spec" . }} diff --git a/installers/helm/templates/postgres-operator-upgrade.yaml b/installers/helm/templates/postgres-operator-upgrade.yaml deleted file mode 100644 index 4ba8954b14..0000000000 --- a/installers/helm/templates/postgres-operator-upgrade.yaml +++ /dev/null @@ -1,13 +0,0 @@ -{{ $_ := set . "deployAction" "update" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: pgo-deploy - namespace: {{ .Release.Namespace }} - labels: -{{ include "postgres-operator.labels" . | indent 4 }} - annotations: - helm.sh/hook: post-upgrade - helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation -{{ template "deployerJob.spec" . }} diff --git a/installers/helm/templates/rbac.yaml b/installers/helm/templates/rbac.yaml deleted file mode 100644 index dbef140471..0000000000 --- a/installers/helm/templates/rbac.yaml +++ /dev/null @@ -1,148 +0,0 @@ -{{ if .Values.serviceAccount.create }} ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: {{ include "postgres-operator.serviceAccountName" . }} - namespace: {{ .Release.Namespace }} - labels: -{{ include "postgres-operator.labels" . | indent 4 }} -{{ include "postgres-operator.imagePullSecret" . }} -{{ end }} -{{ if and .Values.rbac.create (not .Values.rbac.useClusterAdmin) }} ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: {{ template "postgres-operator.fullname" . }}-cr - labels: -{{ include "postgres-operator.labels" . 
| indent 4 }} - annotations: - helm.sh/hook: post-install,post-upgrade,pre-delete - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded - helm.sh/hook-weight: "-10" -rules: - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - create - - patch - - delete - - apiGroups: - - '' - resources: - - pods - verbs: - - list - - apiGroups: - - '' - resources: - - secrets - verbs: - - list - - get - - create - - delete - - apiGroups: - - '' - resources: - - configmaps - - services - - persistentvolumeclaims - verbs: - - get - - create - - delete - - list - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - delete - - patch - - list - - apiGroups: - - apps - - extensions - resources: - - deployments - verbs: - - get - - list - - watch - - create - - delete - - apiGroups: - - apiextensions.k8s.io - resources: - - customresourcedefinitions - verbs: - - get - - create - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - clusterroles - - clusterrolebindings - - roles - - rolebindings - verbs: - - get - - create - - delete - - bind - - escalate - - apiGroups: - - rbac.authorization.k8s.io - resources: - - roles - verbs: - - create - - delete - - apiGroups: - - batch - resources: - - jobs - verbs: - - delete - - list - - apiGroups: - - crunchydata.com - resources: - - pgclusters - - pgreplicas - - pgpolicies - - pgtasks - verbs: - - delete - - list -{{ end }} -{{ if .Values.rbac.create }} ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: {{ template "postgres-operator.fullname" . }}-crb - labels: -{{ include "postgres-operator.labels" . | indent 4 }} - annotations: - helm.sh/hook: post-install,post-upgrade,pre-delete - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded - helm.sh/hook-weight: "-10" -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: {{ include "postgres-operator.clusterroleName" . }} -subjects: -- kind: ServiceAccount - name: {{ include "postgres-operator.serviceAccountName" . }} - namespace: {{ .Release.Namespace }} -{{ end }} \ No newline at end of file diff --git a/installers/helm/templates/values_configmap.yaml b/installers/helm/templates/values_configmap.yaml deleted file mode 100644 index 15ab0b9606..0000000000 --- a/installers/helm/templates/values_configmap.yaml +++ /dev/null @@ -1,9 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: {{ template "postgres-operator.fullname" . }}-cm - namespace: {{ .Release.Namespace }} - labels: -{{ include "postgres-operator.labels" . | indent 4 }} -data: -{{ include "postgres-operator.values" . 
| indent 2}} diff --git a/installers/helm/values.yaml b/installers/helm/values.yaml deleted file mode 100644 index 649436e0af..0000000000 --- a/installers/helm/values.yaml +++ /dev/null @@ -1,140 +0,0 @@ ---- -# ====================== -# Installer Controls -# ====================== -fullnameOverride: "" - -# rbac: settings for deployer RBAC creation -rbac: - # rbac.create: if false RBAC resources should be in place - create: true - # rbac.useClusterAdmin: creates a ClusterRoleBinding giving cluster-admin to serviceAccount.name - useClusterAdmin: false - -# serviceAccount: settings for Service Account used by the deployer -serviceAccount: - # serviceAccount.create: Whether to create a Service Account or not - create: true - # serviceAccount.name: The name of the Service Account to create or use - name: "" - -# ===================== -# Configuration Options -# More info for these options can be found in the docs -# https://access.crunchydata.com/documentation/postgres-operator/latest/installation/configuration/ -# ===================== -archive_mode: "true" -archive_timeout: "60" -backrest_aws_s3_bucket: "" -backrest_aws_s3_endpoint: "" -backrest_aws_s3_key: "" -backrest_aws_s3_region: "" -backrest_aws_s3_secret: "" -backrest_aws_s3_uri_style: "" -backrest_aws_s3_verify_tls: "true" -backrest_port: "2022" -badger: "false" -ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata" -ccp_image_pull_secret: "" -ccp_image_pull_secret_manifest: "" -ccp_image_tag: "centos7-12.4-4.5.0" -create_rbac: "true" -crunchy_debug: "false" -db_name: "" -db_password_age_days: "0" -db_password_length: "24" -db_port: "5432" -db_replicas: "0" -db_user: "testuser" -default_instance_memory: "128Mi" -default_pgbackrest_memory: "48Mi" -default_pgbouncer_memory: "24Mi" -default_exporter_memory: "24Mi" -delete_operator_namespace: "false" -delete_watched_namespaces: "false" -disable_auto_failover: "false" -disable_fsgroup: "false" -reconcile_rbac: "true" -exporterport: "9187" -metrics: "false" -namespace: "pgo" -namespace_mode: "dynamic" -pgbadgerport: "10000" -pgo_add_os_ca_store: "false" -pgo_admin_password: "examplepassword" -pgo_admin_perms: "*" -pgo_admin_role_name: "pgoadmin" -pgo_admin_username: "admin" -pgo_apiserver_port: "8443" -pgo_apiserver_url: "https://postgres-operator" -pgo_client_cert_secret: "pgo.tls" -pgo_client_container_install: "false" -pgo_client_install: "true" -pgo_client_version: "4.5.0" -pgo_cluster_admin: "false" -pgo_disable_eventing: "false" -pgo_disable_tls: "false" -pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata" -pgo_image_pull_secret: "" -pgo_image_pull_secret_manifest: "" -pgo_image_tag: "centos7-4.5.0" -pgo_installation_name: "devtest" -pgo_noauth_routes: "" -pgo_operator_namespace: "pgo" -pgo_tls_ca_store: "" -pgo_tls_no_verify: "false" -pod_anti_affinity: "preferred" -pod_anti_affinity_pgbackrest: "" -pod_anti_affinity_pgbouncer: "" -scheduler_timeout: "3600" -service_type: "ClusterIP" -sync_replication: "false" -backrest_storage: "default" -backup_storage: "default" -primary_storage: "default" -replica_storage: "default" -wal_storage: "" -storage1_name: "default" -storage1_access_mode: "ReadWriteOnce" -storage1_size: "1G" -storage1_type: "dynamic" -storage2_name: "hostpathstorage" -storage2_access_mode: "ReadWriteMany" -storage2_size: "1G" -storage2_type: "create" -storage3_name: "nfsstorage" -storage3_access_mode: "ReadWriteMany" -storage3_size: "1G" -storage3_type: "create" -storage3_supplemental_groups: "65534" -storage4_name: "nfsstoragered" 
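
A usage note on the Helm chart being removed here: every key in its values.yaml (including the storage blocks above and below this point) can be overridden at install time with standard Helm flags. The command below is an illustration only; the release name, namespace, and chosen keys are examples taken from the chart's own defaults, not requirements.

```
cd postgres-operator/installers/helm
# Hypothetical install overriding a few values.yaml keys on the command line.
# --set-string keeps these values as strings, matching how values.yaml declares them.
helm install postgres-operator . -n pgo \
  --set rbac.useClusterAdmin=true \
  --set-string metrics=true \
  --set-string pgo_admin_password=examplepassword
```
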
-storage4_access_mode: "ReadWriteMany" -storage4_size: "1G" -storage4_match_labels: "crunchyzone=red" -storage4_type: "create" -storage4_supplemental_groups: "65534" -storage5_name: "storageos" -storage5_access_mode: "ReadWriteOnce" -storage5_size: "5Gi" -storage5_type: "dynamic" -storage5_class: "fast" -storage6_name: "primarysite" -storage6_access_mode: "ReadWriteOnce" -storage6_size: "4G" -storage6_type: "dynamic" -storage6_class: "primarysite" -storage7_name: "alternatesite" -storage7_access_mode: "ReadWriteOnce" -storage7_size: "4G" -storage7_type: "dynamic" -storage7_class: "alternatesite" -storage8_name: "gce" -storage8_access_mode: "ReadWriteOnce" -storage8_size: "300M" -storage8_type: "dynamic" -storage8_class: "standard" -storage9_name: "rook" -storage9_access_mode: "ReadWriteOnce" -storage9_size: "1Gi" -storage9_type: "dynamic" -storage9_class: "rook-ceph-block" diff --git a/installers/image/bin/pgo-deploy.sh b/installers/image/bin/pgo-deploy.sh deleted file mode 100755 index 9a965d58be..0000000000 --- a/installers/image/bin/pgo-deploy.sh +++ /dev/null @@ -1,21 +0,0 @@ -#!/bin/bash - -# Copyright 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# http://www.apache.org/licenses/LICENSE-2.0 -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -export DEPLOY_ACTION=${DEPLOY_ACTION:-install} - -/usr/bin/env ansible-playbook \ - -i "/ansible/${PLAYBOOK:-postgres-operator}/inventory.yaml" \ - --extra-vars "kubernetes_in_cluster=true" \ - --extra-vars "config_path=/conf/values.yaml" \ - --tags=$DEPLOY_ACTION \ - "/ansible/${PLAYBOOK:-postgres-operator}/main.yml" diff --git a/installers/image/conf/kubernetes.repo b/installers/image/conf/kubernetes.repo deleted file mode 100644 index 0a8b4cf2bf..0000000000 --- a/installers/image/conf/kubernetes.repo +++ /dev/null @@ -1,7 +0,0 @@ -[kubernetes] -name=Kubernetes -baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 -enabled=1 -gpgcheck=1 -repo_gpgcheck=1 -gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg \ No newline at end of file diff --git a/installers/kubectl/client-setup.sh b/installers/kubectl/client-setup.sh deleted file mode 100755 index 6956d63f6b..0000000000 --- a/installers/kubectl/client-setup.sh +++ /dev/null @@ -1,86 +0,0 @@ -#!/bin/bash - -# Copyright 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# http://www.apache.org/licenses/LICENSE-2.0 -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -# This script should be run after the operator has been deployed -PGO_OPERATOR_NAMESPACE="${PGO_OPERATOR_NAMESPACE:-pgo}" -PGO_USER_ADMIN="${PGO_USER_ADMIN:-pgouser-admin}" -PGO_CLIENT_VERSION="${PGO_CLIENT_VERSION:-v4.5.0}" -PGO_CLIENT_URL="https://github.com/CrunchyData/postgres-operator/releases/download/${PGO_CLIENT_VERSION}" - -PGO_CMD="${PGO_CMD-kubectl}" - -# Checks operating system and determines which binary to download -UNAME_RESULT=$(uname) -if [[ "${UNAME_RESULT}" == "Linux" ]] -then - BIN_NAME="pgo" -elif [[ "${UNAME_RESULT}" == "Darwin" ]] -then - BIN_NAME="pgo-mac" -else - echo "${UNAME_RESULT} is not supported, valid operating systems are: Linux, Darwin" - echo "Exiting..." - exit 1 -fi - -# Creates the output directory for files -OUTPUT_DIR="${HOME}/.pgo/${PGO_OPERATOR_NAMESPACE}" -install -d -m a-rwx,u+rwx "${OUTPUT_DIR}" - -if [ -f "${OUTPUT_DIR}/pgo" ] -then - echo "pgo Client Binary detected at: ${OUTPUT_DIR}" - echo "Updating Binary..." -fi - -echo "Operating System found is ${UNAME_RESULT}..." -echo "Downloading ${BIN_NAME} version: ${PGO_CLIENT_VERSION}..." -curl -Lo "${OUTPUT_DIR}/pgo" "${PGO_CLIENT_URL}/${BIN_NAME}" -chmod +x "${OUTPUT_DIR}/pgo" - - -# Check that the pgouser-admin secret exists -if [ -z "$($PGO_CMD get secret -n ${PGO_OPERATOR_NAMESPACE} ${PGO_USER_ADMIN})" ] -then - echo "${PGO_USER_ADMIN} Secret not found in namespace: ${PGO_OPERATOR_NAMESPACE}" - echo "Please ensure that the PostgreSQL Operator has been installed." - echo "Exiting..." - exit 1 -fi - -# Check that the pgo.tls secret exists -if [ -z "$($PGO_CMD get secret -n ${PGO_OPERATOR_NAMESPACE} pgo.tls)" ] -then - echo "pgo.tls Secret not found in namespace: ${PGO_OPERATOR_NAMESPACE}" - echo "Please ensure that the PostgreSQL Operator has been installed." - echo "Exiting..." 
- exit 1 -fi - -# Restrict access to the target file before writing -kubectl_get_private() { touch "$1" && chmod a-rwx,u+rw "$1" && $PGO_CMD get > "$1" "${@:2}"; } - -# Use the pgouser-admin secret to generate pgouser file -kubectl_get_private "${OUTPUT_DIR}/pgouser" secret -n "${PGO_OPERATOR_NAMESPACE}" "${PGO_USER_ADMIN}" \ - -o 'go-template={{ .data.username | base64decode }}:{{ .data.password | base64decode }}' - -# Use the pgo.tls secret to generate the client cert files -kubectl_get_private "${OUTPUT_DIR}/client.crt" secret -n "${PGO_OPERATOR_NAMESPACE}" pgo.tls -o 'go-template={{ index .data "tls.crt" | base64decode }}' -kubectl_get_private "${OUTPUT_DIR}/client.key" secret -n "${PGO_OPERATOR_NAMESPACE}" pgo.tls -o 'go-template={{ index .data "tls.key" | base64decode }}' - -echo "pgo client files have been generated, please add the following to your bashrc" -echo "export PATH=${OUTPUT_DIR}:\$PATH" -echo "export PGOUSER=${OUTPUT_DIR}/pgouser" -echo "export PGO_CA_CERT=${OUTPUT_DIR}/client.crt" -echo "export PGO_CLIENT_CERT=${OUTPUT_DIR}/client.crt" -echo "export PGO_CLIENT_KEY=${OUTPUT_DIR}/client.key" diff --git a/installers/kubectl/postgres-operator-ocp311.yml b/installers/kubectl/postgres-operator-ocp311.yml deleted file mode 100644 index 9978d052d3..0000000000 --- a/installers/kubectl/postgres-operator-ocp311.yml +++ /dev/null @@ -1,175 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: pgo-deployer-sa - namespace: pgo ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: pgo-deployer-crb - namespace: pgo -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin -subjects: - - kind: ServiceAccount - name: pgo-deployer-sa - namespace: pgo ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: pgo-deployer-cm - namespace: pgo -data: - values.yaml: |- - # ===================== - # Configuration Options - # More info for these options can be found in the docs - # https://access.crunchydata.com/documentation/postgres-operator/latest/installation/configuration/ - # ===================== - archive_mode: "true" - archive_timeout: "60" - backrest_aws_s3_bucket: "" - backrest_aws_s3_endpoint: "" - backrest_aws_s3_key: "" - backrest_aws_s3_region: "" - backrest_aws_s3_secret: "" - backrest_aws_s3_uri_style: "" - backrest_aws_s3_verify_tls: "true" - backrest_port: "2022" - badger: "false" - ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata" - ccp_image_pull_secret: "" - ccp_image_pull_secret_manifest: "" - ccp_image_tag: "centos7-12.4-4.5.0" - create_rbac: "true" - crunchy_debug: "false" - db_name: "" - db_password_age_days: "0" - db_password_length: "24" - db_port: "5432" - db_replicas: "0" - db_user: "testuser" - default_instance_memory: "128Mi" - default_pgbackrest_memory: "48Mi" - default_pgbouncer_memory: "24Mi" - default_exporter_memory: "24Mi" - delete_operator_namespace: "false" - delete_watched_namespaces: "false" - disable_auto_failover: "false" - disable_fsgroup: "false" - reconcile_rbac: "true" - exporterport: "9187" - metrics: "false" - namespace: "pgo" - namespace_mode: "dynamic" - pgbadgerport: "10000" - pgo_add_os_ca_store: "false" - pgo_admin_password: "examplepassword" - pgo_admin_perms: "*" - pgo_admin_role_name: "pgoadmin" - pgo_admin_username: "admin" - pgo_apiserver_port: "8443" - pgo_apiserver_url: "https://postgres-operator" - pgo_client_cert_secret: "pgo.tls" - pgo_client_container_install: "false" - pgo_client_install: "true" - pgo_client_version: "4.5.0" - 
pgo_cluster_admin: "false" - pgo_disable_eventing: "false" - pgo_disable_tls: "false" - pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata" - pgo_image_pull_secret: "" - pgo_image_pull_secret_manifest: "" - pgo_image_tag: "centos7-4.5.0" - pgo_installation_name: "devtest" - pgo_noauth_routes: "" - pgo_operator_namespace: "pgo" - pgo_tls_ca_store: "" - pgo_tls_no_verify: "false" - pod_anti_affinity: "preferred" - pod_anti_affinity_pgbackrest: "" - pod_anti_affinity_pgbouncer: "" - scheduler_timeout: "3600" - service_type: "ClusterIP" - sync_replication: "false" - backrest_storage: "default" - backup_storage: "default" - primary_storage: "default" - replica_storage: "default" - wal_storage: "" - storage1_name: "default" - storage1_access_mode: "ReadWriteOnce" - storage1_size: "1G" - storage1_type: "dynamic" - storage2_name: "hostpathstorage" - storage2_access_mode: "ReadWriteMany" - storage2_size: "1G" - storage2_type: "create" - storage3_name: "nfsstorage" - storage3_access_mode: "ReadWriteMany" - storage3_size: "1G" - storage3_type: "create" - storage3_supplemental_groups: "65534" - storage4_name: "nfsstoragered" - storage4_access_mode: "ReadWriteMany" - storage4_size: "1G" - storage4_match_labels: "crunchyzone=red" - storage4_type: "create" - storage4_supplemental_groups: "65534" - storage5_name: "storageos" - storage5_access_mode: "ReadWriteOnce" - storage5_size: "5Gi" - storage5_type: "dynamic" - storage5_class: "fast" - storage6_name: "primarysite" - storage6_access_mode: "ReadWriteOnce" - storage6_size: "4G" - storage6_type: "dynamic" - storage6_class: "primarysite" - storage7_name: "alternatesite" - storage7_access_mode: "ReadWriteOnce" - storage7_size: "4G" - storage7_type: "dynamic" - storage7_class: "alternatesite" - storage8_name: "gce" - storage8_access_mode: "ReadWriteOnce" - storage8_size: "300M" - storage8_type: "dynamic" - storage8_class: "standard" - storage9_name: "rook" - storage9_access_mode: "ReadWriteOnce" - storage9_size: "1Gi" - storage9_type: "dynamic" - storage9_class: "rook-ceph-block" ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: pgo-deploy - namespace: pgo -spec: - backoffLimit: 0 - template: - metadata: - name: pgo-deploy - spec: - serviceAccountName: pgo-deployer-sa - restartPolicy: Never - containers: - - name: pgo-deploy - image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.0 - imagePullPolicy: IfNotPresent - env: - - name: DEPLOY_ACTION - value: install - volumeMounts: - - name: deployer-conf - mountPath: "/conf" - volumes: - - name: deployer-conf - configMap: - name: pgo-deployer-cm diff --git a/installers/kubectl/postgres-operator.yml b/installers/kubectl/postgres-operator.yml deleted file mode 100644 index 2b516ef2ca..0000000000 --- a/installers/kubectl/postgres-operator.yml +++ /dev/null @@ -1,282 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: pgo-deployer-sa - namespace: pgo ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-deployer-cr -rules: - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - create - - patch - - delete - - apiGroups: - - '' - resources: - - pods - verbs: - - list - - apiGroups: - - '' - resources: - - secrets - verbs: - - list - - get - - create - - delete - - apiGroups: - - '' - resources: - - configmaps - - services - - persistentvolumeclaims - verbs: - - get - - create - - delete - - list - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - delete - - patch - 
- list - - apiGroups: - - apps - - extensions - resources: - - deployments - verbs: - - get - - list - - watch - - create - - delete - - apiGroups: - - apiextensions.k8s.io - resources: - - customresourcedefinitions - verbs: - - get - - create - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - clusterroles - - clusterrolebindings - - roles - - rolebindings - verbs: - - get - - create - - delete - - bind - - escalate - - apiGroups: - - rbac.authorization.k8s.io - resources: - - roles - verbs: - - create - - delete - - apiGroups: - - batch - resources: - - jobs - verbs: - - delete - - list - - apiGroups: - - crunchydata.com - resources: - - pgclusters - - pgreplicas - - pgpolicies - - pgtasks - verbs: - - delete - - list ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: pgo-deployer-cm - namespace: pgo -data: - values.yaml: |- - # ===================== - # Configuration Options - # More info for these options can be found in the docs - # https://access.crunchydata.com/documentation/postgres-operator/latest/installation/configuration/ - # ===================== - archive_mode: "true" - archive_timeout: "60" - backrest_aws_s3_bucket: "" - backrest_aws_s3_endpoint: "" - backrest_aws_s3_key: "" - backrest_aws_s3_region: "" - backrest_aws_s3_secret: "" - backrest_aws_s3_uri_style: "" - backrest_aws_s3_verify_tls: "true" - backrest_port: "2022" - badger: "false" - ccp_image_prefix: "registry.developers.crunchydata.com/crunchydata" - ccp_image_pull_secret: "" - ccp_image_pull_secret_manifest: "" - ccp_image_tag: "centos7-12.4-4.5.0" - create_rbac: "true" - crunchy_debug: "false" - db_name: "" - db_password_age_days: "0" - db_password_length: "24" - db_port: "5432" - db_replicas: "0" - db_user: "testuser" - default_instance_memory: "128Mi" - default_pgbackrest_memory: "48Mi" - default_pgbouncer_memory: "24Mi" - default_exporter_memory: "24Mi" - delete_operator_namespace: "false" - delete_watched_namespaces: "false" - disable_auto_failover: "false" - disable_fsgroup: "false" - reconcile_rbac: "true" - exporterport: "9187" - metrics: "false" - namespace: "pgo" - namespace_mode: "dynamic" - pgbadgerport: "10000" - pgo_add_os_ca_store: "false" - pgo_admin_password: "examplepassword" - pgo_admin_perms: "*" - pgo_admin_role_name: "pgoadmin" - pgo_admin_username: "admin" - pgo_apiserver_port: "8443" - pgo_apiserver_url: "https://postgres-operator" - pgo_client_cert_secret: "pgo.tls" - pgo_client_container_install: "false" - pgo_client_install: "true" - pgo_client_version: "4.5.0" - pgo_cluster_admin: "false" - pgo_disable_eventing: "false" - pgo_disable_tls: "false" - pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata" - pgo_image_pull_secret: "" - pgo_image_pull_secret_manifest: "" - pgo_image_tag: "centos7-4.5.0" - pgo_installation_name: "devtest" - pgo_noauth_routes: "" - pgo_operator_namespace: "pgo" - pgo_tls_ca_store: "" - pgo_tls_no_verify: "false" - pod_anti_affinity: "preferred" - pod_anti_affinity_pgbackrest: "" - pod_anti_affinity_pgbouncer: "" - scheduler_timeout: "3600" - service_type: "ClusterIP" - sync_replication: "false" - backrest_storage: "default" - backup_storage: "default" - primary_storage: "default" - replica_storage: "default" - wal_storage: "" - storage1_name: "default" - storage1_access_mode: "ReadWriteOnce" - storage1_size: "1G" - storage1_type: "dynamic" - storage2_name: "hostpathstorage" - storage2_access_mode: "ReadWriteMany" - storage2_size: "1G" - storage2_type: "create" - storage3_name: "nfsstorage" - storage3_access_mode: 
"ReadWriteMany" - storage3_size: "1G" - storage3_type: "create" - storage3_supplemental_groups: "65534" - storage4_name: "nfsstoragered" - storage4_access_mode: "ReadWriteMany" - storage4_size: "1G" - storage4_match_labels: "crunchyzone=red" - storage4_type: "create" - storage4_supplemental_groups: "65534" - storage5_name: "storageos" - storage5_access_mode: "ReadWriteOnce" - storage5_size: "5Gi" - storage5_type: "dynamic" - storage5_class: "fast" - storage6_name: "primarysite" - storage6_access_mode: "ReadWriteOnce" - storage6_size: "4G" - storage6_type: "dynamic" - storage6_class: "primarysite" - storage7_name: "alternatesite" - storage7_access_mode: "ReadWriteOnce" - storage7_size: "4G" - storage7_type: "dynamic" - storage7_class: "alternatesite" - storage8_name: "gce" - storage8_access_mode: "ReadWriteOnce" - storage8_size: "300M" - storage8_type: "dynamic" - storage8_class: "standard" - storage9_name: "rook" - storage9_access_mode: "ReadWriteOnce" - storage9_size: "1Gi" - storage9_type: "dynamic" - storage9_class: "rook-ceph-block" ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: pgo-deployer-crb -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: pgo-deployer-cr -subjects: - - kind: ServiceAccount - name: pgo-deployer-sa - namespace: pgo ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: pgo-deploy - namespace: pgo -spec: - backoffLimit: 0 - template: - metadata: - name: pgo-deploy - spec: - serviceAccountName: pgo-deployer-sa - restartPolicy: Never - containers: - - name: pgo-deploy - image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.0 - imagePullPolicy: IfNotPresent - env: - - name: DEPLOY_ACTION - value: install - volumeMounts: - - name: deployer-conf - mountPath: "/conf" - volumes: - - name: deployer-conf - configMap: - name: pgo-deployer-cm diff --git a/installers/metrics/ansible/README.md b/installers/metrics/ansible/README.md deleted file mode 100644 index 1c047d1a85..0000000000 --- a/installers/metrics/ansible/README.md +++ /dev/null @@ -1,15 +0,0 @@ -# Crunchy Data PostgreSQL Operator Monitoring Playbook - -

-[Crunchy Data logo]

- -Latest Release: 4.5.0 - -## General - -This repository contains Ansible Roles for deploying the metrics stack for the -Crunchy PostgreSQL Operator. - -See the [official Crunchy PostgreSQL Operator documentation](https://access.crunchydata.com/documentation/postgres-operator/) -for more information. diff --git a/installers/metrics/ansible/ansible.cfg b/installers/metrics/ansible/ansible.cfg deleted file mode 100644 index 670b29222b..0000000000 --- a/installers/metrics/ansible/ansible.cfg +++ /dev/null @@ -1,6 +0,0 @@ -[defaults] -retry_files_enabled = False -remote_tmp=/tmp - -[ssh_connection] -ssh_args = -o ControlMaster=no diff --git a/installers/metrics/ansible/inventory.yaml b/installers/metrics/ansible/inventory.yaml deleted file mode 100644 index 7cb421029a..0000000000 --- a/installers/metrics/ansible/inventory.yaml +++ /dev/null @@ -1,30 +0,0 @@ ---- - all: - hosts: - localhost: - vars: - ansible_connection: local - config_path: "{{ playbook_dir }}/values.yaml" - # ================== - # Installation Methods - # One of the following blocks must be updated: - # - Deploy into Kubernetes - # - Deploy into Openshift - - # Deploy into Kubernetes - # ================== - # Note: Context name can be found using: - # kubectl config current-context - # ================== - # kubernetes_context: '' - - # Deploy into Openshift - # ================== - # Note: openshift_host can use the format https://URL:PORT - # Note: openshift_token can be used for token authentication - # ================== - # openshift_host: '' - # openshift_skip_tls_verify: true - # openshift_user: '' - # openshift_password: '' - # openshift_token: '' diff --git a/installers/metrics/ansible/main.yml b/installers/metrics/ansible/main.yml deleted file mode 100644 index 3e00accbbc..0000000000 --- a/installers/metrics/ansible/main.yml +++ /dev/null @@ -1,7 +0,0 @@ ---- -- name: Deploy Crunchy PostgreSQL Operator Monitoring - hosts: all - gather_facts: true - roles: - - pgo-metrics-preflight - - pgo-metrics diff --git a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/check_kubernetes.yml b/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/check_kubernetes.yml deleted file mode 100644 index 9affc18ad7..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/check_kubernetes.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -- name: Check if the kubectl command is installed - shell: which kubectl - register: kubectl_result - ignore_errors: yes - tags: always - -- name: Ensure kubectl is installed - assert: - that: - - kubectl_result.rc == 0 - msg: "Install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/" - tags: always diff --git a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/check_openshift.yml b/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/check_openshift.yml deleted file mode 100644 index 1f76355dc6..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/check_openshift.yml +++ /dev/null @@ -1,38 +0,0 @@ ---- -- name: openshift_token should be defined - assert: - that: - - openshift_token != '' - msg: "Set the value of 'openshift_token' in the inventory file." - when: - - openshift_token is defined - tags: always - -- name: openshift_user should be defined - assert: - that: - - openshift_user is defined and openshift_user != '' - msg: "Set the value of 'openshift_user' in the inventory file." 
- when: openshift_token is not defined - tags: always - -- name: openshift_password should be defined - assert: - that: - - openshift_password is defined and openshift_password != '' - msg: "Set the value of 'openshift_password' in the inventory file." - when: openshift_token is not defined - tags: always - -- name: Check if the oc command is installed - shell: which oc - register: oc_result - ignore_errors: yes - tags: always - -- name: Ensure OpenShift CLI is installed - assert: - that: - - oc_result.rc == 0 - msg: "Install the OpenShift CLI (oc)" - tags: always diff --git a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/check_vars.yml b/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/check_vars.yml deleted file mode 100644 index 0308779fe1..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/check_vars.yml +++ /dev/null @@ -1,9 +0,0 @@ ---- -- name: Check if mandatory metrics variables are defined - fail: - msg: Please specify a value for variable {{ item }} in your values.yaml - tags: always - when: "lookup('vars', item, default='') == ''" - loop: - - metrics_namespace - - pgmonitor_version \ No newline at end of file diff --git a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/main.yml b/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/main.yml deleted file mode 100644 index d09cf160e4..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/main.yml +++ /dev/null @@ -1,78 +0,0 @@ ---- -- include_tasks: vars.yml - tags: always - -- fail: - msg: "Please specify the a tag: install-metrics, uninstall-metrics or update-metrics" - when: ansible_run_tags[0] == "all" - tags: always - -- assert: - msg: Alertmanager, Grafana, and Prometheus installs are disabled, nothing to install, update or uninstall - that: grafana_install | bool or prometheus_install | bool or alertmanager_install | bool - tags: - - install-metrics - - update-metrics - -- assert: - msg: Alertmanager can only be installed alongside of Prometheus, please enable prometheus_install - that: prometheus_install | bool - when: alertmanager_install | bool - tags: - - install-metrics - - update-metrics - -- assert: - msg: Please specify either OpenShift or Kubernetes variables in inventory - that: - - openshift_host | default('') != '' or - kubernetes_context | default('') != '' or - kubernetes_in_cluster | default(False) | bool - tags: always - -- assert: - msg: Only set one of kubernetes_context, kubernetes_in_cluster, or openshift_host - that: - - kubernetes_context | default('') == '' - - not (kubernetes_in_cluster | default(False) | bool) - when: openshift_host | default('') != '' - tags: always - -- assert: - msg: Only set one of kubernetes_context, kubernetes_in_cluster, or openshift_host - that: - - openshift_host | default('') == '' - - not (kubernetes_in_cluster | default(False) | bool) - when: kubernetes_context | default('') != '' - tags: always - -- assert: - msg: Only set one of kubernetes_context, kubernetes_in_cluster, or openshift_host - that: - - openshift_host | default('') == '' - - kubernetes_context | default('') == '' - when: kubernetes_in_cluster | default(False) | bool - tags: always - -- include_tasks: check_openshift.yml - when: openshift_host | default('') != '' - tags: always - -- include_tasks: check_kubernetes.yml - when: kubernetes_context | default('') != '' or kubernetes_in_cluster | default(False) | bool - tags: always - -- include_tasks: check_vars.yml - tags: always - -- include_tasks: 
"preflight-grafana.yml" - when: grafana_install | bool - tags: always - -- include_tasks: "preflight-prometheus.yml" - when: prometheus_install | bool - tags: always - -- include_tasks: "preflight-alertmanager.yml" - when: alertmanager_install | bool - tags: always \ No newline at end of file diff --git a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/preflight-alertmanager.yml b/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/preflight-alertmanager.yml deleted file mode 100644 index 89b08ec89a..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/preflight-alertmanager.yml +++ /dev/null @@ -1,18 +0,0 @@ ---- -- name: Check if inventory file variables are defined for Alertmanager - tags: always - fail: - msg: "Please specify the value of {{item}} in your inventory file" - when: lookup('vars', item, default='') == '' - loop: - - alertmanager_configmap - - alertmanager_rules_configmap - - alertmanager_image_name - - alertmanager_image_prefix - - alertmanager_image_tag - - alertmanager_log_level - - alertmanager_port - - alertmanager_service_name - - alertmanager_service_type - - alertmanager_storage_access_mode - - alertmanager_volume_size diff --git a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/preflight-grafana.yml b/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/preflight-grafana.yml deleted file mode 100644 index 3e19e23ae6..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/preflight-grafana.yml +++ /dev/null @@ -1,19 +0,0 @@ ---- -- name: Check if inventory file variables are defined for Grafana - tags: always - fail: - msg: "Please specify the value of {{item}} in your inventory file" - when: lookup('vars', item, default='') == '' - loop: - - grafana_admin_username - - grafana_admin_password - - grafana_dashboards_configmap - - grafana_datasources_configmap - - grafana_image_name - - grafana_image_prefix - - grafana_image_tag - - grafana_port - - grafana_service_name - - grafana_service_type - - grafana_storage_access_mode - - grafana_volume_size diff --git a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/preflight-prometheus.yml b/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/preflight-prometheus.yml deleted file mode 100644 index de5825dfa6..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/preflight-prometheus.yml +++ /dev/null @@ -1,17 +0,0 @@ ---- -- name: Check if inventory file variables are defined for Prometheus - fail: msg="Please specify the value of {{item}} in your inventory file" - tags: always - when: lookup('vars', item, default='') == '' - loop: - - db_port - - pgbadgerport - - prometheus_configmap - - prometheus_image_name - - prometheus_image_prefix - - prometheus_image_tag - - prometheus_port - - prometheus_service_name - - prometheus_service_type - - prometheus_storage_access_mode - - prometheus_volume_size diff --git a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/vars.yml b/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/vars.yml deleted file mode 100644 index c4db95762f..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics-preflight/tasks/vars.yml +++ /dev/null @@ -1,15 +0,0 @@ ---- -- name: Include values.yml - tags: always - block: - - name: Check for "{{ config_path }}" - stat: - path: "{{ config_path }}" - register: conf_path_result - - - fail: - msg: "Please provide a valid path to your values.yaml file. 
Expected path: {{ config_path }}" - when: - - not conf_path_result.stat.exists - - - include_vars: "{{ config_path }}" diff --git a/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml b/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml deleted file mode 100644 index 600d57d6bf..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/defaults/main.yml +++ /dev/null @@ -1,51 +0,0 @@ ---- -app_name: postgres-operator-monitoring - -kubernetes_context: "" -kubernetes_in_cluster: "false" -openshift_host: "" - -delete_metrics_namespace: "false" -metrics_namespace: "pgo" -metrics_image_pull_secret: "" -metrics_image_pull_secret_manifest: "" -pgmonitor_version: "v4.4-RC6" - -alertmanager_configmap: "alertmanager-config" -alertmanager_rules_configmap: "alertmanager-rules-config" -alertmanager_custom_config: "" -alertmanager_custom_rules_config: "" -alertmanager_install: "true" -alertmanager_image_prefix: "prom" -alertmanager_image_name: "alertmanager" -alertmanager_image_tag: "v0.21.0" -alertmanager_log_level: "info" -alertmanager_port: "9093" -alertmanager_service_name: "crunchy-alertmanager" -alertmanager_service_type: "ClusterIP" - -grafana_admin_username: "" -grafana_admin_password: "" -grafana_install: "true" -grafana_image_prefix: "grafana" -grafana_image_name: "grafana" -grafana_image_tag: "6.7.4" -grafana_port: "3000" -grafana_service_name: "crunchy-grafana" -grafana_service_type: "ClusterIP" -grafana_datasources_configmap: "grafana-datasources" -grafana_dashboards_configmap: "grafana-dashboards" -grafana_datasources_custom_config: "" -grafana_dashboards_custom_config: "" - -db_port: "5432" -pgbadgerport: "10000" -prometheus_configmap: "crunchy-prometheus" -prometheus_custom_config: "" -prometheus_install: "true" -prometheus_image_prefix: "prom" -prometheus_image_name: "prometheus" -prometheus_image_tag: "v2.20.0" -prometheus_port: "9090" -prometheus_service_name: "crunchy-prometheus" -prometheus_service_type: "ClusterIP" diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml deleted file mode 100644 index dc82e92d1a..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/alertmanager.yml +++ /dev/null @@ -1,101 +0,0 @@ ---- -- name: Deploy Alertmanager - tags: - - install-metrics - - update-metrics - block: - - name: Set Alertmanager Output Directory Fact - set_fact: - alertmanager_output_dir: "{{ metrics_dir }}/output/alertmanager" - - - name: Ensure Output Directory Exists - file: - path: "{{ alertmanager_output_dir }}" - state: "directory" - mode: "0700" - - - name: Set pgmonitor Prometheus Directory Fact - set_fact: - pgmonitor_prometheus_dir: "{{ metrics_dir }}/pgmonitor-{{ pgmonitor_version | replace('v','') }}/prometheus" - - - name: Copy Alertmanger Config to Output Directory - command: "cp {{ pgmonitor_prometheus_dir }}/{{ item.src }} {{ alertmanager_output_dir }}/{{ item.dst }}" - loop: - - { src: 'crunchy-alertmanager.yml', dst: 'alertmanager.yml'} - - { src: 'alert-rules.d/crunchy-alert-rules-pg.yml.containers.example', dst: 'crunchy-alert-rules-pg.yml'} - - - name: Create Alertmanager Config ConfigMap - shell: | - {{ kubectl_or_oc }} create configmap {{ alertmanager_configmap }} --dry-run --output=yaml \ \ - --from-file={{ alertmanager_output_dir }}/alertmanager.yml \ - | {{ kubectl_or_oc }} label --filename=- --local --dry-run --output=yaml \ - app.kubernetes.io/name={{ app_name }} \ - | {{ kubectl_or_oc }} create --filename=- -n {{ 
metrics_namespace }} - when: alertmanager_custom_config == "" - register: create_alertmanager_result - failed_when: - - create_alertmanager_result.rc != 0 - - "'AlreadyExists' not in create_alertmanager_result.stderr" - - - name: Set Alertmanager ConfigMap Name - set_fact: - alertmanager_configmap: "{{ alertmanager_custom_config }}" - when: alertmanager_custom_config != "" - - - name: Create Alertmanager Rules ConfigMap - shell: | - {{ kubectl_or_oc }} create configmap {{ alertmanager_rules_configmap }} --dry-run --output=yaml \ - --from-file={{ alertmanager_output_dir }}/crunchy-alert-rules-pg.yml \ - | {{ kubectl_or_oc }} label --filename=- --local --dry-run --output=yaml \ - app.kubernetes.io/name={{ app_name }} \ - | {{ kubectl_or_oc }} create --filename=- -n {{ metrics_namespace }} - when: alertmanager_custom_rules_config == "" - register: create_alertmanager_result - failed_when: - - create_alertmanager_result.rc != 0 - - "'AlreadyExists' not in create_alertmanager_result.stderr" - - - name: Set Alertmanager Rules ConfigMap Name - set_fact: - alertmanager_rules_configmap: "{{ alertmanager_custom_rules_config }}" - when: alertmanager_custom_rules_config != "" - - - name: Template Alertmanager RBAC - template: - src: "{{ item }}" - dest: "{{ alertmanager_output_dir }}/{{ item | replace('.j2', '') }}" - mode: "0600" - loop: - - alertmanager-rbac.json.j2 - when: create_rbac | bool - - - name: Create Alertmanager RBAC - command: "{{ kubectl_or_oc }} create -f {{ alertmanager_output_dir }}/{{ item }} -n {{ metrics_namespace }}" - loop: - - alertmanager-rbac.json - register: create_alertmanager_rbac_result - failed_when: - - create_alertmanager_rbac_result.rc != 0 - - "'AlreadyExists' not in create_alertmanager_rbac_result.stderr" - when: create_rbac | bool - - - name: Template Alertmanager PVC, Service & Deployment - template: - src: "{{ item }}" - dest: "{{ alertmanager_output_dir }}/{{ item | replace('.j2', '') }}" - mode: "0600" - loop: - - alertmanager-pvc.json.j2 - - alertmanager-service.json.j2 - - alertmanager-deployment.json.j2 - - - name: Create Alertmanager PVC, Service & Deployment - command: "{{ kubectl_or_oc }} create -f {{ alertmanager_output_dir }}/{{ item }} -n {{ metrics_namespace }}" - loop: - - alertmanager-pvc.json - - alertmanager-service.json - - alertmanager-deployment.json - register: create_alertmanager_deployment_result - failed_when: - - create_alertmanager_deployment_result.rc != 0 - - "'AlreadyExists' not in create_alertmanager_deployment_result.stderr" diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/cleanup.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/cleanup.yml deleted file mode 100644 index ad13400c22..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/cleanup.yml +++ /dev/null @@ -1,62 +0,0 @@ ---- -- name: Cleanup Metrics Resources - tags: - - update-metrics - - uninstall-metrics - block: - - name: Delete Deployments - command: "{{ kubectl_or_oc }} delete deployment {{ item }} -n {{ metrics_namespace }}" - ignore_errors: yes - loop: - - crunchy-alertmanager - - crunchy-prometheus - - crunchy-grafana - - - name: Delete Services - command: "{{ kubectl_or_oc }} delete service {{ item }} -n {{ metrics_namespace }}" - ignore_errors: yes - loop: - - "{{ alertmanager_service_name }}" - - "{{ prometheus_service_name }}" - - "{{ grafana_service_name }}" - - - name: Delete Prometheus Cluster Roles & Cluster Role Bindings - command: | - {{ kubectl_or_oc }} delete clusterrole,clusterrolebinding \ - {{ metrics_namespace 
}}-prometheus-sa -n {{ metrics_namespace }} - ignore_errors: yes - when: create_rbac|bool - - - name: Delete Service Accounts - command: "{{ kubectl_or_oc }} delete serviceaccount {{ item }} -n {{ metrics_namespace }}" - ignore_errors: yes - when: create_rbac | bool - loop: - - alertmanager - - grafana - - prometheus-sa - - - name: Delete Grafana Secret - command: "{{ kubectl_or_oc }} delete secret grafana-secret -n {{ metrics_namespace }}" - ignore_errors: yes - - - name: Delete ConfigMaps - command: "{{ kubectl_or_oc }} delete configmap {{ item }} -n {{ metrics_namespace }}" - ignore_errors: yes - loop: - - "{{ alertmanager_configmap }}" - - "{{ alertmanager_rules_configmap }}" - - "{{ grafana_dashboards_configmap }}" - - "{{ grafana_datasources_configmap }}" - - "{{ prometheus_configmap }}" - -- name: Cleanup Metrics Volumes - tags: uninstall-metrics - block: - - name: Delete PVCs - command: "{{ kubectl_or_oc }} delete pvc {{ item }} -n {{ metrics_namespace }}" - ignore_errors: yes - loop: - - alertmanagerdata - - grafanadata - - prometheusdata diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml deleted file mode 100644 index 1d528429b5..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/grafana.yml +++ /dev/null @@ -1,128 +0,0 @@ ---- -- name: Deploy Grafana - tags: - - install-metrics - - update-metrics - block: - - name: Set Grafana Output Directory Fact - set_fact: - grafana_output_dir: "{{ metrics_dir }}/output/grafana" - - - name: Ensure Output Directory Exists - file: - path: "{{ grafana_output_dir }}" - state: "directory" - mode: "0700" - - - name: Template Grafana RBAC - template: - src: "{{ item }}" - dest: "{{ grafana_output_dir }}/{{ item | replace('.j2', '') }}" - mode: "0600" - loop: - - grafana-rbac.json.j2 - when: create_rbac | bool - - - name: Create Grafana RBAC - command: "{{ kubectl_or_oc }} create -f {{ grafana_output_dir }}/{{ item }} -n {{ metrics_namespace }}" - loop: - - grafana-rbac.json - register: create_grafana_rbac_result - failed_when: - - create_grafana_rbac_result.rc != 0 - - "'AlreadyExists' not in create_grafana_rbac_result.stderr" - when: create_rbac | bool - - - name: Template Grafana Secret - template: - src: "grafana-secret.json.j2" - dest: "{{ grafana_output_dir }}/grafana-secret.json" - mode: "0600" - - - name: Create Grafana Secret - command: "{{ kubectl_or_oc }} create -f {{ grafana_output_dir }}/grafana-secret.json -n {{ metrics_namespace }}" - register: create_grafana_secret_result - failed_when: - - create_grafana_secret_result.rc != 0 - - "'AlreadyExists' not in create_grafana_secret_result.stderr" - - - name: Set pgmonitor Grafana Directory Fact - set_fact: - pgmonitor_grafana_dir: "{{ metrics_dir }}/pgmonitor-{{ pgmonitor_version | replace('v','') }}/grafana" - - - name: Copy Grafana Config to Output Directory - command: "cp {{ pgmonitor_grafana_dir }}/{{ item }} {{ grafana_output_dir }}" - loop: - - crunchy_grafana_datasource.yml - - crunchy_grafana_dashboards.yml - - - name: Add Grafana Dashboard Configuration - lineinfile: - path: "{{ grafana_output_dir }}/crunchy_grafana_dashboards.yml" - regexp: "^[ ]{4,}path:" - line: " path: $GF_PATHS_PROVISIONING/dashboards" - - - name: Add Grafana Datasource Configuration - lineinfile: - path: "{{ grafana_output_dir }}/crunchy_grafana_datasource.yml" - regexp: "^[ ]{2,}url:" - line: " url: http://$PROM_HOST:$PROM_PORT" - - - name: Create Grafana Datasource ConfigMap - shell: | - {{ 
kubectl_or_oc }} create configmap {{ grafana_datasources_configmap }} --dry-run --output=yaml \ - --from-file={{ grafana_output_dir }}/crunchy_grafana_datasource.yml \ - | {{ kubectl_or_oc }} label --filename=- --local --dry-run --output=yaml \ - app.kubernetes.io/name={{ app_name }} \ - | {{ kubectl_or_oc }} create --filename=- -n {{ metrics_namespace }} - when: grafana_datasources_custom_config == "" - register: create_grafana_datasources_result - failed_when: - - create_grafana_datasources_result.rc != 0 - - "'AlreadyExists' not in create_grafana_datasources_result.stderr" - - - name: Create Grafana Dashboard ConfigMap - shell: | - {{ kubectl_or_oc }} create configmap {{ grafana_dashboards_configmap }} --dry-run --output=yaml \ - --from-file={{ grafana_output_dir }}/crunchy_grafana_dashboards.yml \ - --from-file={{ pgmonitor_grafana_dir }}/containers/ \ - | sed -e 's,${DS_PROMETHEUS},PROMETHEUS,' \ - | {{ kubectl_or_oc }} label --filename=- --local --dry-run --output=yaml \ - app.kubernetes.io/name={{ app_name }} \ - | {{ kubectl_or_oc }} create --filename=- -n {{ metrics_namespace }} - when: grafana_dashboards_custom_config == "" - register: create_grafana_dashboards_result - failed_when: - - create_grafana_dashboards_result.rc != 0 - - "'AlreadyExists' not in create_grafana_dashboards_result.stderr" - - - name: Set Grafana Datasource ConfigMap Name - set_fact: - grafana_datasources_configmap: "{{ grafana_datasources_custom_config }}" - when: grafana_datasources_custom_config != "" - - - name: Set Grafana Dashboard ConfigMap Name - set_fact: - grafana_dashboards_configmap: "{{ grafana_dashboards_custom_config }}" - when: grafana_dashboards_custom_config != "" - - - name: Template Grafana PVC, Service & Deployment - template: - src: "{{ item }}" - dest: "{{ grafana_output_dir }}/{{ item | replace('.j2', '') }}" - mode: "0600" - loop: - - grafana-pvc.json.j2 - - grafana-service.json.j2 - - grafana-deployment.json.j2 - - - name: Create Grafana PVC, Service & Deployment - command: "{{ kubectl_or_oc }} create -f {{ grafana_output_dir }}/{{ item }} -n {{ metrics_namespace }}" - loop: - - grafana-pvc.json - - grafana-service.json - - grafana-deployment.json - register: create_grafana_deployment_result - failed_when: - - create_grafana_deployment_result.rc != 0 - - "'AlreadyExists' not in create_grafana_deployment_result.stderr" diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/kubernetes.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/kubernetes.yml deleted file mode 100644 index c35ccc2e20..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/kubernetes.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -- name: Create Namespace {{ metrics_namespace }} - command: "kubectl create namespace {{ metrics_namespace }}" - register: create_metrics_namespace_result - failed_when: - - create_metrics_namespace_result.rc != 0 - - "'AlreadyExists' not in create_metrics_namespace_result.stderr" - tags: install-metrics diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/kubernetes_auth.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/kubernetes_auth.yml deleted file mode 100644 index 882897ce6e..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/kubernetes_auth.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -- name: Set the Kubernetes Context - shell: "kubectl config use-context {{ kubernetes_context }}" - when: not (kubernetes_in_cluster | bool) - tags: always diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/kubernetes_cleanup.yml 
b/installers/metrics/ansible/roles/pgo-metrics/tasks/kubernetes_cleanup.yml deleted file mode 100644 index c7afa69e1e..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/kubernetes_cleanup.yml +++ /dev/null @@ -1,6 +0,0 @@ ---- -- name: Delete Metrics Namespace (Kubernetes) - command: "kubectl delete namespace {{ metrics_namespace }}" - when: delete_metrics_namespace | bool - ignore_errors: yes - tags: uninstall-metrics diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml deleted file mode 100644 index 425d3f8e1b..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/main.yml +++ /dev/null @@ -1,118 +0,0 @@ ---- -- name: Set Metrics Directory Fact - set_fact: - metrics_dir: "{{ ansible_env.HOME }}/.pgo/metrics/{{ metrics_namespace }}" - tags: always - -- name: Ensure Output Directory Exists - file: - path: "{{ metrics_dir }}" - state: directory - mode: 0700 - tags: always - -- include_tasks: "{{ tasks }}" - loop: - - openshift_auth.yml - - openshift.yml - loop_control: - loop_var: tasks - when: openshift_host != '' - tags: always - -- include_tasks: "{{ tasks }}" - loop: - - kubernetes_auth.yml - - kubernetes.yml - loop_control: - loop_var: tasks - when: kubernetes_context != '' or kubernetes_in_cluster | bool - tags: always - -- name: Use kubectl or oc - set_fact: - kubectl_or_oc: "{{ openshift_oc_bin if openshift_oc_bin is defined else 'kubectl' }}" - tags: always - -- include_tasks: cleanup.yml - tags: - - update-metrics - - uninstall-metrics - -- include_tasks: kubernetes_cleanup.yml - when: kubernetes_context != '' or kubernetes_in_cluster | bool - tags: - - uninstall-metrics - -- include_tasks: openshift_cleanup.yml - when: openshift_host != '' - tags: - - uninstall-metrics - -- name: Install Crunchy PostgreSQL Operator Monitoring - tags: - - install-metrics - - update-metrics - block: - - name: Download pgmonitor {{ pgmonitor_version }} - get_url: - url: https://github.com/CrunchyData/pgmonitor/archive/{{ pgmonitor_version }}.tar.gz - dest: "{{ metrics_dir }}" - mode: "0600" - - - name: Extract pgmonitor - unarchive: - src: "{{ metrics_dir }}/pgmonitor-{{ pgmonitor_version | replace('v','') }}.tar.gz" - dest: "{{ metrics_dir }}" - - - name: Create Metrics Image Pull Secret - shell: > - {{ kubectl_or_oc }} -n {{ metrics_namespace }} get secret/{{ metrics_image_pull_secret }} -o jsonpath='{""}' 2> /dev/null || - {{ kubectl_or_oc }} -n {{ metrics_namespace }} create -f {{ metrics_image_pull_secret_manifest }} - when: - - create_rbac | bool - - metrics_image_pull_secret_manifest != '' - - - block: - - include_tasks: alertmanager.yml - when: alertmanager_install | bool - # alertmanager tasks must be run before prometheus to ensure that a custom - # rules configmap can be used - - include_tasks: prometheus.yml - when: prometheus_install | bool - - - include_tasks: grafana.yml - when: grafana_install | bool - - - name: Check if Timeout Flag Supported - command: | - {{ kubectl_or_oc }} rollout status -n {{ metrics_namespace }} \ - --watch=false --timeout=1s --help - register: timeout_flag_result - ignore_errors: yes - - - name: Set Metrics Deployments Fact - set_fact: deployments="{{ deployments | default([]) + [ item.name ] }}" - when: item.deployed - loop: - - { deployed: "{{ alertmanager_install | bool }}", name: "crunchy-alertmanager" } - - { deployed: "{{ grafana_install | bool }}", name: "crunchy-grafana" } - - { deployed: "{{ prometheus_install | bool }}", name: 
"crunchy-prometheus" } - - - name: Wait for Metrics to Finish Deploying - command: | - {{ kubectl_or_oc }} rollout status deployment/{{ item }} -n {{ metrics_namespace }} \ - {{ '--timeout=600s' if not 'unknown flag: --timeout' in timeout_flag_result.stderr else '' }} - async: 610 # must be > or = to the rollout status timeout (600s) to ensure proper timeout behavior - poll: 0 - loop: "{{ deployments }}" - register: deployment_results - - - name: Check Metrics Deployment Status - async_status: - jid: "{{ item.ansible_job_id }}" - loop: "{{ deployment_results.results }}" - register: deployment_poll_results - until: deployment_poll_results.finished - retries: 60 - delay: 10 diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/openshift.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/openshift.yml deleted file mode 100644 index a2644ab66e..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/openshift.yml +++ /dev/null @@ -1,9 +0,0 @@ ---- -- name: Create Project {{ metrics_namespace }} - command: "{{ openshift_oc_bin}} new-project {{ metrics_namespace }}" - register: create_metrics_project_result - failed_when: - - create_metrics_project_result.rc != 0 - - "'AlreadyExists' not in create_metrics_project_result.stderr" - tags: install-metrics - diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/openshift_auth.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/openshift_auth.yml deleted file mode 100644 index 93a8fdb5ca..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/openshift_auth.yml +++ /dev/null @@ -1,25 +0,0 @@ ---- -- include_vars: openshift.yml - tags: always - -- name: Authenticate with OpenShift via user and password - command: | - {{ openshift_oc_bin }} login {{ openshift_host }} \ - -u {{ openshift_user }} \ - -p {{ openshift_password }} \ - --insecure-skip-tls-verify={{ openshift_skip_tls_verify | default(false) | bool }} - when: - - openshift_user is defined and openshift_user != '' - - openshift_password is defined and openshift_password != '' - - openshift_token is not defined - no_log: true - tags: always - -- name: Authenticate with OpenShift via token - command: | - {{ openshift_oc_bin }} login {{ openshift_host }} \ - --token {{ openshift_token }} \ - --insecure-skip-tls-verify={{ openshift_skip_tls_verify | default(false) | bool }} - when: openshift_token is defined and openshift_token != '' - no_log: true - tags: always diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/openshift_cleanup.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/openshift_cleanup.yml deleted file mode 100644 index bb586edf7f..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/openshift_cleanup.yml +++ /dev/null @@ -1,6 +0,0 @@ ---- -- name: Delete Metrics Namespace (Openshift) - command: "{{ openshift_oc_bin}} delete project {{ metrics_namespace }}" - when: delete_metrics_namespace | bool - ignore_errors: yes - tags: uninstall-metrics diff --git a/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml b/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml deleted file mode 100644 index ffcfa7c625..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/tasks/prometheus.yml +++ /dev/null @@ -1,105 +0,0 @@ ---- -- name: Deploy Prometheus - tags: - - install-metrics - - update-metrics - block: - - name: Set Prometheus Output Directory Fact - set_fact: - prom_output_dir: "{{ metrics_dir }}/output/prom" - - - name: Ensure Output Directory Exists - file: - path: "{{ 
prom_output_dir }}" - state: "directory" - mode: "0700" - - - name: Template Prometheus RBAC - template: - src: "{{ item }}" - dest: "{{ prom_output_dir }}/{{ item | replace('.j2', '') }}" - mode: "0600" - loop: - - prometheus-rbac.json.j2 - when: create_rbac | bool - - - name: Create Prometheus RBAC - command: "{{ kubectl_or_oc }} create -f {{ prom_output_dir }}/{{ item }} -n {{ metrics_namespace }}" - loop: - - prometheus-rbac.json - register: create_prometheus_rbac_result - failed_when: - - create_prometheus_rbac_result.rc != 0 - - "'AlreadyExists' not in create_prometheus_rbac_result.stderr" - when: create_rbac | bool - - - name: Set pgmonitor Prometheus Directory Fact - set_fact: - pgmonitor_prometheus_dir: "{{ metrics_dir }}/pgmonitor-{{ pgmonitor_version | replace('v','') }}/prometheus" - - - name: Copy Prometheus Config to Output Directory - command: "cp {{ pgmonitor_prometheus_dir }}/{{ item.src }} {{ prom_output_dir }}/{{ item.dst }}" - loop: - - { src: 'crunchy-prometheus.yml.containers', dst: 'prometheus.yml'} - - - name: Add Prometheus Port Configuration - lineinfile: - path: "{{ prom_output_dir }}/prometheus.yml" - regex: "{{ item.regex }}" - line: "{{ item.line }}" - loop: - - regex: "^[ ]{4,}regex: 5432" - line: " regex: {{ db_port }}" - - regex: "^[ ]{4,}regex: 10000" - line: " regex: {{ pgbadgerport }}" - - - name: Add Alerting Configuration - lineinfile: - path: "{{ prom_output_dir }}/prometheus.yml" - line: | - alerting: - alertmanagers: - - scheme: http - static_configs: - - targets: - - "{{ alertmanager_service_name }}:{{ alertmanager_port }}" - when: alertmanager_install | bool - - - name: Create Prometheus ConfigMap - shell: | - {{ kubectl_or_oc }} create configmap crunchy-prometheus --dry-run --output=yaml \ - --from-file={{ prom_output_dir }}/prometheus.yml \ - | {{ kubectl_or_oc }} label --filename=- --local --dry-run --output=yaml \ - app.kubernetes.io/name={{ app_name }} \ - | {{ kubectl_or_oc }} create --filename=- -n {{ metrics_namespace }} - when: prometheus_custom_config == "" - register: create_prometheus_datasources_result - failed_when: - - create_prometheus_datasources_result.rc != 0 - - "'AlreadyExists' not in create_prometheus_datasources_result.stderr" - - - name: Set Prometheus ConfigMap Name - set_fact: - prometheus_configmap: "{{ prometheus_custom_config }}" - when: prometheus_custom_config != "" - - - name: Template Prometheus PVC, Service & Deployment - template: - src: "{{ item }}" - dest: "{{ prom_output_dir }}/{{ item | replace('.j2', '') }}" - mode: "0600" - loop: - - prometheus-pvc.json.j2 - - prometheus-service.json.j2 - - prometheus-deployment.json.j2 - - - name: Create Prometheus PVC, Service & Deployment - command: "{{ kubectl_or_oc }} create -f {{ prom_output_dir }}/{{ item }} -n {{ metrics_namespace }}" - loop: - - prometheus-pvc.json - - prometheus-service.json - - prometheus-deployment.json - register: create_prometheus_deployment_result - failed_when: - - create_prometheus_deployment_result.rc != 0 - - "'AlreadyExists' not in create_prometheus_deployment_result.stderr" diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-deployment.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-deployment.json.j2 deleted file mode 100644 index bcae32cc41..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-deployment.json.j2 +++ /dev/null @@ -1,96 +0,0 @@ -{ - "apiVersion": "apps/v1", - "kind": "Deployment", - "metadata": { - "name": "crunchy-alertmanager", 
- "labels": { - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "replicas": 1, - "selector": { - "matchLabels": { - "name": "{{ alertmanager_service_name }}", - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "template": { - "metadata": { - "labels": { - "name": "{{ alertmanager_service_name }}", - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "securityContext": { -{% if (alertmanager_supplemental_groups | default('')) != '' %} - "supplementalGroups": [{{ alertmanager_supplemental_groups }}] -{% endif %} -{% if not (disable_fsgroup | default(false) | bool) %} - {% if (alertmanager_supplemental_groups | default('')) != '' %},{% endif -%} - "fsGroup": 26, - "runAsUser": 2 -{% endif %} - }, - "serviceAccountName": "alertmanager", - "containers": [ - { - "name": "alertmanager", - "image": "{{ alertmanager_image_prefix }}/{{ alertmanager_image_name }}:{{ alertmanager_image_tag }}", - "args": [ - "--config.file=/etc/alertmanager/alertmanager.yml", - "--storage.path=/alertmanager", - "--log.level={{ alertmanager_log_level }}" - ], - "ports": [ - { - "containerPort": {{ alertmanager_port }}, - "protocol": "TCP" - } - ], - "readinessProbe": { - "httpGet": { - "path": "/-/ready", - "port": {{ alertmanager_port }} - }, - "periodSeconds": 10 - }, - "livenessProbe": { - "httpGet": { - "path": "/-/healthy", - "port": {{ alertmanager_port }} - }, - "initialDelaySeconds": 25, - "periodSeconds": 20 - }, - "volumeMounts": [ - { - "mountPath": "/etc/alertmanager", - "name": "alertmanagerconf" - }, - { - "mountPath": "/alertmanager", - "name": "alertmanagerdata" - } - ] - } - ], - "volumes": [ - { - "name": "alertmanagerdata", - "persistentVolumeClaim": { - "claimName": "alertmanagerdata" - } - }, - { - "name": "alertmanagerconf", - "configMap": { - "name": "{{ alertmanager_configmap }}" - } - } - ] - } - } - } -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-pvc.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-pvc.json.j2 deleted file mode 100644 index 996f8264b1..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-pvc.json.j2 +++ /dev/null @@ -1,23 +0,0 @@ -{ - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "alertmanagerdata", - "labels": { - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "accessModes": [ - "{{ alertmanager_storage_access_mode }}" - ], -{% if alertmanager_storage_class_name is defined and alertmanager_storage_class_name != '' %} - "storageClassName": "{{ alertmanager_storage_class_name }}", -{% endif %} - "resources": { - "requests": { - "storage": "{{ alertmanager_volume_size }}" - } - } - } -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-rbac.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-rbac.json.j2 deleted file mode 100644 index ac44c26b02..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-rbac.json.j2 +++ /dev/null @@ -1,16 +0,0 @@ -{ - "apiVersion": "v1", - "kind": "ServiceAccount", - "metadata": { - "name": "alertmanager", - "labels": { - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "automountServiceAccountToken": false, - "imagePullSecrets": [ -{% if metrics_image_pull_secret %} - { "name": "{{ metrics_image_pull_secret }}" } -{% endif %} - ] -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-service.json.j2 
b/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-service.json.j2 deleted file mode 100644 index fe0bc53bf9..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/alertmanager-service.json.j2 +++ /dev/null @@ -1,26 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "{{ alertmanager_service_name }}", - "labels": { - "name": "{{ alertmanager_service_name }}", - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "ports": [ - { - "name": "alertmanager", - "protocol": "TCP", - "port": {{ alertmanager_port }}, - "targetPort": {{ alertmanager_port }} - } - ], - "selector": { - "name": "{{ alertmanager_service_name }}" - }, - "type": "{{ alertmanager_service_type }}", - "sessionAffinity": "None" - } -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-deployment.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-deployment.json.j2 deleted file mode 100644 index f3815541d6..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-deployment.json.j2 +++ /dev/null @@ -1,133 +0,0 @@ -{ - "apiVersion": "apps/v1", - "kind": "Deployment", - "metadata": { - "name": "crunchy-grafana", - "labels": { - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "replicas": 1, - "selector": { - "matchLabels": { - "name": "{{ grafana_service_name }}", - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "template": { - "metadata": { - "labels": { - "name": "{{ grafana_service_name }}", - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "securityContext": { -{% if (grafana_supplemental_groups | default('')) != '' %} - "supplementalGroups": [{{ grafana_supplemental_groups }}] -{% endif %} -{% if not (disable_fsgroup | default(false) | bool) %} - {% if (grafana_supplemental_groups | default('')) != '' %},{% endif -%} - "fsGroup": 26, - "runAsUser": 2 -{% endif %} - }, - "serviceAccountName": "grafana", - "containers": [ - { - "name": "grafana", - "image": "{{ grafana_image_prefix }}/{{ grafana_image_name }}:{{ grafana_image_tag }}", - "ports": [ - { - "containerPort": {{ grafana_port }}, - "protocol": "TCP" - } - ], - "readinessProbe": { - "httpGet": { - "path": "/api/health", - "port": {{ grafana_port }} - }, - "periodSeconds": 10 - }, - "livenessProbe": { - "httpGet": { - "path": "/api/health", - "port": {{ grafana_port }} - }, - "initialDelaySeconds": 25, - "periodSeconds": 20 - }, - "env": [ - { - "name": "GF_PATHS_DATA", - "value": "/data/grafana/data" - }, - { - "name": "GF_SECURITY_ADMIN_USER__FILE", - "value": "/conf/admin/username" - }, - { - "name": "GF_SECURITY_ADMIN_PASSWORD__FILE", - "value": "/conf/admin/password" - }, - { - "name": "PROM_HOST", - "value": "{{ prometheus_service_name }}" - }, - { - "name": "PROM_PORT", - "value": "{{ prometheus_port }}" - } - ], - "volumeMounts": [ - { - "mountPath": "/data", - "name": "grafanadata" - }, - { - "mountPath": "/conf/admin", - "name": "grafana-secret" - }, - { - "mountPath": "/etc/grafana/provisioning/datasources", - "name": "grafana-datasources" - }, - { - "mountPath": "/etc/grafana/provisioning/dashboards", - "name": "grafana-dashboards" - } - ] - } - ], - "volumes": [ - { - "name": "grafanadata", - "persistentVolumeClaim": { - "claimName": "grafanadata" - } - }, - { - "name": "grafana-secret", - "secret": { - "secretName": "grafana-secret" - } - }, - { - "name": "grafana-datasources", - "configMap": { - "name": "{{ grafana_datasources_configmap }}" - } - }, - { - "name": 
"grafana-dashboards", - "configMap": { - "name": "{{ grafana_dashboards_configmap }}" - } - } - ] - } - } - } -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-pvc.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-pvc.json.j2 deleted file mode 100644 index 2f268cc718..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-pvc.json.j2 +++ /dev/null @@ -1,23 +0,0 @@ -{ - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "grafanadata", - "labels": { - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "accessModes": [ - "{{ grafana_storage_access_mode }}" - ], -{% if grafana_storage_class_name is defined and grafana_storage_class_name != '' %} - "storageClassName": "{{ grafana_storage_class_name }}", -{% endif %} - "resources": { - "requests": { - "storage": "{{ grafana_volume_size }}" - } - } - } -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-rbac.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-rbac.json.j2 deleted file mode 100644 index a31a2e508f..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-rbac.json.j2 +++ /dev/null @@ -1,16 +0,0 @@ -{ - "apiVersion": "v1", - "kind": "ServiceAccount", - "metadata": { - "name": "grafana", - "labels": { - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "automountServiceAccountToken": false, - "imagePullSecrets": [ -{% if metrics_image_pull_secret %} - { "name": "{{ metrics_image_pull_secret }}" } -{% endif %} - ] -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-secret.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-secret.json.j2 deleted file mode 100644 index c2b981dc82..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-secret.json.j2 +++ /dev/null @@ -1,15 +0,0 @@ -{ - "apiVersion": "v1", - "kind": "Secret", - "metadata": { - "name": "grafana-secret", - "labels": { - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "type": "Opaque", - "stringData": { - "username": "{{ grafana_admin_username }}", - "password": "{{ grafana_admin_password }}" - } -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-service.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-service.json.j2 deleted file mode 100644 index baacea73f5..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/grafana-service.json.j2 +++ /dev/null @@ -1,26 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "{{ grafana_service_name }}", - "labels": { - "name": "{{ grafana_service_name }}", - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "ports": [ - { - "name": "grafana", - "protocol": "TCP", - "port": {{ grafana_port }}, - "targetPort": {{ grafana_port }} - } - ], - "selector": { - "name": "{{ grafana_service_name }}" - }, - "type": "{{ grafana_service_type }}", - "sessionAffinity": "None" - } -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-deployment.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-deployment.json.j2 deleted file mode 100644 index 64980cb2b5..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-deployment.json.j2 +++ /dev/null @@ -1,106 +0,0 @@ -{ - "apiVersion": "apps/v1", - "kind": "Deployment", - "metadata": { - "name": "crunchy-prometheus", - "labels": { - 
"app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "replicas": 1, - "selector": { - "matchLabels": { - "name": "{{ prometheus_service_name }}", - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "template": { - "metadata": { - "labels": { - "name": "{{ prometheus_service_name }}", - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "securityContext": { -{% if (prometheus_supplemental_groups | default('')) != '' %} - "supplementalGroups": [{{ prometheus_supplemental_groups }}] -{% endif %} -{% if not (disable_fsgroup | default(false) | bool) %} - {% if (prometheus_supplemental_groups | default('')) != '' %},{% endif -%} - "fsGroup": 26, - "runAsUser": 2 -{% endif %} - }, - "serviceAccountName": "prometheus-sa", - "containers": [ - { - "name": "prometheus", - "image": "{{ prometheus_image_prefix }}/{{ prometheus_image_name }}:{{ prometheus_image_tag }}", - "ports": [ - { - "containerPort": {{ prometheus_port }}, - "protocol": "TCP" - } - ], - "readinessProbe": { - "httpGet": { - "path": "/-/ready", - "port": {{ prometheus_port }} - }, - "periodSeconds": 10 - }, - "livenessProbe": { - "httpGet": { - "path": "/-/healthy", - "port": {{ prometheus_port }} - }, - "initialDelaySeconds": 15, - "periodSeconds": 20 - }, - "env": [], - "volumeMounts": [ - { - "mountPath": "/etc/prometheus", - "name": "prometheusconf" - }, - { - "mountPath": "/prometheus", - "name": "prometheusdata" -{% if alertmanager_install | bool %} - }, - { - "mountPath": "/etc/prometheus/alert-rules.d", - "name": "alertmanagerrules" -{% endif %} - } - ] - } - ], - "volumes": [ - { - "name": "prometheusconf", - "configMap": { - "name": "{{ prometheus_configmap }}" - } - }, - { - "name": "prometheusdata", - "persistentVolumeClaim": { - "claimName": "prometheusdata" - } -{% if alertmanager_install | bool %} - }, - { - "name": "alertmanagerrules", - "configMap": { - "name": "{{ alertmanager_rules_configmap }}" - } -{% endif %} - } - ] - } - } - } -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-pvc.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-pvc.json.j2 deleted file mode 100644 index 435e1814b3..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-pvc.json.j2 +++ /dev/null @@ -1,23 +0,0 @@ -{ - "kind": "PersistentVolumeClaim", - "apiVersion": "v1", - "metadata": { - "name": "prometheusdata", - "labels": { - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "accessModes": [ - "{{ prometheus_storage_access_mode }}" - ], -{% if prometheus_storage_class_name is defined and prometheus_storage_class_name != '' %} - "storageClassName": "{{ prometheus_storage_class_name }}", -{% endif %} - "resources": { - "requests": { - "storage": "{{ prometheus_volume_size }}" - } - } - } -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-rbac.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-rbac.json.j2 deleted file mode 100644 index 60260597b2..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-rbac.json.j2 +++ /dev/null @@ -1,61 +0,0 @@ -{ - "apiVersion": "rbac.authorization.k8s.io/v1", - "kind": "ClusterRole", - "metadata": { - "name": "{{ metrics_namespace }}-prometheus-sa", - "labels": { - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "rules": [ - { - "apiGroups": [ - "" - ], - "resources": [ - "pods" - ], - "verbs": [ - "get", - "list", - "watch" - ] - } - ] -} - -{ - "apiVersion": "v1", - "kind": 
"ServiceAccount", - "metadata": { - "name": "prometheus-sa", - "labels": { - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "imagePullSecrets": [ -{% if metrics_image_pull_secret %} - { "name": "{{ metrics_image_pull_secret }}" } -{% endif %} - ] -} - -{ - "apiVersion": "rbac.authorization.k8s.io/v1", - "kind": "ClusterRoleBinding", - "metadata": { - "name": "{{ metrics_namespace }}-prometheus-sa" - }, - "roleRef": { - "apiGroup": "rbac.authorization.k8s.io", - "kind": "ClusterRole", - "name": "{{ metrics_namespace }}-prometheus-sa" - }, - "subjects": [ - { - "kind": "ServiceAccount", - "name": "prometheus-sa", - "namespace": "{{ metrics_namespace }}" - } - ] -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-service.json.j2 b/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-service.json.j2 deleted file mode 100644 index 46cee2b580..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/templates/prometheus-service.json.j2 +++ /dev/null @@ -1,26 +0,0 @@ -{ - "kind": "Service", - "apiVersion": "v1", - "metadata": { - "name": "{{ prometheus_service_name }}", - "labels": { - "name": "{{ prometheus_service_name }}", - "app.kubernetes.io/name": "{{ app_name }}" - } - }, - "spec": { - "ports": [ - { - "name": "prometheus", - "protocol": "TCP", - "port": {{ prometheus_port }}, - "targetPort": {{ prometheus_port }} - } - ], - "selector": { - "name": "{{ prometheus_service_name }}" - }, - "type": "{{ prometheus_service_type }}", - "sessionAffinity": "None" - } -} diff --git a/installers/metrics/ansible/roles/pgo-metrics/vars/openshift.yml b/installers/metrics/ansible/roles/pgo-metrics/vars/openshift.yml deleted file mode 100644 index 57b50dd2c3..0000000000 --- a/installers/metrics/ansible/roles/pgo-metrics/vars/openshift.yml +++ /dev/null @@ -1,2 +0,0 @@ ---- -openshift_oc_bin: "oc" diff --git a/installers/metrics/ansible/values.yaml b/installers/metrics/ansible/values.yaml deleted file mode 100644 index 476c113240..0000000000 --- a/installers/metrics/ansible/values.yaml +++ /dev/null @@ -1,45 +0,0 @@ -# ===================== -# Configuration Options -# More info for these options can be found in the docs -# https://access.crunchydata.com/documentation/postgres-operator/latest/installation/metrics/metrics-configuration/ -# ===================== -alertmanager_custom_config: "" -alertmanager_custom_rules_config: "" -alertmanager_install: "true" -alertmanager_log_level: "info" -alertmanager_port: "9093" -alertmanager_service_name: "crunchy-alertmanager" -alertmanager_service_type: "ClusterIP" -alertmanager_storage_access_mode: "ReadWriteOnce" -alertmanager_storage_class_name: "" -alertmanager_supplemental_groups: "" -alertmanager_volume_size: "1Gi" -create_rbac: "true" -db_port: "5432" -delete_metrics_namespace: "false" -disable_fsgroup: "false" -grafana_admin_password: "admin" -grafana_admin_username: "admin" -grafana_dashboards_custom_config: "" -grafana_datasources_custom_config: "" -grafana_install: "true" -grafana_port: "3000" -grafana_service_name: "crunchy-grafana" -grafana_service_type: "ClusterIP" -grafana_storage_access_mode: "ReadWriteOnce" -grafana_storage_class_name: "" -grafana_supplemental_groups: "" -grafana_volume_size: "1Gi" -metrics_image_pull_secret: "" -metrics_image_pull_secret_manifest: "" -metrics_namespace: "pgo" -pgbadgerport: "10000" -prometheus_custom_config: "" -prometheus_install: "true" -prometheus_port: "9090" -prometheus_service_name: "crunchy-prometheus" -prometheus_service_type: "ClusterIP" 
-prometheus_storage_access_mode: "ReadWriteOnce" -prometheus_storage_class_name: "" -prometheus_supplemental_groups: "" -prometheus_volume_size: "1Gi" diff --git a/installers/metrics/helm/Chart.yaml b/installers/metrics/helm/Chart.yaml deleted file mode 100644 index 603cab3982..0000000000 --- a/installers/metrics/helm/Chart.yaml +++ /dev/null @@ -1,8 +0,0 @@ -apiVersion: v2 -name: postgres-operator-monitoring -description: Install for Crunchy PostgreSQL Operator Monitoring -type: application -version: 0.1.0 -appVersion: 4.5.0 -home: https://github.com/CrunchyData/postgres-operator -icon: https://github.com/CrunchyData/postgres-operator/raw/master/crunchy_logo.png \ No newline at end of file diff --git a/installers/metrics/helm/README.md b/installers/metrics/helm/README.md deleted file mode 100644 index 026d35f223..0000000000 --- a/installers/metrics/helm/README.md +++ /dev/null @@ -1,51 +0,0 @@ -# Installing the Monitoring Infrastructure - -This Helm chart installs the metrics deployment for Crunchy PostgreSQL Operator -by using its “pgo-deployer” container. Helm will set up the ServiceAccount, RBAC, -and ConfigMap needed to run the container as a Kubernetes Job. Then a job will -be created, based on `helm install` or `helm uninstall`, to install or uninstall -metrics. After the job has completed, the RBAC will be cleaned up. - -## Prerequisites - -- Helm v3 -- Kubernetes 1.14+ - -## Getting the chart - -Clone the `postgres-operator` repo: -``` -git clone https://github.com/CrunchyData/postgres-operator.git -``` - -## Installing - -``` -cd postgres-operator/installers/metrics/helm -helm install metrics . -n pgo -``` - -## Uninstalling - -``` -cd postgres-operator/installers/metrics/helm -helm uninstall metrics -n pgo -``` - -## Configuration - -The following shows the configurable parameters that are relevant to the Helm -Chart. A full list of all Crunchy PostgreSQL Operator configuration options can -be found in the [documentation](https://access.crunchydata.com/documentation/postgres-operator/latest/installation/configuration/). - -| Name | Default | Description | -| ---- | ------- | ----------- | -| fullnameOverride | "" | | -| rbac.create | true | If false, RBAC will not be created. RBAC resources will need to be created manually and bound to `serviceAccount.name` | -| rbac.useClusterAdmin | false | If enabled, the ServiceAccount will be given cluster-admin privileges. | -| serviceAccount.create | true | If false, a ServiceAccount will not be created. A ServiceAccount must be created manually. | -| serviceAccount.name | "" | Use to override the default ServiceAccount name. If serviceAccount.create is false, this ServiceAccount will be used. | - -{{% notice tip %}} -If installing into an OpenShift 3.11 or Kubernetes 1.11 cluster, `rbac.useClusterAdmin` must be enabled. 
-{{% /notice %}} diff --git a/installers/metrics/helm/helm_template.yaml b/installers/metrics/helm/helm_template.yaml deleted file mode 100644 index d5e346dbc7..0000000000 --- a/installers/metrics/helm/helm_template.yaml +++ /dev/null @@ -1,24 +0,0 @@ ---- -# ====================== -# Installer Controls -# ====================== -fullnameOverride: "" - -# rbac: settings for deployer RBAC creation -rbac: - # rbac.create: if false RBAC resources should be in place - create: true - # rbac.useClusterAdmin: creates a ClusterRoleBinding giving cluster-admin to serviceAccount.name - useClusterAdmin: false - -# serviceAccount: settings for Service Account used by the deployer -serviceAccount: - # serviceAccount.create: Whether to create a Service Account or not - create: true - # serviceAccount.name: The name of the Service Account to create or use - name: "" - -# the image prefix and tag to use for the 'pgo-deployer' container -pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata" -pgo_image_tag: "centos7-4.5.0" - diff --git a/installers/metrics/helm/templates/NOTES.txt b/installers/metrics/helm/templates/NOTES.txt deleted file mode 100644 index f7827d3b46..0000000000 --- a/installers/metrics/helm/templates/NOTES.txt +++ /dev/null @@ -1,34 +0,0 @@ -Thank you for installing Crunchy PostgreSQL Operator Monitoring v{{ .Chart.AppVersion }}! - - (((((((((((((((((((((( - (((((((((((((%%%%%%%((((((((((((((( - (((((((((((%%% %%%%(((((((((((( - (((((((((((%%( (((( ( %%%((((((((((( - (((((((((((((%% (( ,(( %%%((((((((((( - (((((((((((((((%% *%%/ %%%%%%%(((((((((( - (((((((((((((((((((%%(( %%%%%%%%%%#(((((%%%%%%%%%%#(((((((((((( - ((((((((((((((((((%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%(((((((((((((( - *((((((((((((((((((((%%%%%% /%%%%%%%%%%%%%%%%%%%(((((((((((((((( - (((((((((((((((((((((((%%%/ .%, %%%((((((((((((((((((, - ((((((((((((((((((((((% %#((((((((((((((((( -(((((((((((((((%%%%%% #%((((((((((((((((( -((((((((((((((%% %%(((((((((((((((, -((((((((((((%%%#% % %%((((((((((((((( -((((((((((((%. % % #(((((((((((((( -(((((((((((%% % %%* %((((((((((((( -#(###(###(#%% %%% %% %%% #%%#(###(###(# -###########%%%%% /%%%%%%%%%%%%% %% %%%%% ,%%####### -###############%% %%%%%% %%% %%%%%%%% %%##### - ################%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% %%## - ################%% %%%%%%%%%%%%%%%%% %%%% % - ##############%# %% (%%%%%%% %%%%%% - #############% %%%%% %%%%%%%%%%% - ###########% %%%%%%%%%%% %%%%%%%%% - #########%% %% %%%%%%%%%%%%%%%# - ########%% %% %%%%%%%%% - ######%% %% %%%%%% - ####%%% %%%%% % - %% %%%% - -More information about Crunchy PostgreSQL Operator Monitoring can be found in the docs: -https://access.crunchydata.com/documentation/postgres-operator/ diff --git a/installers/metrics/helm/templates/_deployer_job_spec.yaml b/installers/metrics/helm/templates/_deployer_job_spec.yaml deleted file mode 100644 index 85cf39a061..0000000000 --- a/installers/metrics/helm/templates/_deployer_job_spec.yaml +++ /dev/null @@ -1,28 +0,0 @@ -{{- define "deployerJob.spec" }} -spec: - backoffLimit: 0 - template: - metadata: - name: pgo-metrics-deploy - labels: -{{ include "postgres-operator.labels" . | indent 8 }} - spec: - serviceAccountName: {{ include "postgres-operator.serviceAccountName" . 
}} - restartPolicy: Never - containers: - - name: pgo-metrics-deploy - image: {{ .Values.pgo_image_prefix }}/pgo-deployer:{{ .Values.pgo_image_tag }} - imagePullPolicy: IfNotPresent - env: - - name: DEPLOY_ACTION - value: "{{ .deployAction }}" - - name: PLAYBOOK - value: metrics - volumeMounts: - - name: deployer-conf - mountPath: "/conf" - volumes: - - name: deployer-conf - configMap: - name: {{ template "postgres-operator.fullname" . }}-cm -{{- end }} \ No newline at end of file diff --git a/installers/metrics/helm/templates/_helpers.tpl b/installers/metrics/helm/templates/_helpers.tpl deleted file mode 100644 index d259eab15d..0000000000 --- a/installers/metrics/helm/templates/_helpers.tpl +++ /dev/null @@ -1,96 +0,0 @@ -{{/* vim: set filetype=mustache: */}} -{{/* -Expand the name of the chart. -*/}} -{{- define "postgres-operator.name" -}} -{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} -{{- end }} - -{{/* -Create a default fully qualified app name. -We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). -If release name contains chart name it will be used as a full name. -*/}} -{{- define "postgres-operator.fullname" -}} -{{- if .Values.fullnameOverride }} -{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} -{{- else }} -{{- $name := default .Chart.Name .Values.nameOverride }} -{{- if contains $name .Release.Name }} -{{- .Release.Name | trunc 63 | trimSuffix "-" }} -{{- else }} -{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} -{{- end }} -{{- end }} -{{- end }} - -{{/* -Create chart name and version as used by the chart label. -*/}} -{{- define "postgres-operator.chart" -}} -{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} -{{- end }} - -{{/* -Common labels -*/}} -{{- define "postgres-operator.labels" -}} -helm.sh/chart: {{ include "postgres-operator.chart" . }} -{{ include "postgres-operator.selectorLabels" . }} -{{- if .Chart.AppVersion }} -app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} -{{- end }} -app.kubernetes.io/managed-by: {{ .Release.Service }} -meta.helm.sh/release-name: {{ .Release.Name }} -meta.helm.sh/release-namespace: {{ .Release.Namespace }} -{{- end }} - -{{/* -Selector labels -*/}} -{{- define "postgres-operator.selectorLabels" -}} -app.kubernetes.io/name: {{ include "postgres-operator.name" . }} -app.kubernetes.io/instance: {{ .Release.Name }} -{{- end }} - -{{/* -Create the name of the service account to use -*/}} -{{- define "postgres-operator.serviceAccountName" -}} -{{- if .Values.serviceAccount.create }} -{{- default "pgo-deployer-metrics-sa" .Values.serviceAccount.name }} -{{- else }} -{{- default "default" .Values.serviceAccount.name }} -{{- end }} -{{- end }} - -{{/* -Create the template for image pull secrets -*/}} -{{- define "postgres-operator.imagePullSecret" -}} -{{- if ne .Values.metrics_image_pull_secret "" }} -imagePullSecrets: -- name: "{{ .Values.metrics_image_pull_secret }}" -{{ end }} -{{ end }} - -{{/* -Create the template for clusterroleName based on values.yaml parameters -*/}} -{{- define "postgres-operator.clusterroleName" -}} -{{- if .Values.rbac.useClusterAdmin -}} -cluster-admin -{{- else -}} -{{ include "postgres-operator.fullname" . 
}}-cr -{{- end }} -{{- end }} - -{{/* -Generate Configmap based on Values defined in values.yaml -*/}} -{{- define "postgres-operator.values" -}} -values.yaml: | - --- -{{ $vals := omit .Values "fullnameOverride" "rbac" "serviceAccount" }} -{{- toYaml $vals | indent 2 }} -{{- end }} diff --git a/installers/metrics/helm/templates/postgres-operator-metrics-install.yaml b/installers/metrics/helm/templates/postgres-operator-metrics-install.yaml deleted file mode 100644 index d2e79006b7..0000000000 --- a/installers/metrics/helm/templates/postgres-operator-metrics-install.yaml +++ /dev/null @@ -1,13 +0,0 @@ -{{ $_ := set . "deployAction" "install-metrics" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: pgo-metrics-deploy - namespace: {{ .Release.Namespace }} - labels: -{{ include "postgres-operator.labels" . | indent 4 }} - annotations: - helm.sh/hook: post-install - helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation -{{ template "deployerJob.spec" . }} diff --git a/installers/metrics/helm/templates/postgres-operator-metrics-uninstall.yaml b/installers/metrics/helm/templates/postgres-operator-metrics-uninstall.yaml deleted file mode 100644 index b18dfd63f5..0000000000 --- a/installers/metrics/helm/templates/postgres-operator-metrics-uninstall.yaml +++ /dev/null @@ -1,13 +0,0 @@ -{{ $_ := set . "deployAction" "uninstall-metrics" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: pgo-metrics-deploy - namespace: {{ .Release.Namespace }} - labels: -{{ include "postgres-operator.labels" . | indent 4 }} - annotations: - helm.sh/hook: pre-delete - helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation -{{ template "deployerJob.spec" . }} diff --git a/installers/metrics/helm/templates/postgres-operator-metrics-upgrade.yaml b/installers/metrics/helm/templates/postgres-operator-metrics-upgrade.yaml deleted file mode 100644 index 041e03417b..0000000000 --- a/installers/metrics/helm/templates/postgres-operator-metrics-upgrade.yaml +++ /dev/null @@ -1,13 +0,0 @@ -{{ $_ := set . "deployAction" "update-metrics" }} ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: pgo-metrics-deploy - namespace: {{ .Release.Namespace }} - labels: -{{ include "postgres-operator.labels" . | indent 4 }} - annotations: - helm.sh/hook: post-upgrade - helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation -{{ template "deployerJob.spec" . }} diff --git a/installers/metrics/helm/templates/rbac.yaml b/installers/metrics/helm/templates/rbac.yaml deleted file mode 100644 index f67b328104..0000000000 --- a/installers/metrics/helm/templates/rbac.yaml +++ /dev/null @@ -1,108 +0,0 @@ -{{ if .Values.serviceAccount.create }} ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: {{ include "postgres-operator.serviceAccountName" . }} - namespace: {{ .Release.Namespace }} - labels: -{{ include "postgres-operator.labels" . | indent 4 }} -{{ include "postgres-operator.imagePullSecret" . }} -{{ end }} -{{ if and .Values.rbac.create (not .Values.rbac.useClusterAdmin) }} ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: {{ template "postgres-operator.fullname" . }}-cr - labels: -{{ include "postgres-operator.labels" . 
| indent 4 }} - annotations: - helm.sh/hook: post-install,post-upgrade,pre-delete - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded - helm.sh/hook-weight: "-10" -rules: - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - create - - patch - - delete - - apiGroups: - - '' - resources: - - secrets - verbs: - - list - - get - - create - - delete - - apiGroups: - - '' - resources: - - configmaps - - services - - persistentvolumeclaims - verbs: - - get - - create - - delete - - list - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - delete - - patch - - list - - apiGroups: - - apps - - extensions - resources: - - deployments - verbs: - - get - - list - - watch - - create - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - clusterroles - - clusterrolebindings - verbs: - - get - - create - - delete - - bind - - escalate -{{ end }} -{{ if .Values.rbac.create }} ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: {{ template "postgres-operator.fullname" . }}-crb - labels: -{{ include "postgres-operator.labels" . | indent 4 }} - annotations: - helm.sh/hook: post-install,post-upgrade,pre-delete - helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded - helm.sh/hook-weight: "-10" -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: {{ include "postgres-operator.clusterroleName" . }} -subjects: -- kind: ServiceAccount - name: {{ include "postgres-operator.serviceAccountName" . }} - namespace: {{ .Release.Namespace }} -{{ end }} \ No newline at end of file diff --git a/installers/metrics/helm/templates/values_configmap.yaml b/installers/metrics/helm/templates/values_configmap.yaml deleted file mode 100644 index 15ab0b9606..0000000000 --- a/installers/metrics/helm/templates/values_configmap.yaml +++ /dev/null @@ -1,9 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: {{ template "postgres-operator.fullname" . }}-cm - namespace: {{ .Release.Namespace }} - labels: -{{ include "postgres-operator.labels" . | indent 4 }} -data: -{{ include "postgres-operator.values" . 
| indent 2}} diff --git a/installers/metrics/helm/values.yaml b/installers/metrics/helm/values.yaml deleted file mode 100644 index 9f2ecefb63..0000000000 --- a/installers/metrics/helm/values.yaml +++ /dev/null @@ -1,69 +0,0 @@ ---- -# ====================== -# Installer Controls -# ====================== -fullnameOverride: "" - -# rbac: settings for deployer RBAC creation -rbac: - # rbac.create: if false RBAC resources should be in place - create: true - # rbac.useClusterAdmin: creates a ClusterRoleBinding giving cluster-admin to serviceAccount.name - useClusterAdmin: false - -# serviceAccount: settings for Service Account used by the deployer -serviceAccount: - # serviceAccount.create: Whether to create a Service Account or not - create: true - # serviceAccount.name: The name of the Service Account to create or use - name: "" - -# the image prefix and tag to use for the 'pgo-deployer' container -pgo_image_prefix: "registry.developers.crunchydata.com/crunchydata" -pgo_image_tag: "centos7-4.5.0" - -# ===================== -# Configuration Options -# More info for these options can be found in the docs -# https://access.crunchydata.com/documentation/postgres-operator/latest/installation/metrics/metrics-configuration/ -# ===================== -alertmanager_custom_config: "" -alertmanager_custom_rules_config: "" -alertmanager_install: "true" -alertmanager_log_level: "info" -alertmanager_port: "9093" -alertmanager_service_name: "crunchy-alertmanager" -alertmanager_service_type: "ClusterIP" -alertmanager_storage_access_mode: "ReadWriteOnce" -alertmanager_storage_class_name: "" -alertmanager_supplemental_groups: "" -alertmanager_volume_size: "1Gi" -create_rbac: "true" -db_port: "5432" -delete_metrics_namespace: "false" -disable_fsgroup: "false" -grafana_admin_password: "admin" -grafana_admin_username: "admin" -grafana_dashboards_custom_config: "" -grafana_datasources_custom_config: "" -grafana_install: "true" -grafana_port: "3000" -grafana_service_name: "crunchy-grafana" -grafana_service_type: "ClusterIP" -grafana_storage_access_mode: "ReadWriteOnce" -grafana_storage_class_name: "" -grafana_supplemental_groups: "" -grafana_volume_size: "1Gi" -metrics_image_pull_secret: "" -metrics_image_pull_secret_manifest: "" -metrics_namespace: "pgo" -pgbadgerport: "10000" -prometheus_custom_config: "" -prometheus_install: "true" -prometheus_port: "9090" -prometheus_service_name: "crunchy-prometheus" -prometheus_service_type: "ClusterIP" -prometheus_storage_access_mode: "ReadWriteOnce" -prometheus_storage_class_name: "" -prometheus_supplemental_groups: "" -prometheus_volume_size: "1Gi" diff --git a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml b/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml deleted file mode 100644 index ca4daafd16..0000000000 --- a/installers/metrics/kubectl/postgres-operator-metrics-ocp311.yml +++ /dev/null @@ -1,112 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: pgo-metrics-deployer-sa - namespace: pgo - labels: - app.kubernetes.io/name: postgres-operator-monitoring ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: pgo-metrics-deployer-crb - namespace: pgo - labels: - app.kubernetes.io/name: postgres-operator-monitoring -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin -subjects: - - kind: ServiceAccount - name: pgo-metrics-deployer-sa - namespace: pgo ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: pgo-metrics-deployer-cm - namespace: pgo - 
labels: - app.kubernetes.io/name: postgres-operator-monitoring -data: - values.yaml: |- - # ===================== - # Configuration Options - # More info for these options can be found in the docs - # https://access.crunchydata.com/documentation/postgres-operator/latest/installation/metrics/metrics-configuration/ - # ===================== - alertmanager_custom_config: "" - alertmanager_custom_rules_config: "" - alertmanager_install: "true" - alertmanager_log_level: "info" - alertmanager_port: "9093" - alertmanager_service_name: "crunchy-alertmanager" - alertmanager_service_type: "ClusterIP" - alertmanager_storage_access_mode: "ReadWriteOnce" - alertmanager_storage_class_name: "" - alertmanager_supplemental_groups: "" - alertmanager_volume_size: "1Gi" - create_rbac: "true" - db_port: "5432" - delete_metrics_namespace: "false" - disable_fsgroup: "false" - grafana_admin_password: "admin" - grafana_admin_username: "admin" - grafana_dashboards_custom_config: "" - grafana_datasources_custom_config: "" - grafana_install: "true" - grafana_port: "3000" - grafana_service_name: "crunchy-grafana" - grafana_service_type: "ClusterIP" - grafana_storage_access_mode: "ReadWriteOnce" - grafana_storage_class_name: "" - grafana_supplemental_groups: "" - grafana_volume_size: "1Gi" - metrics_image_pull_secret: "" - metrics_image_pull_secret_manifest: "" - metrics_namespace: "pgo" - pgbadgerport: "10000" - prometheus_custom_config: "" - prometheus_install: "true" - prometheus_port: "9090" - prometheus_service_name: "crunchy-prometheus" - prometheus_service_type: "ClusterIP" - prometheus_storage_access_mode: "ReadWriteOnce" - prometheus_storage_class_name: "" - prometheus_supplemental_groups: "" - prometheus_volume_size: "1Gi" ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: pgo-metrics-deploy - namespace: pgo - labels: - app.kubernetes.io/name: postgres-operator-monitoring -spec: - backoffLimit: 0 - template: - metadata: - name: pgo-metrics-deploy - labels: - app.kubernetes.io/name: postgres-operator-monitoring - spec: - serviceAccountName: pgo-metrics-deployer-sa - restartPolicy: Never - containers: - - name: pgo-metrics-deploy - image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.0 - imagePullPolicy: IfNotPresent - env: - - name: DEPLOY_ACTION - value: install-metrics - - name: PLAYBOOK - value: metrics - volumeMounts: - - name: deployer-conf - mountPath: "/conf" - volumes: - - name: deployer-conf - configMap: - name: pgo-metrics-deployer-cm diff --git a/installers/metrics/kubectl/postgres-operator-metrics.yml b/installers/metrics/kubectl/postgres-operator-metrics.yml deleted file mode 100644 index e1cc94fd5a..0000000000 --- a/installers/metrics/kubectl/postgres-operator-metrics.yml +++ /dev/null @@ -1,181 +0,0 @@ -apiVersion: v1 -kind: ServiceAccount -metadata: - name: pgo-metrics-deployer-sa - namespace: pgo - labels: - app.kubernetes.io/name: postgres-operator-monitoring ---- -kind: ClusterRole -apiVersion: rbac.authorization.k8s.io/v1 -metadata: - name: pgo-metrics-deployer-cr - labels: - app.kubernetes.io/name: postgres-operator-monitoring -rules: - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - create - - patch - - delete - - apiGroups: - - '' - resources: - - secrets - verbs: - - list - - get - - create - - delete - - apiGroups: - - '' - resources: - - configmaps - - services - - persistentvolumeclaims - verbs: - - get - - create - - delete - - list - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - 
delete - - patch - - list - - apiGroups: - - apps - - extensions - resources: - - deployments - verbs: - - get - - list - - watch - - create - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - clusterroles - - clusterrolebindings - verbs: - - get - - create - - delete - - bind - - escalate ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: pgo-metrics-deployer-cm - namespace: pgo - labels: - app.kubernetes.io/name: postgres-operator-monitoring -data: - values.yaml: |- - # ===================== - # Configuration Options - # More info for these options can be found in the docs - # https://access.crunchydata.com/documentation/postgres-operator/latest/installation/metrics/metrics-configuration/ - # ===================== - alertmanager_custom_config: "" - alertmanager_custom_rules_config: "" - alertmanager_install: "true" - alertmanager_log_level: "info" - alertmanager_port: "9093" - alertmanager_service_name: "crunchy-alertmanager" - alertmanager_service_type: "ClusterIP" - alertmanager_storage_access_mode: "ReadWriteOnce" - alertmanager_storage_class_name: "" - alertmanager_supplemental_groups: "" - alertmanager_volume_size: "1Gi" - create_rbac: "true" - db_port: "5432" - delete_metrics_namespace: "false" - disable_fsgroup: "false" - grafana_admin_password: "admin" - grafana_admin_username: "admin" - grafana_dashboards_custom_config: "" - grafana_datasources_custom_config: "" - grafana_install: "true" - grafana_port: "3000" - grafana_service_name: "crunchy-grafana" - grafana_service_type: "ClusterIP" - grafana_storage_access_mode: "ReadWriteOnce" - grafana_storage_class_name: "" - grafana_supplemental_groups: "" - grafana_volume_size: "1Gi" - metrics_image_pull_secret: "" - metrics_image_pull_secret_manifest: "" - metrics_namespace: "pgo" - pgbadgerport: "10000" - prometheus_custom_config: "" - prometheus_install: "true" - prometheus_port: "9090" - prometheus_service_name: "crunchy-prometheus" - prometheus_service_type: "ClusterIP" - prometheus_storage_access_mode: "ReadWriteOnce" - prometheus_storage_class_name: "" - prometheus_supplemental_groups: "" - prometheus_volume_size: "1Gi" ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: pgo-metrics-deployer-crb - labels: - app.kubernetes.io/name: postgres-operator-monitoring -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: pgo-metrics-deployer-cr -subjects: - - kind: ServiceAccount - name: pgo-metrics-deployer-sa - namespace: pgo ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: pgo-metrics-deploy - namespace: pgo - labels: - app.kubernetes.io/name: postgres-operator-monitoring -spec: - backoffLimit: 0 - template: - metadata: - name: pgo-metrics-deploy - labels: - app.kubernetes.io/name: postgres-operator-monitoring - spec: - serviceAccountName: pgo-metrics-deployer-sa - restartPolicy: Never - containers: - - name: pgo-metrics-deploy - image: registry.developers.crunchydata.com/crunchydata/pgo-deployer:centos7-4.5.0 - imagePullPolicy: IfNotPresent - env: - - name: DEPLOY_ACTION - value: install-metrics - - name: PLAYBOOK - value: metrics - volumeMounts: - - name: deployer-conf - mountPath: "/conf" - volumes: - - name: deployer-conf - configMap: - name: pgo-metrics-deployer-cm diff --git a/installers/olm/.gitignore b/installers/olm/.gitignore deleted file mode 100644 index f875607c2f..0000000000 --- a/installers/olm/.gitignore +++ /dev/null @@ -1 +0,0 @@ -/package/ diff --git a/installers/olm/Dockerfile b/installers/olm/Dockerfile deleted file 
mode 100644 index 5e5e8ee9db..0000000000 --- a/installers/olm/Dockerfile +++ /dev/null @@ -1,29 +0,0 @@ -FROM docker.io/library/centos:latest - -RUN curl -Lo /usr/local/bin/jq -s "https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64" \ - && chmod +x /usr/local/bin/jq \ - && sha256sum -c <<< "SHA256 (/usr/local/bin/jq) = af986793a515d500ab2d35f8d2aecd656e764504b789b66d7e1a0b727a124c44" - -RUN curl -Lo /usr/local/bin/yq -s "https://github.com/mikefarah/yq/releases/download/2.4.1/yq_linux_amd64" \ - && chmod +x /usr/local/bin/yq \ - && sha256sum -c <<< "SHA256 (/usr/local/bin/yq) = 754c6e6a7ef92b00ef73b8b0bb1d76d651e04d26aa6c6625e272201afa889f8b" - -RUN dnf update -d1 -y \ - && dnf install -d1 -y gettext glibc-langpack-en make ncurses python3 tree zip \ - && dnf clean all - -ARG OLM_SDK_VERSION -RUN python3 -m pip install operator-courier \ - && curl -Lo /usr/local/bin/operator-sdk -s "https://github.com/operator-framework/operator-sdk/releases/download/v${OLM_SDK_VERSION}/operator-sdk-v${OLM_SDK_VERSION}-x86_64-linux-gnu" \ - && chmod +x /usr/local/bin/operator-sdk \ - && sha256sum -c <<< "SHA256 (/usr/local/bin/operator-sdk) = 5c8c06bd8a0c47f359aa56f85fe4e3ee2066d4e51b60b75e131dec601b7b3cd6" - -COPY --from=docker.io/bitnami/kubectl:1.11 /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/kubectl-1.11 -COPY --from=docker.io/bitnami/kubectl:1.12 /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/kubectl-1.12 -COPY --from=docker.io/bitnami/kubectl:1.13 /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/kubectl-1.13 -COPY --from=docker.io/bitnami/kubectl:1.14 /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/kubectl-1.14 -COPY --from=docker.io/bitnami/kubectl:1.15 /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/kubectl-1.15 -COPY --from=docker.io/bitnami/kubectl:1.16 /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/kubectl-1.16 -COPY --from=docker.io/bitnami/kubectl:1.17 /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/kubectl-1.17 -COPY --from=docker.io/bitnami/kubectl:1.18 /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/kubectl-1.18 -COPY --from=docker.io/bitnami/kubectl:1.19 /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/kubectl-1.19 diff --git a/installers/olm/Makefile b/installers/olm/Makefile deleted file mode 100644 index 3b8884bf78..0000000000 --- a/installers/olm/Makefile +++ /dev/null @@ -1,98 +0,0 @@ -.DEFAULT_GOAL := help -.SUFFIXES: - -CCP_IMAGE_PREFIX ?= registry.developers.crunchydata.com/crunchydata -CCP_PG_FULLVERSION ?= 12.4 -CCP_POSTGIS_VERSION ?= 3.0 -KUBECONFIG ?= $(HOME)/.kube/config -OLM_SDK_VERSION ?= 0.15.1 -OLM_TOOLS ?= registry.localhost:5000/postgres-operator-olm-tools:$(OLM_SDK_VERSION) -OLM_VERSION ?= 0.15.1 -PGO_BASEOS ?= centos7 -PGO_IMAGE_PREFIX ?= registry.developers.crunchydata.com/crunchydata -PGO_VERSION ?= 4.5.0 -PGO_IMAGE_TAG ?= $(PGO_BASEOS)-$(PGO_VERSION) -CCP_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(PGO_VERSION) -CCP_POSTGIS_IMAGE_TAG ?= $(PGO_BASEOS)-$(CCP_PG_FULLVERSION)-$(CCP_POSTGIS_VERSION)-$(PGO_VERSION) - -OLM_TOOLS_BASH = docker run --net=host --rm --tty $(DOCKER_ARGS) \ - --mount 'type=bind,source=$(KUBECONFIG),target=/root/.kube/config,ro' \ - --mount 'type=bind,source=$(CURDIR)/..,target=/mnt/installers' \ - --workdir '/mnt/installers/$(basename $(notdir $(CURDIR)))' \ - '$(OLM_TOOLS)' - -export CCP_IMAGE_PREFIX CCP_IMAGE_TAG CCP_POSTGIS_IMAGE_TAG -export KUBECONFIG -export OLM_SDK_VERSION -export PGO_IMAGE_PREFIX PGO_IMAGE_TAG PGO_VERSION - -.PHONY: clean -clean: - rm -rf ./package - -.PHONY: catalog-source -catalog-source: ## 
Upload package and version bundle to a Kubernetes namespace - @test -n '$(NAMESPACE)' || { >&2 echo Must choose a NAMESPACE; exit 1; } - ./install.sh registry '$(NAMESPACE)' olm-registry - ./install.sh catalog_source '$(NAMESPACE)' olm-catalog-source '$(NAMESPACE)' olm-registry - -.PHONY: courier-verify -courier-verify: - operator-courier verify --ui_validate_io ./package - -.PHONY: docker-package -docker-package: image-tools -docker-package: ## Build package and version bundle from inside a container - $(OLM_TOOLS_BASH) make package courier-verify - -.PHONY: docker-shell -docker-shell: image-tools -docker-shell: DOCKER_ARGS = --interactive -docker-shell: ## Start a shell inside a container with all the tools needed to build and test - $(OLM_TOOLS_BASH) - -.PHONY: docker-verify -docker-verify: image-tools - $(OLM_TOOLS_BASH) make verify - -.PHONY: help -help: ALIGN=14 -help: ## Print this message - @awk -F ': ## ' -- "/^[^':]+: ## /"' { printf "'$$(tput bold)'%-$(ALIGN)s'$$(tput sgr0)' %s\n", $$1, $$2 }' $(MAKEFILE_LIST) - -.PHONY: image-tools -image-tools: - docker build --file Dockerfile --tag '$(OLM_TOOLS)' --build-arg OLM_SDK_VERSION='$(OLM_SDK_VERSION)' . - -.PHONY: install -install: ## Install the package in a Kubernetes namespace - @test -n '$(NAMESPACE)' || { >&2 echo Must choose a NAMESPACE; exit 1; } - ./install.sh operator '$(NAMESPACE)' '$(NAMESPACE)' - -.PHONY: install-olm -install-olm: ## Install OLM in Kubernetes - kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/$(OLM_VERSION)/crds.yaml - kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/$(OLM_VERSION)/olm.yaml - -.PHONY: package -package: export PACKAGE_NAME := postgresql -package: ## Build package and version bundle - ./generate.sh - -.PHONY: package-openshift -package-openshift: export K8S_DISTRIBUTION := openshift -package-openshift: package - -.PHONY: package-redhat -package-redhat: export K8S_DISTRIBUTION := openshift -package-redhat: export PACKAGE_NAME := crunchy-postgres-operator -package-redhat: CCP_IMAGE_PREFIX := registry.connect.redhat.com/crunchydata -package-redhat: PGO_IMAGE_PREFIX := registry.connect.redhat.com/crunchydata -package-redhat: PGO_BASEOS := $(subst centos,rhel,$(PGO_BASEOS)) -package-redhat: - ./generate.sh - cd ./package && zip -r '$(PACKAGE_NAME)-$(PGO_VERSION).zip' *.yaml '$(PGO_VERSION)' - -.PHONY: verify -verify: ## Install and test the package in a new (random) Kubernetes namespace then clean up - ./verify.sh diff --git a/installers/olm/README.md b/installers/olm/README.md deleted file mode 100644 index 207f85daa8..0000000000 --- a/installers/olm/README.md +++ /dev/null @@ -1,13 +0,0 @@ - -This directory contains the files that are used to install [Crunchy PostgreSQL for Kubernetes][hub-listing], -which uses the PostgreSQL Operator, using [Operator Lifecycle Manager][OLM]. - -The integration centers around a [ClusterServiceVersion][olm-csv] [manifest](./postgresoperator.csv.yaml) -that gets packaged for OperatorHub. Changes there are accepted only if they pass all the [scorecard][] -tests. Consult the [technical requirements][hub-contrib] when making changes. 
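(Editor's note: as a rough sketch of the usual workflow for this directory — assuming Docker is available and `$KUBECONFIG` points at a disposable test cluster — the Makefile targets above can be driven like so:)

```
# build the package and version bundle inside the tools container
make docker-package

# install OLM into the cluster if it is not already present
make install-olm

# install and exercise the package in a temporary namespace, then clean up
make verify
```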
- -[hub-contrib]: https://github.com/operator-framework/community-operators/blob/master/docs/contributing.md -[hub-listing]: https://operatorhub.io/operator/postgresql -[olm-csv]: https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/building-your-csv.md -[OLM]: https://github.com/operator-framework/operator-lifecycle-manager -[scorecard]: https://sdk.operatorframework.io/docs/scorecard/ diff --git a/installers/olm/description.openshift.md b/installers/olm/description.openshift.md deleted file mode 100644 index ad31cbe1e5..0000000000 --- a/installers/olm/description.openshift.md +++ /dev/null @@ -1,145 +0,0 @@ -Crunchy PostgreSQL for OpenShift lets you run your own production-grade PostgreSQL-as-a-Service on OpenShift! - -Powered by the Crunchy [PostgreSQL Operator](https://github.com/CrunchyData/postgres-operator), Crunchy PostgreSQL -for OpenShift automates and simplifies deploying and managing open source PostgreSQL clusters on OpenShift by -providing the essential features you need to keep your PostgreSQL clusters up and running, including: - -- **PostgreSQL Cluster Provisioning**: [Create, Scale, & Delete PostgreSQL clusters with ease][provisioning], - while fully customizing your Pods and PostgreSQL configuration! -- **High-Availability**: Safe, automated failover backed by a [distributed consensus based high-availability solution][high-availability]. - Uses [Pod Anti-Affinity][k8s-anti-affinity] to help resiliency; you can configure how aggressive this can be! - Failed primaries automatically heal, allowing for faster recovery time. You can even create regularly scheduled - backups as well and set your backup retention policy -- **Disaster Recovery**: Backups and restores leverage the open source [pgBackRest][] utility - and [includes support for full, incremental, and differential backups as well as efficient delta restores][disaster-recovery]. - Set how long you want your backups retained for. Works great with very large databases! -- **Monitoring**: Track the health of your PostgreSQL clusters using the open source [pgMonitor][] library. -- **Clone**: Create new clusters from your existing clusters or backups with a single [`pgo create cluster --restore-from`][pgo-create-cluster] command. -- **Full Customizability**: Crunchy PostgreSQL for OpenShift makes it easy to get your own PostgreSQL-as-a-Service up and running on - and lets make further enhancements to customize your deployments, including: - - Selecting different storage classes for your primary, replica, and backup storage - - Select your own container resources class for each PostgreSQL cluster deployment; differentiate between resources applied for primary and replica clusters! - - Use your own container image repository, including support `imagePullSecrets` and private repositories - - Bring your own trusted certificate authority (CA) for use with the Operator API server - - Override your PostgreSQL configuration for each cluster - -and much more! 
- -[disaster-recovery]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/ -[high-availability]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/ -[pgo-create-cluster]: https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_create_cluster/ -[provisioning]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/provisioning/ - -[k8s-anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity - -[pgBackRest]: https://www.pgbackrest.org -[pgMonitor]: https://github.com/CrunchyData/pgmonitor - - -## Before You Begin - -There are a few manual steps that the cluster administrator must perform prior to installing the PostgreSQL Operator. -At the very least, it must be provided with an initial configuration. - -First, select a namespace in which to install the PostgreSQL Operator. PostgreSQL clusters will also be deployed here. -If it does not exist, create it now. - -``` -export PGO_OPERATOR_NAMESPACE=pgo -oc create namespace "$PGO_OPERATOR_NAMESPACE" -``` - -Next, clone the PostgreSQL Operator repository locally. - -``` -git clone -b v${PGO_VERSION} https://github.com/CrunchyData/postgres-operator.git -cd postgres-operator -``` - -### Security - -For the PostgreSQL Operator and PostgreSQL clusters to run in the recommended `restricted` [Security Context Constraint][], -edit `conf/postgres-operator/pgo.yaml` and set `DisableFSGroup` to `true`. - -[Security Context Constraint]: https://docs.openshift.com/container-platform/latest/authentication/managing-security-context-constraints.html - -### PostgreSQL Operator Configuration - -Edit `conf/postgres-operator/pgo.yaml` to configure the deployment. Look over all of the options and make any -changes necessary for your environment. A [full description of each option][pgo-yaml-reference] is available in the documentation. - -[pgo-yaml-reference]: https://access.crunchydata.com/documentation/postgres-operator/${PGO_VERSION}/configuration/pgo-yaml-configuration/ - -When the file is ready, upload the entire directory to the `pgo-config` ConfigMap. - -``` -oc -n "$PGO_OPERATOR_NAMESPACE" create configmap pgo-config \ - --from-file=./conf/postgres-operator -``` - -### Secrets - -Configure pgBackRest for your environment. If you do not plan to use AWS S3 to store backups, you can omit -the `aws-s3` keys below. - -``` -oc -n "$PGO_OPERATOR_NAMESPACE" create secret generic pgo-backrest-repo-config \ - --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config \ - --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config \ - --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt \ - --from-literal=aws-s3-key="" \ - --from-literal=aws-s3-key-secret="" -``` - -### Certificates (optional) - -The PostgreSQL Operator has an API that uses TLS to communicate securely with clients. If you have -a certificate bundle validated by your organization, you can install it now. If not, the API will -automatically generate and use a self-signed certificate. - -``` -oc -n "$PGO_OPERATOR_NAMESPACE" create secret tls pgo.tls \ - --cert=/path/to/server.crt \ - --key=/path/to/server.key -``` - -Once these resources are in place, the PostgreSQL Operator can be installed into the cluster. 
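(Editor's note: before handing off to OLM, a quick sanity check that the resources created above exist can save a failed install; a minimal check — the `pgo.tls` Secret is only present if you created one — might be:)

```
oc -n "$PGO_OPERATOR_NAMESPACE" get configmap/pgo-config secret/pgo-backrest-repo-config
oc -n "$PGO_OPERATOR_NAMESPACE" get secret/pgo.tls --ignore-not-found
```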
- - -## After You Install - -Once the PostgreSQL Operator is installed in your OpenShift cluster, you will need to do a few things -to use the [PostgreSQL Operator Client][pgo-client]. - -[pgo-client]: https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/ - -Install the first set of client credentials and download the `pgo` binary and client certificates. - -``` -PGO_CMD=oc ./deploy/install-bootstrap-creds.sh -PGO_CMD=oc ./installers/kubectl/client-setup.sh -``` - -The client needs to be able to reach the PostgreSQL Operator API from outside the OpenShift cluster. -Create an external service or forward a port locally. - -``` -oc -n "$PGO_OPERATOR_NAMESPACE" expose deployment postgres-operator -oc -n "$PGO_OPERATOR_NAMESPACE" create route passthrough postgres-operator --service=postgres-operator - -export PGO_APISERVER_URL="https://$(oc -n "$PGO_OPERATOR_NAMESPACE" get route postgres-operator -o jsonpath="{.spec.host}")" -``` -_or_ -``` -oc -n "$PGO_OPERATOR_NAMESPACE" port-forward deployment/postgres-operator 8443 - -export PGO_APISERVER_URL="https://127.0.0.1:8443" -``` - -Verify connectivity using the `pgo` command. - -``` -pgo version -# pgo client version ${PGO_VERSION} -# pgo-apiserver version ${PGO_VERSION} -``` diff --git a/installers/olm/description.upstream.md b/installers/olm/description.upstream.md deleted file mode 100644 index 8838098032..0000000000 --- a/installers/olm/description.upstream.md +++ /dev/null @@ -1,140 +0,0 @@ -Crunchy PostgreSQL for Kubernetes lets you run your own production-grade PostgreSQL-as-a-Service on Kubernetes! - -Powered by the Crunchy [PostgreSQL Operator](https://github.com/CrunchyData/postgres-operator), Crunchy PostgreSQL -for Kubernetes automates and simplifies deploying and managing open source PostgreSQL clusters on Kubernetes by -providing the essential features you need to keep your PostgreSQL clusters up and running, including: - -- **PostgreSQL Cluster Provisioning**: [Create, Scale, & Delete PostgreSQL clusters with ease][provisioning], - while fully customizing your Pods and PostgreSQL configuration! -- **High-Availability**: Safe, automated failover backed by a [distributed consensus based high-availability solution][high-availability]. - Uses [Pod Anti-Affinity][k8s-anti-affinity] to help resiliency; you can configure how aggressive this can be! - Failed primaries automatically heal, allowing for faster recovery time. You can even create regularly scheduled - backups as well and set your backup retention policy -- **Disaster Recovery**: Backups and restores leverage the open source [pgBackRest][] utility - and [includes support for full, incremental, and differential backups as well as efficient delta restores][disaster-recovery]. - Set how long you want your backups retained for. Works great with very large databases! -- **Monitoring**: Track the health of your PostgreSQL clusters using the open source [pgMonitor][] library. -- **Clone**: Create new clusters from your existing clusters or backups with a single [`pgo create cluster --restore-from`][pgo-create-cluster] command. 
-- **Full Customizability**: Crunchy PostgreSQL for Kubernetes makes it easy to get your own PostgreSQL-as-a-Service up and running on - and lets make further enhancements to customize your deployments, including: - - Selecting different storage classes for your primary, replica, and backup storage - - Select your own container resources class for each PostgreSQL cluster deployment; differentiate between resources applied for primary and replica clusters! - - Use your own container image repository, including support `imagePullSecrets` and private repositories - - Bring your own trusted certificate authority (CA) for use with the Operator API server - - Override your PostgreSQL configuration for each cluster - -and much more! - -[disaster-recovery]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/disaster-recovery/ -[high-availability]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/high-availability/ -[pgo-create-cluster]: https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/reference/pgo_create_cluster/ -[provisioning]: https://access.crunchydata.com/documentation/postgres-operator/latest/architecture/provisioning/ - -[k8s-anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity - -[pgBackRest]: https://www.pgbackrest.org -[pgMonitor]: https://github.com/CrunchyData/pgmonitor - - -## Before You Begin - -There are a few manual steps that the cluster administrator must perform prior to installing the PostgreSQL Operator. -At the very least, it must be provided with an initial configuration. - -First, select a namespace in which to install the PostgreSQL Operator. PostgreSQL clusters will also be deployed here. -If it does not exist, create it now. - -``` -export PGO_OPERATOR_NAMESPACE=pgo -kubectl create namespace "$PGO_OPERATOR_NAMESPACE" -``` - -Next, clone the PostgreSQL Operator repository locally. - -``` -git clone -b v${PGO_VERSION} https://github.com/CrunchyData/postgres-operator.git -cd postgres-operator -``` - -### PostgreSQL Operator Configuration - -Edit `conf/postgres-operator/pgo.yaml` to configure the deployment. Look over all of the options and make any -changes necessary for your environment. A [full description of each option][pgo-yaml-reference] is available in the documentation. - -[pgo-yaml-reference]: https://access.crunchydata.com/documentation/postgres-operator/${PGO_VERSION}/configuration/pgo-yaml-configuration/ - -When the file is ready, upload the entire directory to the `pgo-config` ConfigMap. - -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" create configmap pgo-config \ - --from-file=./conf/postgres-operator -``` - -### Secrets - -Configure pgBackRest for your environment. If you do not plan to use AWS S3 to store backups, you can omit -the `aws-s3` keys below. - -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" create secret generic pgo-backrest-repo-config \ - --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/config \ - --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/sshd_config \ - --from-file=./installers/ansible/roles/pgo-operator/files/pgo-backrest-repo/aws-s3-ca.crt \ - --from-literal=aws-s3-key="" \ - --from-literal=aws-s3-key-secret="" -``` - -### Certificates (optional) - -The PostgreSQL Operator has an API that uses TLS to communicate securely with clients. If you have -a certificate bundle validated by your organization, you can install it now. 
If not, the API will -automatically generate and use a self-signed certificate. - -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" create secret tls pgo.tls \ - --cert=/path/to/server.crt \ - --key=/path/to/server.key -``` - -Once these resources are in place, the PostgreSQL Operator can be installed into the cluster. - - -## After You Install - -Once the PostgreSQL Operator is installed in your Kubernetes cluster, you will need to do a few things -to use the [PostgreSQL Operator Client][pgo-client]. - -[pgo-client]: https://access.crunchydata.com/documentation/postgres-operator/latest/pgo-client/ - -Install the first set of client credentials and download the `pgo` binary and client certificates. - -``` -PGO_CMD=kubectl ./deploy/install-bootstrap-creds.sh -PGO_CMD=kubectl ./installers/kubectl/client-setup.sh -``` - -The client needs to be able to reach the PostgreSQL Operator API from outside the Kubernetes cluster. -Create an external service or forward a port locally. - -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" expose deployment postgres-operator --type=LoadBalancer - -export PGO_APISERVER_URL="https://$( - kubectl -n "$PGO_OPERATOR_NAMESPACE" get service postgres-operator \ - -o jsonpath="{.status.loadBalancer.ingress[*]['ip','hostname']}" -):8443" -``` -_or_ -``` -kubectl -n "$PGO_OPERATOR_NAMESPACE" port-forward deployment/postgres-operator 8443 - -export PGO_APISERVER_URL="https://127.0.0.1:8443" -``` - -Verify connectivity using the `pgo` command. - -``` -pgo version -# pgo client version ${PGO_VERSION} -# pgo-apiserver version ${PGO_VERSION} -``` diff --git a/installers/olm/generate.sh b/installers/olm/generate.sh deleted file mode 100755 index 28d44d54d7..0000000000 --- a/installers/olm/generate.sh +++ /dev/null @@ -1,53 +0,0 @@ -#!/usr/bin/env bash -# vim: set noexpandtab : -set -eu - -render() { envsubst '$CCP_IMAGE_PREFIX $CCP_IMAGE_TAG $CCP_POSTGIS_IMAGE_TAG $PACKAGE_NAME $PGO_IMAGE_PREFIX $PGO_IMAGE_TAG $PGO_VERSION'; } - -mkdir -p "./package/${PGO_VERSION}" - -# PackageManifest filename must end with '.package.yaml' -render < postgresql.package.yaml > "./package/${PACKAGE_NAME}.package.yaml" - -# ClusterServiceVersion filenames must end with '.clusterserviceversion.yaml' -render < postgresoperator.csv.yaml > "./package/${PGO_VERSION}/postgresoperator.v${PGO_VERSION}.clusterserviceversion.yaml" - -crd_array="$( yq read --doc='*' --tojson postgresoperator.crd.yaml )" -crd_names="$( jq <<< "$crd_array" --raw-output 'to_entries[] | [.key, .value.metadata.name] | @tsv' )" - -# `operator-courier verify` expects only one CustomResourceDefinition per file. -while IFS=$'\t' read index name; do - yq read --doc="$index" postgresoperator.crd.yaml > "./package/${PGO_VERSION}/${name}.crd.yaml" -done <<< "$crd_names" - -yq_script="$( yq read --tojson "./package/${PGO_VERSION}/postgresoperator.v${PGO_VERSION}.clusterserviceversion.yaml" | jq \ - --argjson images "$( yq read --tojson postgresoperator.csv.images.yaml | render )" \ - --argjson crds "$( yq read --tojson postgresoperator.crd.descriptions.yaml | render )" \ - --arg examples "$( yq read --tojson postgresoperator.crd.examples.yaml --doc='*' | render | jq . 
)" \ - --arg description "$( render < description.upstream.md )" \ - --arg icon "$( base64 ../seal.svg | tr -d '\n' )" \ -'{ - "metadata.annotations.alm-examples": $examples, - "spec.customresourcedefinitions.owned": $crds, - "spec.description": $description, - "spec.icon": [{ mediatype: "image/svg+xml", base64data: $icon }], - - "spec.install.spec.deployments[0].spec.template.spec.containers[0].env": ( - .spec.install.spec.deployments[0].spec.template.spec.containers[0].env + $images), - - "spec.install.spec.deployments[0].spec.template.spec.containers[1].env": ( - .spec.install.spec.deployments[0].spec.template.spec.containers[1].env + $images) -}' )" -yq write --inplace --script=- <<< "$yq_script" "./package/${PGO_VERSION}/postgresoperator.v${PGO_VERSION}.clusterserviceversion.yaml" - -if [ "${K8S_DISTRIBUTION:-}" = 'openshift' ]; then - yq_script="$( jq <<< '{}' \ - --arg description "$( render < description.openshift.md )" \ - '{ - "spec.description": $description, - "spec.displayName": "Crunchy PostgreSQL for OpenShift", - }' )" - yq write --inplace --script=- <<< "$yq_script" "./package/${PGO_VERSION}/postgresoperator.v${PGO_VERSION}.clusterserviceversion.yaml" -fi - -if > /dev/null command -v tree; then tree -C './package'; fi diff --git a/installers/olm/install.sh b/installers/olm/install.sh deleted file mode 100755 index a983e08107..0000000000 --- a/installers/olm/install.sh +++ /dev/null @@ -1,349 +0,0 @@ -#!/usr/bin/env bash -# vim: set noexpandtab : -set -eu - -if command -v oc >/dev/null; then - kubectl() { oc "$@"; } - kubectl version -elif ! command -v kubectl >/dev/null; then - # Use a version of `kubectl` that matches the Kubernetes server. - eval "kubectl() { kubectl-$( kubectl-1.16 version --output=json | - jq --raw-output '.serverVersion | .major + "." 
+ .minor')"' "$@"; }' - kubectl version --short -fi - -catalog_source() ( - source_namespace="$1" - source_name="$2" - registry_namespace="$3" - registry_name="$4" - - kc() { kubectl --namespace="$source_namespace" "$@"; } - kc get namespace "$source_namespace" --output=jsonpath='{""}' 2>/dev/null || - kc create namespace "$source_namespace" - - # See https://godoc.org/github.com/operator-framework/api/pkg/operators/v1alpha1#CatalogSource - source_json="$( jq <<< '{}' \ - --arg name "$source_name" \ - --arg registry "${registry_name}.${registry_namespace}" \ - '{ - apiVersion: "operators.coreos.com/v1alpha1", kind: "CatalogSource", - metadata: { name: $name }, - spec: { - displayName: "Test Registry", - sourceType: "grpc", address: "\($registry):50051" - } - }' )" - kc create --filename=- <<< "$source_json" -) - -operator_group() ( - group_namespace="$1" - group_name="$2" - target_namespaces=("${@:3}") - - kc() { kubectl --namespace="$group_namespace" "$@"; } - kc get namespace "$group_namespace" --output=jsonpath='{""}' 2>/dev/null || - kc create namespace "$group_namespace" - - group_json="$( jq <<< '{}' --arg name "$group_name" '{ - apiVersion: "operators.coreos.com/v1", kind: "OperatorGroup", - metadata: { "name": $name }, - spec: { targetNamespaces: [] } - }' )" - - for ns in "${target_namespaces[@]}"; do - group_json="$( jq <<< "$group_json" --arg namespace "$ns" '.spec.targetNamespaces += [ $namespace ]' )" - done - - kc create --filename=- <<< "$group_json" -) - -registry() ( - registry_namespace="$1" - registry_name="$2" - - package_name="$( yq read ./package/*.package.yaml packageName )" - - kc() { kubectl --namespace="$registry_namespace" "$@"; } - kc get namespace "$registry_namespace" --output=jsonpath='{""}' 2>/dev/null || - kc create namespace "$registry_namespace" - - # Create a registry based on a ConfigMap containing the package with subdirectories encoded as dashes. - # - # There is a simpler `configmap-server` and CatalogSource.sourceType of `configmap`, but those only - # support a subset of possible bundle files. Notably, Service files are not supported at this time. 
- # - # See https://godoc.org/github.com/operator-framework/operator-registry/pkg/sqlite#ConfigMapLoader - # and https://godoc.org/github.com/operator-framework/operator-registry/pkg/sqlite#DirectoryLoader - deployment_json="$( jq <<< '{}' \ - --arg name "$registry_name" \ - --arg package "$package_name" \ - --arg script ' - find -L /mnt/package -name ".*" -prune -o -type f -print | while IFS="" read s ; do - t="${s#/mnt/package/}"; t="${t//--//}" - install -D -m 644 "$s" "manifests/$PACKAGE_NAME/$t" - done - /usr/bin/initializer - exec /usr/bin/registry-server - ' \ - '{ - apiVersion: "apps/v1", kind: "Deployment", - metadata: { name: $name }, - spec: { - selector: { matchLabels: { name: $name } }, - template: { - metadata: { labels: { name: $name } }, - spec: { - containers: [{ - name: "registry", - image: "quay.io/openshift/origin-operator-registry:latest", - imagePullPolicy: "IfNotPresent", - command: ["bash", "-ec"], args: [ $script ], - env: [{ name: "PACKAGE_NAME", value: $package }], - volumeMounts: [{ mountPath: "/mnt/package", name: "package" }] - }], - volumes: [{ name: "package", configMap: { name: $name } }] - } - } - } - }' )" - kc create configmap "$registry_name" $( - find ./package -type f | while IFS='' read s ; do - t="${s#./package/}"; t="${t//\//--}" - echo "--from-file=$t=$s" - done - ) - kc create --filename=- <<< "$deployment_json" - kc expose deploy "$registry_name" --port=50051 - - if ! kc wait --for='condition=available' --timeout='90s' deploy "$registry_name"; then - kc logs --selector="name=$registry_name" --tail='-1' --previous || - kc logs --selector="name=$registry_name" --tail='-1' - exit 1 - fi -) - -operator() ( - operator_namespace="$1" - target_namespaces=("${@:2}") - - package_json="$( yq read --tojson ./package/*.package.yaml )" - package_name="$( yq read ./package/*.package.yaml packageName )" - package_channel_name="$( yq read ./package/*.package.yaml defaultChannel )" - package_csv_name="$( jq <<< "$package_json" \ - --raw-output --arg channel "$package_channel_name" \ - '.channels[] | select(.name == $channel).currentCSV' )" - - kc() { kubectl --namespace="$operator_namespace" "$@"; } - - registry "$operator_namespace" olm-registry - catalog_source "$operator_namespace" olm-catalog-source "$operator_namespace" olm-registry - operator_group "$operator_namespace" olm-operator-group "${target_namespaces[@]}" - - # Create a Subscription to install the operator. - # See https://godoc.org/github.com/operator-framework/api/pkg/operators/v1alpha1#Subscription - subscription_json="$( jq <<< '{}' \ - --arg channel "$package_channel_name" \ - --arg namespace "$operator_namespace" \ - --arg package "$package_name" \ - --arg version "$package_csv_name" \ - '{ - apiVersion: "operators.coreos.com/v1alpha1", kind: "Subscription", - metadata: { name: $package }, - spec: { - name: $package, - sourceNamespace: $namespace, - source: "olm-catalog-source", - startingCSV: $version, - channel: $channel - } - }' )" - kc create --filename=- <<< "$subscription_json" - - # Wait for the InstallPlan to exist and be healthy. - for i in $(seq 10); do - [ '[]' != "$( kc get installplan --output=jsonpath="{.items}" )" ] && - break || sleep 1s - done - if ! 
kc wait --for='condition=installed' --timeout='30s' installplan --all; then - subscription_uid="$( kc get subscription "$package_name" --output=jsonpath='{.metadata.uid}' )" - installplan_json="$( kc get installplan --output=json )" - - jq <<< "$installplan_json" --arg uid "$subscription_uid" \ - '.items[] | select(.metadata.ownerReferences[] | select(.uid == $uid)).status.conditions' - exit 1 - fi - - # Wait for Deployment to exist and be healthy. - for i in $(seq 10); do - [ '[]' != "$( kc get deploy --selector="olm.owner=$package_csv_name" --output=jsonpath='{.items}' )" ] && - break || sleep 1s - done - if ! kc wait --for='condition=available' --timeout='30s' deploy --selector="olm.owner=$package_csv_name"; then - kc describe pod --selector="olm.owner=$package_csv_name" - - crashed_containers="$( kc get pod --selector="olm.owner=$package_csv_name" --output=json )" - crashed_containers="$( jq <<< "$crashed_containers" --raw-output \ - '.items[] | { - pod: .metadata.name, - container: .status.containerStatuses[] | select(.restartCount > 0).name - } | [.pod, .container] | @tsv' )" - - test -z "$crashed_containers" || while IFS=$'\t' read -r pod container; do - echo; echo "$pod/$container" restarted: - kc logs --container="$container" --previous --tail='-1' "pod/$pod" - done <<< "$crashed_containers" - - exit 1 - fi - - exit 0 - - # Create a client Pod from which commands can be executed. - client_image="$( kc get deploy --selector="olm.owner=$package_csv_name" --output=json | - jq --raw-output '.items[0].spec.template.spec.containers[] | select(.name == "operator").image' )" - client_image="${client_image/postgres-operator/pgo-client}" - - subscription_ownership="$( kc get "subscription.operators.coreos.com/$package_name" --output=json )" - subscription_ownership="$( jq <<< "$subscription_ownership" '{ - apiVersion, kind, name: .metadata.name, uid: .metadata.uid - }' )" - - role_secret_json="$( jq <<< '{}' \ - --arg rolename admin \ - '{ - apiVersion: "v1", kind: "Secret", - metadata: { - name: "pgorole-\($rolename)", - labels: { "pgo-pgorole": "true", rolename: $rolename } - }, - stringData: { permissions: "*", rolename: $rolename } - }' )" - user_secret_json="$( jq <<< '{}' \ - --arg password "${RANDOM}${RANDOM}${RANDOM}" \ - --arg rolename admin \ - --arg username admin \ - '{ - apiVersion: "v1", kind: "Secret", - metadata: { - name: "pgouser-\($username)", - labels: { "pgo-pgouser": "true", username: $username } - }, - stringData: { username: $username, password: $password, roles: $rolename } - }' )" - - client_job_json="$( jq <<< '{}' \ - --arg image "$client_image" \ - --argjson subscription "$subscription_ownership" \ - '{ - apiVersion: "batch/v1", kind: "Job", - metadata: { name: "pgo-client", ownerReferences: [ $subscription ] }, - spec: { template: { spec: { - dnsPolicy: "ClusterFirst", - restartPolicy: "OnFailure", - containers: [{ - name: "client", - image: $image, - imagePullPolicy: "IfNotPresent", - command: ["tail", "-f", "/dev/null"], - env: [ - { name: "PGO_APISERVER_URL", value: "https://postgres-operator:8443" }, - { name: "PGOUSERNAME", valueFrom: { secretKeyRef: { name: "pgouser-admin", key: "username" } } }, - { name: "PGOUSERPASS", valueFrom: { secretKeyRef: { name: "pgouser-admin", key: "password" } } }, - { name: "PGO_CA_CERT", value: "/etc/pgo/certificates/tls.crt" }, - { name: "PGO_CLIENT_CERT", value: "/etc/pgo/certificates/tls.crt" }, - { name: "PGO_CLIENT_KEY", value: "/etc/pgo/certificates/tls.key" } - ], - volumeMounts: [{ mountPath: 
"/etc/pgo/certificates", name: "certificates" }] - }], - volumes: [{ name: "certificates", secret: { secretName: "pgo.tls" } }] - } } } - }' )" - kc expose deploy postgres-operator - kc create --filename=- <<< "$role_secret_json" - kc create --filename=- <<< "$user_secret_json" - kc create --filename=- <<< "$client_job_json" -) - -scorecard() ( - operator_namespace="$1" - sdk_version="$2" - - kc() { kubectl --namespace="$operator_namespace" "$@"; } - - # Create a Secret that contains a `kubectl` configuration file to authenticate with `scorecard-proxy`. - # See https://github.com/operator-framework/operator-sdk/blob/master/doc/test-framework/scorecard.md - scorecard_username="$( jq <<< '{}' \ - --arg namespace "$operator_namespace" \ - '{ apiVersion: "", kind:"", "uid":"", name: "scorecard", Namespace: $namespace }' )" - scorecard_kubeconfig="$( jq <<< '{}' \ - --arg namespace "$operator_namespace" \ - --arg username "$scorecard_username" \ - '{ - apiVersion: "v1", kind: "Config", - clusters: [{ - name: "proxy-server", - cluster: { - server: "http://\($username | @base64)@localhost:8889", - "insecure-skip-tls-verify": true - } - }], - users: [{ - name: "admin/proxy-server", - user: { - username: ($username | @base64), - password: "unused" - } - }], - contexts: [{ - name: "\($namespace)/proxy-server", - context: { - cluster: "proxy-server", - user: "admin/proxy-server" - } - }], - "current-context": "\($namespace)/proxy-server", - preferences: {} - }' )" - kc delete secret scorecard-kubeconfig --ignore-not-found - kc create secret generic scorecard-kubeconfig --from-literal="kubeconfig=$scorecard_kubeconfig" - - # Inject a `scorecard-proxy` Container into the main Deployment and configure other containers - # to make Kubernetes API calls through it. 
- yq_script="$(mktemp)" - jq > "$yq_script" <<< '{}' \ - --arg image "quay.io/operator-framework/scorecard-proxy:v$sdk_version" \ - '{ - "spec.template.spec.volumes[+]": { - name: "scorecard-kubeconfig", - secret: { - secretName: "scorecard-kubeconfig", - items: [{ key: "kubeconfig", path: "config" }] - } - }, - "spec.template.spec.containers[*].volumeMounts[+]": { - name: "scorecard-kubeconfig", - mountPath: "/scorecard-secret" - }, - "spec.template.spec.containers[*].env[+]": { - name: "KUBECONFIG", - value: "/scorecard-secret/config" - }, - "spec.template.spec.containers[+]": { - name: "scorecard-proxy", - image: $image, imagePullPolicy: "Always", - env: [{ - name: "WATCH_NAMESPACE", - valueFrom: { fieldRef: { apiVersion: "v1", fieldPath: "metadata.namespace" } } - }], - ports: [{ name: "proxy", containerPort: 8889 }] - } - }' - KUBE_EDITOR="yq write --inplace --script=$yq_script" kc edit deploy postgres-operator - rm "$yq_script" - - kc rollout status deploy postgres-operator --watch -) - -"$@" diff --git a/installers/olm/postgresoperator.crd.descriptions.yaml b/installers/olm/postgresoperator.crd.descriptions.yaml deleted file mode 100644 index 5d76dd4e0c..0000000000 --- a/installers/olm/postgresoperator.crd.descriptions.yaml +++ /dev/null @@ -1,222 +0,0 @@ -# https://github.com/openshift/console/tree/master/frontend/packages/operator-lifecycle-manager/src/components/descriptors -- name: pgclusters.crunchydata.com - kind: Pgcluster - version: v1 - displayName: Postgres Primary Cluster Member - description: Represents a Postgres primary cluster member - resources: - - { kind: Pgcluster, version: v1 } - - { kind: ConfigMap, version: v1 } - - { kind: Deployment, version: v1 } - - { kind: Job, version: v1 } - - { kind: Pod, version: v1 } - - { kind: ReplicaSet, version: v1 } - - { kind: Secret, version: v1 } - - { kind: Service, version: v1 } - - { kind: PersistentVolumeClaim, version: v1 } - specDescriptors: - - path: ccpimage - displayName: PostgreSQL Image - description: The Crunchy PostgreSQL image to use. Possible values are "crunchy-postgres-ha" and "crunchy-postgres-gis-ha" - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - - path: ccpimagetag - displayName: PostgreSQL Image Tag - description: The tag of the PostgreSQL image to use. Example is "${CCP_IMAGE_TAG}" - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - - path: clustername - displayName: PostgreSQL Cluster name - description: The name that is assigned to this specific PostgreSQL cluster - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - - path: database - displayName: Initial PostgreSQL database name - description: The name of the initial database to be created inside of the PostgreSQL cluster, e.g. "hippo" - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - - path: exporterport - displayName: PostgreSQL Monitor Port - description: The port to use for the PostgreSQL metrics exporter used for cluster monitoring, e.g. "9187" - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:number' - - path: name - displayName: PostgreSQL CRD name - description: The name of the CRD entry. Should match the PostgreSQL Cluster name - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - - path: pgbadgerport - displayName: pgBadger Port - description: The port to use for the pgBadger PostgreSQL query analysis service, e.g. 
"10000" - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:number' - - path: port - displayName: PostgreSQL Port - description: The port to use for the PostgreSQL cluster, e.g. "5432" - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:number' - - - path: rootsecretname - displayName: PostgreSQL superuser credentials - description: The name of the Secret that contains the PostgreSQL superuser credentials - x-descriptors: - - 'urn:alm:descriptor:io.kubernetes:Secret' - - path: primarysecretname - displayName: PostgreSQL support service credentials - description: The name of the Secret that contains the credentials used for managing cluster instance authentication, e.g. connections for replicas - x-descriptors: - - 'urn:alm:descriptor:io.kubernetes:Secret' - - path: usersecretname - displayName: PostgreSQL user credentials - description: The name of the Secret that contains the PostgreSQL user credentials for logging into the PostgreSQL cluster - x-descriptors: - - 'urn:alm:descriptor:io.kubernetes:Secret' - - # `operator-sdk scorecard` expects this field to have a descriptor. - - path: PrimaryStorage - displayName: PostgreSQL Primary Storage - description: Attributes that help set the primary storage of a PostgreSQL cluster - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:fieldGroup:PrimaryStorage' - - path: PrimaryStorage.name - displayName: PostgreSQL Primary Storage Name - description: Contains the name of the PostgreSQL cluster to associate with this storage. Should match the Cluster name - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:fieldGroup:PrimaryStorage' - - 'urn:alm:descriptor:com.tectonic.ui:text' - - path: PrimaryStorage.storageclass - displayName: PostgreSQL Primary StorageClass - description: Contains the storage class used for the primary PostgreSQL instance of the cluster - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:fieldGroup:PrimaryStorage' - - 'urn:alm:descriptor:io.kubernetes:StorageClass' - - path: PrimaryStorage.accessmode - displayName: PostgreSQL Primary StorageClass Access Mode - description: The access mode for the storage class, e.g. "ReadWriteOnce" - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:fieldGroup:PrimaryStorage' - - 'urn:alm:descriptor:com.tectonic.ui:select:ReadWriteOnce' - - 'urn:alm:descriptor:com.tectonic.ui:select:ReadWriteMany' - - path: PrimaryStorage.size - displayName: PostgreSQL Primary Data PVC Size - description: The size of the PVC that will store the data for the primary PostgreSQL instance, e.g. 
"1G" - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:fieldGroup:PrimaryStorage' - - 'urn:alm:descriptor:com.tectonic.ui:text' - - - path: status - displayName: Deprecated - description: Deprecated - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - - 'urn:alm:descriptor:com.tectonic.ui:advanced' - - path: userlabels - displayName: User defined labels - description: A set of labels that help the PostgreSQL Operator manage a PostgreSQL cluster - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - - statusDescriptors: - - path: message - displayName: Initialization Message - description: Outputs a human readable message of the status of if the PostgreSQL cluster initialization - x-descriptors: - - 'urn:alm:descriptor:text' - - path: state - displayName: Initialization State - description: Outputs the state of if the PostgreSQL cluster was initialized - x-descriptors: - - 'urn:alm:descriptor:text' - -- name: pgreplicas.crunchydata.com - kind: Pgreplica - version: v1 - displayName: Postgres Replica Cluster Member - description: Represents a Postgres replica cluster member - resources: - - { kind: Pgreplica, version: v1 } - - { kind: ConfigMap, version: v1 } - - { kind: Deployment, version: v1 } - - { kind: Job, version: v1 } - - { kind: Pod, version: v1 } - - { kind: ReplicaSet, version: v1 } - - { kind: Secret, version: v1 } - - { kind: Service, version: v1 } - - { kind: PersistentVolumeClaim, version: v1 } - specDescriptors: - - path: size - displayName: Size - description: The desired number of member Pods for the deployment. - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:podCount' - statusDescriptors: - - path: message - displayName: Message - description: Message - x-descriptors: - - 'urn:alm:descriptor:text' - - path: state - displayName: State - description: State - x-descriptors: - - 'urn:alm:descriptor:text' - -- name: pgpolicies.crunchydata.com - kind: Pgpolicy - version: v1 - displayName: Postgres SQL Policy - description: Represents a Postgres sql policy - resources: - - { kind: Pgpolicy, version: v1 } - specDescriptors: - - path: name - displayName: Name - description: The pgpolicy name. - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - - path: sql - displayName: SQL - description: The pgpolicy sql. - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - statusDescriptors: - - path: message - displayName: Message - description: Message - x-descriptors: - - 'urn:alm:descriptor:text' - - path: state - displayName: State - description: State - x-descriptors: - - 'urn:alm:descriptor:text' - -- name: pgtasks.crunchydata.com - kind: Pgtask - version: v1 - displayName: Postgres workflow task - description: Represents a Postgres workflow task - resources: - - { kind: Pgtask, version: v1 } - specDescriptors: - - path: name - displayName: Name - description: The pgtask name. - x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - - path: tasktype - displayName: TaskType - description: The pgtask type. 
- x-descriptors: - - 'urn:alm:descriptor:com.tectonic.ui:text' - statusDescriptors: - - path: message - displayName: Message - description: Message - x-descriptors: - - 'urn:alm:descriptor:text' - - path: state - displayName: State - description: State - x-descriptors: - - 'urn:alm:descriptor:text' diff --git a/installers/olm/postgresoperator.crd.examples.yaml b/installers/olm/postgresoperator.crd.examples.yaml deleted file mode 100644 index d7783c2707..0000000000 --- a/installers/olm/postgresoperator.crd.examples.yaml +++ /dev/null @@ -1,47 +0,0 @@ ---- -apiVersion: crunchydata.com/v1 -kind: Pgcluster -metadata: - name: example - labels: { archive: 'false' } -spec: - name: example - clustername: example - ccpimage: crunchy-postgres-ha - ccpimagetag: '${CCP_IMAGE_TAG}' - PrimaryStorage: - accessmode: ReadWriteOnce - size: 1G - storageclass: standard - storagetype: dynamic - database: example - exporterport: '9187' - pgbadgerport: '10000' - port: '5432' - primarysecretname: example-primaryuser - rootsecretname: example-postgresuser - usersecretname: example-primaryuser - userlabels: { archive: 'false' } - ---- -apiVersion: crunchydata.com/v1 -kind: Pgreplica -metadata: - name: example -spec: {} -status: {} - ---- -apiVersion: crunchydata.com/v1 -kind: Pgpolicy -metadata: - name: example -spec: {} -status: {} - ---- -apiVersion: crunchydata.com/v1 -kind: Pgtask -metadata: - name: example -spec: {} diff --git a/installers/olm/postgresoperator.crd.yaml b/installers/olm/postgresoperator.crd.yaml deleted file mode 100644 index f39ac244fc..0000000000 --- a/installers/olm/postgresoperator.crd.yaml +++ /dev/null @@ -1,100 +0,0 @@ ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: pgclusters.crunchydata.com -spec: - group: crunchydata.com - names: - kind: Pgcluster - listKind: PgclusterList - plural: pgclusters - singular: pgcluster - scope: Namespaced - version: v1 - validation: - openAPIV3Schema: - properties: - spec: - properties: - clustername: { type: string } - ccpimage: { type: string } - ccpimagetag: { type: string } - database: { type: string } - exporterport: { type: string } - name: { type: string } - pgbadgerport: { type: string } - primarysecretname: { type: string } - PrimaryStorage: { type: object } - port: { type: string } - rootsecretname: { type: string } - status: { type: string } - userlabels: { type: object } - usersecretname: { type: string } - status: - properties: - state: { type: string } - message: { type: string } ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: pgpolicies.crunchydata.com -spec: - group: crunchydata.com - names: - kind: Pgpolicy - listKind: PgpolicyList - plural: pgpolicies - singular: pgpolicy - scope: Namespaced - version: v1 - validation: - openAPIV3Schema: - properties: - status: - properties: - message: { type: string } - state: { type: string } ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: pgreplicas.crunchydata.com -spec: - group: crunchydata.com - names: - kind: Pgreplica - listKind: PgreplicaList - plural: pgreplicas - singular: pgreplica - scope: Namespaced - version: v1 - validation: - openAPIV3Schema: - properties: - status: - properties: - message: { type: string } - state: { type: string } ---- -apiVersion: apiextensions.k8s.io/v1beta1 -kind: CustomResourceDefinition -metadata: - name: pgtasks.crunchydata.com -spec: - group: crunchydata.com - names: - kind: Pgtask - listKind: PgtaskList - plural: 
pgtasks - singular: pgtask - scope: Namespaced - version: v1 - validation: - openAPIV3Schema: - properties: - status: - properties: - message: { type: string } - state: { type: string } diff --git a/installers/olm/postgresoperator.csv.images.yaml b/installers/olm/postgresoperator.csv.images.yaml deleted file mode 100644 index 2467864baf..0000000000 --- a/installers/olm/postgresoperator.csv.images.yaml +++ /dev/null @@ -1,24 +0,0 @@ -# https://github.com/operator-framework/operator-lifecycle-manager/blob/0.13.0/doc/contributors/design-proposals/related-images.md -- { name: CCP_IMAGE_PREFIX, value: '${CCP_IMAGE_PREFIX}' } -- { name: CCP_IMAGE_TAG, value: '${CCP_IMAGE_TAG}' } -- { name: PGO_IMAGE_PREFIX, value: '${PGO_IMAGE_PREFIX}' } -- { name: PGO_IMAGE_TAG, value: '${PGO_IMAGE_TAG}' } - -- { name: RELATED_IMAGE_PGO_BACKREST, value: '${PGO_IMAGE_PREFIX}/pgo-backrest:${PGO_IMAGE_TAG}' } -- { name: RELATED_IMAGE_PGO_BACKREST_REPO, value: '${PGO_IMAGE_PREFIX}/pgo-backrest-repo:${PGO_IMAGE_TAG}' } -- { name: RELATED_IMAGE_PGO_BACKREST_REPO_SYNC, value: '${PGO_IMAGE_PREFIX}/pgo-backrest-repo-sync:${PGO_IMAGE_TAG}' } -- { name: RELATED_IMAGE_PGO_BACKREST_RESTORE, value: '${PGO_IMAGE_PREFIX}/pgo-backrest-restore:${PGO_IMAGE_TAG}' } -- { name: RELATED_IMAGE_PGO_CLIENT, value: '${PGO_IMAGE_PREFIX}/pgo-client:${PGO_IMAGE_TAG}' } -- { name: RELATED_IMAGE_PGO_RMDATA, value: '${PGO_IMAGE_PREFIX}/pgo-rmdata:${PGO_IMAGE_TAG}' } -- { name: RELATED_IMAGE_PGO_SQL_RUNNER, value: '${PGO_IMAGE_PREFIX}/pgo-sqlrunner:${PGO_IMAGE_TAG}' } -- { name: RELATED_IMAGE_CRUNCHY_POSTGRES_EXPORTER, value: '${PGO_IMAGE_PREFIX}/crunchy-postgres-exporter:${PGO_IMAGE_TAG}' } - -- { name: RELATED_IMAGE_CRUNCHY_ADMIN, value: '${CCP_IMAGE_PREFIX}/crunchy-admin:${CCP_IMAGE_TAG}' } -- { name: RELATED_IMAGE_CRUNCHY_BACKREST_RESTORE, value: '${CCP_IMAGE_PREFIX}/crunchy-backrest-restore:${CCP_IMAGE_TAG}' } -- { name: RELATED_IMAGE_CRUNCHY_PGADMIN, value: '${CCP_IMAGE_PREFIX}/crunchy-pgadmin4:${CCP_IMAGE_TAG}' } -- { name: RELATED_IMAGE_CRUNCHY_PGBADGER, value: '${CCP_IMAGE_PREFIX}/crunchy-pgbadger:${CCP_IMAGE_TAG}' } -- { name: RELATED_IMAGE_CRUNCHY_PGBOUNCER, value: '${CCP_IMAGE_PREFIX}/crunchy-pgbouncer:${CCP_IMAGE_TAG}' } -- { name: RELATED_IMAGE_CRUNCHY_PGDUMP, value: '${CCP_IMAGE_PREFIX}/crunchy-pgdump:${CCP_IMAGE_TAG}' } -- { name: RELATED_IMAGE_CRUNCHY_PGRESTORE, value: '${CCP_IMAGE_PREFIX}/crunchy-pgrestore:${CCP_IMAGE_TAG}' } -- { name: RELATED_IMAGE_CRUNCHY_POSTGRES_HA, value: '${CCP_IMAGE_PREFIX}/crunchy-postgres-ha:${CCP_IMAGE_TAG}' } -- { name: RELATED_IMAGE_CRUNCHY_POSTGRES_GIS_HA, value: '${CCP_IMAGE_PREFIX}/crunchy-postgres-gis-ha:${CCP_POSTGIS_IMAGE_TAG}' } diff --git a/installers/olm/postgresoperator.csv.yaml b/installers/olm/postgresoperator.csv.yaml deleted file mode 100644 index 18fbdb7fa1..0000000000 --- a/installers/olm/postgresoperator.csv.yaml +++ /dev/null @@ -1,273 +0,0 @@ -# See https://godoc.org/github.com/operator-framework/api/pkg/operators/v1alpha1#ClusterServiceVersion -apiVersion: operators.coreos.com/v1alpha1 -kind: ClusterServiceVersion -metadata: - name: 'postgresoperator.v${PGO_VERSION}' - annotations: - certified: 'false' - support: crunchydata.com - - # The following affect how the package is indexed at OperatorHub.io: - # https://operatorhub.io/?category=Database - categories: Database - capabilities: Auto Pilot - description: Enterprise open source PostgreSQL-as-a-Service - - # The following appear on the details page at OperatorHub.io: - # https://operatorhub.io/operator/postgresql - 
createdAt: 2019-12-31 19:40Z - containerImage: '${PGO_IMAGE_PREFIX}/postgres-operator:${PGO_IMAGE_TAG}' - repository: https://github.com/CrunchyData/postgres-operator - -spec: - # The following affect how the package is indexed at OperatorHub.io: - # https://operatorhub.io/ - displayName: Crunchy PostgreSQL for Kubernetes - provider: { name: Crunchy Data } - keywords: - - postgres - - postgresql - - database - - sql - - operator - - crunchy data - - # The following appear on the details page at OperatorHub.io: - # https://operatorhub.io/operator/postgresql - description: '' # description.*.md - version: '${PGO_VERSION}' - links: - - name: Crunchy Data - url: https://www.crunchydata.com/ - - name: Documentation - url: 'https://access.crunchydata.com/documentation/postgres-operator/' - maintainers: - - name: Crunchy Data - email: info@crunchydata.com - - minKubeVersion: 1.11.0 - maturity: stable - labels: - alm-owner-enterprise-app: postgres-operator - alm-status-descriptors: 'postgres-operator.v${PGO_VERSION}' - - customresourcedefinitions: - owned: {} # postgresoperator.crd.descriptions.yaml - - installModes: - - { type: OwnNamespace, supported: true } - - { type: SingleNamespace, supported: true } - - { type: MultiNamespace, supported: true } - - { type: AllNamespaces, supported: false } - - install: - strategy: deployment - spec: - clusterPermissions: - - serviceAccountName: postgres-operator - rules: - # dynamic namespace mode - - apiGroups: - - '' - resources: - - namespaces - verbs: - - get - - list - - watch - - create - - update - - delete - # reconcile rbac - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - create - - update - - delete - - apiGroups: - - rbac.authorization.k8s.io - resources: - - roles - - rolebindings - verbs: - - get - - create - - update - - delete - - apiGroups: - - '' - resources: - - configmaps - - endpoints - - pods - - pods/exec - - pods/log - - replicasets - - secrets - - services - - persistentvolumeclaims - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - apps - resources: - - deployments - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - batch - resources: - - jobs - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - apiGroups: - - crunchydata.com - resources: - - pgclusters - - pgpolicies - - pgreplicas - - pgtasks - verbs: - - get - - list - - watch - - create - - patch - - update - - delete - - deletecollection - - permissions: - - serviceAccountName: postgres-operator - rules: - - apiGroups: - - '' - resources: - - configmaps - - secrets - verbs: - - get - - list - - create - - update - - delete - - apiGroups: - - '' - resources: - - serviceaccounts - verbs: - - get - - deployments: - - name: postgres-operator - spec: - replicas: 1 - selector: - matchLabels: - name: postgres-operator - vendor: crunchydata - template: - metadata: - labels: - name: postgres-operator - vendor: crunchydata - spec: - serviceAccountName: postgres-operator - containers: - - name: apiserver - image: '${PGO_IMAGE_PREFIX}/pgo-apiserver:${PGO_IMAGE_TAG}' - imagePullPolicy: IfNotPresent - ports: - - containerPort: 8443 - readinessProbe: - httpGet: - path: /healthz - port: 8443 - scheme: HTTPS - initialDelaySeconds: 15 - periodSeconds: 5 - livenessProbe: - httpGet: - path: /healthz - port: 8443 - scheme: HTTPS - initialDelaySeconds: 15 - periodSeconds: 5 - env: - - { 
name: NAMESPACE, valueFrom: { fieldRef: { fieldPath: "metadata.annotations['olm.targetNamespaces']" } } } - - { name: PGO_INSTALLATION_NAME, valueFrom: { fieldRef: { fieldPath: "metadata.namespace" } } } - - { name: PGO_OPERATOR_NAMESPACE, valueFrom: { fieldRef: { fieldPath: "metadata.namespace" } } } - - - { name: CRUNCHY_DEBUG, value: 'false' } - - { name: EVENT_ADDR, value: 'localhost:4150' } - - { name: PORT, value: '8443' } - - - name: operator - image: '${PGO_IMAGE_PREFIX}/postgres-operator:${PGO_IMAGE_TAG}' - imagePullPolicy: IfNotPresent - env: - - { name: NAMESPACE, valueFrom: { fieldRef: { fieldPath: "metadata.annotations['olm.targetNamespaces']" } } } - - { name: PGO_INSTALLATION_NAME, valueFrom: { fieldRef: { fieldPath: "metadata.namespace" } } } - - { name: PGO_OPERATOR_NAMESPACE, valueFrom: { fieldRef: { fieldPath: "metadata.namespace" } } } - - - { name: CRUNCHY_DEBUG, value: 'false' } - - { name: EVENT_ADDR, value: 'localhost:4150' } - - - name: scheduler - image: '${PGO_IMAGE_PREFIX}/pgo-scheduler:${PGO_IMAGE_TAG}' - imagePullPolicy: IfNotPresent - livenessProbe: - exec: - command: [ - "bash", - "-c", - "test -n \"$(find /tmp/scheduler.hb -newermt '61 sec ago')\"" - ] - failureThreshold: 2 - initialDelaySeconds: 60 - periodSeconds: 60 - env: - - { name: NAMESPACE, valueFrom: { fieldRef: { fieldPath: "metadata.annotations['olm.targetNamespaces']" } } } - - { name: PGO_INSTALLATION_NAME, valueFrom: { fieldRef: { fieldPath: "metadata.namespace" } } } - - { name: PGO_OPERATOR_NAMESPACE, valueFrom: { fieldRef: { fieldPath: "metadata.namespace" } } } - - - { name: CRUNCHY_DEBUG, value: 'false' } - - { name: EVENT_ADDR, value: 'localhost:4150' } - - { name: TIMEOUT, value: '3600' } - - - name: event - image: '${PGO_IMAGE_PREFIX}/pgo-event:${PGO_IMAGE_TAG}' - imagePullPolicy: IfNotPresent - livenessProbe: - httpGet: - path: /ping - port: 4151 - initialDelaySeconds: 15 - periodSeconds: 5 - env: - - { name: TIMEOUT, value: '3600' } diff --git a/installers/olm/postgresql.package.yaml b/installers/olm/postgresql.package.yaml deleted file mode 100644 index c9dfc625bb..0000000000 --- a/installers/olm/postgresql.package.yaml +++ /dev/null @@ -1,5 +0,0 @@ -packageName: '${PACKAGE_NAME}' -defaultChannel: stable -channels: - - name: stable - currentCSV: 'postgresoperator.v${PGO_VERSION}' diff --git a/installers/olm/verify.sh b/installers/olm/verify.sh deleted file mode 100755 index 731ef5036e..0000000000 --- a/installers/olm/verify.sh +++ /dev/null @@ -1,107 +0,0 @@ -#!/usr/bin/env bash -# vim: set noexpandtab : -set -eu - -# Simplify `push_trap_exit()` by always having something. -trap 'date' EXIT - -push_trap_exit() { - local -a array - read -ra array <<< "$( trap -p EXIT )" - eval "local previous=${array[@]:2:(${#array[@]}-3)}" - trap "$1; $previous" EXIT -} - -# Store anything in a single temporary directory that gets cleaned up. -export TMPDIR="$(mktemp --directory)" -push_trap_exit "rm -rf '$TMPDIR'" - -if command -v oc >/dev/null; then - kubectl() { oc "$@"; } -elif ! command -v kubectl >/dev/null; then - # Use a version of `kubectl` that matches the Kubernetes server. - eval "kubectl() { kubectl-$( kubectl-1.16 version --output=json | - jq --raw-output '.serverVersion | .major + "." + .minor')"' "$@"; }' -fi - -# Find the OLM operator deployment. 
-olm_deployments="$( kubectl get deploy --all-namespaces --selector='app=olm-operator' --output=json )" -if [ '1' != "$( jq <<< "$olm_deployments" '.items | length' )" ] || - [ 'olm-operator' != "$( jq --raw-output <<< "$olm_deployments" '.items[0].metadata.name' )" ] -then - >&2 echo Unable to find the OLM operator! - exit 1 -fi -olm_namespace="$( jq --raw-output <<< "$olm_deployments" '.items[0].metadata.namespace' )" - -# Create a Namespace in which to deploy and test. -test_namespace="$( kubectl create --filename=- --output=jsonpath='{.metadata.name}' <<< '{ - "apiVersion": "v1", "kind": "Namespace", - "metadata": { "generateName": "olm-test-" } -}' )" -echo 'namespace "'"$test_namespace"'" created' -push_trap_exit "kubectl delete namespace '$test_namespace'" - -kc() { kubectl --namespace="$test_namespace" "$@"; } - -# Clean up anything created by the Subscription, especially CustomResourceDefinitions. -push_trap_exit "kubectl delete clusterrole,clusterrolebinding --selector='olm.owner.namespace=$test_namespace'" -push_trap_exit "kubectl delete --ignore-not-found --filename='./package/${PGO_VERSION}/'" - -# Install the package. -./install.sh operator "$test_namespace" "$test_namespace" - -# Turn off OLM while we manipulate the operator deployment. -# OLM crashes when a running Deployment doesn't match the CSV. ->&2 echo $(tput bold)Turning off the OLM operator!$(tput sgr0) -kubectl --namespace="$olm_namespace" scale --replicas=0 deploy olm-operator -push_trap_exit "kubectl --namespace='$olm_namespace' scale --replicas=1 deploy olm-operator" -kubectl --namespace="$olm_namespace" rollout status deploy olm-operator --timeout=1m - -# Inject the scorecard proxy. -./install.sh scorecard "$test_namespace" "$OLM_SDK_VERSION" - - -# Run the OLM test suite against each example stored in CSV annotations. -examples_array="$( yq read \ - "./package/${PGO_VERSION}/postgresoperator.v${PGO_VERSION}.clusterserviceversion.yaml" \ - metadata.annotations.alm-examples )" - -error=0 -for index in $(seq 0 $(jq <<< "$examples_array" 'length - 1')); do - jq > "${TMPDIR}/resource.json" <<< "$examples_array" ".[$index]" - jq > "${TMPDIR}/scorecard.json" <<< '{}' \ - --arg resource "${TMPDIR}/resource.json" \ - --arg namespace "$test_namespace" \ - --arg version "$PGO_VERSION" \ - '{ scorecard: { bundle: "./package", plugins: [ - { basic: { - "cr-manifest": $resource, - "crds-dir": "./package/\($version)", - "csv-path": "./package/\($version)/postgresoperator.v\($version).clusterserviceversion.yaml", - "namespace": $namespace, - "olm-deployed": true - } }, - { olm: { - "cr-manifest": $resource, - "crds-dir": "./package/\($version)", - "csv-path": "./package/\($version)/postgresoperator.v\($version).clusterserviceversion.yaml", - "namespace": $namespace, - "olm-deployed": true - } } - ] } }' - - echo "Verifying metadata.annotations.alm-examples<[$index]>:" - jq '{ apiVersion, kind, name: .metadata.name }' "${TMPDIR}/resource.json" - - start="$(date --utc +'%FT%TZ')" - if operator-sdk scorecard --config "${TMPDIR}/scorecard.json"; then - : # no-op to preserve the exit code above - else - echo "Error: $?" 
- #kc logs --container='operator' --selector='name=postgres-operator' --since-time="$start" - error=1 - fi -done - -exit $error diff --git a/installers/seal.svg b/installers/seal.svg deleted file mode 100644 index 686d0c974d..0000000000 --- a/installers/seal.svg +++ /dev/null @@ -1,131 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/internal/apiserver/backrestservice/backrestimpl.go b/internal/apiserver/backrestservice/backrestimpl.go deleted file mode 100644 index 5f6d5644c5..0000000000 --- a/internal/apiserver/backrestservice/backrestimpl.go +++ /dev/null @@ -1,662 +0,0 @@ -package backrestservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "errors" - "fmt" - "io/ioutil" - "strconv" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver/backupoptions" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/util" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/fields" -) - -const containername = "database" - -// pgBackRestInfoCommand is the baseline command used for getting the -// pgBackRest info -var pgBackRestInfoCommand = []string{"pgbackrest", "info", "--output", "json"} - -// repoTypeFlagS3 is used for getting the pgBackRest info for a repository that -// is stored in S3 -var repoTypeFlagS3 = []string{"--repo1-type", "s3"} - -// noRepoS3VerifyTLS is used to disable SSL certificate verification when getting -// the pgBackRest info for a repository that is stored in S3 -var noRepoS3VerifyTLS = "--no-repo1-s3-verify-tls" - -// CreateBackup ... -// pgo backup mycluster -// pgo backup --selector=name=mycluster -func CreateBackup(request *msgs.CreateBackrestBackupRequest, ns, pgouser string) msgs.CreateBackrestBackupResponse { - resp := msgs.CreateBackrestBackupResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - if request.BackupOpts != "" { - err := backupoptions.ValidateBackupOpts(request.BackupOpts, request) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - clusterList := crv1.PgclusterList{} - var err error - if request.Selector != "" { - //use the selector instead of an argument list to filter on - cl, err := apiserver.Clientset. - CrunchydataV1().Pgclusters(ns). 
- List(metav1.ListOptions{LabelSelector: request.Selector}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - clusterList = *cl - - if len(clusterList.Items) == 0 { - log.Debug("no clusters found") - resp.Results = append(resp.Results, "no clusters found with that selector") - return resp - } else { - newargs := make([]string, 0) - for _, cluster := range clusterList.Items { - newargs = append(newargs, cluster.Spec.Name) - } - request.Args = newargs - } - - } - - // Convert the names of all pgclusters specified for the request to a pgclusterList - if clusterList.Items == nil { - clusterList, err = clusterNamesToPGClusterList(ns, request.Args) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - // Return an error if any clusters identified for the backup are in standby mode. Backups - // from standby servers are not allowed since the cluster is following a remote primary, - // which itself is responsible for performing any backups for the cluster as required. - if hasStandby, standbyClusters := apiserver.PGClusterListHasStandby(clusterList); hasStandby { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("Request rejected, unable to create backups for clusters "+ - "%s: %s.", strings.Join(standbyClusters, ","), apiserver.ErrStandbyNotAllowed.Error()) - return resp - } - - for _, clusterName := range request.Args { - log.Debugf("create backrestbackup called for %s", clusterName) - taskName := "backrest-backup-" + clusterName - - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(clusterName, metav1.GetOptions{}) - if kubeapi.IsNotFound(err) { - resp.Status.Code = msgs.Error - resp.Status.Msg = clusterName + " was not found, verify cluster name" - return resp - } else if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // check if the current cluster is not upgraded to the deployed - // Operator version. If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("%s %s", cluster.Name, msgs.UpgradeError) - return resp - } - - if cluster.Labels[config.LABEL_BACKREST] != "true" { - resp.Status.Code = msgs.Error - resp.Status.Msg = clusterName + " does not have pgbackrest enabled" - return resp - } - - err = util.ValidateBackrestStorageTypeOnBackupRestore(request.BackrestStorageType, - cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], false) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(taskName, &metav1.DeleteOptions{}) - if err != nil && !kubeapi.IsNotFound(err) { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } else { - - //remove any previous backup job - - //selector := config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_BACKREST + "=true" - selector := config.LABEL_BACKREST_COMMAND + "=" + crv1.PgtaskBackrestBackup + "," + config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_BACKREST + "=true" - deletePropagation := metav1.DeletePropagationForeground - err = apiserver.Clientset. - BatchV1().Jobs(ns). 
- DeleteCollection( - &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}, - metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - } - - //a hack sort of due to slow propagation - for i := 0; i < 3; i++ { - jobList, err := apiserver.Clientset.BatchV1().Jobs(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - } - if len(jobList.Items) > 0 { - log.Debug("sleeping a bit for delete job propagation") - time.Sleep(time.Second * 2) - } - } - - } - - //get pod name from cluster - var podname string - podname, err = getPrimaryPodName(cluster, ns) - - if err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - jobName := "backrest-" + crv1.PgtaskBackrestBackup + "-" + clusterName - log.Debugf("setting jobName to %s", jobName) - - _, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create( - getBackupParams(cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER], clusterName, taskName, crv1.PgtaskBackrestBackup, podname, "database", - util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, apiserver.Pgo.Pgo.PGOImagePrefix), request.BackupOpts, request.BackrestStorageType, operator.GetS3VerifyTLSSetting(cluster), jobName, ns, pgouser), - ) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - resp.Results = append(resp.Results, "created Pgtask "+taskName) - - } - - return resp -} - -func getBackupParams(identifier, clusterName, taskName, action, podName, containerName, imagePrefix, backupOpts, backrestStorageType, s3VerifyTLS, jobName, ns, pgouser string) *crv1.Pgtask { - var newInstance *crv1.Pgtask - spec := crv1.PgtaskSpec{} - spec.Name = taskName - spec.Namespace = ns - - spec.TaskType = crv1.PgtaskBackrest - spec.Parameters = make(map[string]string) - spec.Parameters[config.LABEL_JOB_NAME] = jobName - spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName - spec.Parameters[config.LABEL_POD_NAME] = podName - spec.Parameters[config.LABEL_CONTAINER_NAME] = containerName - // pass along the appropriate image prefix for the backup task - // this will be used by the associated backrest job - spec.Parameters[config.LABEL_IMAGE_PREFIX] = imagePrefix - spec.Parameters[config.LABEL_BACKREST_COMMAND] = action - spec.Parameters[config.LABEL_BACKREST_OPTS] = backupOpts - spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = backrestStorageType - spec.Parameters[config.LABEL_BACKREST_S3_VERIFY_TLS] = s3VerifyTLS - - newInstance = &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: taskName, - }, - Spec: spec, - } - newInstance.ObjectMeta.Labels = make(map[string]string) - newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = clusterName - newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = identifier - newInstance.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser - return newInstance -} - -func getDeployName(cluster *crv1.Pgcluster, ns string) (string, error) { - var depName string - - selector := config.LABEL_PG_CLUSTER + "=" + cluster.Spec.Name + "," + config.LABEL_SERVICE_NAME + "=" + cluster.Spec.Name - - deps, err := apiserver.Clientset. - AppsV1().Deployments(ns). 
- List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return depName, err - } - - if len(deps.Items) != 1 { - return depName, errors.New("error: deployment count is wrong for backrest backup " + cluster.Spec.Name) - } - for _, d := range deps.Items { - return d.Name, err - } - - return depName, errors.New("unknown error in backrest backup") -} - -func getPrimaryPodName(cluster *crv1.Pgcluster, ns string) (string, error) { - - //look up the backrest-repo pod name - selector := "pg-cluster=" + cluster.Spec.Name + ",pgo-backrest-repo=true" - - options := metav1.ListOptions{ - FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(), - LabelSelector: selector, - } - - repopods, err := apiserver.Clientset.CoreV1().Pods(ns).List(options) - if len(repopods.Items) != 1 { - log.Errorf("pods len != 1 for cluster %s", cluster.Spec.Name) - return "", errors.New("backrestrepo pod not found for cluster " + cluster.Spec.Name) - } - if err != nil { - log.Error(err) - return "", err - } - - repopodName := repopods.Items[0].Name - - primaryReady := false - - //make sure the primary pod is in the ready state - selector = config.LABEL_SERVICE_NAME + "=" + cluster.Spec.Name - - pods, err := apiserver.Clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return "", err - } - for _, p := range pods.Items { - if isPrimary(&p, cluster.Spec.Name) && isReady(&p) { - primaryReady = true - } - } - - if primaryReady == false { - return "", errors.New("primary pod is not in Ready state") - } - - return repopodName, err -} - -func isPrimary(pod *v1.Pod, clusterName string) bool { - if pod.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] == clusterName { - return true - } - return false - -} - -func isReady(pod *v1.Pod) bool { - readyCount := 0 - containerCount := 0 - for _, stat := range pod.Status.ContainerStatuses { - containerCount++ - if stat.Ready { - readyCount++ - } - } - if readyCount != containerCount { - return false - } - return true - -} - -// ShowBackrest ... -func ShowBackrest(name, selector, ns string) msgs.ShowBackrestResponse { - var err error - - response := msgs.ShowBackrestResponse{} - response.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - response.Items = make([]msgs.ShowBackrestDetail, 0) - - if selector == "" && name == "all" { - } else { - if selector == "" { - selector = "name=" + name - } - } - - //get a list of all clusters - clusterList, err := apiserver.Clientset. - CrunchydataV1().Pgclusters(ns). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - log.Debugf("clusters found len is %d\n", len(clusterList.Items)) - - for _, c := range clusterList.Items { - podname, err := getPrimaryPodName(&c, ns) - - if err != nil { - log.Error(err) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // so we potentially add two "pieces of detail" based on whether or not we - // have a local repository, a s3 repository, or both - storageTypes := c.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE] - - for _, storageType := range apiserver.GetBackrestStorageTypes() { - - // so the way we currently store the different repos is not ideal, and - // this is not being fixed right now, so we'll follow this logic: - // - // 1. If storage type is "local" and the string either contains "local" or - // is empty, we can add the pgBackRest info - // 2. 
if the storage type is "s3" and the string contains "s3", we can - // add the pgBackRest info - // 3. Otherwise, continue - if (storageTypes == "" && storageType != "local") || (storageTypes != "" && !strings.Contains(storageTypes, storageType)) { - continue - } - - // begin preparing the detailed response - detail := msgs.ShowBackrestDetail{ - Name: c.Name, - StorageType: storageType, - } - - verifyTLS, _ := strconv.ParseBool(operator.GetS3VerifyTLSSetting(&c)) - - // get the pgBackRest info using this legacy function - info, err := getInfo(c.Name, storageType, podname, ns, verifyTLS) - - // see if the function returned successfully, and if so, unmarshal the JSON - if err != nil { - log.Error(err) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - - return response - } - - if err := json.Unmarshal([]byte(info), &detail.Info); err != nil { - log.Error(err) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - - return response - } - - // append the details to the list of items - response.Items = append(response.Items, detail) - } - - } - - return response -} - -func getInfo(clusterName, storageType, podname, ns string, verifyTLS bool) (string, error) { - log.Debug("backrest info command requested") - - cmd := pgBackRestInfoCommand - - if storageType == "s3" { - cmd = append(cmd, repoTypeFlagS3...) - - if !verifyTLS { - cmd = append(cmd, noRepoS3VerifyTLS) - } - } - - output, stderr, err := kubeapi.ExecToPodThroughAPI(apiserver.RESTConfig, apiserver.Clientset, cmd, containername, podname, ns, nil) - - if err != nil { - log.Error(err, stderr) - return "", err - } - - log.Debug("output=[" + output + "]") - - log.Debug("backrest info ends") - - return output, err -} - -// Restore ... -// pgo restore mycluster --to-cluster=restored -func Restore(request *msgs.RestoreRequest, ns, pgouser string) msgs.RestoreResponse { - resp := msgs.RestoreResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - log.Debugf("Restore %v\n", request) - - if request.RestoreOpts != "" { - err := backupoptions.ValidateBackupOpts(request.RestoreOpts, request) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(request.FromCluster, metav1.GetOptions{}) - if kubeapi.IsNotFound(err) { - resp.Status.Code = msgs.Error - resp.Status.Msg = request.FromCluster + " was not found, verify cluster name" - return resp - } else if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // check if the current cluster is not upgraded to the deployed - // Operator version. If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("%s %s", cluster.Name, msgs.UpgradeError) - return resp - } - - // verify that the cluster we are restoring from has backrest enabled - if cluster.Labels[config.LABEL_BACKREST] != "true" { - resp.Status.Code = msgs.Error - resp.Status.Msg = "can't restore, cluster restoring from does not have backrest enabled" - return resp - } - - // Return an error if any clusters identified for the restore are in standby mode. Restoring - // from a standby cluster is not allowed since the cluster is following a remote primary, - // which itself is responsible for performing any restores as required for the cluster. 
- if cluster.Spec.Standby { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("Request rejected, unable to restore cluster "+ - "%s: %s.", cluster.Name, apiserver.ErrStandbyNotAllowed.Error()) - return resp - } - - // ensure the backrest storage type specified for the backup is valid and enabled in the - // cluster - err = util.ValidateBackrestStorageTypeOnBackupRestore(request.BackrestStorageType, - cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], true) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - var id string - id, err = createRestoreWorkflowTask(cluster.Name, ns) - if err != nil { - resp.Results = append(resp.Results, err.Error()) - return resp - } - - pgtask, err := getRestoreParams(request, ns, *cluster) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - pgtask.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] - pgtask.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser - pgtask.Spec.Parameters[crv1.PgtaskWorkflowID] = id - - // delete any previous restore task - err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(pgtask.Name, &metav1.DeleteOptions{}) - if err != nil && !kubeapi.IsNotFound(err) { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - //create a pgtask for the restore workflow - if _, err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(pgtask); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - resp.Results = append(resp.Results, fmt.Sprintf("restore request for %s with opts %q and pitr-target=%q", - request.FromCluster, request.RestoreOpts, request.PITRTarget)) - - resp.Results = append(resp.Results, "workflow id "+id) - - return resp -} - -func getRestoreParams(request *msgs.RestoreRequest, ns string, cluster crv1.Pgcluster) (*crv1.Pgtask, error) { - var newInstance *crv1.Pgtask - - spec := crv1.PgtaskSpec{} - spec.Namespace = ns - spec.Name = "backrest-restore-" + request.FromCluster - spec.TaskType = crv1.PgtaskBackrestRestore - spec.Parameters = make(map[string]string) - spec.Parameters[config.LABEL_BACKREST_RESTORE_FROM_CLUSTER] = request.FromCluster - spec.Parameters[config.LABEL_BACKREST_RESTORE_OPTS] = request.RestoreOpts - spec.Parameters[config.LABEL_BACKREST_PITR_TARGET] = request.PITRTarget - spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = request.BackrestStorageType - - // validate & parse nodeLabel if exists - if request.NodeLabel != "" { - if err := apiserver.ValidateNodeLabel(request.NodeLabel); err != nil { - return nil, err - } - - parts := strings.Split(request.NodeLabel, "=") - spec.Parameters[config.LABEL_NODE_LABEL_KEY] = parts[0] - spec.Parameters[config.LABEL_NODE_LABEL_VALUE] = parts[1] - - log.Debug("Restore node labels used from user entered flag") - } - - newInstance = &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Labels: map[string]string{config.LABEL_PG_CLUSTER: request.FromCluster}, - Name: spec.Name, - }, - Spec: spec, - } - return newInstance, nil -} - -func createRestoreWorkflowTask(clusterName, ns string) (string, error) { - - taskName := clusterName + "-" + crv1.PgtaskWorkflowBackrestRestoreType - - //delete any existing pgtask with the same name - if err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(taskName, &metav1.DeleteOptions{}); err != nil && !kubeapi.IsNotFound(err) { - return "", err - } - - //create 
pgtask CRD - spec := crv1.PgtaskSpec{} - spec.Namespace = ns - spec.Name = clusterName + "-" + crv1.PgtaskWorkflowBackrestRestoreType - spec.TaskType = crv1.PgtaskWorkflow - - spec.Parameters = make(map[string]string) - spec.Parameters[crv1.PgtaskWorkflowSubmittedStatus] = time.Now().Format(time.RFC3339) - spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName - - u, err := ioutil.ReadFile("/proc/sys/kernel/random/uuid") - if err != nil { - log.Error(err) - return "", err - } - spec.Parameters[crv1.PgtaskWorkflowID] = string(u[:len(u)-1]) - - newInstance := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: spec.Name, - }, - Spec: spec, - } - newInstance.ObjectMeta.Labels = make(map[string]string) - newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = clusterName - newInstance.ObjectMeta.Labels[crv1.PgtaskWorkflowID] = spec.Parameters[crv1.PgtaskWorkflowID] - - if _, err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(newInstance); err != nil { - log.Error(err) - return "", err - } - return spec.Parameters[crv1.PgtaskWorkflowID], err -} - -// clusterNamesToPGClusterList takes a list of cluster names as specified by a slice of -// strings containing cluster names and then returns a PgclusterList containing Pgcluster's -// corresponding to those names. -func clusterNamesToPGClusterList(namespace string, clusterNames []string) (crv1.PgclusterList, - error) { - selector := fmt.Sprintf("%s in(%s)", config.LABEL_NAME, strings.Join(clusterNames, ",")) - clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return crv1.PgclusterList{}, err - } - return *clusterList, nil -} diff --git a/internal/apiserver/backrestservice/backrestservice.go b/internal/apiserver/backrestservice/backrestservice.go deleted file mode 100644 index e436afb878..0000000000 --- a/internal/apiserver/backrestservice/backrestservice.go +++ /dev/null @@ -1,204 +0,0 @@ -package backrestservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/gorilla/mux" - log "github.com/sirupsen/logrus" - "net/http" -) - -// CreateBackupHandler ... 
-// pgo backup all -// pgo backup --selector=name=mycluster -// pgo backup mycluster -func CreateBackupHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /backrestbackup backrestservice backrestbackup - /*``` - Performs a backup using pgBackrest - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create Backrest Backup Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreateBackrestBackupRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreateBackrestBackupResponse" - var ns string - log.Debug("backrestservice.CreateBackupHandler called") - - var request msgs.CreateBackrestBackupRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.CREATE_BACKUP_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.CreateBackrestBackupResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = CreateBackup(&request, ns, username) - json.NewEncoder(w).Encode(resp) -} - -// ShowBackrestHandler ... -// returns a ShowBackrestResponse -func ShowBackrestHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /backrest/{name} backrestservice backrest-name - /*``` - Returns a ShowBackrestResponse that provides information about a given backup - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "name" - // description: "Backup Name" - // in: "path" - // type: "string" - // required: true - // - name: "version" - // description: "Client Version" - // in: "path" - // type: "string" - // required: true - // - name: "namespace" - // description: "Namespace" - // in: "path" - // type: "string" - // required: true - // - name: "selector" - // description: "Selector" - // in: "path" - // type: "string" - // required: true - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowBackrestResponse" - var ns string - - vars := mux.Vars(r) - - backupname := vars[config.LABEL_NAME] - - clientVersion := r.URL.Query().Get(config.LABEL_VERSION) - selector := r.URL.Query().Get(config.LABEL_SELECTOR) - namespace := r.URL.Query().Get(config.LABEL_NAMESPACE) - - log.Debugf("ShowBackrestHandler parameters name [%s] version [%s] selector [%s] namespace [%s]", backupname, clientVersion, selector, namespace) - - username, err := apiserver.Authn(apiserver.SHOW_BACKUP_PERM, w, r) - if err != nil { - return - } - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debug("backrestservice.ShowBackrestHandler GET called") - resp := msgs.ShowBackrestResponse{} - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = ShowBackrest(backupname, selector, ns) - 
json.NewEncoder(w).Encode(resp) - -} - -// RestoreHandler ... -// pgo restore mycluster --to-cluster=restored -func RestoreHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /restore backrestservice restore - /*``` - Restore a cluster with backrest - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Restore Request" - // in: "body" - // schema: - // "$ref": "#/definitions/RestoreRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/RestoreResponse" - var ns string - - log.Debug("backrestservice.RestoreHandler called") - - var request msgs.RestoreRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.RESTORE_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.RestoreResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = Restore(&request, ns, username) - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/backupoptions/backupoptionsutil.go b/internal/apiserver/backupoptions/backupoptionsutil.go deleted file mode 100644 index 196d0f1fa1..0000000000 --- a/internal/apiserver/backupoptions/backupoptionsutil.go +++ /dev/null @@ -1,230 +0,0 @@ -package backupoptions - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "bytes" - "errors" - "fmt" - "reflect" - "regexp" - "strings" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/spf13/pflag" -) - -type backupOptions interface { - validate([]string) error - getDenyListFlags() ([]string, []string) -} - -// ValidateBackupOpts validates the backup/restore options that can be provided to the various backup -// and restore utilities supported by pgo (e.g. pg_dump, pg_restore, pgBackRest, etc.) -func ValidateBackupOpts(backupOpts string, request interface{}) error { - - // some quick checks to make sure backup opts string is valid and should be processed and validated - if strings.TrimSpace(backupOpts) == "" { - return nil - } else if !strings.HasPrefix(strings.TrimSpace(backupOpts), "-") && - !strings.HasPrefix(strings.TrimSpace(backupOpts), "--") { - return errors.New("bad flag syntax. Backup options must start with '-' or '--'") - } else if strings.TrimSpace(strings.Replace(backupOpts, "-", "", -1)) == "" { - return errors.New("bad flag syntax. 
No backup options provided") - } - - // validate backup opts - backupOptions, setFlagFieldNames, err := convertBackupOptsToStruct(backupOpts, request) - if err != nil { - return err - } else { - err := backupOptions.validate(setFlagFieldNames) - - if err != nil { - return err - } - } - return nil -} - -func convertBackupOptsToStruct(backupOpts string, request interface{}) (backupOptions, []string, error) { - - parsedBackupOpts := parseBackupOpts(backupOpts) - - optsStruct, utilityName, err := createBackupOptionsStruct(backupOpts, request) - if err != nil { - return nil, nil, err - } - - structValue := reflect.Indirect(reflect.ValueOf(optsStruct)) - structType := structValue.Type() - - commandLine := pflag.NewFlagSet(utilityName+" backup-opts", pflag.ContinueOnError) - usage := new(bytes.Buffer) - commandLine.SetOutput(usage) - - for i := 0; i < structValue.NumField(); i++ { - fieldVal := structValue.Field(i) - - flag, _ := structType.Field(i).Tag.Lookup("flag") - flagShort, _ := structType.Field(i).Tag.Lookup("flag-short") - - if flag != "" || flagShort != "" { - switch fieldVal.Kind() { - case reflect.String: - commandLine.StringVarP(fieldVal.Addr().Interface().(*string), flag, flagShort, "", "") - case reflect.Int: - commandLine.IntVarP(fieldVal.Addr().Interface().(*int), flag, flagShort, 0, "") - case reflect.Bool: - commandLine.BoolVarP(fieldVal.Addr().Interface().(*bool), flag, flagShort, false, "") - case reflect.Slice: - commandLine.StringArrayVarP(fieldVal.Addr().Interface().(*[]string), flag, flagShort, nil, "") - } - } - } - - err = commandLine.Parse(parsedBackupOpts) - if err != nil { - if customErr := handleCustomParseErrors(err, usage, optsStruct); customErr != nil { - return nil, nil, customErr - } - } - - setFlagFieldNames := obtainSetFlagFieldNames(commandLine, structType) - - return optsStruct, setFlagFieldNames, nil -} - -func parseBackupOpts(backupOpts string) []string { - - newFields := []string{} - var newField string - for i, c := range backupOpts { - // if another option is found, add current option to newFields array - if !(c == ' ' && backupOpts[i+1] == '-') { - newField = newField + string(c) - } - - // append if at the end of the flag (i.e. 
if another new flag was found) or if at the end of the string - if i == len(backupOpts)-1 || c == ' ' && backupOpts[i+1] == '-' { - if len(strings.Split(newField, " ")) > 1 && !strings.Contains(strings.Split(newField, " ")[0], "=") { - splitFlagNoEqualsSign := strings.SplitN(newField, " ", 2) - if (len(splitFlagNoEqualsSign)) > 1 { - newFields = append(newFields, strings.TrimSpace(splitFlagNoEqualsSign[0])) - newFields = append(newFields, strings.TrimSpace(splitFlagNoEqualsSign[1])) - } - } else { - newFields = append(newFields, strings.TrimSpace(newField)) - } - newField = "" - } - } - - return newFields -} - -func createBackupOptionsStruct(backupOpts string, request interface{}) (backupOptions, string, error) { - - switch request.(type) { - case *msgs.CreateBackrestBackupRequest: - return &pgBackRestBackupOptions{}, "pgBackRest", nil - case *msgs.RestoreRequest, *msgs.CreateClusterRequest: - return &pgBackRestRestoreOptions{}, "pgBackRest", nil - case *msgs.CreatepgDumpBackupRequest: - if strings.Contains(backupOpts, "--dump-all") { - return &pgDumpAllOptions{}, "pg_dumpall", nil - } else { - return &pgDumpOptions{}, "pg_dump", nil - } - case *msgs.PgRestoreRequest: - return &pgRestoreOptions{}, "pg_restore", nil - case *msgs.CreateScheduleRequest: - if request.(*msgs.CreateScheduleRequest).ScheduleType == "pgbackrest" { - return &pgBackRestBackupOptions{}, "pgBackRest", nil - } - } - return nil, "", errors.New("Request type not recognized. Unable to create struct for backup opts") -} - -func isValidCompressLevel(compressLevel int) bool { - if compressLevel >= 0 && compressLevel <= 9 { - return true - } else { - return false - } -} - -// isValidRetentionRange validates that pgBackrest Full, Diff or Archive -// retention option value is set within the allowable range. -// allowed: 1-9999999 -func isValidRetentionRange(retentionRange int) bool { - return (retentionRange >= 1 && retentionRange <= 9999999) -} - -func isValidValue(vals []string, val string) bool { - isValid := false - for _, currVal := range vals { - if val == currVal { - isValid = true - return isValid - } - } - return isValid -} - -// this function checks unknown options from the backup-opts flag to validate that they are not denied -// if the option is in the deny list and error is returned, otherwise the flag is unknown to the operator -// and can be passed to pgBackRest for validation. 
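For orientation, here is a minimal, standalone sketch of the tokenize-then-parse flow that ValidateBackupOpts drives: parseBackupOpts turns the raw `--backup-opts` string into pflag-style tokens, pflag binds them to the option struct's fields, and only the flags that were actually set are later validated. The flag names, sample values, and the `main` wrapper below are illustrative only; this is not the operator's code.

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	// Tokens as parseBackupOpts would produce them for
	// the raw string "--type=diff --compress-level 3".
	tokens := []string{"--type=diff", "--compress-level", "3"}

	var backupType string
	var compressLevel int

	fs := pflag.NewFlagSet("pgBackRest backup-opts", pflag.ContinueOnError)
	fs.StringVar(&backupType, "type", "", "backup type")
	fs.IntVar(&compressLevel, "compress-level", 0, "compression level")

	if err := fs.Parse(tokens); err != nil {
		// Unknown flags end up here and would be checked against the deny list.
		fmt.Println("parse error:", err)
		return
	}

	// Analogous to obtainSetFlagFieldNames: only explicitly set flags get validated.
	fs.Visit(func(f *pflag.Flag) {
		fmt.Printf("flag %q was set\n", f.Name)
	})

	// validate()-style checks on the values that were set.
	if backupType != "full" && backupType != "diff" && backupType != "incr" {
		fmt.Println("Invalid type provided for pgBackRest backup")
	}
	if compressLevel < 0 || compressLevel > 9 {
		fmt.Println("Invalid compress level for pgBackRest backup")
	}
}
```

Flags that pflag does not recognize fall through to handleCustomParseErrors, which rejects only those on the deny list and otherwise leaves validation to pgBackRest itself: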
-func handleCustomParseErrors(err error, usage *bytes.Buffer, optsStruct backupOptions) error { - denyListFlags, denyListFlagsShort := optsStruct.getDenyListFlags() - if err.Error() == "pflag: help requested" { - pflag.Usage() - return errors.New(usage.String()) - } else if strings.Contains(err.Error(), "unknown flag") { - for _, denyListFlag := range denyListFlags { - flagMatch, err := regexp.MatchString("\\B"+denyListFlag+"$", err.Error()) - if err != nil { - return err - } else if flagMatch { - return fmt.Errorf("Flag %s is not supported for use with PGO", denyListFlag) - } - } - } else if strings.Contains(err.Error(), "unknown shorthand flag") { - for _, denyListFlagShort := range denyListFlagsShort { - denyListFlagQuotes := "'" + strings.TrimPrefix(denyListFlagShort, "-") + "'" - if strings.Contains(err.Error(), denyListFlagQuotes) { - return fmt.Errorf("Shorthand flag %s is not supported for use with PGO", denyListFlagShort) - } - } - } - return nil -} - -func obtainSetFlagFieldNames(commandLine *pflag.FlagSet, structType reflect.Type) []string { - var setFlagFieldNames []string - var visitBackupOptFlags = func(flag *pflag.Flag) { - for i := 0; i < structType.NumField(); i++ { - field := structType.Field(i) - flagName, _ := field.Tag.Lookup("flag") - flagNameShort, _ := field.Tag.Lookup("flag-short") - if flag.Name == flagName || flag.Name == flagNameShort { - setFlagFieldNames = append(setFlagFieldNames, field.Name) - } - } - } - commandLine.Visit(visitBackupOptFlags) - return setFlagFieldNames -} diff --git a/internal/apiserver/backupoptions/pgbackrestoptions.go b/internal/apiserver/backupoptions/pgbackrestoptions.go deleted file mode 100644 index 2c7a1e356e..0000000000 --- a/internal/apiserver/backupoptions/pgbackrestoptions.go +++ /dev/null @@ -1,266 +0,0 @@ -package backupoptions - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "errors" - "strings" -) - -var pgBackRestOptsDenyList = []string{ - "--cmd-ssh", - "--config", - "--config-include-path", - "--config-path", - "--link-all", - "--link-map", - "--lock-path", - "--log-timestamp", - "--neutral-umask", - "--no-neutral-umask", - "--no-online", - "--online", - "--pg-host", - "--pg-host-cmd", - "--pg-host-config", - "--pg-host-config-include-path", - "--pg-host-config-path", - "--pg-host-port", - "--pg-host-user", - "--pg-path", - "--pg-port", - "--repo-host", - "--repo-host-cmd", - "--repo-host-config", - "--repo-host-config-include-path", - "--repo-host-config-path", - "--repo-host-port", - "--repo-host-user", - "--repo-path", - "--repo-s3-bucket", - "--repo-s3-endpoint", - "--repo-s3-host", - "--repo-s3-region", - "--no-repo-s3-verify-tls", - "--repo-s3-uri-style", - "--stanza", - "--tablespace-map", - "--tablespace-map-all", -} - -type pgBackRestBackupOptions struct { - ArchiveCopy bool `flag:"archive-copy"` - NoArchiveCopy bool `flag:"no-archive-copy"` - ArchiveTimeout int `flag:"archive-timeout"` - BackupStandby bool `flag:"backup-standby"` - NoBackupStandby bool `flag:"no-backup-standby"` - ChecksumPage bool `flag:"checksum-page"` - NoChecksumPage bool `flag:"no-checksum-page"` - Exclude string `flag:"exclude"` - Force bool `flag:"force"` - ManifestSaveThreshold string `flag:"manifest-save-threshold"` - Resume bool `flag:"resume"` - NoResume bool `flag:"no-resume"` - StartFast bool `flag:"start-fast"` - NoStartFast bool `flag:"no-start-fast"` - StopAuto bool `flag:"stop-auto"` - NoStopAuto bool `flag:"no-stop-auto"` - BackupType string `flag:"type"` - BufferSize string `flag:"buffer-size"` - Compress bool `flag:"compress"` - NoCompress bool `flag:"no-compress"` - CompressLevel int `flag:"compress-level"` - CompressLevelNetwork int `flag:"compress-level-network"` - DBTimeout int `flag:"db-timeout"` - Delta bool `flag:"no-delta"` - ProcessMax int `flag:"process-max"` - ProtocolTimeout int `flag:"protocol-timeout"` - LogLevelConsole string `flag:"log-level-console"` - LogLevelFile string `flag:"log-level-file"` - LogLevelStderr string `flag:"log-level-stderr"` - LogSubprocess bool `flag:"log-subprocess"` - RepoRetentionFull int `flag:"repo1-retention-full"` - RepoRetentionDiff int `flag:"repo1-retention-diff"` - RepoRetentionArchive int `flag:"repo1-retention-archive"` - RepoRetentionArchiveType string `flag:"repo1-retention-archive-type"` -} - -type pgBackRestRestoreOptions struct { - DBInclude string `flag:"db-include"` - Force bool `flag:"force"` - RecoveryOption string `flag:"recovery-option"` - Set string `flag:"set"` - Target string `flag:"target"` - TargetAction string `flag:"target-action"` - TargetExclusive bool `flag:"target-exclusive"` - NoTargetExclusive bool `flag:"no-target-exclusive"` - TargetTimeline int `flag:"target-timeline"` - RestoreType string `flag:"type"` - BufferSize string `flag:"buffer-size"` - Compress bool `flag:"compress"` - NoCompress bool `flag:"no-compress"` - CompressLevel int `flag:"compress-level"` - CompressLevelNetwork int `flag:"compress-level-network"` - DBTimeout int `flag:"db-timeout"` - Delta bool `flag:"no-delta"` - ProcessMax int `flag:"process-max"` - ProtocolTimeout int `flag:"protocol-timeout"` - LogLevelConsole string `flag:"log-level-console"` - LogLevelFile string `flag:"log-level-file"` - LogLevelStderr string `flag:"log-level-stderr"` - LogSubprocess bool `flag:"log-subprocess"` -} - -// validate method runs validation checks against any pgBackrest backup options given when executing -// a 
pgBackrest backup. As it iterates through the options array, it will call the appropriate -// function to ensure an allowed value has been set, otherwise it will produce an appropriate error -func (backRestBackupOpts pgBackRestBackupOptions) validate(setFlagFieldNames []string) error { - var errstrings []string - - for _, setFlag := range setFlagFieldNames { - - switch setFlag { - case "BackupType": - if !isValidValue([]string{"full", "diff", "incr"}, backRestBackupOpts.BackupType) { - err := errors.New("Invalid type provided for pgBackRest backup") - errstrings = append(errstrings, err.Error()) - } - case "CompressLevel": - if !isValidCompressLevel(backRestBackupOpts.CompressLevel) { - err := errors.New("Invalid compress level for pgBackRest backup") - errstrings = append(errstrings, err.Error()) - } - case "CompressLevelNetwork": - if !isValidCompressLevel(backRestBackupOpts.CompressLevelNetwork) { - err := errors.New("Invalid network compress level for pgBackRest backup") - errstrings = append(errstrings, err.Error()) - } - case "LogLevelConsole": - if !isValidBackrestLogLevel(backRestBackupOpts.LogLevelConsole) { - err := errors.New("Invalid log level for pgBackRest backup") - errstrings = append(errstrings, err.Error()) - } - case "LogLevelFile": - if !isValidBackrestLogLevel(backRestBackupOpts.LogLevelFile) { - err := errors.New("Invalid log level for pgBackRest backup") - errstrings = append(errstrings, err.Error()) - } - case "LogLevelStdErr": - if !isValidBackrestLogLevel(backRestBackupOpts.LogLevelStderr) { - err := errors.New("Invalid log level for pgBackRest backup") - errstrings = append(errstrings, err.Error()) - } - case "RepoRetentionFull": - if !isValidRetentionRange(backRestBackupOpts.RepoRetentionFull) { - err := errors.New("Invalid value for pgBackRest full backup retention. Allowed: 1-9999999") - errstrings = append(errstrings, err.Error()) - } - case "RepoRetentionDiff": - if !isValidRetentionRange(backRestBackupOpts.RepoRetentionDiff) { - err := errors.New("Invalid value for pgBackRest diff backup retention. Allowed: 1-9999999") - errstrings = append(errstrings, err.Error()) - } - case "RepoRetentionArchive": - if !isValidRetentionRange(backRestBackupOpts.RepoRetentionArchive) { - err := errors.New("Invalid value for pgBackRest archive retention. Allowed: 1-9999999") - errstrings = append(errstrings, err.Error()) - } - case "RepoRetentionArchiveType": - if !isValidValue([]string{"full", "diff", "incr"}, backRestBackupOpts.RepoRetentionArchiveType) { - err := errors.New("Invalid backup type for pgBackRest WAL retention. 
Allowed: \"full\", \"diff\", \"incr\"") - errstrings = append(errstrings, err.Error()) - } - } - } - - if len(errstrings) > 0 { - return errors.New(strings.Join(errstrings, "\n")) - } - - return nil -} - -func (backRestRestoreOpts pgBackRestRestoreOptions) validate(setFlagFieldNames []string) error { - - var errstrings []string - - for _, setFlag := range setFlagFieldNames { - - switch setFlag { - case "TargetAction": - if !isValidValue([]string{"pause", "promote", "shutdown"}, backRestRestoreOpts.TargetAction) { - err := errors.New("Invalid target action provided for pgBackRest restore") - errstrings = append(errstrings, err.Error()) - } - case "TargetExclusive": - if backRestRestoreOpts.RestoreType != "time" && backRestRestoreOpts.RestoreType != "xid" { - err := errors.New("The target exclusive option is only applicable for a pgBackRest restore " + - "when type is 'time' or 'xid' ") - errstrings = append(errstrings, err.Error()) - } - case "RestoreType": - validRestoreTypes := []string{"default", "immediate", "name", "xid", "time", "preserve", "none"} - if !isValidValue(validRestoreTypes, backRestRestoreOpts.RestoreType) { - err := errors.New("Invalid type provided for pgBackRest restore") - errstrings = append(errstrings, err.Error()) - } - case "CompressLevel": - if !isValidCompressLevel(backRestRestoreOpts.CompressLevel) { - err := errors.New("Invalid compress level for pgBackRest restore") - errstrings = append(errstrings, err.Error()) - } - case "CompressLevelNetwork": - if !isValidCompressLevel(backRestRestoreOpts.CompressLevelNetwork) { - err := errors.New("Invalid network compress level for pgBackRest restore") - errstrings = append(errstrings, err.Error()) - } - case "LogLevelConsole": - if !isValidBackrestLogLevel(backRestRestoreOpts.LogLevelConsole) { - err := errors.New("Invalid log level for pgBackRest restore") - errstrings = append(errstrings, err.Error()) - } - case "LogLevelFile": - if !isValidBackrestLogLevel(backRestRestoreOpts.LogLevelFile) { - err := errors.New("Invalid log level for pgBackRest restore") - errstrings = append(errstrings, err.Error()) - } - case "LogLevelStdErr": - if !isValidBackrestLogLevel(backRestRestoreOpts.LogLevelStderr) { - err := errors.New("Invalid log level for pgBackRest restore") - errstrings = append(errstrings, err.Error()) - } - } - } - - if len(errstrings) > 0 { - return errors.New(strings.Join(errstrings, "\n")) - } - - return nil -} - -func isValidBackrestLogLevel(logLevel string) bool { - logLevels := []string{"off", "error", "warn", "info", "detail", "debug", "trace"} - return isValidValue(logLevels, logLevel) -} - -func (backRestBackupOpts pgBackRestBackupOptions) getDenyListFlags() ([]string, []string) { - return pgBackRestOptsDenyList, nil -} - -func (backRestRestoreOpts pgBackRestRestoreOptions) getDenyListFlags() ([]string, []string) { - return pgBackRestOptsDenyList, nil -} diff --git a/internal/apiserver/backupoptions/pgdumpoptions.go b/internal/apiserver/backupoptions/pgdumpoptions.go deleted file mode 100644 index 268aa42412..0000000000 --- a/internal/apiserver/backupoptions/pgdumpoptions.go +++ /dev/null @@ -1,296 +0,0 @@ -package backupoptions - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "errors" - "strings" -) - -var pgDumpRestoreOptsDenyList = []string{ - "--binary-upgrade", - "--dbname", - "--host", - "--no-password", - "--no-reconnect", - "--password", - "--port", - "--username", - "--version", -} - -var pgDumpRestoreOptsDenyListShort = []string{ - "-R", - "-d", - "-h", - "-p", - "-U", - "-w", - "-W", - "-V", -} - -type pgDumpOptions struct { - DataOnly bool `flag:"data-only" flag-short:"a"` - Blobs bool `flag:"blobs" flag-short:"b"` - NoBlobs bool `flag:"no-blobs" flag-short:"B"` - Clean bool `flag:"clean" flag-short:"c"` - Create bool `flag:"create" flag-short:"C"` - Encoding string `flag:"encoding" flag-short:"E"` - Jobs int `flag:"jobs" flag-short:"j"` - Format string `flag:"format" flag-short:"F"` - Schema []string `flag:"schema" flag-short:"n"` - ExcludeSchema string `flag:"exclude-schema" flag-short:"N"` - Oids bool `flag:"oids" flag-short:"o"` - NoOwner bool `flag:"no-owner" flag-short:"O"` - SchemaOnly bool `flag:"schema-only" flag-short:"s"` - SuperUser string `flag:"superuser" flag-short:"S"` - Table []string `flag:"table" flag-short:"t"` - ExcludeTable string `flag:"exclude-table" flag-short:"T"` - Verbose bool `flag:"verbose" flag-short:"v"` - NoPrivileges bool `flag:"no-privileges" flag-short:"x"` - NoACL bool `flag:"no-acl"` - Compress int `flag:"compress" flag-short:"Z"` - ColumnInserts bool `flag:"column-inserts"` - AttributeInserts bool `flag:"attribute-inserts"` - DisableDollarQuoting bool `flag:"disable-dollar-quoting"` - DisableTriggers bool `flag:"disable-triggers"` - EnableRowSecurity bool `flag:"exclude-row-security"` - ExcludeTableData string `flag:"exclude-table-data"` - IfExists bool `flag:"if-exists"` - Inserts bool `flag:"inserts"` - LockWaitTimeout string `flag:"lock-wait-timeout"` - LoadViaPartitionRoot bool `flag:"load-via-partition-root"` - NoComments bool `flag:"no-comments"` - NoPublications bool `flag:"no-publications"` // PG 10+ - NoSecurityLabels bool `flag:"no-security-labels"` - NoSubscriptions bool `flag:"no-subscriptions"` // PG 10+ - NoSync bool `flag:"no-sync"` - NoTableSpaces bool `flag:"no-tablespaces"` - NoUnloggedTableData bool `flag:"no-unlogged-table-data"` - QuoteAllIdentifiers bool `flag:"quote-all-identifiers"` - Section []string `flag:"section"` - SerializableDeferrable bool `flag:"serializable-deferrable"` - Snapshot string `flag:"snapshot"` - StrictNames string `flag:"strict-names"` // PG 9.6+ - UseSetSessionAuthorization bool `flag:"use-set-session-authorization"` - Role string `flag:"role"` -} - -type pgDumpAllOptions struct { - DataOnly bool `flag:"data-only" flag-short:"a"` - Clean bool `flag:"clean" flag-short:"c"` - Encoding string `flag:"encoding" flag-short:"E"` - GlobalsOnly bool `flag:"globals-only" flag-short:"g"` - Oids bool `flag:"oids" flag-short:"o"` - NoOwner bool `flag:"no-owner" flag-short:"O"` - RolesOnly bool `flag:"roles-only" flag-short:"r"` - SchemaOnly bool `flag:"schema-only" flag-short:"s"` - SuperUser string `flag:"superuser" flag-short:"S"` - TablespacesOnly bool `flag:"tablespaces-only" flag-short:"t"` - Verbose bool `flag:"verbose" flag-short:"v"` - NoPrivileges bool 
`flag:"no-privileges" flag-short:"x"` - NoACL bool `flag:"no-acl"` - ColumnInserts bool `flag:"column-inserts"` - AttributeInserts bool `flag:"attribute-inserts"` - DisableDollarQuoting bool `flag:"disable-dollar-quoting"` - DisableTriggers bool `flag:"disable-triggers"` - IfExists bool `flag:"if-exists"` - Inserts bool `flag:"inserts"` - LockWaitTimeout string `flag:"lock-wait-timeout"` - LoadViaPartitionRoot bool `flag:"load-via-partition-root"` - NoComments bool `flag:"no-comments"` - NoPublications bool `flag:"no-publications"` // PG 10+ - NoRolePasswords bool `flag:"no-role-passwords"` - NoSecurityLabels bool `flag:"no-security-labels"` - NoSubscriptions bool `flag:"no-subscriptions"` // PG 10+ - NoSync bool `flag:"no-sync"` - NoTableSpaces bool `flag:"no-tablespaces"` - NoUnloggedTableData bool `flag:"no-unlogged-table-data"` - QuoteAllIdentifiers bool `flag:"quote-all-identifiers"` - UseSetSessionAuthorization bool `flag:"use-set-session-authorization"` - Role string `flag:"role"` - DumpAll bool `flag:"dump-all"` // custom pgo backup opt used for pg_dumpall -} - -type pgRestoreOptions struct { - DataOnly bool `flag:"data-only" flag-short:"a"` - Clean bool `flag:"clean" flag-short:"c"` - Create bool `flag:"create" flag-short:"C"` - ExitOnError bool `flag:"exit-on-error" flag-short:"e"` - Filename string `flag:"filename" flag-short:"f"` - Format string `flag:"format" flag-short:"F"` - Index []string `flag:"index" flag-short:"I"` - Jobs int `flag:"jobs" flag-short:"j"` - List bool `flag:"list" flag-short:"l"` - UseList bool `flag:"useList" flag-short:"L"` - Schema string `flag:"schema" flag-short:"n"` - ExcludeSchema string `flag:"exclude-schema" flag-short:"N"` - NoOwner bool `flag:"no-owner" flag-short:"O"` - Function []string `flag:"function" flag-short:"P"` - SchemaOnly bool `flag:"schema-only" flag-short:"s"` - SuperUser string `flag:"superuser" flag-short:"S"` - Table string `flag:"table" flag-short:"t"` - Trigger []string `flag:"trigger" flag-short:"T"` - Verbose bool `flag:"verbose" flag-short:"v"` - NoPrivileges bool `flag:"no-privileges" flag-short:"x"` - NoACL bool `flag:"no-acl"` - SingleTransaction bool `flag:"single-transaction" flag-short:"1"` - DisableTriggers bool `flag:"disable-triggers"` - EnableRowSecurity bool `flag:"enable-row-security"` - IfExists bool `flag:"if-exists"` - NoComments bool `flag:"no-comments"` - NoDataForFailedTables bool `flag:"no-data-for-failed-tables"` - NoPublications bool `flag:"no-publications"` // PG 10+ - NoSecurityLabels bool `flag:"no-security-labels"` - NoSubscriptions bool `flag:"no-subscriptions"` // PG 10+ - NoTableSpaces bool `flag:"no-tablespaces"` - Section []string `flag:"section"` - StrictNames string `flag:"strict-names"` // PG 9.6+ - UseSetSessionAuthorization bool `flag:"use-set-session-authorization"` - Role string `flag:"role"` -} - -func (dumpOpts pgDumpOptions) validate(setFlagFieldNames []string) error { - - var errstrings []string - - for _, setFlag := range setFlagFieldNames { - - switch setFlag { - case "Format": - if !isValidValue([]string{"p", "plain", "c", "custom", "t", "tar"}, dumpOpts.Format) { - err := errors.New("Invalid format provided for pg_dump backup") - errstrings = append(errstrings, err.Error()) - } - case "SuperUser": - if !dumpOpts.DisableTriggers { - err := errors.New("The --superuser option is only applicable for a pg_dump backup if the " + - "--disable-triggers option has also been specified") - errstrings = append(errstrings, err.Error()) - } - case "Compress": - if 
!isValidCompressLevel(dumpOpts.Compress) { - err := errors.New("Invalid compress level for pg_dump backup") - errstrings = append(errstrings, err.Error()) - } else if dumpOpts.Format == "tar" { - err := errors.New("Compress level is not supported when using the tar format for a pg_dump backup") - errstrings = append(errstrings, err.Error()) - } - case "IfExists": - if !dumpOpts.Clean { - err := errors.New("The --if-exists option is only valid for a pg_dump backup if the --clean option is " + - "also specified") - errstrings = append(errstrings, err.Error()) - } - case "Section": - for _, currSection := range dumpOpts.Section { - if !isValidValue([]string{"pre-data", "data", "post-data"}, currSection) { - err := errors.New("Invalid section provided for pg_dump backup") - errstrings = append(errstrings, err.Error()) - } - } - } - } - - if len(errstrings) > 0 { - return errors.New(strings.Join(errstrings, "\n")) - } - - return nil -} - -func (dumpAllOpts pgDumpAllOptions) validate(setFlagFieldNames []string) error { - - var errstrings []string - - for _, setFlag := range setFlagFieldNames { - - switch setFlag { - case "SuperUser": - if !dumpAllOpts.DisableTriggers { - err := errors.New("The --superuser option is only applicable for a pg_dumpall backup if the " + - "--disable-triggers option has also been specified") - errstrings = append(errstrings, err.Error()) - } - case "IfExists": - if !dumpAllOpts.Clean { - err := errors.New("The --if-exists option is only valid for a pg_dumpall backup if the --clean option is " + - "also specified") - errstrings = append(errstrings, err.Error()) - } - } - } - - if len(errstrings) > 0 { - return errors.New(strings.Join(errstrings, "\n")) - } - - return nil -} - -func (restoreOpts pgRestoreOptions) validate(setFlagFieldNames []string) error { - - var errstrings []string - - for _, setFlag := range setFlagFieldNames { - - switch setFlag { - case "Format": - if !isValidValue([]string{"p", "plain", "c", "custom", "t", "tar"}, restoreOpts.Format) { - err := errors.New("Invalid format provided for pg_restore restore") - errstrings = append(errstrings, err.Error()) - } - case "SuperUser": - if !restoreOpts.DisableTriggers { - err := errors.New("The --superuser option is only applicable for a pg_restore restore if the " + - "--disable-triggers option has also been specified") - errstrings = append(errstrings, err.Error()) - } - case "IfExists": - if !restoreOpts.Clean { - err := errors.New("The --if-exists option is only valid for a pg_restore restore if the --clean option is " + - "also specified") - errstrings = append(errstrings, err.Error()) - } - case "Section": - for _, currSection := range restoreOpts.Section { - if !isValidValue([]string{"pre-data", "data", "post-data"}, currSection) { - err := errors.New("Invalid section provided for pg_restore restore") - errstrings = append(errstrings, err.Error()) - } - } - } - } - - if len(errstrings) > 0 { - return errors.New(strings.Join(errstrings, "\n")) - } - - return nil -} - -func (dumpOpts pgDumpOptions) getDenyListFlags() ([]string, []string) { - return pgDumpRestoreOptsDenyList, pgDumpRestoreOptsDenyListShort -} - -func (dumpAllOpts pgDumpAllOptions) getDenyListFlags() ([]string, []string) { - return pgDumpRestoreOptsDenyList, pgDumpRestoreOptsDenyListShort -} - -func (restoreOpts pgRestoreOptions) getDenyListFlags() ([]string, []string) { - return pgDumpRestoreOptsDenyList, pgDumpRestoreOptsDenyListShort -} diff --git a/internal/apiserver/catservice/catimpl.go b/internal/apiserver/catservice/catimpl.go 
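The two deny lists above are what prevent user-supplied `pg_dump`/`pg_restore` options from overriding connection settings the operator itself must control (host, port, credentials, and so on). A minimal sketch of that screening step, assuming a hypothetical `screenOpts` helper and a trimmed-down pair of deny lists (neither is part of the deleted code):

```go
package main

import (
	"fmt"
	"strings"
)

// trimmed-down stand-ins for pgDumpRestoreOptsDenyList / pgDumpRestoreOptsDenyListShort
var denyLong = []string{"--host", "--port", "--username", "--password"}
var denyShort = []string{"-h", "-p", "-U", "-W"}

// screenOpts (hypothetical) rejects any user-supplied option that appears on a
// deny list. Long options may arrive as "--opt" or "--opt=value", so only the
// part before "=" is compared; short options are matched exactly.
func screenOpts(opts []string) error {
	denied := append(append([]string{}, denyLong...), denyShort...)
	for _, opt := range opts {
		name := strings.SplitN(opt, "=", 2)[0]
		for _, d := range denied {
			if name == d {
				return fmt.Errorf("option %q is not allowed", opt)
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(screenOpts([]string{"--schema-only"}))   // <nil>
	fmt.Println(screenOpts([]string{"--host=10.0.0.1"})) // option "--host=10.0.0.1" is not allowed
}
```

The real `validate()` methods above go further and also check option values, such as the dump format, compression level, and section names.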
deleted file mode 100644 index 6d656f7e9a..0000000000 --- a/internal/apiserver/catservice/catimpl.go +++ /dev/null @@ -1,132 +0,0 @@ -package catservice - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "errors" - "strings" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// pgo cat mycluster /pgdata/mycluster/postgresql.conf -func Cat(request *msgs.CatRequest, ns string) msgs.CatResponse { - resp := msgs.CatResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - log.Debugf("Cat %v", request) - - if len(request.Args) == 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "no cluster name was passed" - return resp - } - - clusterName := request.Args[0] - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(clusterName, metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = clusterName + " was not found, verify cluster name" - return resp - } - - // check if the current cluster is not upgraded to the deployed - // Operator version. If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - resp.Status.Code = msgs.Error - resp.Status.Msg = cluster.Name + msgs.UpgradeError - return resp - } - - err = validateArgs(request.Args) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - var podList *v1.PodList - selector := config.LABEL_SERVICE_NAME + "=" + cluster.Spec.Name - podList, err = apiserver.Clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - if len(podList.Items) == 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "no pods found using " + selector - return resp - } - - clusterName = request.Args[0] - log.Debugf("cat called for cluster %s", clusterName) - - var results string - results, err = cat(&podList.Items[0], ns, request.Args) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - resp.Results = append(resp.Results, results) - - return resp -} - -// run cat on the postgres pod, remember we are assuming -// first container in the pod is always the postgres container. 
-func cat(pod *v1.Pod, ns string, args []string) (string, error) { - - command := make([]string, 0) - command = append(command, "cat") - for i := 1; i < len(args); i++ { - command = append(command, args[i]) - } - - log.Debugf("running Exec in namespace=[%s] podname=[%s] container name=[%s] command=[%v]", ns, pod.Name, pod.Spec.Containers[0].Name, command) - - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(apiserver.RESTConfig, apiserver.Clientset, command, pod.Spec.Containers[0].Name, pod.Name, ns, nil) - if err != nil { - log.Error(err) - return "error in exec to pod", err - } - log.Debugf("stdout=[%s] stderr=[%s]", stdout, stderr) - - return stdout, err -} - -//make sure the parameters to the cat command dont' container mischief -func validateArgs(args []string) error { - var err error - var bad = "&|;>" - - for i := 1; i < len(args); i++ { - if strings.ContainsAny(args[i], bad) { - return errors.New(args[i] + " contains non-allowed characters [" + bad + "]") - } - } - return err -} diff --git a/internal/apiserver/catservice/catservice.go b/internal/apiserver/catservice/catservice.go deleted file mode 100644 index 439274271e..0000000000 --- a/internal/apiserver/catservice/catservice.go +++ /dev/null @@ -1,82 +0,0 @@ -package catservice - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -// CatHandler ... 
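The removed `cat` helper above reaches into the first container of the Postgres pod through the internal `kubeapi.ExecToPodThroughAPI` wrapper. For readers who want to see what that wrapper sits on top of, here is a rough client-go equivalent; the function name and error handling are illustrative, not part of the deleted code:

```go
package podexec

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInFirstContainer runs a command in the first container of a pod and
// returns its stdout and stderr, relying on the same assumption the deleted
// cat() makes: the Postgres container is first in the pod spec.
func execInFirstContainer(cfg *rest.Config, clientset kubernetes.Interface,
	pod *corev1.Pod, command []string) (string, string, error) {
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").Name(pod.Name).Namespace(pod.Namespace).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: pod.Spec.Containers[0].Name,
			Command:   command,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", "", err
	}

	var stdout, stderr bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}
```

The deleted `validateArgs` above complements this by rejecting any argument containing the shell metacharacters `&|;>` before the `cat` command is assembled.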
-// pgo cat mycluster /pgdata/mycluster/postgresql.conf /tmp/foo -func CatHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /cat catservice cat - /*``` - CAT performs a Linux `cat` command on a cluster file - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Cat Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CatRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CatResponse" - var err error - var username, ns string - - log.Debug("catservice.CatHandler called") - - var request msgs.CatRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err = apiserver.Authn(apiserver.CAT_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp := msgs.CatResponse{} - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - catResponse := Cat(&request, ns) - if err != nil { - resp := msgs.CatResponse{} - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - json.NewEncoder(w).Encode(catResponse) -} diff --git a/internal/apiserver/cloneservice/cloneimpl.go b/internal/apiserver/cloneservice/cloneimpl.go deleted file mode 100644 index 4eeb4ac8ac..0000000000 --- a/internal/apiserver/cloneservice/cloneimpl.go +++ /dev/null @@ -1,221 +0,0 @@ -package cloneservice - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "errors" - "fmt" - "io/ioutil" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// Clone allows a user to clone a cluster into a new deployment -func Clone(request *msgs.CloneRequest, namespace, pgouser string) msgs.CloneResponse { - log.Debugf("clone called with ") - - // set up the response here - response := msgs.CloneResponse{ - Status: msgs.Status{ - Code: msgs.Ok, - Msg: "", - }, - } - - log.Debug("Getting pgcluster") - - // get the information about the current pgcluster by name, to ensure it - // exists - sourcePgcluster, err := apiserver.Clientset. - CrunchydataV1().Pgclusters(namespace). 
- Get(request.SourceClusterName, metav1.GetOptions{}) - - // if there is an error getting the pgcluster, abort here - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Could not get cluster: %s", err) - return response - } - - // validate the parameters of the request that do not require setting - // additional information, so we can avoid additional API lookups - if err := validateCloneRequest(request, *sourcePgcluster); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // check if the current cluster is not upgraded to the deployed - // Operator version. If not, do not allow the command to complete - if sourcePgcluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - response.Status.Code = msgs.Error - response.Status.Msg = sourcePgcluster.Name + msgs.UpgradeError - return response - } - - // now, let's ensure the target pgCluster does *not* exist - if _, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).Get(request.TargetClusterName, metav1.GetOptions{}); err == nil { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Could not clone cluster: %s already exists", - request.TargetClusterName) - return response - } - - // finally, let's make sure there is not already a task in progress for - // making the clone - selector := fmt.Sprintf("%s=true,pg-cluster=%s", config.LABEL_PGO_CLONE, request.TargetClusterName) - taskList, err := apiserver.Clientset.CrunchydataV1().Pgtasks(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Could not clone cluster: could not validate %s", err.Error()) - return response - } - - // iterate through the list of tasks and see if there are any pending - for _, task := range taskList.Items { - if task.Spec.Status != crv1.CompletedStatus { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Could not clone cluster: there exists an ongoing clone task: [%s]. If you believe this is an error, try deleting this pgtask CRD.", task.Spec.Name) - return response - } - } - - // create the workflow task to track how this is progressing - uid := util.RandStringBytesRmndr(4) - workflowID, err := createWorkflowTask(request.TargetClusterName, uid, namespace) - - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Errorf("could not create clone workflow task: %s", err).Error() - return response - } - - // alright, begin the create the proper clone task! 
- cloneTask := util.CloneTask{ - BackrestPVCSize: request.BackrestPVCSize, - BackrestStorageSource: request.BackrestStorageSource, - EnableMetrics: request.EnableMetrics, - PGOUser: pgouser, - PVCSize: request.PVCSize, - SourceClusterName: request.SourceClusterName, - TargetClusterName: request.TargetClusterName, - TaskStepLabel: config.LABEL_PGO_CLONE_STEP_1, - TaskType: crv1.PgtaskCloneStep1, - Timestamp: time.Now(), - WorkflowID: workflowID, - } - - task := cloneTask.Create() - - // create the Pgtask CRD for the clone task - if _, err := apiserver.Clientset.CrunchydataV1().Pgtasks(namespace).Create(task); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Could not create clone task: %s", err) - return response - } - - response.TargetClusterName = request.TargetClusterName - response.WorkflowID = workflowID - - return response -} - -// createWorkflowTask creates the workflow task that is tracked as we attempt -// to clone the cluster -func createWorkflowTask(targetClusterName, uid, namespace string) (string, error) { - // set a random ID for this workflow task - u, err := ioutil.ReadFile("/proc/sys/kernel/random/uuid") - - if err != nil { - return "", err - } - - id := string(u[:len(u)-1]) - - // set up the workflow task - taskName := fmt.Sprintf("%s-%s-%s", targetClusterName, uid, crv1.PgtaskWorkflowCloneType) - task := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: taskName, - Labels: map[string]string{ - config.LABEL_PG_CLUSTER: targetClusterName, - crv1.PgtaskWorkflowID: id, - }, - }, - Spec: crv1.PgtaskSpec{ - Namespace: namespace, - Name: taskName, - TaskType: crv1.PgtaskWorkflow, - Parameters: map[string]string{ - crv1.PgtaskWorkflowSubmittedStatus: time.Now().Format(time.RFC3339), - config.LABEL_PG_CLUSTER: targetClusterName, - crv1.PgtaskWorkflowID: id, - }, - }, - } - - // create the workflow task - if _, err := apiserver.Clientset.CrunchydataV1().Pgtasks(namespace).Create(task); err != nil { - return "", err - } - - // return successfully after creating the task - return id, nil -} - -// validateCloneRequest validates the input from the create clone request -// that does not set any additional information -func validateCloneRequest(request *msgs.CloneRequest, cluster crv1.Pgcluster) error { - // ensure the cluster name for the source of the clone is set - if request.SourceClusterName == "" { - return errors.New("the source cluster name must be set") - } - - // ensure the cluster name for the target of the clone (the new cluster) is - // set - if request.TargetClusterName == "" { - return errors.New("the target cluster name must be set") - } - - // if any of the the PVCSizes are set to a customized value, ensure that they - // are recognizable by Kubernetes - // first, the primary/replica PVC size - if err := apiserver.ValidateQuantity(request.PVCSize); err != nil { - return fmt.Errorf(apiserver.ErrMessagePVCSize, request.PVCSize, err.Error()) - } - - // next, the pgBackRest repo PVC size - if err := apiserver.ValidateQuantity(request.BackrestPVCSize); err != nil { - return fmt.Errorf(apiserver.ErrMessagePVCSize, request.BackrestPVCSize, err.Error()) - } - - // clone is a form of restore, so validate using ValidateBackrestStorageTypeOnBackupRestore - if err := util.ValidateBackrestStorageTypeOnBackupRestore(request.BackrestStorageSource, - cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], true); err != nil { - return err - } - - return nil -} diff --git a/internal/apiserver/cloneservice/cloneservice.go 
b/internal/apiserver/cloneservice/cloneservice.go deleted file mode 100644 index f98a5f439d..0000000000 --- a/internal/apiserver/cloneservice/cloneservice.go +++ /dev/null @@ -1,77 +0,0 @@ -package cloneservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "net/http" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -func CloneHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /clone cloneservice clone - /*``` - Clone a PostgreSQL cluster into a new deployment - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Clone PostgreSQL Cluster" - // in: "body" - // schema: - // "$ref": "#/definitions/CloneRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CloneResponse" - var err error - var username, ns string - - log.Debug("cloneservice.CloneHandler called") - log.Warn(`cloneservice.CloneHandler is deprecated. Please use "pgo create cluster --restore-from" instead.`) - - var request msgs.CloneRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err = apiserver.Authn(apiserver.CLONE_PERM, w, r) - if err != nil { - return - } - - w.WriteHeader(http.StatusOK) - w.Header().Set("Content-Type", "application/json") - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - - if err != nil { - resp := msgs.CloneResponse{ - Status: msgs.Status{ - Code: msgs.Error, - Msg: err.Error(), - }, - } - json.NewEncoder(w).Encode(resp) - return - } - - resp := Clone(&request, ns, username) - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/clusterservice/clusterimpl.go b/internal/apiserver/clusterservice/clusterimpl.go deleted file mode 100644 index 6163b4b45d..0000000000 --- a/internal/apiserver/clusterservice/clusterimpl.go +++ /dev/null @@ -1,2315 +0,0 @@ -package clusterservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "errors" - "fmt" - "io/ioutil" - "strconv" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/apiserver/backupoptions" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator/backrest" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" - corev1 "k8s.io/api/core/v1" - v1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/apimachinery/pkg/api/resource" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/util/validation" - "k8s.io/client-go/kubernetes" -) - -const ( - // ErrInvalidDataSource defines the error string that is displayed when the data source - // parameters for a create cluster request are invalid - ErrInvalidDataSource = "Unable to validate data source" -) - -// DeleteCluster ... -func DeleteCluster(name, selector string, deleteData, deleteBackups bool, ns, pgouser string) msgs.DeleteClusterResponse { - var err error - - response := msgs.DeleteClusterResponse{} - response.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - response.Results = make([]string, 0) - - if name != "all" { - if selector == "" { - selector = "name=" + name - } - } - - log.Debugf("delete-data is [%t]", deleteData) - log.Debugf("delete-backups is [%t]", deleteBackups) - - //get the clusters list - clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - if len(clusterList.Items) == 0 { - response.Status.Code = msgs.Error - response.Status.Msg = "no clusters found" - return response - } - - for _, cluster := range clusterList.Items { - - // check if the current cluster is not upgraded to the deployed - // Operator version. If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - response.Status.Code = msgs.Error - response.Status.Msg = cluster.Name + msgs.UpgradeError - return response - } - - log.Debugf("deleting cluster %s", cluster.Spec.Name) - taskName := cluster.Spec.Name + "-rmdata" - log.Debugf("creating taskName %s", taskName) - isBackup := false - isReplica := false - replicaName := "" - clusterPGHAScope := cluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE] - - // first delete any existing rmdata pgtask with the same name - err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(taskName, &metav1.DeleteOptions{}) - if err != nil && !kerrors.IsNotFound(err) { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - err := apiserver.CreateRMDataTask(cluster.Spec.Name, replicaName, taskName, deleteBackups, deleteData, isReplica, isBackup, ns, clusterPGHAScope) - if err != nil { - log.Debugf("error on creating rmdata task %s", err.Error()) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - response.Results = append(response.Results, "deleted pgcluster "+cluster.Spec.Name) - - } - - return response - -} - -// ShowCluster ... 
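Nearly every handler in this deleted file locates its Kubernetes objects with label selectors built from the cluster name (for example `name=<cluster>` or `pg-cluster=<cluster>`). A small sketch of that lookup pattern is below, using `labels.Set` instead of the string concatenation the original code used; the helper name is illustrative, and the context-free `List` signature matches the client-go vintage visible in this file:

```go
package clusterlookup

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
)

// listClusterPods lists the pods that carry the pg-cluster label for one
// cluster, the same style of label-selector lookup the handlers above perform.
func listClusterPods(clientset kubernetes.Interface, ns, clusterName string) error {
	// config.LABEL_PG_CLUSTER in the operator resolves to "pg-cluster"
	selector := labels.Set{"pg-cluster": clusterName}.String()

	pods, err := clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}

	for _, pod := range pods.Items {
		fmt.Printf("%s\t%s\n", pod.Name, pod.Status.Phase)
	}
	return nil
}
```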
-func ShowCluster(name, selector, ccpimagetag, ns string, allflag bool) msgs.ShowClusterResponse { - var err error - - response := msgs.ShowClusterResponse{} - response.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - response.Results = make([]msgs.ShowClusterDetail, 0) - - if selector == "" && allflag { - log.Debugf("allflags set to true") - } else { - if selector == "" { - selector = "name=" + name - } - } - - log.Debugf("selector on showCluster is %s", selector) - - //get a list of all clusters - clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - log.Debugf("clusters found len is %d", len(clusterList.Items)) - - for _, c := range clusterList.Items { - detail := msgs.ShowClusterDetail{} - detail.Cluster = c - detail.Deployments, err = getDeployments(&c, ns) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - detail.Pods, err = GetPods(apiserver.Clientset, &c) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - detail.Services, err = getServices(&c, ns) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - detail.Replicas, err = getReplicas(&c, ns) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // capture whether or not the cluster is currently a standby cluster - detail.Standby = c.Spec.Standby - - if ccpimagetag == "" { - response.Results = append(response.Results, detail) - } else if ccpimagetag == c.Spec.CCPImageTag { - response.Results = append(response.Results, detail) - } - } - - return response - -} - -func getDeployments(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowClusterDeployment, error) { - output := make([]msgs.ShowClusterDeployment, 0) - - selector := config.LABEL_PG_CLUSTER + "=" + cluster.Spec.Name - deployments, err := apiserver.Clientset. - AppsV1().Deployments(ns). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return output, err - } - - for _, dep := range deployments.Items { - d := msgs.ShowClusterDeployment{} - d.Name = dep.Name - d.PolicyLabels = make([]string, 0) - - for k, v := range cluster.ObjectMeta.Labels { - if v == "pgpolicy" { - d.PolicyLabels = append(d.PolicyLabels, k) - } - } - output = append(output, d) - - } - - return output, err -} - -func GetPods(clientset kubernetes.Interface, cluster *crv1.Pgcluster) ([]msgs.ShowClusterPod, error) { - output := []msgs.ShowClusterPod{} - - //get pods, but exclude backup pods and backrest repo - selector := fmt.Sprintf("%s=%s,%s", config.LABEL_PG_CLUSTER, cluster.GetName(), config.LABEL_PG_DATABASE) - log.Debugf("selector for GetPods is %s", selector) - - pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return output, err - } - - for _, p := range pods.Items { - d := msgs.ShowClusterPod{ - PVC: []msgs.ShowClusterPodPVC{}, - } - d.Name = p.Name - d.Phase = string(p.Status.Phase) - d.NodeName = p.Spec.NodeName - d.ReadyStatus, d.Ready = getReadyStatus(&p) - - // get information about several of the PVCs. 
This borrows from a legacy - // method to get this information - for _, v := range p.Spec.Volumes { - // if this volume is not a PVC, continue - if v.VolumeSource.PersistentVolumeClaim == nil { - continue - } - - // if this is not any of the 3 mounted PVCs to a PostgreSQL Pod, continue - if !(v.Name == "pgdata" || v.Name == "pgwal-volume" || strings.HasPrefix(v.Name, "tablespace")) { - continue - } - - pvcName := v.VolumeSource.PersistentVolumeClaim.ClaimName - // query the PVC to get the storage capacity - pvc, err := clientset.CoreV1().PersistentVolumeClaims(cluster.Namespace).Get(pvcName, metav1.GetOptions{}) - - // if there is an error, ignore it, and move on to the next one - if err != nil { - log.Warn(err) - continue - } - - capacity := pvc.Status.Capacity[v1.ResourceStorage] - - clusterPVCDetail := msgs.ShowClusterPodPVC{ - Capacity: capacity.String(), - Name: pvcName, - } - - d.PVC = append(d.PVC, clusterPVCDetail) - } - - d.Primary = false - d.Type = getType(&p, cluster.Spec.Name) - if d.Type == msgs.PodTypePrimary { - d.Primary = true - } - output = append(output, d) - - } - - return output, err - -} - -func getServices(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowClusterService, error) { - - output := make([]msgs.ShowClusterService, 0) - selector := config.LABEL_PGO_BACKREST_REPO + "!=true," + config.LABEL_PG_CLUSTER + "=" + cluster.Spec.Name - - services, err := apiserver.Clientset.CoreV1().Services(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return output, err - } - - log.Debugf("got %d services for %s", len(services.Items), cluster.Spec.Name) - for _, p := range services.Items { - d := msgs.ShowClusterService{} - d.Name = p.Name - if strings.Contains(p.Name, "-backrest-repo") { - d.BackrestRepo = true - d.ClusterName = cluster.Name - } else if strings.Contains(p.Name, "-pgbouncer") { - d.Pgbouncer = true - d.ClusterName = cluster.Name - } - d.ClusterIP = p.Spec.ClusterIP - if len(p.Spec.ExternalIPs) > 0 { - d.ExternalIP = p.Spec.ExternalIPs[0] - } - if len(p.Status.LoadBalancer.Ingress) > 0 { - d.ExternalIP = p.Status.LoadBalancer.Ingress[0].IP - } - - output = append(output, d) - - } - - return output, err -} - -// TestCluster performs a variety of readiness checks against one or more -// clusters within a namespace. It leverages the following two Kubernetes -// constructs in order to determine the availability of PostgreSQL clusters: -// - Pod readiness checks. The Pod readiness checks leverage "pg_isready" to -// determine if the PostgreSQL cluster is able to accept connecions -// - Endpoint checks. 
The check sees if the services in front of the the -// PostgreSQL instances are able to route connections from the "outside" into -// the instances -func TestCluster(name, selector, ns, pgouser string, allFlag bool) msgs.ClusterTestResponse { - var err error - - log.Debugf("TestCluster(%s,%s,%s,%s,%v): Called", - name, selector, ns, pgouser, allFlag) - - response := msgs.ClusterTestResponse{} - response.Results = make([]msgs.ClusterTestResult, 0) - response.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - log.Debugf("selector is: %s", selector) - - // if the select is empty, determine if its because the flag for - // "all clusters" in a namespace is set - // - // otherwise, a name cluster name must be passed in, and said name should - // be used - if selector == "" { - if allFlag { - log.Debugf("selector is : all clusters in %s", ns) - } else { - selector = "name=" + name - log.Debugf("selector is: %s", selector) - } - } - - // Find a list of a clusters that match the given selector - clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{LabelSelector: selector}) - - // If the response errors, return here, as we won't be able to return any - // useful information in the test - if err != nil { - log.Errorf("Cluster lookup failed: %s", err.Error()) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - log.Debugf("Total clusters found: %d", len(clusterList.Items)) - - // Iterate through each cluster and perform the various tests against them - for _, c := range clusterList.Items { - // Set up the object that will be appended to the response that - // indicates the availability of the endpoints / instances for this - // cluster - result := msgs.ClusterTestResult{ - ClusterName: c.Name, - Endpoints: make([]msgs.ClusterTestDetail, 0), - Instances: make([]msgs.ClusterTestDetail, 0), - } - - detail := msgs.ShowClusterDetail{} - detail.Cluster = c - - // Get the PostgreSQL instances! - log.Debugf("Looking up instance pods for cluster: %s", c.Name) - pods, err := GetPrimaryAndReplicaPods(&c, ns) - - // if there is an error with returning the primary/replica instances, - // then error and continue - if err != nil { - log.Errorf("Instance pod lookup failed: %s", err.Error()) - instance := msgs.ClusterTestDetail{ - Available: false, - InstanceType: msgs.ClusterTestInstanceTypePrimary, - } - result.Instances = append(result.Instances, instance) - response.Results = append(response.Results, result) - continue - } - - log.Debugf("pods found %d", len(pods)) - - // if there are no pods found, then the cluster is not ready at all, and - // we can make an early on checking the availability of this cluster - if len(pods) == 0 { - log.Infof("Cluster has no instances available: %s", c.Name) - instance := msgs.ClusterTestDetail{ - Available: false, - InstanceType: msgs.ClusterTestInstanceTypePrimary, - } - result.Instances = append(result.Instances, instance) - response.Results = append(response.Results, result) - continue - } - - // Check each instance (i.e. pod) to see if its readiness check passes. - // - // (We are assuming that the readiness check is performing the - // equivalent to a "pg_isready" which denotes that a PostgreSQL instance - // is connectable. If you have any doubts about this, check the - // readiness check code) - // - // Also denotes the type of PostgreSQL instance this is. 
All of the pods - // returned are either primaries or replicas - for _, pod := range pods { - // set up the object with the instance status - instance := msgs.ClusterTestDetail{ - Available: pod.Ready, - Message: pod.Name, - } - switch pod.Type { - default: - instance.InstanceType = msgs.ClusterTestInstanceTypeUnknown - case msgs.PodTypePrimary: - instance.InstanceType = msgs.ClusterTestInstanceTypePrimary - case msgs.PodTypeReplica: - instance.InstanceType = msgs.ClusterTestInstanceTypeReplica - } - log.Debugf("Instance found with attributes: (%s, %s, %v)", - instance.InstanceType, instance.Message, instance.Available) - // Add the report on the pods to this set - result.Instances = append(result.Instances, instance) - } - - // Time to check the endpoints. We will check the available endpoints - // vis-a-vis the services - detail.Services, err = getServices(&c, ns) - - // if the services are unavailable, report an error and continue - // iterating - if err != nil { - log.Errorf("Service lookup failed: %s", err.Error()) - endpoint := msgs.ClusterTestDetail{ - Available: false, - InstanceType: msgs.ClusterTestInstanceTypePrimary, - } - result.Endpoints = append(result.Endpoints, endpoint) - response.Results = append(response.Results, result) - continue - } - - // Iterate through the services and determine if they are reachable via - // their endpionts - for _, service := range detail.Services { - // prepare the endpoint request - endpointRequest := &kubeapi.GetEndpointRequest{ - Clientset: apiserver.Clientset, // current clientset - Name: service.Name, // name of the service, used to find the endpoint - Namespace: ns, // namespace the service / endpoint resides in - } - // prepare the end result, add the endpoint connection information - endpoint := msgs.ClusterTestDetail{ - Message: fmt.Sprintf("%s:%s", service.ClusterIP, c.Spec.Port), - } - - // determine the type of endpoint that is being checked based on - // the information available in the service - switch { - default: - endpoint.InstanceType = msgs.ClusterTestInstanceTypePrimary - case strings.Contains(service.Name, msgs.PodTypeReplica): - endpoint.InstanceType = msgs.ClusterTestInstanceTypeReplica - case service.Pgbouncer: - endpoint.InstanceType = msgs.ClusterTestInstanceTypePGBouncer - case service.BackrestRepo: - endpoint.InstanceType = msgs.ClusterTestInstanceTypeBackups - } - - // make a call to the Kubernetes API to see if the endpoint exists - // if there is an error, indicate that this endpoint is inaccessible - // otherwise inspect the endpoint response to see if the Pods that - // comprise the Service are in the "NotReadyAddresses" - endpoint.Available = true - if endpointResponse, err := kubeapi.GetEndpoint(endpointRequest); err != nil { - endpoint.Available = false - } else { - for _, subset := range endpointResponse.Endpoint.Subsets { - // if any of the addresses are not ready in the endpoint, - // or there are no address ready, then the endpoint is not - // ready - if len(subset.NotReadyAddresses) > 0 && len(subset.Addresses) == 0 { - endpoint.Available = false - } - } - } - - log.Debugf("Endpoint found with attributes: (%s, %s, %v)", - endpoint.InstanceType, endpoint.Message, endpoint.Available) - - // append the endpoint to the list - result.Endpoints = append(result.Endpoints, endpoint) - - } - - // concaentate to the results and continue - response.Results = append(response.Results, result) - } - - return response -} - -// CreateCluster ... 
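The endpoint test in `TestCluster` above goes through the internal `kubeapi.GetEndpoint` wrapper. With plain client-go, the same availability check looks roughly like the sketch below (the helper name is illustrative); it applies the identical rule from the deleted code: an endpoint is unavailable when a subset has only not-ready addresses.

```go
package clustertest

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// endpointAvailable reports whether the Endpoints object behind a Service has
// at least one ready address, using the same subset rule as the deleted code.
func endpointAvailable(clientset kubernetes.Interface, ns, serviceName string) bool {
	ep, err := clientset.CoreV1().Endpoints(ns).Get(serviceName, metav1.GetOptions{})
	if err != nil {
		// a missing or unreadable Endpoints object counts as unavailable
		return false
	}

	for _, subset := range ep.Subsets {
		if len(subset.NotReadyAddresses) > 0 && len(subset.Addresses) == 0 {
			return false
		}
	}
	return true
}
```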
-// pgo create cluster mycluster -func CreateCluster(request *msgs.CreateClusterRequest, ns, pgouser string) msgs.CreateClusterResponse { - var id string - resp := msgs.CreateClusterResponse{ - Result: msgs.CreateClusterDetail{}, - Status: msgs.Status{ - Code: msgs.Ok, - Msg: "", - }, - } - - clusterName := request.Name - - if clusterName == "all" { - resp.Status.Code = msgs.Error - resp.Status.Msg = "invalid cluster name 'all' is not allowed as a cluster name" - return resp - } - - if request.ReplicaCount < 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "invalid replica-count , should be greater than or equal to 0" - return resp - } - - errs := validation.IsDNS1035Label(clusterName) - if len(errs) > 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "invalid cluster name format " + errs[0] - return resp - } - - log.Debugf("create cluster called for %s", clusterName) - - // error if it already exists - _, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(clusterName, metav1.GetOptions{}) - if err == nil { - log.Debugf("pgcluster %s was found so we will not create it", clusterName) - resp.Status.Code = msgs.Error - resp.Status.Msg = "pgcluster " + clusterName + " was found so we will not create it" - return resp - } else if kerrors.IsNotFound(err) { - log.Debugf("pgcluster %s not found so we will create it", clusterName) - } else { - resp.Status.Code = msgs.Error - resp.Status.Msg = "error getting pgcluster " + clusterName + err.Error() - return resp - } - - userLabelsMap := make(map[string]string) - if request.UserLabels != "" { - labels := strings.Split(request.UserLabels, ",") - for _, v := range labels { - p := strings.Split(v, "=") - if len(p) < 2 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "invalid labels format" - return resp - } - userLabelsMap[p[0]] = p[1] - } - } - - // validate any parameters provided to bootstrap the cluster from an existing data source - if err := validateDataSourceParms(request); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // if any of the the PVCSizes are set to a customized value, ensure that they - // are recognizable by Kubernetes - // first, the primary/replica PVC size - if err := apiserver.ValidateQuantity(request.PVCSize); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf(apiserver.ErrMessagePVCSize, request.PVCSize, err.Error()) - return resp - } - - // next, the pgBackRest repo PVC size - if err := apiserver.ValidateQuantity(request.BackrestPVCSize); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf(apiserver.ErrMessagePVCSize, request.BackrestPVCSize, err.Error()) - return resp - } - - if err := apiserver.ValidateQuantity(request.WALPVCSize); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf(apiserver.ErrMessagePVCSize, request.WALPVCSize, err.Error()) - return resp - } - - // evaluate if the CPU / Memory have been set to custom values and ensure the - // limit is set to valid bounds - zeroQuantity := resource.Quantity{} - - if err := apiserver.ValidateResourceRequestLimit(request.CPURequest, request.CPULimit, zeroQuantity); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - if err := apiserver.ValidateResourceRequestLimit(request.MemoryRequest, request.MemoryLimit, - apiserver.Pgo.Cluster.DefaultInstanceResourceMemory); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // similarly, 
if any of the pgBackRest repo CPU / Memory values have been set, - // evaluate those as well - if err := apiserver.ValidateResourceRequestLimit(request.BackrestCPURequest, request.BackrestCPULimit, zeroQuantity); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - if err := apiserver.ValidateResourceRequestLimit(request.BackrestMemoryRequest, request.BackrestMemoryLimit, - apiserver.Pgo.Cluster.DefaultBackrestResourceMemory); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // similarly, if any of the pgBouncer CPU / Memory values have been set, - // evaluate those as well - if err := apiserver.ValidateResourceRequestLimit(request.PgBouncerCPURequest, request.PgBouncerCPULimit, zeroQuantity); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - if err := apiserver.ValidateResourceRequestLimit(request.PgBouncerMemoryRequest, request.PgBouncerMemoryLimit, - apiserver.Pgo.Cluster.DefaultPgBouncerResourceMemory); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // similarly, if any of the Crunchy Postgres Exporter CPU / Memory values have been set, - // evaluate those as well - if err := apiserver.ValidateResourceRequestLimit(request.ExporterCPURequest, request.ExporterCPULimit, zeroQuantity); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - if err := apiserver.ValidateResourceRequestLimit(request.ExporterMemoryRequest, request.ExporterMemoryLimit, - apiserver.Pgo.Cluster.DefaultPgBouncerResourceMemory); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // validate the storage type for each specified tablespace actually exists. 
- // if a PVCSize is passed in, also validate that it follows the Kubernetes - // format - if err := validateTablespaces(request.Tablespaces); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // validate the TLS parameters for enabling TLS in a PostgreSQL cluster - if err := validateClusterTLS(request); err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - if request.CustomConfig != "" { - found, err := validateCustomConfig(request.CustomConfig, ns) - if !found { - resp.Status.Code = msgs.Error - resp.Status.Msg = request.CustomConfig + " configmap was not found " - return resp - } - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - //add a label for the custom config - userLabelsMap[config.LABEL_CUSTOM_CONFIG] = request.CustomConfig - } - - //set the metrics flag with the global setting first - userLabelsMap[config.LABEL_EXPORTER] = strconv.FormatBool(apiserver.MetricsFlag) - if err != nil { - log.Error(err) - } - - //if metrics is chosen on the pgo command, stick it into the user labels - if request.MetricsFlag { - userLabelsMap[config.LABEL_EXPORTER] = "true" - } - if request.ServiceType != "" { - if request.ServiceType != config.DEFAULT_SERVICE_TYPE && request.ServiceType != config.LOAD_BALANCER_SERVICE_TYPE && request.ServiceType != config.NODEPORT_SERVICE_TYPE { - resp.Status.Code = msgs.Error - resp.Status.Msg = "error ServiceType should be either ClusterIP or LoadBalancer " - - return resp - } - userLabelsMap[config.LABEL_SERVICE_TYPE] = request.ServiceType - } - - // if the request is for a standby cluster then validate it to ensure all parameters have - // been properly specified as required to create a standby cluster - if request.Standby { - if err := validateStandbyCluster(request); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - // check that the specified ConfigMap exists - if request.BackrestConfig != "" { - _, err := apiserver.Clientset.CoreV1().ConfigMaps(ns).Get(request.BackrestConfig, metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - // ensure the backrest storage type specified for the cluster is valid, and that the - // configuration required to use that storage type (e.g. 
a bucket, endpoint and region - // when using aws s3 storage) has been provided - err = validateBackrestStorageTypeOnCreate(request) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - if request.BackrestStorageType != "" { - log.Debug("using backrest storage type provided by user") - userLabelsMap[config.LABEL_BACKREST_STORAGE_TYPE] = request.BackrestStorageType - } - - // if a value for BackrestStorageConfig is provided, validate it here - if request.BackrestStorageConfig != "" && !apiserver.IsValidStorageName(request.BackrestStorageConfig) { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("%q storage config was not found", request.BackrestStorageConfig) - return resp - } - - log.Debug("userLabelsMap") - log.Debugf("%v", userLabelsMap) - - if existsGlobalConfig(ns) { - userLabelsMap[config.LABEL_CUSTOM_CONFIG] = config.GLOBAL_CUSTOM_CONFIGMAP - } - - if request.StorageConfig != "" && !apiserver.IsValidStorageName(request.StorageConfig) { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("%q storage config was not found", request.StorageConfig) - return resp - } - - if request.WALStorageConfig != "" && !apiserver.IsValidStorageName(request.WALStorageConfig) { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("%q storage config was not found", request.WALStorageConfig) - return resp - } - - if request.WALPVCSize != "" && request.WALStorageConfig == "" && apiserver.Pgo.WALStorage == "" { - resp.Status.Code = msgs.Error - resp.Status.Msg = "WAL size requires WAL storage" - return resp - } - - // validate & parse nodeLabel if exists - if request.NodeLabel != "" { - if err = apiserver.ValidateNodeLabel(request.NodeLabel); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - parts := strings.Split(request.NodeLabel, "=") - userLabelsMap[config.LABEL_NODE_LABEL_KEY] = parts[0] - userLabelsMap[config.LABEL_NODE_LABEL_VALUE] = parts[1] - - log.Debug("primary node labels used from user entered flag") - } - - if request.ReplicaStorageConfig != "" { - if apiserver.IsValidStorageName(request.ReplicaStorageConfig) == false { - resp.Status.Code = msgs.Error - resp.Status.Msg = request.ReplicaStorageConfig + " Storage config was not found " - return resp - } - } - - // if the pgBouncer flag is set, validate that replicas is set to a - // nonnegative value - if request.PgbouncerFlag && request.PgBouncerReplicas < 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf(apiserver.ErrMessageReplicas+" for pgBouncer", 1) - return resp - } - - // if a value is provided in the request for PodAntiAffinity, then ensure is valid. If - // it is, then set the user label for pod anti-affinity to the request value - // (which in turn becomes a *Label* which is important for anti-affinity). - // Otherwise, return the validation error. - if request.PodAntiAffinity != "" { - podAntiAffinityType := crv1.PodAntiAffinityType(request.PodAntiAffinity) - if err := podAntiAffinityType.Validate(); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - userLabelsMap[config.LABEL_POD_ANTI_AFFINITY] = request.PodAntiAffinity - } else { - userLabelsMap[config.LABEL_POD_ANTI_AFFINITY] = "" - } - - // check to see if there are any pod anti-affinity overrides, specifically for - // pgBackRest and pgBouncer. 
If there are, ensure that they are valid values - if request.PodAntiAffinityPgBackRest != "" { - podAntiAffinityType := crv1.PodAntiAffinityType(request.PodAntiAffinityPgBackRest) - - if err := podAntiAffinityType.Validate(); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - if request.PodAntiAffinityPgBouncer != "" { - podAntiAffinityType := crv1.PodAntiAffinityType(request.PodAntiAffinityPgBouncer) - - if err := podAntiAffinityType.Validate(); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - // if synchronous replication has been enabled, then add to user labels - if request.SyncReplication != nil { - userLabelsMap[config.LABEL_SYNC_REPLICATION] = - string(strconv.FormatBool(*request.SyncReplication)) - } - - // pgBackRest URI style must be set to either 'path' or 'host'. If it is neither, - // log an error and stop the cluster from being created. - if request.BackrestS3URIStyle != "" { - if request.BackrestS3URIStyle != "path" && request.BackrestS3URIStyle != "host" { - resp.Status.Code = msgs.Error - resp.Status.Msg = "pgBackRest S3 URI style must be set to either \"path\" or \"host\"." - return resp - } - } - - // Create an instance of our CRD - newInstance := getClusterParams(request, clusterName, userLabelsMap, ns) - newInstance.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser - - if request.SecretFrom != "" { - err = validateSecretFrom(request.SecretFrom, newInstance.Spec.User, ns) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = request.SecretFrom + " secret was not found " - return resp - } - } - - validateConfigPolicies(clusterName, request.Policies, ns) - - // create the user secrets - // first, the superuser - if secretName, password, err := createUserSecret(request, newInstance, crv1.RootSecretSuffix, - crv1.PGUserSuperuser, request.PasswordSuperuser); err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } else { - newInstance.Spec.RootSecretName = secretName - - // if the user requests to show system accounts, append it to the list - if request.ShowSystemAccounts { - user := msgs.CreateClusterDetailUser{ - Username: crv1.PGUserSuperuser, - Password: password, - } - - resp.Result.Users = append(resp.Result.Users, user) - } - } - - // next, the replication user - if secretName, password, err := createUserSecret(request, newInstance, crv1.PrimarySecretSuffix, - crv1.PGUserReplication, request.PasswordReplication); err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } else { - newInstance.Spec.PrimarySecretName = secretName - - // if the user requests to show system accounts, append it to the list - if request.ShowSystemAccounts { - user := msgs.CreateClusterDetailUser{ - Username: crv1.PGUserReplication, - Password: password, - } - - resp.Result.Users = append(resp.Result.Users, user) - } - } - - // finally, the user from the request and/or default user - userSecretSuffix := fmt.Sprintf("-%s%s", newInstance.Spec.User, crv1.UserSecretSuffix) - if secretName, password, err := createUserSecret(request, newInstance, userSecretSuffix, newInstance.Spec.User, - request.Password); err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } else { - newInstance.Spec.UserSecretName = secretName - - user := msgs.CreateClusterDetailUser{ - Username: newInstance.Spec.User, - Password: password, - } - 
- resp.Result.Users = append(resp.Result.Users, user) - } - - // there's a secret for the monitoring user too - newInstance.Spec.CollectSecretName = clusterName + crv1.ExporterSecretSuffix - - // Create Backrest secret for S3/SSH Keys: - // We make this regardless if backrest is enabled or not because - // the deployment template always tries to mount /sshd volume - secretName := fmt.Sprintf("%s-%s", clusterName, config.LABEL_BACKREST_REPO_SECRET) - - if _, err := apiserver.Clientset. - CoreV1().Secrets(request.Namespace). - Get(secretName, metav1.GetOptions{}); kubeapi.IsNotFound(err) { - // determine if a custom CA secret should be used - backrestS3CACert := []byte{} - - if request.BackrestS3CASecretName != "" { - backrestSecret, err := apiserver.Clientset. - CoreV1().Secrets(request.Namespace). - Get(request.BackrestS3CASecretName, metav1.GetOptions{}) - - if err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("Error finding pgBackRest S3 CA secret \"%s\": %s", - request.BackrestS3CASecretName, err.Error()) - return resp - } - - // attempt to retrieves the custom CA, assuming it has the name - // "aws-s3-ca.crt" - backrestS3CACert = backrestSecret.Data[util.BackRestRepoSecretKeyAWSS3KeyAWSS3CACert] - } - - err := util.CreateBackrestRepoSecrets(apiserver.Clientset, - util.BackrestRepoConfig{ - BackrestS3CA: backrestS3CACert, - BackrestS3Key: request.BackrestS3Key, - BackrestS3KeySecret: request.BackrestS3KeySecret, - ClusterName: clusterName, - ClusterNamespace: request.Namespace, - OperatorNamespace: apiserver.PgoNamespace, - }) - - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("could not create backrest repo secret: %s", err) - return resp - } - } else if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("could not query if backrest repo secret exits: %s", err) - return resp - } - - //create a workflow for this new cluster - id, err = createWorkflowTask(clusterName, ns, pgouser) - if err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // assign the workflow information to rhe result, as well as the use labels - // for the CRD - resp.Result.WorkflowID = id - newInstance.Spec.UserLabels[config.LABEL_WORKFLOW_ID] = id - resp.Result.Database = newInstance.Spec.Database - - //create CRD for new cluster - _, err = apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Create(newInstance) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // assign the cluster information to the result - resp.Result.Name = newInstance.Spec.Name - - // and return! 
- return resp -} - -func validateConfigPolicies(clusterName, PoliciesFlag, ns string) error { - var err error - var configPolicies string - - if PoliciesFlag == "" { - log.Debugf("%s is Pgo.Cluster.Policies", apiserver.Pgo.Cluster.Policies) - configPolicies = apiserver.Pgo.Cluster.Policies - } else { - configPolicies = PoliciesFlag - } - - if configPolicies == "" { - log.Debug("no policies are specified in either pgo.yaml or from user") - return err - } - - policies := strings.Split(configPolicies, ",") - - for _, v := range policies { - // error if it already exists - _, err := apiserver.Clientset.CrunchydataV1().Pgpolicies(ns).Get(v, metav1.GetOptions{}) - if err != nil { - log.Error("error getting pgpolicy " + v + err.Error()) - return err - } - //create a pgtask to add the policy after the db is ready - } - - spec := crv1.PgtaskSpec{} - spec.StorageSpec = crv1.PgStorageSpec{} - spec.TaskType = crv1.PgtaskAddPolicies - spec.Status = "requested" - spec.Parameters = make(map[string]string) - for _, v := range policies { - spec.Parameters[v] = v - } - spec.Name = clusterName + "-policies" - spec.Namespace = ns - labels := make(map[string]string) - labels[config.LABEL_PG_CLUSTER] = clusterName - - newInstance := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: spec.Name, - Labels: labels, - }, - Spec: spec, - } - - _, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(newInstance) - - return err -} - -func getClusterParams(request *msgs.CreateClusterRequest, name string, userLabelsMap map[string]string, ns string) *crv1.Pgcluster { - - spec := crv1.PgclusterSpec{ - Annotations: crv1.ClusterAnnotations{ - Backrest: map[string]string{}, - Global: map[string]string{}, - PgBouncer: map[string]string{}, - Postgres: map[string]string{}, - }, - BackrestResources: v1.ResourceList{}, - BackrestLimits: v1.ResourceList{}, - Limits: v1.ResourceList{}, - Resources: v1.ResourceList{}, - ExporterResources: v1.ResourceList{}, - ExporterLimits: v1.ResourceList{}, - PgBouncer: crv1.PgBouncerSpec{ - Limits: v1.ResourceList{}, - Resources: v1.ResourceList{}, - }, - } - - if userLabelsMap[config.LABEL_CUSTOM_CONFIG] != "" { - spec.CustomConfig = userLabelsMap[config.LABEL_CUSTOM_CONFIG] - } - - // if the request has overriding CPU/Memory requests/limits parameters, - // these will take precedence over the defaults - if request.CPULimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.CPULimit) - spec.Limits[v1.ResourceCPU] = quantity - } - - if request.CPURequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.CPURequest) - spec.Resources[v1.ResourceCPU] = quantity - } - - if request.MemoryLimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.MemoryLimit) - spec.Limits[v1.ResourceMemory] = quantity - } - - if request.MemoryRequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.MemoryRequest) - spec.Resources[v1.ResourceMemory] = quantity - } else { - spec.Resources[v1.ResourceMemory] = apiserver.Pgo.Cluster.DefaultInstanceResourceMemory - } - - // similarly, if there are any overriding pgBackRest repository container - // resource request values, set them here - if request.BackrestCPULimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.BackrestCPULimit) - 
spec.BackrestLimits[v1.ResourceCPU] = quantity - } - - if request.BackrestCPURequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.BackrestCPURequest) - spec.BackrestResources[v1.ResourceCPU] = quantity - } - - if request.BackrestMemoryLimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.BackrestMemoryLimit) - spec.BackrestLimits[v1.ResourceMemory] = quantity - } - - if request.BackrestMemoryRequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.BackrestMemoryRequest) - spec.BackrestResources[v1.ResourceMemory] = quantity - } else { - spec.BackrestResources[v1.ResourceMemory] = apiserver.Pgo.Cluster.DefaultBackrestResourceMemory - } - - // similarly, if there are any overriding pgBackRest repository container - // resource request values, set them here - if request.ExporterCPULimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.ExporterCPULimit) - spec.ExporterLimits[v1.ResourceCPU] = quantity - } - - if request.ExporterCPURequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.ExporterCPURequest) - spec.ExporterResources[v1.ResourceCPU] = quantity - } - - if request.ExporterMemoryLimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.ExporterMemoryLimit) - spec.ExporterLimits[v1.ResourceMemory] = quantity - } - - if request.ExporterMemoryRequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.ExporterMemoryRequest) - spec.ExporterResources[v1.ResourceMemory] = quantity - } else { - spec.ExporterResources[v1.ResourceMemory] = apiserver.Pgo.Cluster.DefaultExporterResourceMemory - } - - // if the pgBouncer flag is set to true, indicate that the pgBouncer - // deployment should be made available in this cluster - if request.PgbouncerFlag { - spec.PgBouncer.Replicas = config.DefaultPgBouncerReplicas - - // if the user requests a custom amount of pgBouncer replicas, pass them in - // here - if request.PgBouncerReplicas > 0 { - spec.PgBouncer.Replicas = request.PgBouncerReplicas - } - } - - // similarly, if there are any overriding pgBouncer container resource request - // values, set them here - if request.PgBouncerCPULimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.PgBouncerCPULimit) - spec.PgBouncer.Limits[v1.ResourceCPU] = quantity - } - - if request.PgBouncerCPURequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.PgBouncerCPURequest) - spec.PgBouncer.Resources[v1.ResourceCPU] = quantity - } - - if request.PgBouncerMemoryLimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.PgBouncerMemoryLimit) - spec.PgBouncer.Limits[v1.ResourceMemory] = quantity - } - - if request.PgBouncerMemoryRequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.PgBouncerMemoryRequest) - spec.PgBouncer.Resources[v1.ResourceMemory] = quantity - } else { - spec.PgBouncer.Resources[v1.ResourceMemory] = apiserver.Pgo.Cluster.DefaultPgBouncerResourceMemory - } - - 
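The request/limit handling above repeats one pattern: a string from the request is parsed into a Kubernetes `resource.Quantity` and stored in a `v1.ResourceList`, with a configured default used when the memory value is empty. The following standalone sketch shows that pattern using only the upstream Kubernetes libraries; the `buildResourceList` helper and the `"512Mi"` default are illustrative stand-ins, not part of the Operator code.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// buildResourceList turns optional CPU/memory strings into a ResourceList.
// An empty CPU value is skipped; an empty memory value falls back to a
// default, mirroring how the cluster spec above always carries a memory request.
func buildResourceList(cpu, memory, defaultMemory string) (v1.ResourceList, error) {
	list := v1.ResourceList{}

	if cpu != "" {
		q, err := resource.ParseQuantity(cpu)
		if err != nil {
			return nil, fmt.Errorf("invalid CPU value %q: %w", cpu, err)
		}
		list[v1.ResourceCPU] = q
	}

	if memory == "" {
		memory = defaultMemory
	}
	q, err := resource.ParseQuantity(memory)
	if err != nil {
		return nil, fmt.Errorf("invalid memory value %q: %w", memory, err)
	}
	list[v1.ResourceMemory] = q

	return list, nil
}

func main() {
	// "250m" of CPU and the illustrative default memory request of "512Mi".
	requests, err := buildResourceList("250m", "", "512Mi")
	if err != nil {
		panic(err)
	}
	fmt.Printf("cpu=%s memory=%s\n", requests.Cpu(), requests.Memory())
}
```

Because the request values were already validated earlier in the call path, the Operator code above discards the parse error; the sketch keeps it to stay self-contained.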
spec.PrimaryStorage, _ = apiserver.Pgo.GetStorageSpec(apiserver.Pgo.PrimaryStorage) - if request.StorageConfig != "" { - spec.PrimaryStorage, _ = apiserver.Pgo.GetStorageSpec(request.StorageConfig) - } - - // set the pd anti-affinity values - if podAntiAffinity, err := apiserver.Pgo.GetPodAntiAffinitySpec( - crv1.PodAntiAffinityType(request.PodAntiAffinity), - crv1.PodAntiAffinityType(request.PodAntiAffinityPgBackRest), - crv1.PodAntiAffinityType(request.PodAntiAffinityPgBouncer), - ); err != nil { - log.Warn("Could not set pod anti-affinity rules:", err.Error()) - spec.PodAntiAffinity = crv1.PodAntiAffinitySpec{} - } else { - spec.PodAntiAffinity = podAntiAffinity - } - - // if the PVCSize is overwritten, update the primary storage spec with this - // value - if request.PVCSize != "" { - log.Debugf("PVC Size is overwritten to be [%s]", request.PVCSize) - spec.PrimaryStorage.Size = request.PVCSize - } - - // extract parameters for optional WAL storage. server configuration and - // request parameters are all optional. - if apiserver.Pgo.WALStorage != "" { - spec.WALStorage, _ = apiserver.Pgo.GetStorageSpec(apiserver.Pgo.WALStorage) - } - if request.WALStorageConfig != "" { - spec.WALStorage, _ = apiserver.Pgo.GetStorageSpec(request.WALStorageConfig) - } - if request.WALPVCSize != "" { - spec.WALStorage.Size = request.WALPVCSize - } - - // extract the parameters for the TablespaceMounts and put them in the format - // that is required by the pgcluster CRD - spec.TablespaceMounts = map[string]crv1.PgStorageSpec{} - - for _, tablespace := range request.Tablespaces { - storageSpec, _ := apiserver.Pgo.GetStorageSpec(tablespace.StorageConfig) - - // if a PVCSize is specified, override the value of the Size parameter in - // storage spec - if tablespace.PVCSize != "" { - storageSpec.Size = tablespace.PVCSize - } - - spec.TablespaceMounts[tablespace.Name] = storageSpec - } - - spec.ReplicaStorage, _ = apiserver.Pgo.GetStorageSpec(apiserver.Pgo.ReplicaStorage) - if request.ReplicaStorageConfig != "" { - spec.ReplicaStorage, _ = apiserver.Pgo.GetStorageSpec(request.ReplicaStorageConfig) - } - - // if the PVCSize is overwritten, update the replica storage spec with this - // value - if request.PVCSize != "" { - log.Debugf("PVC Size is overwritten to be [%s]", request.PVCSize) - spec.ReplicaStorage.Size = request.PVCSize - } - - spec.BackrestStorage, _ = apiserver.Pgo.GetStorageSpec(apiserver.Pgo.BackrestStorage) - - // if the user passed in a value to override the pgBackRest storage - // configuration, apply it here. 
Note that (and this follows the legacy code) - // given we've validated this storage configuration exists, this call should - // be ok - if request.BackrestStorageConfig != "" { - spec.BackrestStorage, _ = apiserver.Pgo.GetStorageSpec(request.BackrestStorageConfig) - } - - // if the BackrestPVCSize is overwritten, update the backrest storage spec - // with this value - if request.BackrestPVCSize != "" { - log.Debugf("pgBackRest PVC Size is overwritten to be [%s]", - request.BackrestPVCSize) - spec.BackrestStorage.Size = request.BackrestPVCSize - } - - spec.CCPImageTag = apiserver.Pgo.Cluster.CCPImageTag - if request.CCPImageTag != "" { - spec.CCPImageTag = request.CCPImageTag - log.Debugf("using CCPImageTag from command line %s", request.CCPImageTag) - } - - if request.CCPImage != "" { - spec.CCPImage = request.CCPImage - log.Debugf("user is overriding CCPImage from command line %s", request.CCPImage) - } else { - spec.CCPImage = "crunchy-postgres-ha" - } - - // update the CRD spec to use the custom CCPImagePrefix, if given - // otherwise, set the value from the global configuration - spec.CCPImagePrefix = util.GetValueOrDefault(request.CCPImagePrefix, apiserver.Pgo.Cluster.CCPImagePrefix) - - // update the CRD spec to use the custom PGOImagePrefix, if given - // otherwise, set the value from the global configuration - spec.PGOImagePrefix = util.GetValueOrDefault(request.PGOImagePrefix, apiserver.Pgo.Pgo.PGOImagePrefix) - - spec.Namespace = ns - spec.Name = name - spec.ClusterName = name - spec.Port = apiserver.Pgo.Cluster.Port - spec.PGBadgerPort = apiserver.Pgo.Cluster.PGBadgerPort - spec.ExporterPort = apiserver.Pgo.Cluster.ExporterPort - if request.Policies == "" { - spec.Policies = apiserver.Pgo.Cluster.Policies - log.Debugf("Pgo.Cluster.Policies %s", apiserver.Pgo.Cluster.Policies) - } else { - spec.Policies = request.Policies - } - - spec.Replicas = "0" - str := apiserver.Pgo.Cluster.Replicas - log.Debugf("[%s] is Pgo.Cluster.Replicas", str) - if str != "" { - spec.Replicas = str - } - log.Debugf("replica count is %d", request.ReplicaCount) - if request.ReplicaCount > 0 { - spec.Replicas = strconv.Itoa(request.ReplicaCount) - log.Debugf("replicas is %s", spec.Replicas) - } - spec.UserLabels = userLabelsMap - spec.UserLabels[config.LABEL_PGO_VERSION] = msgs.PGO_VERSION - - //override any values from config file - str = apiserver.Pgo.Cluster.Port - log.Debugf("%s", apiserver.Pgo.Cluster.Port) - if str != "" { - spec.Port = str - } - - // set the user. First, attempt to default to the user that is in the pgo.yaml - // configuration file. If the user has entered a username in the request, - // then use that one - spec.User = apiserver.Pgo.Cluster.User - - if request.Username != "" { - spec.User = request.Username - } - - log.Debugf("username set to [%s]", spec.User) - - // set the name of the database. The hierarchy is as such: - // 1. Use the name that the user provides in the request - // 2. Use the name that is in the pgo.yaml file - // 3. 
Use the name of the cluster - switch { - case request.Database != "": - spec.Database = request.Database - case apiserver.Pgo.Cluster.Database != "": - spec.Database = apiserver.Pgo.Cluster.Database - default: - spec.Database = spec.Name - } - - log.Debugf("database set to [%s]", spec.Database) - - // set up TLS - spec.TLSOnly = request.TLSOnly - spec.TLS.CASecret = request.CASecret - spec.TLS.TLSSecret = request.TLSSecret - spec.TLS.ReplicationTLSSecret = request.ReplicationTLSSecret - - spec.CustomConfig = request.CustomConfig - spec.SyncReplication = request.SyncReplication - - if request.BackrestConfig != "" { - configmap := v1.ConfigMapProjection{} - configmap.Name = request.BackrestConfig - spec.BackrestConfig = append(spec.BackrestConfig, v1.VolumeProjection{ConfigMap: &configmap}) - } - - // set pgBackRest S3 settings in the spec if included in the request - // otherwise set to the default configuration value - if request.BackrestS3Bucket != "" { - spec.BackrestS3Bucket = request.BackrestS3Bucket - } else { - spec.BackrestS3Bucket = apiserver.Pgo.Cluster.BackrestS3Bucket - } - if request.BackrestS3Endpoint != "" { - spec.BackrestS3Endpoint = request.BackrestS3Endpoint - } else { - spec.BackrestS3Endpoint = apiserver.Pgo.Cluster.BackrestS3Endpoint - } - if request.BackrestS3Region != "" { - spec.BackrestS3Region = request.BackrestS3Region - } else { - spec.BackrestS3Region = apiserver.Pgo.Cluster.BackrestS3Region - } - if request.BackrestS3URIStyle != "" { - spec.BackrestS3URIStyle = request.BackrestS3URIStyle - } else { - spec.BackrestS3URIStyle = apiserver.Pgo.Cluster.BackrestS3URIStyle - } - - // if the pgbackrest-s3-verify-tls flag was set, update the CR spec - // value accordingly, otherwise, do not set - if request.BackrestS3VerifyTLS != msgs.UpdateBackrestS3VerifyTLSDoNothing { - if request.BackrestS3VerifyTLS == msgs.UpdateBackrestS3VerifyTLSDisable { - spec.BackrestS3VerifyTLS = "false" - } else { - spec.BackrestS3VerifyTLS = "true" - } - } else { - spec.BackrestS3VerifyTLS = apiserver.Pgo.Cluster.BackrestS3VerifyTLS - } - - // set the data source that should be utilized to bootstrap the cluster - spec.PGDataSource = request.PGDataSource - - // create a map for the CR specific annotations - annotations := map[string]string{} - // store the default current primary value as an annotation - annotations[config.ANNOTATION_CURRENT_PRIMARY] = spec.Name - // store the initial deployment value, which will match the - // cluster name initially - annotations[config.ANNOTATION_PRIMARY_DEPLOYMENT] = spec.Name - - // set the user-defined annotations - // go through each annotation grouping and make the appropriate changes in the - // equivalent cluster annotation group - setClusterAnnotationGroup(spec.Annotations.Global, request.Annotations.Global) - setClusterAnnotationGroup(spec.Annotations.Postgres, request.Annotations.Postgres) - setClusterAnnotationGroup(spec.Annotations.Backrest, request.Annotations.Backrest) - setClusterAnnotationGroup(spec.Annotations.PgBouncer, request.Annotations.PgBouncer) - - labels := make(map[string]string) - labels[config.LABEL_NAME] = name - if !request.AutofailFlag || apiserver.Pgo.Cluster.DisableAutofail { - labels[config.LABEL_AUTOFAIL] = "false" - } else { - labels[config.LABEL_AUTOFAIL] = "true" - } - - // set whether or not the cluster will be a standby cluster - spec.Standby = request.Standby - // set the pgBackRest repository path - spec.BackrestRepoPath = request.BackrestRepoPath - - // pgbadger - set with global flag first then check for a user 
flag - labels[config.LABEL_BADGER] = strconv.FormatBool(apiserver.BadgerFlag) - if request.BadgerFlag { - labels[config.LABEL_BADGER] = "true" - } - - // pgBackRest is always set to true. This is here due to a time where - // pgBackRest was not the only way - labels[config.LABEL_BACKREST] = "true" - - newInstance := &crv1.Pgcluster{ - ObjectMeta: metav1.ObjectMeta{ - Name: name, - Labels: labels, - Annotations: annotations, - }, - Spec: spec, - Status: crv1.PgclusterStatus{ - State: crv1.PgclusterStateCreated, - Message: "Created, not processed yet", - }, - } - return newInstance -} - -func validateSecretFrom(secretname, user, ns string) error { - var err error - selector := config.LABEL_PG_CLUSTER + "=" + secretname - secrets, err := apiserver.Clientset. - CoreV1().Secrets(ns). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return err - } - - log.Debugf("secrets for %s", secretname) - pgprimaryFound := false - pgrootFound := false - pguserFound := false - - for _, s := range secrets.Items { - if s.ObjectMeta.Name == secretname+crv1.PrimarySecretSuffix { - pgprimaryFound = true - } else if s.ObjectMeta.Name == secretname+crv1.RootSecretSuffix { - pgrootFound = true - } else if s.ObjectMeta.Name == secretname+"-"+user+crv1.UserSecretSuffix { - pguserFound = true - } - } - if !pgprimaryFound { - return errors.New(secretname + crv1.PrimarySecretSuffix + " not found") - } - if !pgrootFound { - return errors.New(secretname + crv1.RootSecretSuffix + " not found") - } - if !pguserFound { - return errors.New(secretname + "-" + user + crv1.UserSecretSuffix + " not found") - } - - return err -} - -func getReadyStatus(pod *v1.Pod) (string, bool) { - equal := false - readyCount := 0 - containerCount := 0 - for _, stat := range pod.Status.ContainerStatuses { - containerCount++ - if stat.Ready { - readyCount++ - } - } - if readyCount == containerCount { - equal = true - } - return fmt.Sprintf("%d/%d", readyCount, containerCount), equal - -} - -func createDeleteDataTasks(clusterName string, storageSpec crv1.PgStorageSpec, deleteBackups bool, ns string) error { - - var err error - - log.Debugf("creatingDeleteDataTasks deployments for pg-cluster=%s\n", clusterName) - - return err -} - -func createWorkflowTask(clusterName, ns, pgouser string) (string, error) { - - //create pgtask CRD - spec := crv1.PgtaskSpec{} - spec.Namespace = ns - spec.Name = clusterName + "-" + crv1.PgtaskWorkflowCreateClusterType - spec.TaskType = crv1.PgtaskWorkflow - - spec.Parameters = make(map[string]string) - spec.Parameters[crv1.PgtaskWorkflowSubmittedStatus] = time.Now().Format(time.RFC3339) - spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName - - u, err := ioutil.ReadFile("/proc/sys/kernel/random/uuid") - if err != nil { - log.Error(err) - return "", err - } - spec.Parameters[crv1.PgtaskWorkflowID] = string(u[:len(u)-1]) - - newInstance := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: spec.Name, - }, - Spec: spec, - } - newInstance.ObjectMeta.Labels = make(map[string]string) - newInstance.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser - newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = clusterName - newInstance.ObjectMeta.Labels[crv1.PgtaskWorkflowID] = spec.Parameters[crv1.PgtaskWorkflowID] - - _, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(newInstance) - if err != nil { - log.Error(err) - return "", err - } - return spec.Parameters[crv1.PgtaskWorkflowID], err -} - -func getType(pod *v1.Pod, clusterName string) string { - - //log.Debugf("%v\n", 
pod.ObjectMeta.Labels) - if pod.ObjectMeta.Labels[config.LABEL_PGO_BACKREST_REPO] != "" { - return msgs.PodTypePgbackrest - } else if pod.ObjectMeta.Labels[config.LABEL_PGBOUNCER] != "" { - return msgs.PodTypePgbouncer - } else if pod.ObjectMeta.Labels[config.LABEL_PGHA_ROLE] == config.LABEL_PGHA_ROLE_PRIMARY { - return msgs.PodTypePrimary - } else if pod.ObjectMeta.Labels[config.LABEL_PGHA_ROLE] == config.LABEL_PGHA_ROLE_REPLICA { - return msgs.PodTypeReplica - } - return msgs.PodTypeUnknown - -} - -func validateCustomConfig(configmapname, ns string) (bool, error) { - _, err := apiserver.Clientset.CoreV1().ConfigMaps(ns).Get(configmapname, metav1.GetOptions{}) - return err == nil, err -} - -func existsGlobalConfig(ns string) bool { - _, err := apiserver.Clientset.CoreV1().ConfigMaps(ns).Get(config.GLOBAL_CUSTOM_CONFIGMAP, metav1.GetOptions{}) - return err == nil -} - -func getReplicas(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowClusterReplica, error) { - - output := make([]msgs.ShowClusterReplica, 0) - - selector := config.LABEL_PG_CLUSTER + "=" + cluster.Spec.Name - - replicaList, err := apiserver.Clientset.CrunchydataV1().Pgreplicas(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return output, err - } - - if len(replicaList.Items) == 0 { - log.Debug("no replicas found") - return output, err - } - - for _, replica := range replicaList.Items { - d := msgs.ShowClusterReplica{} - d.Name = replica.Spec.Name - output = append(output, d) - } - - return output, err -} - -// createUserSecret is modeled off of the legacy "createSecrets" method to -// create a user secret for a specified username and password. It determines how -// to assign the credentials to the user based on whether or not they selected -// one of the following in precedence order, with the first in order having -// higher precedence: -// -// 1. The password is supplied directly by the user -// 2. The password is loaded from a pre-existing secret and copied into a new -// secret. -// 3. The password is generated based on the length provided by the user -// 4. The password is generated based on the value available in the Operator -// configuration -// 5. The password is generated by the global Operator default value for -// password length -// -// returns the secertname, password as well as any errors -func createUserSecret(request *msgs.CreateClusterRequest, cluster *crv1.Pgcluster, secretNameSuffix, username, password string) (string, string, error) { - // the secretName is just the combination cluster name and the secretNameSuffix - secretName := fmt.Sprintf("%s%s", cluster.Spec.Name, secretNameSuffix) - - // if the secret already exists, we can perform an early exit - // if there is an error, we'll ignore it - if secret, err := apiserver.Clientset. - CoreV1().Secrets(cluster.Spec.Namespace). - Get(secretName, metav1.GetOptions{}); err == nil { - log.Infof("secret exists: [%s] - skipping", secretName) - - return secretName, string(secret.Data["password"][:]), nil - } - - // alright, go through the hierarchy and determine if we need to set the - // password. 
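The comment block above describes a precedence order for choosing a user's password: an explicit password wins, then a password copied from a pre-existing secret, then a generated one whose length comes from the request, the configuration, or a global default. The sketch below restates that hierarchy in isolation; `resolvePassword`, the `lookup` callback, and the default length of 24 are illustrative stand-ins, not the Operator's actual helpers.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

const defaultPasswordLength = 24 // illustrative default, not the Operator's configured value

// generatePassword builds a random alphanumeric password of the given length.
func generatePassword(length int) (string, error) {
	const chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
	out := make([]byte, length)
	for i := range out {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(chars))))
		if err != nil {
			return "", err
		}
		out[i] = chars[n.Int64()]
	}
	return string(out), nil
}

// resolvePassword applies the precedence described above:
//  1. an explicitly supplied password
//  2. a password loaded from a pre-existing secret (via lookup)
//  3. a generated password, using the requested length, then the
//     configured length, then a package default
func resolvePassword(explicit, secretFrom string, lookup func(string) (string, error),
	requestedLength, configuredLength int) (string, error) {
	switch {
	case explicit != "":
		return explicit, nil
	case secretFrom != "":
		return lookup(secretFrom)
	default:
		length := requestedLength
		if length <= 0 {
			length = configuredLength
		}
		if length <= 0 {
			length = defaultPasswordLength
		}
		return generatePassword(length)
	}
}

func main() {
	lookup := func(name string) (string, error) { return "password-from-" + name, nil }

	// No explicit password or source secret, so a 16-character password is generated.
	pw, err := resolvePassword("", "", lookup, 16, 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(pw)) // 16
}
```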
- switch { - // if the user password is already set, then we can move on to the next step - case password != "": - break - // if the "SecretFrom" parameter is set, then load the password from a prexisting password - case request.SecretFrom != "": - // set up the name of the secret that we are loading the secret from - secretFromSecretName := fmt.Sprintf("%s%s", request.SecretFrom, secretNameSuffix) - - // now attempt to load said secret - oldPassword, err := util.GetPasswordFromSecret(apiserver.Clientset, cluster.Spec.Namespace, secretFromSecretName) - - // if there is an error, abandon here, otherwise set the oldPassword as the - // current password - if err != nil { - return "", "", err - } - - password = oldPassword - // if the user set the password length in the request, honor that here - // otherwise use either the configured or hard coded default - default: - passwordLength := request.PasswordLength - - if request.PasswordLength <= 0 { - passwordLength = util.GeneratedPasswordLength(apiserver.Pgo.Cluster.PasswordLength) - } - - generatedPassword, err := util.GeneratePassword(passwordLength) - - // if the password fails to generate, return the error - if err != nil { - return "", "", err - } - - password = generatedPassword - } - - // great, now we can create the secret! if we can't, return an error - if err := util.CreateSecret(apiserver.Clientset, cluster.Spec.Name, secretName, - username, password, cluster.Spec.Namespace); err != nil { - return "", "", err - } - - // otherwise, return the secret name, password - return secretName, password, nil -} - -// UpdateCluster ... -func UpdateCluster(request *msgs.UpdateClusterRequest) msgs.UpdateClusterResponse { - - response := msgs.UpdateClusterResponse{} - response.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - response.Results = make([]string, 0) - - log.Debugf("autofail is [%v]\n", request.Autofail) - - switch { - case request.Startup && request.Shutdown: - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("A startup and a shutdown were requested. " + - "Please specify one or the other.") - return response - } - - if request.Startup && request.Shutdown { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Both a startup and a shutdown was requested. " + - "Please specify one or the other") - return response - } - - // evaluate if the CPU / Memory have been set to custom values - zeroQuantity := resource.Quantity{} - - if err := apiserver.ValidateResourceRequestLimit(request.CPURequest, request.CPULimit, zeroQuantity); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // Note: we don't consider the default value here because the cluster is - // already deployed. Additionally, this does not check to see if the - // request/limits are inline with what's already deployed in a pgcluster. 
That - // just becomes too complicated - if err := apiserver.ValidateResourceRequestLimit(request.MemoryRequest, request.MemoryLimit, zeroQuantity); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // similarly, if any of the pgBackRest repo CPU / Memory values have been set, - // evaluate those as well - if err := apiserver.ValidateResourceRequestLimit(request.BackrestCPURequest, request.BackrestCPULimit, zeroQuantity); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // Note: we don't consider the default value here because the cluster is - // already deployed. Additionally, this does not check to see if the - // request/limits are inline with what's already deployed for pgBackRest. That - // just becomes too complicated - if err := apiserver.ValidateResourceRequestLimit(request.BackrestMemoryRequest, request.BackrestMemoryLimit, zeroQuantity); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // similarly, if any of the Crunchy Postgres Exporter repo CPU / Memory values have been set, - // evaluate those as well - if err := apiserver.ValidateResourceRequestLimit(request.ExporterCPURequest, request.ExporterCPULimit, zeroQuantity); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // Note: we don't consider the default value here because the cluster is - // already deployed. Additionally, this does not check to see if the - // request/limits are inline with what's already deployed for Crunchy Postgres - // Exporter. That just becomes too complicated - if err := apiserver.ValidateResourceRequestLimit(request.ExporterMemoryRequest, request.ExporterMemoryLimit, zeroQuantity); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // validate the storage type for each specified tablespace actually exists. 
- // if a PVCSize is passed in, also validate that it follows the Kubernetes - // format - if err := validateTablespaces(request.Tablespaces); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - clusterList := crv1.PgclusterList{} - - //get the clusters list - if request.AllFlag { - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).List(metav1.ListOptions{}) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - clusterList = *cl - } else if request.Selector != "" { - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).List(metav1.ListOptions{ - LabelSelector: request.Selector, - }) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - clusterList = *cl - } else { - for _, v := range request.Clustername { - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).Get(v, metav1.GetOptions{}) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - clusterList.Items = append(clusterList.Items, *cl) - } - } - - if len(clusterList.Items) == 0 { - response.Status.Code = msgs.Error - response.Status.Msg = "no clusters found" - return response - } - - for _, cluster := range clusterList.Items { - - //set autofail=true or false on each pgcluster CRD - // Make the change based on the value of Autofail vis-a-vis UpdateClusterAutofailStatus - switch request.Autofail { - case msgs.UpdateClusterAutofailEnable: - cluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] = "true" - case msgs.UpdateClusterAutofailDisable: - cluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] = "false" - } - - // enable or disable standby mode based on UpdateClusterStandbyStatus provided in - // the request - switch request.Standby { - case msgs.UpdateClusterStandbyEnable: - if cluster.Status.State == crv1.PgclusterStateShutdown { - cluster.Spec.Standby = true - } else { - response.Status.Code = msgs.Error - response.Status.Msg = "Cluster must be shutdown in order to enable standby mode" - return response - } - case msgs.UpdateClusterStandbyDisable: - cluster.Spec.Standby = false - } - // return an error if attempting to enable standby for a cluster that does not have the - // required S3 settings - if cluster.Spec.Standby && - !strings.Contains(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], "s3") { - response.Status.Code = msgs.Error - response.Status.Msg = "Backrest storage type 's3' must be enabled in order to enable " + - "standby mode" - return response - } - - // if a startup or shutdown was requested then update the pgcluster spec accordingly - if request.Startup { - cluster.Spec.Shutdown = false - } else if request.Shutdown { - cluster.Spec.Shutdown = true - } - - // ensure there is a value for Resources - if cluster.Spec.Resources == nil { - cluster.Spec.Limits = v1.ResourceList{} - cluster.Spec.Resources = v1.ResourceList{} - } - - // if the CPU or memory values have been modified, update the values in the - // cluster CRD - if request.CPULimit != "" { - quantity, _ := resource.ParseQuantity(request.CPULimit) - cluster.Spec.Limits[v1.ResourceCPU] = quantity - } - - if request.CPURequest != "" { - quantity, _ := resource.ParseQuantity(request.CPURequest) - cluster.Spec.Resources[v1.ResourceCPU] = quantity - } - - if request.MemoryLimit != "" { - quantity, _ := resource.ParseQuantity(request.MemoryLimit) - 
cluster.Spec.Limits[v1.ResourceMemory] = quantity - } - - if request.MemoryRequest != "" { - quantity, _ := resource.ParseQuantity(request.MemoryRequest) - cluster.Spec.Resources[v1.ResourceMemory] = quantity - } - - // ensure there is a value for BackrestResources - if cluster.Spec.BackrestResources == nil { - cluster.Spec.BackrestLimits = v1.ResourceList{} - cluster.Spec.BackrestResources = v1.ResourceList{} - } - - // if the pgBackRest repository CPU or memory values have been modified, - // update the values in the cluster CRD - if request.BackrestCPULimit != "" { - quantity, _ := resource.ParseQuantity(request.BackrestCPULimit) - cluster.Spec.BackrestLimits[v1.ResourceCPU] = quantity - } - - if request.BackrestCPURequest != "" { - quantity, _ := resource.ParseQuantity(request.BackrestCPURequest) - cluster.Spec.BackrestResources[v1.ResourceCPU] = quantity - } - - if request.BackrestMemoryLimit != "" { - quantity, _ := resource.ParseQuantity(request.BackrestMemoryLimit) - cluster.Spec.BackrestLimits[v1.ResourceMemory] = quantity - } - - if request.BackrestMemoryRequest != "" { - quantity, _ := resource.ParseQuantity(request.BackrestMemoryRequest) - cluster.Spec.BackrestResources[v1.ResourceMemory] = quantity - } - - // ensure there is a value for ExporterResources - if cluster.Spec.ExporterResources == nil { - cluster.Spec.ExporterLimits = v1.ResourceList{} - cluster.Spec.ExporterResources = v1.ResourceList{} - } - - // if the Exporter CPU or memory values have been modified, - // update the values in the cluster CRD - if request.ExporterCPULimit != "" { - quantity, _ := resource.ParseQuantity(request.ExporterCPULimit) - cluster.Spec.ExporterLimits[v1.ResourceCPU] = quantity - } - - if request.ExporterCPURequest != "" { - quantity, _ := resource.ParseQuantity(request.ExporterCPURequest) - cluster.Spec.ExporterResources[v1.ResourceCPU] = quantity - } - - if request.ExporterMemoryLimit != "" { - quantity, _ := resource.ParseQuantity(request.ExporterMemoryLimit) - cluster.Spec.ExporterLimits[v1.ResourceMemory] = quantity - } - - if request.ExporterMemoryRequest != "" { - quantity, _ := resource.ParseQuantity(request.ExporterMemoryRequest) - cluster.Spec.ExporterResources[v1.ResourceMemory] = quantity - } - - // set any user-defined annotations - // go through each annotation grouping and make the appropriate changes in the - // equivalent cluster annotation group - setClusterAnnotationGroup(cluster.Spec.Annotations.Global, request.Annotations.Global) - setClusterAnnotationGroup(cluster.Spec.Annotations.Postgres, request.Annotations.Postgres) - setClusterAnnotationGroup(cluster.Spec.Annotations.Backrest, request.Annotations.Backrest) - setClusterAnnotationGroup(cluster.Spec.Annotations.PgBouncer, request.Annotations.PgBouncer) - - // if TablespaceMounts happens to be nil (e.g. 
an upgraded cluster), and - // the tablespaces are being updated, set it here - if len(request.Tablespaces) > 0 && cluster.Spec.TablespaceMounts == nil { - cluster.Spec.TablespaceMounts = map[string]crv1.PgStorageSpec{} - } - - // extract the parameters for the TablespaceMounts and put them in the - // format that is required by the pgcluster CRD - for _, tablespace := range request.Tablespaces { - storageSpec, _ := apiserver.Pgo.GetStorageSpec(tablespace.StorageConfig) - - // if a PVCSize is specified, override the value of the Size parameter in - // storage spec - if tablespace.PVCSize != "" { - storageSpec.Size = tablespace.PVCSize - } - - cluster.Spec.TablespaceMounts[tablespace.Name] = storageSpec - } - - if _, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).Update(&cluster); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - response.Results = append(response.Results, "updated pgcluster "+cluster.Spec.Name) - } - - return response -} - -func GetPrimaryAndReplicaPods(cluster *crv1.Pgcluster, ns string) ([]msgs.ShowClusterPod, error) { - - output := make([]msgs.ShowClusterPod, 0) - - selector := config.LABEL_SERVICE_NAME + "=" + cluster.Spec.Name + "," + config.LABEL_DEPLOYMENT_NAME - log.Debugf("selector for GetPrimaryAndReplicaPods is %s", selector) - - pods, err := apiserver.Clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return output, err - } - for _, p := range pods.Items { - d := msgs.ShowClusterPod{} - d.Name = p.Name - d.Phase = string(p.Status.Phase) - d.NodeName = p.Spec.NodeName - d.ReadyStatus, d.Ready = getReadyStatus(&p) - - d.Primary = false - d.Type = getType(&p, cluster.Spec.Name) - if d.Type == msgs.PodTypePrimary { - d.Primary = true - } - output = append(output, d) - - } - selector = config.LABEL_SERVICE_NAME + "=" + cluster.Spec.Name + "-replica" + "," + config.LABEL_DEPLOYMENT_NAME - log.Debugf("selector for GetPrimaryAndReplicaPods is %s", selector) - - pods, err = apiserver.Clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return output, err - } - for _, p := range pods.Items { - d := msgs.ShowClusterPod{} - d.Name = p.Name - d.Phase = string(p.Status.Phase) - d.NodeName = p.Spec.NodeName - d.ReadyStatus, d.Ready = getReadyStatus(&p) - - d.Primary = false - d.Type = getType(&p, cluster.Spec.Name) - if d.Type == msgs.PodTypePrimary { - d.Primary = true - } - output = append(output, d) - - } - - return output, err - -} - -// setClusterAnnotationGroup helps with setting the specific annotation group -func setClusterAnnotationGroup(annotationGroup, annotations map[string]string) { - for k, v := range annotations { - switch v { - default: - annotationGroup[k] = v - case "": - delete(annotationGroup, k) - } - } -} - -// validateBackrestStorageTypeOnCreate validates the pgbackrest storage type specified when -// a new cluster. This includes ensuring the type provided is valid, and that the required -// configuration settings (s3 bucket, region, etc.) are also present -func validateBackrestStorageTypeOnCreate(request *msgs.CreateClusterRequest) error { - - requestBackRestStorageType := request.BackrestStorageType - - if requestBackRestStorageType != "" && !util.IsValidBackrestStorageType(requestBackRestStorageType) { - return fmt.Errorf("Invalid value provided for pgBackRest storage type. 
The following values are allowed: %s", - "\""+strings.Join(apiserver.GetBackrestStorageTypes(), "\", \"")+"\"") - } else if strings.Contains(requestBackRestStorageType, "s3") && isMissingS3Config(request) { - return errors.New("A configuration setting for AWS S3 storage is missing. Values must be " + - "provided for the S3 bucket, S3 endpoint and S3 region in order to use the 's3' " + - "storage type with pgBackRest.") - } - - return nil -} - -// validateClusterTLS validates the parameters that allow a user to enable TLS -// connections to a PostgreSQL cluster -func validateClusterTLS(request *msgs.CreateClusterRequest) error { - // if ReplicationTLSSecret is set, but neither TLSSecret nor CASecret is not - // set, then return - if request.ReplicationTLSSecret != "" && (request.TLSSecret == "" || request.CASecret == "") { - return fmt.Errorf("Both TLS secret and CA secret must be set in order to enable certificate-based authentication for replication") - } - - // if TLSOnly is not set and neither TLSSecret no CASecret are set, just return - if !request.TLSOnly && request.TLSSecret == "" && request.CASecret == "" { - return nil - } - - // if TLS only is set, but there is no TLSSecret nor CASecret, return - if request.TLSOnly && !(request.TLSSecret != "" && request.CASecret != "") { - return errors.New("TLS only clusters requires both a TLS secret and CA secret") - } - // if TLSSecret or CASecret is set, but not both are set, return - if (request.TLSSecret != "" && request.CASecret == "") || (request.TLSSecret == "" && request.CASecret != "") { - return errors.New("Both TLS secret and CA secret must be set in order to enable TLS for PostgreSQL") - } - - // now check for the existence of the two secrets - // First the TLS secret - if _, err := apiserver.Clientset. - CoreV1().Secrets(request.Namespace). - Get(request.TLSSecret, metav1.GetOptions{}); err != nil { - return err - } - - // then, the CA secret - if _, err := apiserver.Clientset. - CoreV1().Secrets(request.Namespace). - Get(request.CASecret, metav1.GetOptions{}); err != nil { - return err - } - - // then, if set, the Replication TLS secret - if request.ReplicationTLSSecret != "" { - if _, err := apiserver.Clientset. - CoreV1().Secrets(request.Namespace). - Get(request.ReplicationTLSSecret, metav1.GetOptions{}); err != nil { - return err - } - } - - // after this, we are validated! - return nil -} - -// validateTablespaces validates the tablespace parameters. 
if there is an error -// it aborts and returns an error -func validateTablespaces(tablespaces []msgs.ClusterTablespaceDetail) error { - // iterate through the list of tablespaces and return any erors - for _, tablespace := range tablespaces { - if !apiserver.IsValidStorageName(tablespace.StorageConfig) { - return fmt.Errorf("%s storage config for tablespace %s was not found", - tablespace.StorageConfig, tablespace.Name) - } - - if err := apiserver.ValidateQuantity(tablespace.PVCSize); err != nil { - return fmt.Errorf(apiserver.ErrMessagePVCSize, - tablespace.PVCSize, err.Error()) - } - } - - return nil -} - -// determines if any of the required S3 configuration settings (bucket, endpoint -// and region) are missing from both the incoming request or the pgo.yaml config file -func isMissingS3Config(request *msgs.CreateClusterRequest) bool { - if request.BackrestS3Bucket == "" && apiserver.Pgo.Cluster.BackrestS3Bucket == "" { - return true - } - if request.BackrestS3Endpoint == "" && apiserver.Pgo.Cluster.BackrestS3Endpoint == "" { - return true - } - if request.BackrestS3Region == "" && apiserver.Pgo.Cluster.BackrestS3Region == "" { - return true - } - return false -} - -// isMissingExistingDataSourceS3Config determines if any of the required S3 configuration -// settings (bucket, endpoint, region, key and key secret) are missing from the annotations -// in the pgBackRest repo secret as needed to bootstrap a cluster from an existing S3 repository -func isMissingExistingDataSourceS3Config(backrestRepoSecret *corev1.Secret) bool { - switch { - case backrestRepoSecret.Annotations[config.ANNOTATION_S3_BUCKET] == "": - return true - case backrestRepoSecret.Annotations[config.ANNOTATION_S3_ENDPOINT] == "": - return true - case backrestRepoSecret.Annotations[config.ANNOTATION_S3_REGION] == "": - return true - case len(backrestRepoSecret.Data[util.BackRestRepoSecretKeyAWSS3KeyAWSS3Key]) == 0: - return true - case len(backrestRepoSecret.Data[util.BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret]) == 0: - return true - } - return false -} - -// validateDataSourceParms performs validation of any data source parameters included in a request -// to create a new cluster -func validateDataSourceParms(request *msgs.CreateClusterRequest) error { - - namespace := request.Namespace - restoreClusterName := request.PGDataSource.RestoreFrom - restoreOpts := request.PGDataSource.RestoreOpts - - if restoreClusterName == "" && restoreOpts == "" { - return nil - } - - // first verify that a "restore from" parameter was specified if the restore options - // are not empty - if restoreOpts != "" && restoreClusterName == "" { - return fmt.Errorf("A cluster to restore from must be specified when providing restore " + - "options") - } - - // next verify whether or not a PVC exists for the cluster we are restoring from - if _, err := apiserver.Clientset.CoreV1().PersistentVolumeClaims(namespace).Get( - fmt.Sprintf(util.BackrestRepoPVCName, restoreClusterName), - metav1.GetOptions{}); err != nil { - return fmt.Errorf("Unable to find PVC %s for cluster %s, cannot to restore from the "+ - "specified data source", fmt.Sprintf(util.BackrestRepoPVCName, restoreClusterName), - restoreClusterName) - } - - // now verify that a pgBackRest repo secret exists for the cluster we are restoring from - backrestRepoSecret, err := apiserver.Clientset.CoreV1().Secrets(namespace).Get( - fmt.Sprintf(util.BackrestRepoSecretName, restoreClusterName), metav1.GetOptions{}) - if err != nil { - return fmt.Errorf("Unable to find secret %s for cluster %s, 
cannot restore from the "+ - "specified data source", - fmt.Sprintf(util.BackrestRepoSecretName, restoreClusterName), restoreClusterName) - } - - // next perform general validation of the restore options - if err := backupoptions.ValidateBackupOpts(restoreOpts, request); err != nil { - return fmt.Errorf("%s: %w", ErrInvalidDataSource, err) - } - - // now detect if an 's3' repo type was specified via the restore opts, and if so verify that s3 - // settings are present in backrest repo secret for the backup being restored from - s3Restore := backrest.S3RepoTypeCLIOptionExists(restoreOpts) - if s3Restore && isMissingExistingDataSourceS3Config(backrestRepoSecret) { - return fmt.Errorf("Secret %s is missing the S3 configuration required to restore "+ - "from an S3 repository", backrestRepoSecret.GetName()) - } - - // finally, verify that the cluster being restored from is in the proper status, and that no - // other clusters currently being bootstrapping from the same cluster - clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).List(metav1.ListOptions{}) - if err != nil { - return fmt.Errorf("%s: %w", ErrInvalidDataSource, err) - } - for _, cl := range clusterList.Items { - - if cl.GetName() == restoreClusterName && - cl.Status.State == crv1.PgclusterStateShutdown { - return fmt.Errorf("Unable to restore from cluster %s because it has a %s "+ - "status", restoreClusterName, string(cl.Status.State)) - } - - if cl.Spec.PGDataSource.RestoreFrom == restoreClusterName && - cl.Status.State == crv1.PgclusterStateBootstrapping { - return fmt.Errorf("Cluster %s is currently bootstrapping from cluster %s, please "+ - "try again once it is completes", cl.GetName(), restoreClusterName) - } - } - - return nil -} - -func validateStandbyCluster(request *msgs.CreateClusterRequest) error { - switch { - case !strings.Contains(request.BackrestStorageType, "s3"): - return errors.New("Backrest storage type 's3' must be selected in order to create a " + - "standby cluster") - case request.BackrestRepoPath == "": - return errors.New("A pgBackRest repository path must be specified when creating a " + - "standby cluster") - } - return nil -} diff --git a/internal/apiserver/clusterservice/clusterservice.go b/internal/apiserver/clusterservice/clusterservice.go deleted file mode 100644 index d0f31df636..0000000000 --- a/internal/apiserver/clusterservice/clusterservice.go +++ /dev/null @@ -1,370 +0,0 @@ -package clusterservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "net/http" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -// CreateClusterHandler ... 
-// pgo create cluster -// parameters secretfrom -func CreateClusterHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /clusters clusterservice clusters - /*``` - Create a PostgreSQL cluster consisting of a primary and a number of replica backends - */ - // --- - // Produces: - // - application/json - // - // parameters: - // - name: "Cluster Create Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreateClusterRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreateClusterResponse" - - var ns string - - log.Debug("clusterservice.CreateClusterHandler called") - username, err := apiserver.Authn(apiserver.CREATE_CLUSTER_PERM, w, r) - if err != nil { - return - } - - var request msgs.CreateClusterRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - // a special authz check here: if the ShowSystemAccounts flag is set, ensure - // the user is authorized to show system accounts - if request.ShowSystemAccounts && - !apiserver.BasicAuthzCheck(username, apiserver.SHOW_SYSTEM_ACCOUNTS_PERM) { - log.Errorf("Authorization Failed %s username=[%s]", apiserver.SHOW_SYSTEM_ACCOUNTS_PERM, username) - http.Error(w, "Not authorized for this apiserver action", 403) - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.CreateClusterResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - resp = CreateCluster(&request, ns, username) - json.NewEncoder(w).Encode(resp) - -} - -// ShowClusterHandler ... 
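Each handler in this file follows the same shape: decode a JSON request body, authenticate, check the client version, resolve the namespace, call the implementation, and encode a JSON response carrying an Ok or Error status. The standalone sketch below captures that shape with stub types; the `createRequest`/`createResponse` structs and the `serverVersion` constant are invented for illustration and are not the Operator's message types.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Illustrative stand-ins for the Operator's request/response message types.
type createRequest struct {
	ClientVersion string `json:"clientVersion"`
	Name          string `json:"name"`
	Namespace     string `json:"namespace"`
}

type status struct {
	Code string `json:"code"`
	Msg  string `json:"msg"`
}

type createResponse struct {
	Status status `json:"status"`
	Name   string `json:"name,omitempty"`
}

const serverVersion = "1.0.0" // illustrative; the real server compares against its build version

func createHandler(w http.ResponseWriter, r *http.Request) {
	var req createRequest
	_ = json.NewDecoder(r.Body).Decode(&req)

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)

	resp := createResponse{Status: status{Code: "ok"}}

	// Reject clients whose version does not match the server, as the
	// handlers in this file do before doing any other work.
	if req.ClientVersion != serverVersion {
		resp.Status = status{Code: "error", Msg: "client version mismatch"}
		_ = json.NewEncoder(w).Encode(resp)
		return
	}

	// ... authentication, namespace resolution, and the real work would go here ...
	resp.Name = req.Name
	_ = json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/clusters", createHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Note that errors are reported in the JSON status rather than via HTTP status codes, which is why the handlers write `http.StatusOK` before encoding either outcome.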
-// pgo show cluster -// pgo delete mycluster -// parameters showsecrets -// parameters selector -// parameters postgresversion -// returns a ShowClusterResponse -func ShowClusterHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /showclusters clusterservice showclusters - /*``` - Show a PostgreSQL cluster - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Show Cluster Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ShowClusterRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowClusterResponse" - var ns string - - var request msgs.ShowClusterRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - log.Debugf("clusterservice.ShowClusterHandler %v\n", request) - clustername := request.Clustername - - selector := request.Selector - ccpimagetag := request.Ccpimagetag - clientVersion := request.ClientVersion - namespace := request.Namespace - allflag := request.AllFlag - - log.Debugf("ShowClusterHandler: parameters name [%s] selector [%s] ccpimagetag [%s] version [%s] namespace [%s] allflag [%v]", clustername, selector, ccpimagetag, clientVersion, namespace, allflag) - - username, err := apiserver.Authn(apiserver.SHOW_CLUSTER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debug("clusterservice.ShowClusterHandler GET called") - - var resp msgs.ShowClusterResponse - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - resp.Results = make([]msgs.ShowClusterDetail, 0) - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - resp.Results = make([]msgs.ShowClusterDetail, 0) - json.NewEncoder(w).Encode(resp) - return - } - - resp = ShowCluster(clustername, selector, ccpimagetag, ns, allflag) - json.NewEncoder(w).Encode(resp) - -} - -// DeleteClusterHandler ... 
-// pgo delete mycluster -// parameters showsecrets -// parameters selector -// parameters postgresversion -// returns a ShowClusterResponse -func DeleteClusterHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /clustersdelete clusterservice clustersdelete - /*``` - Delete a PostgreSQL cluster - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Delete Cluster Request" - // in: "body" - // schema: - // "$ref": "#/definitions/DeleteClusterRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/DeleteClusterResponse" - var request msgs.DeleteClusterRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - var ns string - log.Debugf("clusterservice.DeleteClusterHandler %v\n", request) - - clustername := request.Clustername - - selector := request.Selector - clientVersion := request.ClientVersion - namespace := request.Namespace - - deleteData := request.DeleteData - deleteBackups := request.DeleteBackups - - log.Debugf("DeleteClusterHandler: parameters namespace [%s] selector [%s] delete-data [%t] delete-backups [%t]", namespace, selector, deleteData, deleteBackups) - - username, err := apiserver.Authn(apiserver.DELETE_CLUSTER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debug("clusterservice.DeleteClusterHandler called") - - resp := msgs.DeleteClusterResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - resp.Results = make([]string, 0) - json.NewEncoder(w).Encode(resp) - return - } - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - resp.Results = make([]string, 0) - json.NewEncoder(w).Encode(resp) - return - } - resp = DeleteCluster(clustername, selector, deleteData, deleteBackups, ns, username) - json.NewEncoder(w).Encode(resp) - -} - -// TestClusterHandler ... -// pgo test mycluster -func TestClusterHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /testclusters clusterservice testclusters - /*``` - TEST allows you to test the connectivity for a cluster. - - If you set the AllFlag to true in the request it will test connectivity for all clusters in the namespace. 
- */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Cluster Test Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ClusterTestRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ClusterTestResponse" - var request msgs.ClusterTestRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - log.Debugf("clusterservice.TestClusterHandler %v\n", request) - - var ns string - clustername := request.Clustername - - selector := request.Selector - namespace := request.Namespace - clientVersion := request.ClientVersion - - log.Debugf("TestClusterHandler parameters %v", request) - - username, err := apiserver.Authn(apiserver.TEST_CLUSTER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.ClusterTestResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = TestCluster(clustername, selector, ns, username, request.AllFlag) - json.NewEncoder(w).Encode(resp) -} - -// UpdateClusterHandler ... -// pgo update cluster mycluster --autofail=true -// pgo update cluster --selector=env=research --autofail=false -// returns a UpdateClusterResponse -func UpdateClusterHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /clustersupdate clusterservice clustersupdate - /*``` - Update a PostgreSQL cluster - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Update Request" - // in: "body" - // schema: - // "$ref": "#/definitions/UpdateClusterRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/UpdateClusterResponse" - var request msgs.UpdateClusterRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - log.Debugf("clusterservice.UpdateClusterHandler %v\n", request) - - namespace := request.Namespace - clientVersion := request.ClientVersion - - username, err := apiserver.Authn(apiserver.UPDATE_CLUSTER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debug("clusterservice.UpdateClusterHandler called") - - resp := msgs.UpdateClusterResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - resp.Results = make([]string, 0) - json.NewEncoder(w).Encode(resp) - return - } - - _, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - resp.Results = make([]string, 0) - json.NewEncoder(w).Encode(resp) - return - } - - resp = UpdateCluster(&request) - json.NewEncoder(w).Encode(resp) - -} diff --git a/internal/apiserver/clusterservice/scaleimpl.go b/internal/apiserver/clusterservice/scaleimpl.go deleted file mode 100644 index 6e3c7e975b..0000000000 --- a/internal/apiserver/clusterservice/scaleimpl.go 
+++ /dev/null @@ -1,317 +0,0 @@ -package clusterservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - "strconv" - "strings" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// ScaleCluster ... -func ScaleCluster(name, replicaCount, storageConfig, nodeLabel, - ccpImageTag, serviceType, ns, pgouser string) msgs.ClusterScaleResponse { - var err error - - response := msgs.ClusterScaleResponse{} - response.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if name == "all" { - response.Status.Code = msgs.Error - response.Status.Msg = "all is not allowed for the scale command" - return response - } - - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(name, metav1.GetOptions{}) - - if kerrors.IsNotFound(err) { - log.Error("no clusters found") - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - if err != nil { - log.Error("error getting cluster" + err.Error()) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // check if the current cluster is not upgraded to the deployed - // Operator version. 
If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - response.Status.Code = msgs.Error - response.Status.Msg = cluster.Name + msgs.UpgradeError - return response - } - - spec := crv1.PgreplicaSpec{} - - //refer to the cluster's replica storage setting by default - spec.ReplicaStorage = cluster.Spec.ReplicaStorage - - //allow for user override - if storageConfig != "" { - spec.ReplicaStorage, _ = apiserver.Pgo.GetStorageSpec(storageConfig) - } - - spec.UserLabels = cluster.Spec.UserLabels - - if ccpImageTag != "" { - spec.UserLabels[config.LABEL_CCP_IMAGE_TAG_KEY] = ccpImageTag - } - if serviceType != "" { - if serviceType != config.DEFAULT_SERVICE_TYPE && - serviceType != config.NODEPORT_SERVICE_TYPE && - serviceType != config.LOAD_BALANCER_SERVICE_TYPE { - response.Status.Code = msgs.Error - response.Status.Msg = "error --service-type should be either ClusterIP, NodePort, or LoadBalancer " - return response - } - spec.UserLabels[config.LABEL_SERVICE_TYPE] = serviceType - } - - //set replica node lables to blank to start with, then check for overrides - spec.UserLabels[config.LABEL_NODE_LABEL_KEY] = "" - spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = "" - - // validate & parse nodeLabel if exists - if nodeLabel != "" { - if err = apiserver.ValidateNodeLabel(nodeLabel); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - parts := strings.Split(nodeLabel, "=") - spec.UserLabels[config.LABEL_NODE_LABEL_KEY] = parts[0] - spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = parts[1] - - log.Debug("using user entered node label for replica creation") - } - - labels := make(map[string]string) - labels[config.LABEL_PG_CLUSTER] = cluster.Spec.Name - - spec.ClusterName = cluster.Spec.Name - - var rc int - rc, err = strconv.Atoi(replicaCount) - if err != nil { - log.Error(err.Error()) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - labels[config.LABEL_PGOUSER] = pgouser - labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] - - for i := 0; i < rc; i++ { - - uniqueName := util.RandStringBytesRmndr(4) - labels[config.LABEL_NAME] = cluster.Spec.Name + "-" + uniqueName - spec.Namespace = ns - spec.Name = labels[config.LABEL_NAME] - - newInstance := &crv1.Pgreplica{ - ObjectMeta: metav1.ObjectMeta{ - Name: labels[config.LABEL_NAME], - Labels: labels, - }, - Spec: spec, - Status: crv1.PgreplicaStatus{ - State: crv1.PgreplicaStateCreated, - Message: "Created, not processed yet", - }, - } - - _, err = apiserver.Clientset.CrunchydataV1().Pgreplicas(ns).Create(newInstance) - if err != nil { - log.Error(" in creating Pgreplica instance" + err.Error()) - } - - response.Results = append(response.Results, "created Pgreplica "+labels[config.LABEL_NAME]) - } - - return response -} - -// ScaleQuery lists the replicas that are in the PostgreSQL cluster -// with information that is helpful in determining which one to fail over to, -// such as the lag behind the replica as well as the timeline -func ScaleQuery(name, ns string) msgs.ScaleQueryResponse { - var err error - - response := msgs.ScaleQueryResponse{ - Results: make([]msgs.ScaleQueryTargetSpec, 0), - Status: msgs.Status{Code: msgs.Ok, Msg: ""}, - } - - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(name, metav1.GetOptions{}) - - // If no clusters are found, return a specific error message, - // 
otherwise, pass forward the generic error message that Kubernetes sends - if kerrors.IsNotFound(err) { - errorMsg := fmt.Sprintf(`No cluster found for "%s"`, name) - log.Error(errorMsg) - response.Status.Code = msgs.Error - response.Status.Msg = errorMsg - return response - } else if err != nil { - log.Error(err.Error()) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // Get information about the current status of all of the replicas. This is - // handled by a helper function, that will return the information in a struct - // with the key elements to help the user understand the current state of the - // replicas in a cluster - replicationStatusRequest := util.ReplicationStatusRequest{ - RESTConfig: apiserver.RESTConfig, - Clientset: apiserver.Clientset, - Namespace: ns, - ClusterName: name, - } - - replicationStatusResponse, err := util.ReplicationStatus(replicationStatusRequest, false, true) - - // if an error is return, log the message, and return the response - if err != nil { - log.Error(err.Error()) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // indicate in the response whether or not a standby cluster - response.Standby = cluster.Spec.Standby - - // if there are no results, return the response as is - if len(replicationStatusResponse.Instances) == 0 { - return response - } - - // iterate through response results to create the API response - for _, instance := range replicationStatusResponse.Instances { - // create a result for the response - result := msgs.ScaleQueryTargetSpec{ - Name: instance.Name, - Node: instance.Node, - Status: instance.Status, - ReplicationLag: instance.ReplicationLag, - Timeline: instance.Timeline, - PendingRestart: instance.PendingRestart, - } - - // append the result to the response list - response.Results = append(response.Results, result) - } - - return response -} - -// ScaleDown ... 
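ScaleQuery surfaces each replica's node, replication lag, timeline, and pending-restart flag so an operator can decide which instance is safest to remove. Below is a minimal sketch of how a caller might rank those results, preferring the most-lagged replica as the scale-down candidate; the `ReplicaStatus` struct is a hypothetical stand-in for the `msgs.ScaleQueryTargetSpec` fields shown above, not the actual API type.

```go
package main

import (
	"fmt"
	"sort"
)

// ReplicaStatus is a hypothetical stand-in mirroring the per-replica fields
// that ScaleQuery returns (name, node, lag, timeline, pending restart).
type ReplicaStatus struct {
	Name           string
	Node           string
	ReplicationLag int64 // bytes behind the primary
	Timeline       int
	PendingRestart bool
}

// pickScaleDownTarget prefers the replica that is furthest behind the
// primary, on the assumption that it is the cheapest one to lose.
func pickScaleDownTarget(replicas []ReplicaStatus) (ReplicaStatus, bool) {
	if len(replicas) == 0 {
		return ReplicaStatus{}, false
	}
	sort.Slice(replicas, func(i, j int) bool {
		return replicas[i].ReplicationLag > replicas[j].ReplicationLag
	})
	return replicas[0], true
}

func main() {
	replicas := []ReplicaStatus{
		{Name: "hippo-abcd", Node: "node-1", ReplicationLag: 0, Timeline: 2},
		{Name: "hippo-wxyz", Node: "node-2", ReplicationLag: 16384, Timeline: 2},
	}
	if target, ok := pickScaleDownTarget(replicas); ok {
		fmt.Printf("scale down candidate: %s (lag %d bytes)\n", target.Name, target.ReplicationLag)
	}
}
```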
-func ScaleDown(deleteData bool, clusterName, replicaName, ns string) msgs.ScaleDownResponse { - - var err error - - response := msgs.ScaleDownResponse{} - response.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - response.Results = make([]string, 0) - - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(clusterName, metav1.GetOptions{}) - - if kerrors.IsNotFound(err) { - log.Error("no clusters found") - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - if err != nil { - log.Error("error getting cluster" + err.Error()) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // dont proceed any further if the cluster is shutdown - if cluster.Status.State == crv1.PgclusterStateShutdown { - response.Status.Code = msgs.Error - response.Status.Msg = "Nothing to scale, the cluster is currently " + - "shutdown" - return response - } - - // selector in the format "pg-cluster=,pgo-pg-database,role!=config.LABEL_PGHA_ROLE_PRIMARY" - // which will grab all the replicas - selector := fmt.Sprintf("%s=%s,%s,%s!=%s", config.LABEL_PG_CLUSTER, clusterName, - config.LABEL_PG_DATABASE, config.LABEL_PGHA_ROLE, config.LABEL_PGHA_ROLE_PRIMARY) - replicaList, err := apiserver.Clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // check to see if the replica name provided matches the name of any of the - // replicas found for the cluster - var replicaNameFound bool - for _, pod := range replicaList.Items { - if pod.Labels[config.LABEL_DEPLOYMENT_NAME] == replicaName { - replicaNameFound = true - break - } - } - // return an error if the replica name provided does not match the primary or any replicas - if !replicaNameFound { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Unable to find replica with name %s", - replicaName) - return response - } - - //create the rmdata task which does the cleanup - - clusterPGHAScope := cluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE] - deleteBackups := false - isReplica := true - isBackup := false - taskName := replicaName + "-rmdata" - err = apiserver.CreateRMDataTask(clusterName, replicaName, taskName, deleteBackups, deleteData, isReplica, isBackup, ns, clusterPGHAScope) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - response.Results = append(response.Results, "deleted replica "+replicaName) - return response -} diff --git a/internal/apiserver/clusterservice/scaleservice.go b/internal/apiserver/clusterservice/scaleservice.go deleted file mode 100644 index 92db853216..0000000000 --- a/internal/apiserver/clusterservice/scaleservice.go +++ /dev/null @@ -1,295 +0,0 @@ -package clusterservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "encoding/json" - "net/http" - "strconv" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/gorilla/mux" - log "github.com/sirupsen/logrus" -) - -// ScaleClusterHandler ... -// pgo scale mycluster --replica-count=1 -// parameters showsecrets -// returns a ScaleResponse -func ScaleClusterHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /clusters/scale/{name} clusterservice clusters-scale-name - /*``` - The scale command allows you to adjust a Cluster's replica configuration - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "name" - // description: "Cluster Name" - // in: "path" - // type: "string" - // required: true - // - name: "version" - // description: "Client Version" - // in: "path" - // type: "string" - // required: true - // - name: "namespace" - // description: "Namespace" - // in: "path" - // type: "string" - // required: true - // - name: "replica-count" - // description: "The replica count to apply to the clusters." - // in: "path" - // type: "int" - // required: true - // - name: "storage-config" - // description: "The service type to use in the replica Service. If not set, the default in pgo.yaml will be used." - // in: "path" - // type: "string" - // required: false - // - name: "node-label" - // description: "The node label (key) to use in placing the replica database. If not set, any node is used." - // in: "path" - // type: "string" - // required: false - // - name: "service-type" - // description: "The service type to use in the replica Service. If not set, the default in pgo.yaml will be used." - // in: "path" - // type: "string" - // required: false - // - name: "ccp-image-tag" - // description: "The CCPImageTag to use for cluster creation. If specified, overrides the .pgo.yaml setting." - // in: "path" - // type: "string" - // required: false - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ClusterScaleResponse" - //SCALE_CLUSTER_PERM - // This is a pain to document because it doesn't use a struct... 
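The handler below pulls eight separate query parameters and passes them positionally to ScaleCluster, and the code itself flags this as awkward ("too many params need to create a struct for this"). The following is a rough sketch of what grouping them into a request struct could look like; the struct and field names are illustrative suggestions, not part of the actual API.

```go
package main

import "fmt"

// ScaleClusterRequest is an illustrative grouping of the parameters that
// ScaleCluster currently accepts positionally.
type ScaleClusterRequest struct {
	ClusterName   string
	ReplicaCount  int
	StorageConfig string
	NodeLabel     string
	CCPImageTag   string
	ServiceType   string
	Namespace     string
	PGOUser       string
}

// scaleCluster is a placeholder showing how a single request value keeps
// call sites readable and lets new options be added without changing the
// function signature.
func scaleCluster(req ScaleClusterRequest) error {
	if req.ClusterName == "all" {
		return fmt.Errorf("%q is not allowed for the scale command", req.ClusterName)
	}
	fmt.Printf("scaling %s/%s by %d replica(s)\n", req.Namespace, req.ClusterName, req.ReplicaCount)
	return nil
}

func main() {
	_ = scaleCluster(ScaleClusterRequest{
		ClusterName:  "hippo",
		ReplicaCount: 1,
		Namespace:    "pgo",
		PGOUser:      "admin",
	})
}
```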
- var ns string - vars := mux.Vars(r) - - clusterName := vars[config.LABEL_NAME] - namespace := r.URL.Query().Get(config.LABEL_NAMESPACE) - replicaCount := r.URL.Query().Get(config.LABEL_REPLICA_COUNT) - storageConfig := r.URL.Query().Get(config.LABEL_STORAGE_CONFIG) - nodeLabel := r.URL.Query().Get(config.LABEL_NODE_LABEL) - serviceType := r.URL.Query().Get(config.LABEL_SERVICE_TYPE) - clientVersion := r.URL.Query().Get(config.LABEL_VERSION) - ccpImageTag := r.URL.Query().Get(config.LABEL_CCP_IMAGE_TAG_KEY) - - log.Debugf("ScaleClusterHandler parameters name [%s] namespace [%s] replica-count [%s] "+ - "storage-config [%s] node-label [%s] service-type [%s] version [%s]"+ - "ccp-image-tag [%s]", clusterName, namespace, replicaCount, - storageConfig, nodeLabel, serviceType, clientVersion, ccpImageTag) - - username, err := apiserver.Authn(apiserver.SCALE_CLUSTER_PERM, w, r) - if err != nil { - return - } - - w.WriteHeader(http.StatusOK) - w.Header().Set("Content-Type", "application/json") - - resp := msgs.ClusterScaleResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - // TODO too many params need to create a struct for this - resp = ScaleCluster(clusterName, replicaCount, storageConfig, nodeLabel, - ccpImageTag, serviceType, ns, username) - - json.NewEncoder(w).Encode(resp) -} - -// ScaleQueryHandler ... -// pgo scale mycluster --query -// returns a ScaleQueryResponse -func ScaleQueryHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /scale/{name} clusterservice scale-name - /*``` - Provides the list of targetable replica candidates for scaledown. 
- */ - // --- - // produces: - // - application/json - // parameters: - // - name: "name" - // description: "Cluster Name" - // in: "path" - // type: "string" - // required: true - // - name: "version" - // description: "Client Version" - // in: "path" - // type: "string" - // required: true - // - name: "namespace" - // description: "Namespace" - // in: "path" - // type: "string" - // required: true - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ScaleQueryResponse" - //SCALE_CLUSTER_PERM - var ns string - vars := mux.Vars(r) - - clusterName := vars[config.LABEL_NAME] - clientVersion := r.URL.Query().Get(config.LABEL_VERSION) - namespace := r.URL.Query().Get(config.LABEL_NAMESPACE) - - log.Debugf("ScaleQueryHandler parameters clusterName [%v] version [%s] namespace [%s]", clusterName, clientVersion, namespace) - - username, err := apiserver.Authn(apiserver.SCALE_CLUSTER_PERM, w, r) - if err != nil { - return - } - - w.WriteHeader(http.StatusOK) - w.Header().Set("Content-Type", "application/json") - - resp := msgs.ScaleQueryResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = ScaleQuery(clusterName, ns) - json.NewEncoder(w).Encode(resp) -} - -// ScaleDownHandler ... -// pgo scale mycluster --scale-down-target=somereplicaname -// returns a ScaleDownResponse -func ScaleDownHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /scaledown/{name} clusterservice scaledown-name - /*``` - Scale down a cluster by removing the given replica - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "name" - // description: "Cluster Name" - // in: "path" - // type: "string" - // required: true - // - name: "version" - // description: "Client Version" - // in: "path" - // type: "string" - // required: true - // - name: "namespace" - // description: "Namespace" - // in: "path" - // type: "string" - // required: true - // - name: "replica-name" - // description: "The replica to target for scaling down." - // in: "path" - // type: "string" - // required: true - // - name: "delete-data" - // description: "Causes the data for the scaled down replica to be removed permanently." 
- // in: "path" - // type: "string" - // required: true - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ScaleDownResponse" - //SCALE_CLUSTER_PERM - var ns string - vars := mux.Vars(r) - - clusterName := vars[config.LABEL_NAME] - clientVersion := r.URL.Query().Get(config.LABEL_VERSION) - namespace := r.URL.Query().Get(config.LABEL_NAMESPACE) - replicaName := r.URL.Query().Get(config.LABEL_REPLICA_NAME) - tmp := r.URL.Query().Get(config.LABEL_DELETE_DATA) - - log.Debugf("ScaleDownHandler parameters clusterName [%s] version [%s] namespace [%s] replica-name [%s] delete-data [%s]", clusterName, clientVersion, namespace, replicaName, tmp) - - username, err := apiserver.Authn(apiserver.SCALE_CLUSTER_PERM, w, r) - if err != nil { - return - } - - w.WriteHeader(http.StatusOK) - w.Header().Set("Content-Type", "application/json") - - resp := msgs.ScaleDownResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - deleteData, err := strconv.ParseBool(tmp) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = ScaleDown(deleteData, clusterName, replicaName, ns) - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/common.go b/internal/apiserver/common.go deleted file mode 100644 index b171b8ce6a..0000000000 --- a/internal/apiserver/common.go +++ /dev/null @@ -1,180 +0,0 @@ -package apiserver - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "errors" - "fmt" - "strconv" - - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - kerrors "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/apimachinery/pkg/api/resource" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -const ( - // ErrMessageLimitInvalid indicates that a limit is lower than the request - ErrMessageLimitInvalid = `limit %q is lower than the request %q` - // ErrMessagePVCSize provides a standard error message when a PVCSize is not - // specified to the Kubernetes stnadard - ErrMessagePVCSize = `could not parse PVC size "%s": %s (hint: try a value like "1Gi")` - // ErrMessageReplicas provides a standard error message when the count of - // replicas is incorrect - ErrMessageReplicas = `must have at least %d replica(s)` -) - -var ( - backrestStorageTypes = []string{"local", "s3"} - // ErrDBContainerNotFound is an error that indicates that a "database" container - // could not be found in a specific pod - ErrDBContainerNotFound = errors.New("\"database\" container not found in pod") - // ErrStandbyNotAllowed contains the error message returned when an API call is not - // permitted because it involves a cluster that is in standby mode - ErrStandbyNotAllowed = errors.New("Action not permitted because standby mode is enabled") - - // ErrMethodNotAllowed represents the error that is thrown when a feature is disabled within the - // current Operator install - ErrMethodNotAllowed = errors.New("This method has is not allowed in the current PostgreSQL " + - "Operator installation") -) - -func CreateRMDataTask(clusterName, replicaName, taskName string, deleteBackups, deleteData, isReplica, isBackup bool, ns, clusterPGHAScope string) error { - var err error - - //create pgtask CRD - spec := crv1.PgtaskSpec{} - spec.Namespace = ns - spec.Name = taskName - spec.TaskType = crv1.PgtaskDeleteData - - spec.Parameters = make(map[string]string) - spec.Parameters[config.LABEL_DELETE_DATA] = strconv.FormatBool(deleteData) - spec.Parameters[config.LABEL_DELETE_BACKUPS] = strconv.FormatBool(deleteBackups) - spec.Parameters[config.LABEL_IS_REPLICA] = strconv.FormatBool(isReplica) - spec.Parameters[config.LABEL_IS_BACKUP] = strconv.FormatBool(isBackup) - spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName - spec.Parameters[config.LABEL_REPLICA_NAME] = replicaName - spec.Parameters[config.LABEL_PGHA_SCOPE] = clusterPGHAScope - - newInstance := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: taskName, - }, - Spec: spec, - } - newInstance.ObjectMeta.Labels = make(map[string]string) - newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = clusterName - newInstance.ObjectMeta.Labels[config.LABEL_RMDATA] = "true" - - _, err = Clientset.CrunchydataV1().Pgtasks(ns).Create(newInstance) - if err != nil { - log.Error(err) - return err - } - - return err - -} - -func GetBackrestStorageTypes() []string { - return backrestStorageTypes -} - -// IsValidPVC determines if a PVC with the name provided exits -func IsValidPVC(pvcName, ns string) bool { - pvc, err := Clientset.CoreV1().PersistentVolumeClaims(ns).Get(pvcName, metav1.GetOptions{}) - if kerrors.IsNotFound(err) { - return false - } - if err != nil { - log.Error(err) - return false - } - return pvc != nil -} - -// ValidateResourceRequestLimit validates that a Kubernetes Requests/Limit pair -// is valid, both by validating the values are valid quantity values, and then -// by checking that the limit >= 
request. This also needs to check against the -// configured values for a request, which must be provided as a value -func ValidateResourceRequestLimit(request, limit string, defaultQuantity resource.Quantity) error { - // ensure that the request/limit are valid, as this simplifies the rest of - // this code. We know that the defaultRequest is already valid at this point, - // as otherwise the Operator will fail to load - if err := ValidateQuantity(request); err != nil { - return err - } - - if err := ValidateQuantity(limit); err != nil { - return err - } - - // parse the quantities so we can compare - requestQuantity, _ := resource.ParseQuantity(request) - limitQuantity, _ := resource.ParseQuantity(limit) - - if requestQuantity.IsZero() { - requestQuantity = defaultQuantity - } - - // if limit and request are nonzero and the limit is less than the request, - // error - if !limitQuantity.IsZero() && !requestQuantity.IsZero() && limitQuantity.Cmp(requestQuantity) == -1 { - return fmt.Errorf(ErrMessageLimitInvalid, limitQuantity.String(), requestQuantity.String()) - } - - return nil -} - -// ValidateQuantity runs the Kubernetes "ParseQuantity" function on a string -// and determine whether or not it is a valid quantity object. Returns an error -// if it is invalid, along with the error message. -// -// If it is empty, it returns no error -// -// See: https://github.com/kubernetes/apimachinery/blob/master/pkg/api/resource/quantity.go -func ValidateQuantity(quantity string) error { - if quantity == "" { - return nil - } - - _, err := resource.ParseQuantity(quantity) - return err -} - -// FindStandbyClusters takes a list of pgcluster structs and returns a slice containing the names -// of those clusters that are in standby mode as indicated by whether or not the standby prameter -// in the pgcluster spec is true. -func FindStandbyClusters(clusterList crv1.PgclusterList) (standbyClusters []string) { - standbyClusters = make([]string, 0) - for _, cluster := range clusterList.Items { - if cluster.Spec.Standby { - standbyClusters = append(standbyClusters, cluster.Name) - } - } - return -} - -// PGClusterListHasStandby determines if a PgclusterList has any standby clusters, specifically -// returning "true" if one or more standby clusters exist, along with a slice of strings -// containing the names of the clusters in standby mode -func PGClusterListHasStandby(clusterList crv1.PgclusterList) (bool, []string) { - standbyClusters := FindStandbyClusters(clusterList) - return len(FindStandbyClusters(clusterList)) > 0, standbyClusters -} diff --git a/internal/apiserver/common_test.go b/internal/apiserver/common_test.go deleted file mode 100644 index da909d2ba6..0000000000 --- a/internal/apiserver/common_test.go +++ /dev/null @@ -1,101 +0,0 @@ -package apiserver - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
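ValidateResourceRequestLimit above encodes a simple rule: both values must parse as Kubernetes quantities, an empty or zero request falls back to the configured default, and a non-zero limit may never be smaller than the effective request. Here is a small standalone sketch of that comparison using the same `resource` package; it is a simplified restatement for illustration, not the Operator's own code.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// checkRequestLimit mirrors the rule described above: parse both values,
// substitute the default when the request is empty or zero, and reject a
// limit that is lower than the effective request.
func checkRequestLimit(request, limit string, defaultRequest resource.Quantity) error {
	req := defaultRequest
	if request != "" {
		parsed, err := resource.ParseQuantity(request)
		if err != nil {
			return fmt.Errorf("invalid request %q: %w", request, err)
		}
		if !parsed.IsZero() {
			req = parsed
		}
	}

	if limit == "" {
		return nil
	}
	lim, err := resource.ParseQuantity(limit)
	if err != nil {
		return fmt.Errorf("invalid limit %q: %w", limit, err)
	}
	if !lim.IsZero() && lim.Cmp(req) < 0 {
		return fmt.Errorf("limit %q is lower than the request %q", lim.String(), req.String())
	}
	return nil
}

func main() {
	defaultReq := resource.MustParse("128Mi")
	fmt.Println(checkRequestLimit("256Mi", "128Mi", defaultReq)) // error: limit below request
	fmt.Println(checkRequestLimit("", "256Mi", defaultReq))      // nil
}
```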
-*/ - -import ( - "testing" - - "k8s.io/apimachinery/pkg/api/resource" -) - -func TestValidateResourceRequestLimit(t *testing.T) { - t.Run("valid", func(t *testing.T) { - resources := []struct{ request, limit, defaultRequest string }{ - {"", "", "0"}, - {"256Mi", "256Mi", "128Mi"}, - {"", "256Mi", "128Mi"}, - {"", "256Mi", "0"}, - {"256Mi", "", "128Mi"}, - {"64Mi", "", "128Mi"}, - {"256Mi", "", "0"}, - {"", "", "128Mi"}, - } - - for _, r := range resources { - defaultQuantity := resource.MustParse(r.defaultRequest) - - if err := ValidateResourceRequestLimit(r.request, r.limit, defaultQuantity); err != nil { - t.Fatal(err) - return - } - } - }) - - t.Run("invalid", func(t *testing.T) { - resources := []struct{ request, limit, defaultRequest string }{ - {"broken", "3000 Gigabytes", "128Mi"}, - {"256Mi", "3000 Gigabytes", "128Mi"}, - {"broken", "256Mi", "128Mi"}, - {"256Mi", "128Mi", "512Mi"}, - {"", "128Mi", "512Mi"}, - } - - for _, r := range resources { - defaultQuantity := resource.MustParse(r.defaultRequest) - - if err := ValidateResourceRequestLimit(r.request, r.limit, defaultQuantity); err == nil { - t.Fatalf("expected error with values %v", r) - return - } - } - }) -} - -func TestValidateQuantity(t *testing.T) { - t.Run("valid", func(t *testing.T) { - quantities := []string{ - "", - "100Mi", - "100M", - "250Gi", - "25G", - "0.1", - "1.2", - "150m", - } - - for _, quantity := range quantities { - if err := ValidateQuantity(quantity); err != nil { - t.Fatal(err) - return - } - } - }) - - t.Run("invalid", func(t *testing.T) { - quantities := []string{ - "broken", - "3000 Gigabytes", - } - - for _, quantity := range quantities { - if err := ValidateQuantity(quantity); err == nil { - t.Fatalf("expected error with value %q", quantity) - return - } - } - }) -} diff --git a/internal/apiserver/configservice/configimpl.go b/internal/apiserver/configservice/configimpl.go deleted file mode 100644 index 76d891d5ed..0000000000 --- a/internal/apiserver/configservice/configimpl.go +++ /dev/null @@ -1,32 +0,0 @@ -package configservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -func ShowConfig() msgs.ShowConfigResponse { - log.Debug("ShowConfig called") - response := msgs.ShowConfigResponse{} - response.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - response.Result = apiserver.Pgo - - return response -} diff --git a/internal/apiserver/configservice/configservice.go b/internal/apiserver/configservice/configservice.go deleted file mode 100644 index 1f70934888..0000000000 --- a/internal/apiserver/configservice/configservice.go +++ /dev/null @@ -1,84 +0,0 @@ -package configservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -// ShowConfigHandler ... -// pgo show config -func ShowConfigHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /config configservice config - /*``` - Show configuration information for the Operator. - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "version" - // description: "Client Version" - // in: "path" - // type: "string" - // required: true - // - name: "namespace" - // description: "Namespace" - // in: "path" - // type: "string" - // required: true - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowConfigResponse" - clientVersion := r.URL.Query().Get("version") - namespace := r.URL.Query().Get("namespace") - - log.Debugf("ShowConfigHandler parameters version [%s] namespace [%s]", clientVersion, namespace) - - username, err := apiserver.Authn(apiserver.SHOW_CONFIG_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.ShowConfigResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - _, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = ShowConfig() - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/dfservice/dfimpl.go b/internal/apiserver/dfservice/dfimpl.go deleted file mode 100644 index 6b244836c0..0000000000 --- a/internal/apiserver/dfservice/dfimpl.go +++ /dev/null @@ -1,330 +0,0 @@ -package dfservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "fmt" - "strings" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/api/resource" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" -) - -// pvcContainerName contains the name of the container that the PVCs are mounted -// to, which, curiously, is "database" for all of them -const pvcContainerName = "database" - -func DfCluster(request msgs.DfRequest) msgs.DfResponse { - response := msgs.DfResponse{} - // set the namespace - namespace := request.Namespace - // set up the selector - selector := "" - // if the selector is not set to "*", then set it to the value that is in the - // Selector paramater - if request.Selector != msgs.DfShowAllSelector { - selector = request.Selector - } - - log.Debugf("df selector is [%s]", selector) - - // get all of the clusters that match the selector - clusterList, err := apiserver.Clientset. - CrunchydataV1().Pgclusters(namespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return CreateErrorResponse(err.Error()) - } - - totalClusters := len(clusterList.Items) - - log.Debugf("df clusters found len is %d", totalClusters) - - // if there are no clusters found, exit early - if totalClusters == 0 { - response.Status = msgs.Status{ - Code: msgs.Error, - Msg: fmt.Sprintf("no clusters found for selector %q in namespace %q", selector, namespace), - } - return response - } - - // iterate through each cluster and get the information about the disk - // utilization. 
As there could be a lot of clusters doing this, we opt for - // concurrency, but have a way to escape if one of the clusters has an error - // response - clusterResultsChannel := make(chan msgs.DfDetail) - errorChannel := make(chan error) - clusterProgressChannel := make(chan bool) - - for _, c := range clusterList.Items { - // first, to properly handle the goroutine, declare a new variable here - cluster := c - // now, go get the disk capacity information about the cluster - go getClusterDf(&cluster, clusterResultsChannel, clusterProgressChannel, errorChannel) - } - - // track the progress / completion, so we know when to exit - processed := 0 - -loop: - for { - select { - // if a result comes through, append it to the list - case result := <-clusterResultsChannel: - response.Results = append(response.Results, result) - // if an error comes through, immeidately abort - case err := <-errorChannel: - return CreateErrorResponse(err.Error()) - // and if we have finished, then break the loop - case <-clusterProgressChannel: - processed++ - - log.Debugf("df [%s] progress: [%d/%d]", selector, processed, totalClusters) - - if processed == totalClusters { - break loop - } - } - } - - // lastly, set the response as being OK - response.Status = msgs.Status{Code: msgs.Ok} - - return response -} - -// getClaimCapacity makes a call to the PVC API to get the total capacity -// available on the PVC -func getClaimCapacity(clientset kubernetes.Interface, pvcName, ns string) (string, error) { - log.Debugf("in df pvc name found to be %s", pvcName) - - pvc, err := clientset.CoreV1().PersistentVolumeClaims(ns).Get(pvcName, metav1.GetOptions{}) - - if err != nil { - log.Error(err) - return "", err - } - - qty := pvc.Status.Capacity[v1.ResourceStorage] - - log.Debugf("storage cap string value %s", qty.String()) - - return qty.String(), err -} - -// getClusterDf breaks out the tasks for getting all the capacity information -// about a PostgreSQL cluster so it can be performed on each relevant instance -// (Pod) -// -// we use pointers to keep the argument size down and because we are not -// modifying any of the content -func getClusterDf(cluster *crv1.Pgcluster, clusterResultsChannel chan msgs.DfDetail, clusterProgressChannel chan bool, errorChannel chan error) { - log.Debugf("pod df: %s", cluster.Spec.Name) - - selector := fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, cluster.Spec.Name) - - pods, err := apiserver.Clientset.CoreV1().Pods(cluster.Spec.Namespace).List(metav1.ListOptions{LabelSelector: selector}) - - // if there is an error attempting to get the pods, just return - if err != nil { - errorChannel <- err - return - } - - // set up channels for collecting the results that will be sent to the user - podResultsChannel := make(chan msgs.DfDetail) - podProgressChannel := make(chan bool) - - // figure out how many pods will need to be checked, as this will be the - // "completed" number - totalPods := 0 - - for _, p := range pods.Items { - // to properly handle the goroutine that is coming up, we first declare a - // new variable - pod := p - - // get the map of labels for convenience - podLabels := pod.ObjectMeta.GetLabels() - - // if this is neither a PostgreSQL or pgBackRest pod, skip - // we can cheat a little bit and check that the HA label is present, or - // the pgBackRest repo pod label - if podLabels[config.LABEL_PGHA_ROLE] == "" && podLabels[config.LABEL_PGO_BACKREST_REPO] == "" { - continue - } - - // at this point, we can include this pod in the total pods - totalPods++ - - // now, we can spin 
up goroutines to get the information and results from - // the rest of the pods - go getPodDf(cluster, &pod, podResultsChannel, podProgressChannel, errorChannel) - } - - // track how many pods have been processed - processed := 0 - -loop: - for { - select { - // if a result is found, immediately put onto the cluster results channel - case result := <-podResultsChannel: - log.Debug(result) - clusterResultsChannel <- result - // if a pod is fully processed, increment the processed counter and - // determine if we have finished and can break the loop - case <-podProgressChannel: - processed++ - log.Debugf("df cluster [%s] pod progress: [%d/%d]", cluster.Spec.Name, processed, totalPods) - if processed == totalPods { - break loop - } - } - } - - // if we are finished with this cluster, indicate we are done - clusterProgressChannel <- true -} - -// getPodDf performs the heavy lifting of getting the total capacity values for -// the PostgreSQL cluster by introspecting each Pod, which requires a few API -// calls. This function is optimized to return concurrently, though has an -// escape if an error is reached by reusing the error channel from the main Df -// function -// -// we use pointers to keep the argument size down and because we are not -// modifying any of the content -func getPodDf(cluster *crv1.Pgcluster, pod *v1.Pod, podResultsChannel chan msgs.DfDetail, podProgressChannel chan bool, errorChannel chan error) { - podLabels := pod.ObjectMeta.GetLabels() - // at this point, we can get the instance name, which is conveniently - // available from the deployment label - // - /// ...this is a bit hacky to get the pgBackRest repo name, but it works - instanceName := podLabels[config.LABEL_DEPLOYMENT_NAME] - - if instanceName == "" { - log.Debug(podLabels) - instanceName = podLabels[config.LABEL_NAME] - } - - log.Debugf("df processing pod [%s]", instanceName) - - // now, iterate through each volume, and only continue one if this is a - // "volume of interest" - for _, volume := range pod.Spec.Volumes { - // as a first check, ensure there is a PVC associated with this volume - // if not, this is a nonstarter - if volume.VolumeSource.PersistentVolumeClaim == nil { - continue - } - - // start setting up the result...there's a chance we may not need it - // based on the next check, but it's more convenient - result := msgs.DfDetail{ - InstanceName: instanceName, // OK to set this here, even if we continue - PodName: pod.ObjectMeta.Name, - } - - // we want three types of volumes: - // PostgreSQL data directories (pgdata) - // PostgreSQL tablespaces (tablespace-) - // pgBackRest repositories (backrestrepo) - // classify by the type of volume that we want...if we don't find any of - // them, continue one - switch { - case volume.Name == config.VOLUME_POSTGRESQL_DATA: - result.PVCType = msgs.PVCTypePostgreSQL - case volume.Name == config.VOLUME_PGBACKREST_REPO_NAME: - result.PVCType = msgs.PVCTypepgBackRest - case strings.HasPrefix(volume.Name, config.VOLUME_TABLESPACE_NAME_PREFIX): - result.PVCType = msgs.PVCTypeTablespace - case volume.Name == config.PostgreSQLWALVolumeMount().Name: - result.PVCType = msgs.PVCTypeWriteAheadLog - default: - continue - } - - // get the name of the PVC - result.PVCName = volume.VolumeSource.PersistentVolumeClaim.ClaimName - - log.Debugf("pvc found [%s]", result.PVCName) - - // next, get the size of the PVC. 
First have to get the correct PVC - // mount point - var pvcMountPoint string - - switch result.PVCType { - case msgs.PVCTypePostgreSQL: - pvcMountPoint = fmt.Sprintf("%s/%s", config.VOLUME_POSTGRESQL_DATA_MOUNT_PATH, result.PVCName) - case msgs.PVCTypepgBackRest: - pvcMountPoint = fmt.Sprintf("%s/%s", config.VOLUME_PGBACKREST_REPO_MOUNT_PATH, podLabels["Name"]) - case msgs.PVCTypeTablespace: - // first, extract the name of the tablespace by removing the - // VOLUME_TABLESPACE_NAME_PREFIX prefix from the volume name - tablespaceName := strings.Replace(volume.Name, config.VOLUME_TABLESPACE_NAME_PREFIX, "", 1) - // use that to populate the path structure for the tablespaces - pvcMountPoint = fmt.Sprintf("%s%s/%s", config.VOLUME_TABLESPACE_PATH_PREFIX, tablespaceName, tablespaceName) - case msgs.PVCTypeWriteAheadLog: - pvcMountPoint = config.PostgreSQLWALPath(instanceName) - } - - cmd := []string{"du", "-s", "--block-size", "1", pvcMountPoint} - - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(apiserver.RESTConfig, - apiserver.Clientset, cmd, pvcContainerName, pod.Name, cluster.Spec.Namespace, nil) - - // if the command fails, exit here - if err != nil { - err := fmt.Errorf(stderr) - log.Error(err) - errorChannel <- err - return - } - - // have to parse the size out from the statement. Size is in bytes - if _, err = fmt.Sscan(stdout, &result.PVCUsed); err != nil { - err := fmt.Errorf("could not find the size of pvc %s: %v", result.PVCName, err) - log.Error(err) - errorChannel <- err - return - } - - if claimSize, err := getClaimCapacity(apiserver.Clientset, result.PVCName, cluster.Spec.Namespace); err != nil { - errorChannel <- err - return - } else { - resourceClaimSize := resource.MustParse(claimSize) - result.PVCCapacity, _ = resourceClaimSize.AsInt64() - } - - log.Debugf("pvc info [%+v]", result) - - // put the result on the result channel - podResultsChannel <- result - } - - podProgressChannel <- true -} diff --git a/internal/apiserver/dfservice/dfservice.go b/internal/apiserver/dfservice/dfservice.go deleted file mode 100644 index 325e20257a..0000000000 --- a/internal/apiserver/dfservice/dfservice.go +++ /dev/null @@ -1,103 +0,0 @@ -package dfservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "net/http" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" -) - -// CreateErrorResponse creates an error response message -func CreateErrorResponse(errorMessage string) msgs.DfResponse { - return msgs.DfResponse{ - Status: msgs.Status{ - Code: msgs.Error, - Msg: errorMessage, - }, - } -} - -// StatusHandler ... -// pgo df mycluster -// pgo df --selector=env=research -func DfHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /df/{name} dfservice df-name - /*``` - Displays the disk status for PostgreSQL clusters. 
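getPodDf above measures per-volume usage by exec'ing `du -s --block-size 1 <mount>` inside the `database` container and reading the leading byte count out of the output with `fmt.Sscan`. A standalone sketch of that parsing step follows, using made-up output rather than a live pod.

```go
package main

import "fmt"

func main() {
	// Typical `du -s --block-size 1 <path>` output: "<bytes>\t<path>".
	// The value here is invented for illustration.
	stdout := "734003200\t/pgdata/hippo"

	var used int64
	// fmt.Sscan stops at the first whitespace, so only the byte count is read.
	if _, err := fmt.Sscan(stdout, &used); err != nil {
		fmt.Println("could not parse du output:", err)
		return
	}
	fmt.Printf("PVC used: %d bytes\n", used)
}
```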
- */ - // --- - // produces: - // - application/json - // parameters: - // - name: "PostgreSQL Cluster Disk Utilization" - // in: "body" - // schema: - // "$ref": "#/definitions/DfRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/DfResponse" - log.Debug("dfservice.DFHandler called") - - // first, check that the requesting user is authorized to make this request - username, err := apiserver.Authn(apiserver.DF_CLUSTER_PERM, w, r) - if err != nil { - return - } - - // decode the request paramaeters - var request msgs.DfRequest - - if err := json.NewDecoder(r.Body).Decode(&request); err != nil { - response := CreateErrorResponse(err.Error()) - json.NewEncoder(w).Encode(response) - return - } - - log.Debugf("DfHandler parameters [%+v]", request) - - // set some of the header...though we really should not be setting the HTTP - // Status upfront, but whatever - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - // check that the client versions match. If they don't, error out - if request.ClientVersion != msgs.PGO_VERSION { - response := CreateErrorResponse(apiserver.VERSION_MISMATCH_ERROR) - json.NewEncoder(w).Encode(response) - return - } - - // ensure that the user has access to this namespace. if not, error out - if _, err := apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace); err != nil { - response := CreateErrorResponse(err.Error()) - json.NewEncoder(w).Encode(response) - return - } - - // process the request - response := DfCluster(request) - - // turn the response into JSON - json.NewEncoder(w).Encode(response) -} diff --git a/internal/apiserver/failoverservice/failoverimpl.go b/internal/apiserver/failoverservice/failoverimpl.go deleted file mode 100644 index 0a988a3027..0000000000 --- a/internal/apiserver/failoverservice/failoverimpl.go +++ /dev/null @@ -1,218 +0,0 @@ -package failoverservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "errors" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/apps/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// CreateFailover ... 
-// pgo failover mycluster -// pgo failover all -// pgo failover --selector=name=mycluster -func CreateFailover(request *msgs.CreateFailoverRequest, ns, pgouser string) msgs.CreateFailoverResponse { - var err error - resp := msgs.CreateFailoverResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - cluster, err := validateClusterName(request.ClusterName, ns) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // check if the current cluster is not upgraded to the deployed - // Operator version. If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - resp.Status.Code = msgs.Error - resp.Status.Msg = cluster.Name + msgs.UpgradeError - return resp - } - - if request.Target != "" { - _, err = isValidFailoverTarget(request.Target, request.ClusterName, ns) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - log.Debugf("create failover called for %s", request.ClusterName) - - // Create a pgtask - spec := crv1.PgtaskSpec{} - spec.Namespace = ns - spec.Name = request.ClusterName + "-" + config.LABEL_FAILOVER - - // previous failovers will leave a pgtask so remove it first - apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(spec.Name, &metav1.DeleteOptions{}) - - spec.TaskType = crv1.PgtaskFailover - spec.Parameters = make(map[string]string) - spec.Parameters[request.ClusterName] = request.ClusterName - - labels := make(map[string]string) - labels["target"] = request.Target - labels[config.LABEL_PG_CLUSTER] = request.ClusterName - labels[config.LABEL_PGOUSER] = pgouser - - newInstance := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: spec.Name, - Labels: labels, - }, - Spec: spec, - } - - _, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(newInstance) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - resp.Results = append(resp.Results, "created Pgtask (failover) for cluster "+request.ClusterName) - - return resp -} - -// QueryFailover provides the user with a list of replicas that can be failed -// over to -// pgo failover mycluster --query -func QueryFailover(name, ns string) msgs.QueryFailoverResponse { - - response := msgs.QueryFailoverResponse{ - Results: make([]msgs.FailoverTargetSpec, 0), - Status: msgs.Status{Code: msgs.Ok, Msg: ""}, - } - - cluster, err := validateClusterName(name, ns) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - log.Debugf("query failover called for %s", name) - - // indicate in the response whether or not a standby cluster - response.Standby = cluster.Spec.Standby - - // Get information about the current status of all of the replicas. 
This is - // handled by a helper function, that will return the information in a struct - // with the key elements to help the user understand the current state of the - // replicas in a cluster - replicationStatusRequest := util.ReplicationStatusRequest{ - RESTConfig: apiserver.RESTConfig, - Clientset: apiserver.Clientset, - Namespace: ns, - ClusterName: name, - } - - replicationStatusResponse, err := util.ReplicationStatus(replicationStatusRequest, false, false) - - // if an error is return, log the message, and return the response - if err != nil { - log.Error(err.Error()) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // if there are no results, return the response as is - if len(replicationStatusResponse.Instances) == 0 { - return response - } - - // iterate through response results to create the API response - for _, instance := range replicationStatusResponse.Instances { - // create an result for the response - result := msgs.FailoverTargetSpec{ - Name: instance.Name, - Node: instance.Node, - Status: instance.Status, - ReplicationLag: instance.ReplicationLag, - Timeline: instance.Timeline, - PendingRestart: instance.PendingRestart, - } - - // append the result to the response list - response.Results = append(response.Results, result) - } - - return response -} - -func validateClusterName(clusterName, ns string) (*crv1.Pgcluster, error) { - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(clusterName, metav1.GetOptions{}) - - if err != nil { - return cluster, errors.New("no cluster found named " + clusterName) - } - - return cluster, err -} - -// isValidFailoverTarget checks to see if the failover target specified in the request is valid, -// i.e. that it represents a valid replica deployment in the cluster specified. This is -// done by first ensuring the deployment specified exists and is associated with the cluster -// specified, and then ensuring the PG pod created by the deployment is not the current primary. -// If the deployment is not found, or if the pod is the current primary, an error will be returned. -// Otherwise the deployment is returned. -func isValidFailoverTarget(deployName, clusterName, ns string) (*v1.Deployment, error) { - - // Using the following label selector, ensure the deployment specified using deployName exists in the - // cluster specified using clusterName: - // pg-cluster=clusterName,deployment-name=deployName - selector := config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_DEPLOYMENT_NAME + "=" + deployName - deployments, err := apiserver.Clientset. - AppsV1().Deployments(ns). 
- List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - return nil, err - } else if len(deployments.Items) == 0 { - return nil, errors.New("no target found named " + deployName) - } else if len(deployments.Items) > 1 { - return nil, errors.New("more than one target found named " + deployName) - } - - // Using the following label selector, determine if the target specified is the current - // primary for the cluster and return an error if it is: - // pg-cluster=clusterName,deployment-name=deployName,role=primary - selector = config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_DEPLOYMENT_NAME + "=" + deployName + - "," + config.LABEL_PGHA_ROLE + "=" + config.LABEL_PGHA_ROLE_PRIMARY - pods, _ := apiserver.Clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector}) - if len(pods.Items) > 0 { - return nil, errors.New("The primary database cannot be selected as a failover target") - } - - return &deployments.Items[0], nil - -} diff --git a/internal/apiserver/failoverservice/failoverservice.go b/internal/apiserver/failoverservice/failoverservice.go deleted file mode 100644 index 164d7b1545..0000000000 --- a/internal/apiserver/failoverservice/failoverservice.go +++ /dev/null @@ -1,153 +0,0 @@ -package failoverservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/gorilla/mux" - log "github.com/sirupsen/logrus" - "net/http" -) - -// CreateFailoverHandler ... -// pgo failover mycluster -func CreateFailoverHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /failover failoverservice failover - /*``` - Performs a manual failover. 
- */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create Failover Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreateFailoverRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreateFailoverResponse" - var ns string - - log.Debug("failoverservice.CreateFailoverHandler called") - - var request msgs.CreateFailoverRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.CREATE_FAILOVER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.CreateFailoverResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = CreateFailover(&request, ns, username) - - json.NewEncoder(w).Encode(resp) -} - -// QueryFailoverHandler ... -// pgo failover mycluster --query -func QueryFailoverHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /failover/{name} failoverservice failover-service - /*``` - Prints the list of failover candidates. - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "name" - // description: "Cluster Name" - // in: "path" - // type: "string" - // required: true - // - name: "version" - // description: "Client Version" - // in: "path" - // type: "string" - // required: true - // - name: "namespace" - // description: "Namespace" - // in: "path" - // type: "string" - // required: true - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/QueryFailoverResponse" - var ns string - - vars := mux.Vars(r) - name := vars["name"] - - clientVersion := r.URL.Query().Get("version") - - namespace := r.URL.Query().Get("namespace") - - log.Debugf("QueryFailoverHandler parameters version[%s] namespace [%s] name [%s]", clientVersion, namespace, name) - - username, err := apiserver.Authn(apiserver.CREATE_FAILOVER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.QueryFailoverResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = QueryFailover(name, ns) - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/labelservice/labelimpl.go b/internal/apiserver/labelservice/labelimpl.go deleted file mode 100644 index bd6e26da3d..0000000000 --- a/internal/apiserver/labelservice/labelimpl.go +++ /dev/null @@ -1,471 +0,0 @@ -package labelservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. 
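isValidFailoverTarget above assembles its selectors by string concatenation ("pg-cluster=<cluster>,deployment-name=<deploy>", optionally restricted to the primary role). An equivalent sketch using `labels.Set` from apimachinery is shown below; it produces the same comma-separated form without manual joins. The literal key strings are placeholders standing in for the `config.LABEL_*` constants, so treat them as assumptions.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Placeholder keys standing in for config.LABEL_PG_CLUSTER and
	// config.LABEL_DEPLOYMENT_NAME; substitute the real constant values.
	set := labels.Set{
		"pg-cluster":      "hippo",
		"deployment-name": "hippo-abcd",
	}

	// SelectorFromSet renders the same "key=value,key=value" form that the
	// code above builds by hand.
	selector := labels.SelectorFromSet(set).String()
	fmt.Println(selector) // deployment-name=hippo-abcd,pg-cluster=hippo
}
```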
-Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "errors" - "strings" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/crunchydata/postgres-operator/pkg/events" - jsonpatch "github.com/evanphx/json-patch" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/apps/v1" - "k8s.io/apimachinery/pkg/api/meta" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/apimachinery/pkg/util/validation" -) - -// Label ... 2 forms ... -// pgo label myucluser yourcluster --label=env=prod -// pgo label --label=env=prod --selector=name=mycluster -func Label(request *msgs.LabelRequest, ns, pgouser string) msgs.LabelResponse { - var err error - var labelsMap map[string]string - resp := msgs.LabelResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - if len(request.Args) == 0 && request.Selector == "" { - resp.Status.Code = msgs.Error - resp.Status.Msg = "no clusters specified" - return resp - } - - labelsMap, err = validateLabel(request.LabelCmdLabel, ns) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = "labels not formatted correctly" - return resp - } - - clusterList := crv1.PgclusterList{} - if len(request.Args) > 0 && request.Args[0] == "all" { - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{}) - if err != nil { - log.Error("error getting list of clusters" + err.Error()) - resp.Status.Code = msgs.Error - resp.Status.Msg = "error getting list of clusters" + err.Error() - return resp - } - if len(clusterList.Items) == 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "no clusters found" - return resp - } - clusterList = *cl - - } else if request.Selector != "" { - log.Debugf("label selector is %s and ns is %s", request.Selector, ns) - - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{LabelSelector: request.Selector}) - if err != nil { - log.Error("error getting list of clusters" + err.Error()) - resp.Status.Code = msgs.Error - resp.Status.Msg = "error getting list of clusters" + err.Error() - return resp - } - if len(clusterList.Items) == 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "no clusters found" - return resp - } - clusterList = *cl - } else { - //each arg represents a cluster name - items := make([]crv1.Pgcluster, 0) - for _, cluster := range request.Args { - result, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(cluster, metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = "error getting list of clusters" + err.Error() - return resp - } - - items = append(items, *result) - } - clusterList.Items = items - } - - for _, c := range clusterList.Items { - resp.Results = append(resp.Results, c.Spec.Name) - } - - 
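The Label flow hands off to the addLabels and updateLabels helpers below, which apply new labels by marshalling the object before and after the change and letting `jsonpatch.CreateMergePatch` compute the body for a Kubernetes merge patch. The following self-contained sketch shows that technique on a plain map; the object here is arbitrary, not a real Deployment.

```go
package main

import (
	"encoding/json"
	"fmt"

	jsonpatch "github.com/evanphx/json-patch"
)

func main() {
	// "Before" state: the object's current labels.
	obj := map[string]map[string]string{
		"labels": {"pg-cluster": "hippo"},
	}
	origJSON, _ := json.Marshal(obj)

	// "After" state: the same object with the new label applied.
	obj["labels"]["env"] = "prod"
	newJSON, _ := json.Marshal(obj)

	// CreateMergePatch diffs the two documents; the result is what gets
	// submitted with types.MergePatchType in the code that follows.
	patch, err := jsonpatch.CreateMergePatch(origJSON, newJSON)
	if err != nil {
		fmt.Println("could not build merge patch:", err)
		return
	}
	fmt.Println(string(patch)) // {"labels":{"env":"prod"}}
}
```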
addLabels(clusterList.Items, request.DryRun, request.LabelCmdLabel, labelsMap, ns, pgouser) - - return resp - -} - -func addLabels(items []crv1.Pgcluster, DryRun bool, LabelCmdLabel string, newLabels map[string]string, ns, pgouser string) { - for i := 0; i < len(items); i++ { - if DryRun { - log.Debug("dry run only") - } else { - log.Debugf("adding label to cluster %s", items[i].Spec.Name) - err := PatchPgcluster(newLabels, items[i], ns) - if err != nil { - log.Error(err.Error()) - } - - //publish event for create label - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventCreateLabelFormat{ - EventHeader: events.EventHeader{ - Namespace: ns, - Username: pgouser, - Topic: topics, - EventType: events.EventCreateLabel, - }, - Clustername: items[i].Spec.Name, - Label: LabelCmdLabel, - } - - err = events.Publish(f) - if err != nil { - log.Error(err.Error()) - } - - } - } - - for i := 0; i < len(items); i++ { - //get deployments for this CRD - selector := config.LABEL_PG_CLUSTER + "=" + items[i].Spec.Name - deployments, err := apiserver.Clientset. - AppsV1().Deployments(ns). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return - } - - for _, d := range deployments.Items { - //update Deployment with the label - if !DryRun { - //err := updateLabels(&d, items[i].Spec.Name, newLabels) - err := updateLabels(&d, d.Name, newLabels, ns) - if err != nil { - log.Error(err.Error()) - } - } - } - - } -} - -func updateLabels(deployment *v1.Deployment, clusterName string, newLabels map[string]string, ns string) error { - - var err error - - log.Debugf("%v are the labels to apply", newLabels) - - var patchBytes, newData, origData []byte - origData, err = json.Marshal(deployment) - if err != nil { - return err - } - - accessor, err2 := meta.Accessor(deployment) - if err2 != nil { - return err2 - } - - objLabels := accessor.GetLabels() - if objLabels == nil { - objLabels = make(map[string]string) - } - - //update the deployment labels - for key, value := range newLabels { - objLabels[key] = value - } - log.Debugf("updated labels are %v", objLabels) - - accessor.SetLabels(objLabels) - newData, err = json.Marshal(deployment) - if err != nil { - return err - } - - patchBytes, err = jsonpatch.CreateMergePatch(origData, newData) - if err != nil { - return err - } - - _, err = apiserver.Clientset.AppsV1().Deployments(ns).Patch(clusterName, types.MergePatchType, patchBytes, "") - if err != nil { - log.Debugf("error updating patching deployment %s", err.Error()) - } - return err - -} - -func PatchPgcluster(newLabels map[string]string, oldCRD crv1.Pgcluster, ns string) error { - - oldData, err := json.Marshal(oldCRD) - if err != nil { - return err - } - if oldCRD.ObjectMeta.Labels == nil { - oldCRD.ObjectMeta.Labels = make(map[string]string) - } - for key, value := range newLabels { - oldCRD.ObjectMeta.Labels[key] = value - } - var newData, patchBytes []byte - newData, err = json.Marshal(oldCRD) - if err != nil { - return err - } - patchBytes, err = jsonpatch.CreateMergePatch(oldData, newData) - if err != nil { - return err - } - - log.Debug(string(patchBytes)) - _, err6 := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Patch(oldCRD.Spec.Name, types.MergePatchType, patchBytes) - - return err6 - -} - -func validateLabel(LabelCmdLabel, ns string) (map[string]string, error) { - var err error - labelMap := make(map[string]string) - userValues := strings.Split(LabelCmdLabel, ",") - for _, v := range userValues { - pair := strings.Split(v, "=") - if len(pair) 
!= 2 { - log.Error("label format incorrect, requires name=value") - return labelMap, errors.New("label format incorrect, requires name=value") - } - - errs := validation.IsDNS1035Label(pair[0]) - if len(errs) > 0 { - return labelMap, errors.New("label format incorrect, requires name=value " + errs[0]) - } - errs = validation.IsDNS1035Label(pair[1]) - if len(errs) > 0 { - return labelMap, errors.New("label format incorrect, requires name=value " + errs[0]) - } - - labelMap[pair[0]] = pair[1] - } - return labelMap, err -} - -// DeleteLabel ... -// pgo delete label mycluster yourcluster --label=env=prod -// pgo delete label --label=env=prod --selector=group=somegroup -func DeleteLabel(request *msgs.DeleteLabelRequest, ns string) msgs.LabelResponse { - var err error - var labelsMap map[string]string - resp := msgs.LabelResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - if len(request.Args) == 0 && request.Selector == "" { - resp.Status.Code = msgs.Error - resp.Status.Msg = "no clusters specified" - return resp - } - - labelsMap, err = validateLabel(request.LabelCmdLabel, ns) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = "labels not formatted correctly" - return resp - } - - clusterList := crv1.PgclusterList{} - if len(request.Args) > 0 && request.Args[0] == "all" { - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{}) - if err != nil { - log.Error("error getting list of clusters" + err.Error()) - resp.Status.Code = msgs.Error - resp.Status.Msg = "error getting list of clusters" + err.Error() - return resp - } - if len(clusterList.Items) == 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "no clusters found" - return resp - } - clusterList = *cl - - } else if request.Selector != "" { - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{LabelSelector: request.Selector}) - if err != nil { - log.Error("error getting list of clusters" + err.Error()) - resp.Status.Code = msgs.Error - resp.Status.Msg = "error getting list of clusters" + err.Error() - return resp - } - if len(clusterList.Items) == 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "no clusters found" - return resp - } - clusterList = *cl - } else { - //each arg represents a cluster name - items := make([]crv1.Pgcluster, 0) - for _, cluster := range request.Args { - result, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(cluster, metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = "error getting list of clusters" + err.Error() - return resp - } - - items = append(items, *result) - } - clusterList.Items = items - } - - for _, c := range clusterList.Items { - resp.Results = append(resp.Results, "deleting label from "+c.Spec.Name) - } - - err = deleteLabels(clusterList.Items, request.LabelCmdLabel, labelsMap, ns) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - return resp - -} - -func deleteLabels(items []crv1.Pgcluster, LabelCmdLabel string, labelsMap map[string]string, ns string) error { - var err error - - for i := 0; i < len(items); i++ { - log.Debugf("deleting label from %s", items[i].Spec.Name) - err = deletePatchPgcluster(labelsMap, items[i], ns) - if err != nil { - log.Error(err.Error()) - return err - } - } - - for i := 0; i < len(items); i++ { - //get deployments for this CRD - selector := config.LABEL_PG_CLUSTER + "=" + items[i].Spec.Name - deployments, err := 
apiserver.Clientset. - AppsV1().Deployments(ns). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return err - } - - for _, d := range deployments.Items { - err = deleteTheLabel(&d, items[i].Spec.Name, labelsMap, ns) - if err != nil { - log.Error(err.Error()) - return err - } - } - - } - return err -} - -func deletePatchPgcluster(labelsMap map[string]string, oldCRD crv1.Pgcluster, ns string) error { - - oldData, err := json.Marshal(oldCRD) - if err != nil { - return err - } - if oldCRD.ObjectMeta.Labels == nil { - oldCRD.ObjectMeta.Labels = make(map[string]string) - } - for k := range labelsMap { - delete(oldCRD.ObjectMeta.Labels, k) - } - - var newData, patchBytes []byte - newData, err = json.Marshal(oldCRD) - if err != nil { - return err - } - patchBytes, err = jsonpatch.CreateMergePatch(oldData, newData) - if err != nil { - return err - } - - log.Debug(string(patchBytes)) - _, err6 := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Patch(oldCRD.Spec.Name, types.MergePatchType, patchBytes) - - return err6 - -} - -func deleteTheLabel(deployment *v1.Deployment, clusterName string, labelsMap map[string]string, ns string) error { - - var err error - - log.Debugf("%v are the labels to delete", labelsMap) - - var patchBytes, newData, origData []byte - origData, err = json.Marshal(deployment) - if err != nil { - return err - } - - accessor, err2 := meta.Accessor(deployment) - if err2 != nil { - return err2 - } - - objLabels := accessor.GetLabels() - if objLabels == nil { - objLabels = make(map[string]string) - } - - for k := range labelsMap { - delete(objLabels, k) - } - log.Debugf("revised labels after delete are %v", objLabels) - - accessor.SetLabels(objLabels) - newData, err = json.Marshal(deployment) - if err != nil { - return err - } - - patchBytes, err = jsonpatch.CreateMergePatch(origData, newData) - if err != nil { - return err - } - - _, err = apiserver.Clientset.AppsV1().Deployments(ns).Patch(deployment.Name, types.MergePatchType, patchBytes, "") - if err != nil { - log.Debugf("error patching deployment: %v", err.Error()) - } - return err - -} diff --git a/internal/apiserver/labelservice/labelservice.go b/internal/apiserver/labelservice/labelservice.go deleted file mode 100644 index f13054fd17..0000000000 --- a/internal/apiserver/labelservice/labelservice.go +++ /dev/null @@ -1,137 +0,0 @@ -package labelservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -// LabelHandler ... -func LabelHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /label labelservice label - /*``` - LABEL allows you to add a label on a set of clusters. 
- */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Label Request" - // in: "body" - // schema: - // "$ref": "#/definitions/LabelRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/LabelResponse" - var ns string - - log.Debug("labelservice.LabelHandler called") - - var request msgs.LabelRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.LABEL_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.LabelResponse{} - resp.Status = msgs.Status{Msg: "", Code: msgs.Ok} - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - resp.Status.Code = msgs.Error - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status = msgs.Status{Msg: err.Error(), Code: msgs.Error} - json.NewEncoder(w).Encode(resp) - return - } - - resp = Label(&request, ns, username) - - json.NewEncoder(w).Encode(resp) -} - -// DeleteLabelHandler ... -func DeleteLabelHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /labeldelete labelservice labeldelete - /*``` - LABEL allows you to remove a label on a set of clusters. - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Delete Label Request" - // in: "body" - // schema: - // "$ref": "#/definitions/DeleteLabelRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/LabelResponse" - var ns string - - log.Debug("labelservice.DeleteLabelHandler called") - - var request msgs.DeleteLabelRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.LABEL_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.LabelResponse{} - resp.Status = msgs.Status{Msg: "", Code: msgs.Ok} - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Msg: apiserver.VERSION_MISMATCH_ERROR, Code: msgs.Error} - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status = msgs.Status{Msg: err.Error(), Code: msgs.Error} - json.NewEncoder(w).Encode(resp) - return - } - - resp = DeleteLabel(&request, ns) - - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/middleware.go b/internal/apiserver/middleware.go deleted file mode 100644 index fc7e60e8e2..0000000000 --- a/internal/apiserver/middleware.go +++ /dev/null @@ -1,70 +0,0 @@ -package apiserver - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/
-
-import (
-	"fmt"
-	"net/http"
-	"strings"
-)
-
-// certEnforcer is a contextual middleware for deferred enforcement of
-// client certificates. It assumes any certificates presented were validated
-// as part of establishing the TLS connection.
-type certEnforcer struct {
-	skip map[string]struct{}
-}
-
-// NewCertEnforcer ensures a certEnforcer is created with skipped routes
-// and validates that the configured routes are allowed
-func NewCertEnforcer(reqRoutes []string) (*certEnforcer, error) {
-	allowed := map[string]struct{}{
-		// List of allowed routes is part of the published documentation
-		"/health":  {},
-		"/healthz": {},
-	}
-
-	ce := &certEnforcer{
-		skip: map[string]struct{}{},
-	}
-
-	for _, route := range reqRoutes {
-		r := strings.TrimSpace(route)
-		if _, ok := allowed[r]; !ok {
-			return nil, fmt.Errorf("Disabling auth unsupported for route [%s]", r)
-		}
-		ce.skip[r] = struct{}{}
-	}
-	return ce, nil
-}
-
-// Enforce is an HTTP middleware for selectively enforcing deferred client
-// certificate checks based on the certEnforcer's skip list
-func (ce *certEnforcer) Enforce(next http.Handler) http.Handler {
-	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-		path := r.URL.Path
-		if _, ok := ce.skip[path]; ok {
-			next.ServeHTTP(w, r)
-		} else {
-			clientCerts := len(r.TLS.PeerCertificates) > 0
-			if !clientCerts {
-				http.Error(w, "Forbidden: Client Certificate Required", http.StatusForbidden)
-			} else {
-				next.ServeHTTP(w, r)
-			}
-		}
-	})
-}
diff --git a/internal/apiserver/namespaceservice/namespaceimpl.go b/internal/apiserver/namespaceservice/namespaceimpl.go
deleted file mode 100644
index 04b6671f26..0000000000
--- a/internal/apiserver/namespaceservice/namespaceimpl.go
+++ /dev/null
@@ -1,163 +0,0 @@
-package namespaceservice
-
-/*
-Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/ - -import ( - "fmt" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/ns" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" -) - -func ShowNamespace(clientset kubernetes.Interface, username string, request *msgs.ShowNamespaceRequest) msgs.ShowNamespaceResponse { - log.Debug("ShowNamespace called") - resp := msgs.ShowNamespaceResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - resp.Username = username - resp.Results = make([]msgs.NamespaceResult, 0) - - //namespaceList := util.GetNamespaces() - - nsList := make([]string, 0) - - if request.AllFlag { - namespaceList, err := clientset.CoreV1().Namespaces().List(metav1.ListOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - for _, v := range namespaceList.Items { - nsList = append(nsList, v.Name) - } - } else { - if len(request.Args) == 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "namespace names or --all flag is required for this command" - return resp - } - - for i := 0; i < len(request.Args); i++ { - _, err := clientset.CoreV1().Namespaces().Get(request.Args[i], metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = "namespace " + request.Args[i] + " not found" - - return resp - } else { - nsList = append(nsList, request.Args[i]) - } - } - } - - for i := 0; i < len(nsList); i++ { - iaccess, uaccess, err := apiserver.UserIsPermittedInNamespace(username, nsList[i]) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("Error when determining whether user [%s] is allowed "+ - "access to namespace [%s]: %s", username, nsList[i], err.Error()) - return resp - } - r := msgs.NamespaceResult{ - Namespace: nsList[i], - InstallationAccess: iaccess, - UserAccess: uaccess, - } - resp.Results = append(resp.Results, r) - } - - return resp -} - -// CreateNamespace ... -func CreateNamespace(clientset kubernetes.Interface, createdBy string, request *msgs.CreateNamespaceRequest) msgs.CreateNamespaceResponse { - - log.Debugf("CreateNamespace %v", request) - resp := msgs.CreateNamespaceResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - //iterate thru all the args (namespace names) - for _, namespace := range request.Args { - - if err := ns.CreateNamespace(clientset, apiserver.InstallationName, - apiserver.PgoNamespace, createdBy, namespace); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - resp.Results = append(resp.Results, "created namespace "+namespace) - - } - - return resp - -} - -// DeleteNamespace ... -func DeleteNamespace(clientset kubernetes.Interface, deletedBy string, request *msgs.DeleteNamespaceRequest) msgs.DeleteNamespaceResponse { - resp := msgs.DeleteNamespaceResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - for _, namespace := range request.Args { - - err := ns.DeleteNamespace(clientset, apiserver.InstallationName, apiserver.PgoNamespace, deletedBy, namespace) - - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - resp.Results = append(resp.Results, "deleted namespace "+namespace) - - } - - return resp - -} - -// UpdateNamespace ... 
-func UpdateNamespace(clientset kubernetes.Interface, updatedBy string, request *msgs.UpdateNamespaceRequest) msgs.UpdateNamespaceResponse { - - log.Debugf("UpdateNamespace %v", request) - resp := msgs.UpdateNamespaceResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - //iterate thru all the args (namespace names) - for _, namespace := range request.Args { - - if err := ns.UpdateNamespace(clientset, apiserver.InstallationName, - apiserver.PgoNamespace, updatedBy, namespace); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - resp.Results = append(resp.Results, "updated namespace "+namespace) - } - - return resp - -} diff --git a/internal/apiserver/namespaceservice/namespaceservice.go b/internal/apiserver/namespaceservice/namespaceservice.go deleted file mode 100644 index 1e27294c96..0000000000 --- a/internal/apiserver/namespaceservice/namespaceservice.go +++ /dev/null @@ -1,251 +0,0 @@ -package namespaceservice - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "fmt" - "net/http" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/ns" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -// ShowNamespaceHandler ... 
-// pgo show namespace -func ShowNamespaceHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /namespace namespaceservice namespace - /*``` - Show namespace information - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Show Namespace Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ShowNamespaceRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowNamespaceResponse" - - resp := msgs.ShowNamespaceResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - log.Debug("namespaceservice.ShowNamespaceHandler called") - - var request msgs.ShowNamespaceRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - log.Debugf("ShowNamespaceHandler called [%v]", request) - - // return 405 Method Not Allowed if all namespace functionality is disabled - if apiserver.NamespaceOperatingMode() == ns.NamespaceOperatingModeDisabled { - w.Header().Set("Allow", "") - http.Error(w, fmt.Errorf("Unable to show namespaces: %w", - apiserver.ErrMethodNotAllowed).Error(), http.StatusMethodNotAllowed) - return - } - - username, err := apiserver.Authn(apiserver.SHOW_NAMESPACE_PERM, w, r) - if err != nil { - return - } - - w.WriteHeader(http.StatusOK) - w.Header().Set("Content-Type", "application/json") - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - resp = ShowNamespace(apiserver.Clientset, username, &request) - json.NewEncoder(w).Encode(resp) -} - -func CreateNamespaceHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /namespacecreate namespaceservice namespacecreate - /*``` - Create a namespace - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create Namespace" - // in: "body" - // schema: - // "$ref": "#/definitions/CreateNamespaceRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreateNamespaceResponse" - resp := msgs.CreateNamespaceResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - log.Debug("namespaceservice.CreateNamespaceHandler called") - - var request msgs.CreateNamespaceRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - // return 405 Method Not Allowed if dynamic namespace functionality is not enabled - if apiserver.NamespaceOperatingMode() != ns.NamespaceOperatingModeDynamic { - w.Header().Set("Allow", "") - http.Error(w, fmt.Errorf("Unable to create namespaces: %w", - apiserver.ErrMethodNotAllowed).Error(), http.StatusMethodNotAllowed) - return - } - - username, err := apiserver.Authn(apiserver.CREATE_NAMESPACE_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debugf("namespaceservice.CreateNamespaceHandler got request %v", request) - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - resp = CreateNamespace(apiserver.Clientset, username, &request) - json.NewEncoder(w).Encode(resp) -} - -func DeleteNamespaceHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /namespacedelete namespaceservice namespacedelete - /*``` - Delete a namespaces - 
*/ - // --- - // produces: - // - application/json - // parameters: - // - name: "Delete Namespace" - // in: "body" - // schema: - // "$ref": "#/definitions/DeleteNamespaceRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/DeleteNamespaceResponse" - var request msgs.DeleteNamespaceRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - log.Debugf("DeleteNamespaceHandler parameters [%v]", request) - - // return 405 Method Not Allowed if dynamic namespace functionality is not enabled - if apiserver.NamespaceOperatingMode() != ns.NamespaceOperatingModeDynamic { - w.Header().Set("Allow", "") - http.Error(w, fmt.Errorf("Unable to delete namespaces: %w", - apiserver.ErrMethodNotAllowed).Error(), http.StatusMethodNotAllowed) - return - } - - username, err := apiserver.Authn(apiserver.DELETE_NAMESPACE_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.DeleteNamespaceResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - resp = DeleteNamespace(apiserver.Clientset, username, &request) - json.NewEncoder(w).Encode(resp) - -} -func UpdateNamespaceHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /namespaceupdate namespaceservice namespaceupdate - /*``` - Update a namespace, applying Operator RBAC - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Update Namespace" - // in: "body" - // schema: - // "$ref": "#/definitions/UpdateNamespaceRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/UpdateNamespaceResponse" - resp := msgs.UpdateNamespaceResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - log.Debug("namespaceservice.UpdateNamespaceHandler called") - - var request msgs.UpdateNamespaceRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - // return 405 Method Not Allowed if dynamic namespace functionality is not enabled - if apiserver.NamespaceOperatingMode() != ns.NamespaceOperatingModeDynamic { - w.Header().Set("Allow", "") - http.Error(w, fmt.Errorf("Unable to update namespaces: %w", - apiserver.ErrMethodNotAllowed).Error(), http.StatusMethodNotAllowed) - return - } - - username, err := apiserver.Authn(apiserver.UPDATE_NAMESPACE_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debugf("namespaceservice.UpdateNamespaceHandler got request %v", request) - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - resp = UpdateNamespace(apiserver.Clientset, username, &request) - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/perms.go b/internal/apiserver/perms.go deleted file mode 100644 index d8796c8590..0000000000 --- a/internal/apiserver/perms.go +++ /dev/null @@ -1,189 +0,0 @@ -package apiserver - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - log "github.com/sirupsen/logrus" -) - -// The below constants contains the "apiserver RBAC permissions" -- this was -// reorganized to make it...slightly more organized as we continue to evole -// the system -const ( - // MISC - APPLY_POLICY_PERM = "ApplyPolicy" - CAT_PERM = "Cat" - CLONE_PERM = "Clone" - DF_CLUSTER_PERM = "DfCluster" - LABEL_PERM = "Label" - RELOAD_PERM = "Reload" - RESTART_PERM = "Restart" - RESTORE_PERM = "Restore" - STATUS_PERM = "Status" - TEST_CLUSTER_PERM = "TestCluster" - VERSION_PERM = "Version" - - // CREATE - CREATE_BACKUP_PERM = "CreateBackup" - CREATE_CLUSTER_PERM = "CreateCluster" - CREATE_DUMP_PERM = "CreateDump" - CREATE_FAILOVER_PERM = "CreateFailover" - CREATE_INGEST_PERM = "CreateIngest" - CREATE_NAMESPACE_PERM = "CreateNamespace" - CREATE_PGADMIN_PERM = "CreatePgAdmin" - CREATE_PGBOUNCER_PERM = "CreatePgbouncer" - CREATE_PGOUSER_PERM = "CreatePgouser" - CREATE_PGOROLE_PERM = "CreatePgorole" - CREATE_POLICY_PERM = "CreatePolicy" - CREATE_SCHEDULE_PERM = "CreateSchedule" - CREATE_UPGRADE_PERM = "CreateUpgrade" - CREATE_USER_PERM = "CreateUser" - - // RESTORE - RESTORE_DUMP_PERM = "RestoreDump" - - // DELETE - DELETE_BACKUP_PERM = "DeleteBackup" - DELETE_CLUSTER_PERM = "DeleteCluster" - DELETE_INGEST_PERM = "DeleteIngest" - DELETE_NAMESPACE_PERM = "DeleteNamespace" - DELETE_PGADMIN_PERM = "DeletePgAdmin" - DELETE_PGBOUNCER_PERM = "DeletePgbouncer" - DELETE_PGOROLE_PERM = "DeletePgorole" - DELETE_PGOUSER_PERM = "DeletePgouser" - DELETE_POLICY_PERM = "DeletePolicy" - DELETE_SCHEDULE_PERM = "DeleteSchedule" - DELETE_USER_PERM = "DeleteUser" - - // SHOW - SHOW_BACKUP_PERM = "ShowBackup" - SHOW_CLUSTER_PERM = "ShowCluster" - SHOW_CONFIG_PERM = "ShowConfig" - SHOW_INGEST_PERM = "ShowIngest" - SHOW_NAMESPACE_PERM = "ShowNamespace" - SHOW_PGADMIN_PERM = "ShowPgAdmin" - SHOW_PGBOUNCER_PERM = "ShowPgBouncer" - SHOW_PGOROLE_PERM = "ShowPgorole" - SHOW_PGOUSER_PERM = "ShowPgouser" - SHOW_POLICY_PERM = "ShowPolicy" - SHOW_PVC_PERM = "ShowPVC" - SHOW_SCHEDULE_PERM = "ShowSchedule" - SHOW_SECRETS_PERM = "ShowSecrets" - SHOW_SYSTEM_ACCOUNTS_PERM = "ShowSystemAccounts" - SHOW_USER_PERM = "ShowUser" - SHOW_WORKFLOW_PERM = "ShowWorkflow" - - // SCALE - SCALE_CLUSTER_PERM = "ScaleCluster" - - // UPDATE - UPDATE_CLUSTER_PERM = "UpdateCluster" - UPDATE_NAMESPACE_PERM = "UpdateNamespace" - UPDATE_PGBOUNCER_PERM = "UpdatePgBouncer" - UPDATE_PGOROLE_PERM = "UpdatePgorole" - UPDATE_PGOUSER_PERM = "UpdatePgouser" - UPDATE_USER_PERM = "UpdateUser" -) - -var RoleMap map[string]map[string]string -var PermMap map[string]string - -const pgorolePath = "/default-pgo-config/pgorole" -const pgoroleFile = "pgorole" - -func initializePerms() { - RoleMap = make(map[string]map[string]string) - - // ...initialize the permission map using most of the legacy method, but make - // it slightly more organized - PermMap = map[string]string{ - // MISC - APPLY_POLICY_PERM: "yes", - CAT_PERM: "yes", - CLONE_PERM: "yes", - DF_CLUSTER_PERM: "yes", - LABEL_PERM: "yes", - RELOAD_PERM: "yes", - RESTORE_PERM: "yes", - STATUS_PERM: "yes", - TEST_CLUSTER_PERM: "yes", - VERSION_PERM: "yes", - - // CREATE 
- CREATE_BACKUP_PERM: "yes", - CREATE_DUMP_PERM: "yes", - CREATE_CLUSTER_PERM: "yes", - CREATE_FAILOVER_PERM: "yes", - CREATE_INGEST_PERM: "yes", - CREATE_NAMESPACE_PERM: "yes", - CREATE_PGADMIN_PERM: "yes", - CREATE_PGBOUNCER_PERM: "yes", - CREATE_PGOROLE_PERM: "yes", - CREATE_PGOUSER_PERM: "yes", - CREATE_POLICY_PERM: "yes", - CREATE_SCHEDULE_PERM: "yes", - CREATE_UPGRADE_PERM: "yes", - CREATE_USER_PERM: "yes", - - // RESTORE - RESTORE_DUMP_PERM: "yes", - - // DELETE - DELETE_BACKUP_PERM: "yes", - DELETE_CLUSTER_PERM: "yes", - DELETE_INGEST_PERM: "yes", - DELETE_NAMESPACE_PERM: "yes", - DELETE_PGADMIN_PERM: "yes", - DELETE_PGBOUNCER_PERM: "yes", - DELETE_PGOROLE_PERM: "yes", - DELETE_PGOUSER_PERM: "yes", - DELETE_POLICY_PERM: "yes", - DELETE_SCHEDULE_PERM: "yes", - DELETE_USER_PERM: "yes", - - // SHOW - SHOW_BACKUP_PERM: "yes", - SHOW_CLUSTER_PERM: "yes", - SHOW_CONFIG_PERM: "yes", - SHOW_INGEST_PERM: "yes", - SHOW_NAMESPACE_PERM: "yes", - SHOW_PGADMIN_PERM: "yes", - SHOW_PGBOUNCER_PERM: "yes", - SHOW_PGOROLE_PERM: "yes", - SHOW_PGOUSER_PERM: "yes", - SHOW_POLICY_PERM: "yes", - SHOW_PVC_PERM: "yes", - SHOW_SCHEDULE_PERM: "yes", - SHOW_SECRETS_PERM: "yes", - SHOW_SYSTEM_ACCOUNTS_PERM: "yes", - SHOW_USER_PERM: "yes", - SHOW_WORKFLOW_PERM: "yes", - - // SCALE - SCALE_CLUSTER_PERM: "yes", - - // UPDATE - UPDATE_CLUSTER_PERM: "yes", - UPDATE_NAMESPACE_PERM: "yes", - UPDATE_PGBOUNCER_PERM: "yes", - UPDATE_PGOROLE_PERM: "yes", - UPDATE_PGOUSER_PERM: "yes", - UPDATE_USER_PERM: "yes", - } - - log.Infof("loading PermMap with %d Permissions\n", len(PermMap)) - -} diff --git a/internal/apiserver/pgadminservice/pgadminimpl.go b/internal/apiserver/pgadminservice/pgadminimpl.go deleted file mode 100644 index 5ca4aa8a1a..0000000000 --- a/internal/apiserver/pgadminservice/pgadminimpl.go +++ /dev/null @@ -1,296 +0,0 @@ -package pgadminservice - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/pgadmin" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -const pgAdminServiceSuffix = "-pgadmin" - -// CreatePgAdmin ... -// pgo create pgadmin mycluster -// pgo create pgadmin --selector=name=mycluster -func CreatePgAdmin(request *msgs.CreatePgAdminRequest, ns, pgouser string) msgs.CreatePgAdminResponse { - var err error - resp := msgs.CreatePgAdminResponse{ - Status: msgs.Status{Code: msgs.Ok}, - Results: []string{}, - } - - log.Debugf("createPgAdmin selector is [%s]", request.Selector) - - // try to get the list of clusters. 
if there is an error, put it into the - // status and return - clusterList, err := getClusterList(request.Namespace, request.Args, request.Selector) - if err != nil { - resp.SetError(err.Error()) - return resp - } - - for _, cluster := range clusterList.Items { - // check if the current cluster is not upgraded to the deployed - // Operator version. If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - resp.Status.Code = msgs.Error - resp.Status.Msg = cluster.Name + msgs.UpgradeError - return resp - } - - log.Debugf("adding pgAdmin to cluster [%s]", cluster.Name) - - // generate the pgtask, starting with spec - spec := crv1.PgtaskSpec{ - Namespace: cluster.Namespace, - Name: fmt.Sprintf("%s-%s", config.LABEL_PGADMIN_TASK_ADD, cluster.Name), - TaskType: crv1.PgtaskPgAdminAdd, - StorageSpec: cluster.Spec.PrimaryStorage, - Parameters: map[string]string{ - config.LABEL_PGADMIN_TASK_CLUSTER: cluster.Name, - }, - } - - task := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: spec.Name, - Labels: map[string]string{ - config.LABEL_PG_CLUSTER: cluster.Name, - config.LABEL_PGADMIN_TASK_ADD: "true", - config.LABEL_PGOUSER: pgouser, - }, - }, - Spec: spec, - } - - if _, err := apiserver.Clientset.CrunchydataV1().Pgtasks(cluster.Namespace).Create(task); err != nil { - log.Error(err) - resp.SetError("error creating tasks for one or more clusters") - resp.Results = append(resp.Results, fmt.Sprintf("%s: error - %s", cluster.Name, err.Error())) - continue - } else { - resp.Results = append(resp.Results, fmt.Sprintf("%s pgAdmin addition scheduled", cluster.Name)) - } - } - - return resp -} - -// DeletePgAdmin ... -// pgo delete pgadmin mycluster -// pgo delete pgadmin --selector=name=mycluster -func DeletePgAdmin(request *msgs.DeletePgAdminRequest, ns string) msgs.DeletePgAdminResponse { - var err error - resp := msgs.DeletePgAdminResponse{ - Status: msgs.Status{Code: msgs.Ok}, - Results: []string{}, - } - - log.Debugf("deletePgAdmin selector is [%s]", request.Selector) - - // try to get the list of clusters. if there is an error, put it into the - // status and return - clusterList, err := getClusterList(request.Namespace, request.Args, request.Selector) - if err != nil { - resp.SetError(err.Error()) - return resp - } - - for _, cluster := range clusterList.Items { - // check if the current cluster is not upgraded to the deployed - // Operator version. 
If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - resp.Status.Code = msgs.Error - resp.Status.Msg = cluster.Name + msgs.UpgradeError - return resp - } - - log.Debugf("deleting pgAdmin from cluster [%s]", cluster.Name) - - // generate the pgtask, starting with spec - spec := crv1.PgtaskSpec{ - Namespace: cluster.Namespace, - Name: config.LABEL_PGADMIN_TASK_DELETE + "-" + cluster.Name, - TaskType: crv1.PgtaskPgAdminDelete, - Parameters: map[string]string{ - config.LABEL_PGADMIN_TASK_CLUSTER: cluster.Name, - }, - } - - task := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: spec.Name, - Labels: map[string]string{ - config.LABEL_PG_CLUSTER: cluster.Name, - config.LABEL_PGADMIN_TASK_DELETE: "true", - }, - }, - Spec: spec, - } - - if _, err := apiserver.Clientset.CrunchydataV1().Pgtasks(cluster.Namespace).Create(task); err != nil { - log.Error(err) - resp.SetError("error creating tasks for one or more clusters") - resp.Results = append(resp.Results, fmt.Sprintf("%s: error - %s", cluster.Name, err.Error())) - return resp - } else { - resp.Results = append(resp.Results, cluster.Name+" pgAdmin delete scheduled") - } - - } - - return resp -} - -// ShowPgAdmin gets information about a PostgreSQL cluster's pgAdmin -// deployment -// -// pgo show pgadmin -// pgo show pgadmin --selector -func ShowPgAdmin(request *msgs.ShowPgAdminRequest, namespace string) msgs.ShowPgAdminResponse { - log.Debugf("show pgAdmin called, cluster [%v], selector [%s]", request.ClusterNames, request.Selector) - - response := msgs.ShowPgAdminResponse{ - Results: []msgs.ShowPgAdminDetail{}, - Status: msgs.Status{Code: msgs.Ok}, - } - - // try to get the list of clusters. if there is an error, put it into the - // status and return - clusterList, err := getClusterList(request.Namespace, request.ClusterNames, request.Selector) - - if err != nil { - response.SetError(err.Error()) - return response - } - - // iterate through the list of clusters to get the relevant pgAdmin - // information about them - for _, cluster := range clusterList.Items { - result := msgs.ShowPgAdminDetail{ - ClusterName: cluster.Spec.Name, - HasPgAdmin: true, - } - - // first, check if the cluster has the pgAdmin label. If it does not, we - // add it to the list and keep iterating - clusterLabels := cluster.GetLabels() - - if clusterLabels[config.LABEL_PGADMIN] != "true" { - result.HasPgAdmin = false - response.Results = append(response.Results, result) - continue - } - - // This takes advantage of pgadmin deployment and pgadmin service - // sharing a name that is clustername + pgAdminServiceSuffix - service, err := apiserver.Clientset. - CoreV1().Services(cluster.Namespace). 
-			Get(cluster.Name+pgAdminServiceSuffix, metav1.GetOptions{})
-		if err != nil {
-			response.SetError(err.Error())
-			return response
-		}
-
-		result.ServiceClusterIP = service.Spec.ClusterIP
-		result.ServiceName = service.Name
-		if len(service.Spec.ExternalIPs) > 0 {
-			result.ServiceExternalIP = service.Spec.ExternalIPs[0]
-		}
-		if len(service.Status.LoadBalancer.Ingress) > 0 {
-			result.ServiceExternalIP = service.Status.LoadBalancer.Ingress[0].IP
-		}
-
-		// In the future, construct results to contain individual error statuses
-		// for now log and return empty content if encountered
-		qr, err := pgadmin.GetPgAdminQueryRunner(apiserver.Clientset, apiserver.RESTConfig, &cluster)
-		if err != nil {
-			log.Error(err)
-			continue
-		} else if qr != nil {
-			names, err := pgadmin.GetUsernames(qr)
-			if err != nil {
-				log.Error(err)
-				continue
-			}
-			result.Users = names
-		}
-
-		// append the result to the list
-		response.Results = append(response.Results, result)
-	}
-
-	return response
-}
-
-// getClusterList tries to return a list of clusters based on either having an
-// argument list of cluster names, or a Kubernetes selector
-func getClusterList(namespace string, clusterNames []string, selector string) (crv1.PgclusterList, error) {
-	clusterList := crv1.PgclusterList{}
-
-	// see if there are any values in the cluster name list or in the selector
-	// if nothing exists, return an error
-	if len(clusterNames) == 0 && selector == "" {
-		err := fmt.Errorf("either a list of cluster names or a selector needs to be supplied for this command")
-		return clusterList, err
-	}
-
-	// try to build the cluster list based on either the selector or the list
-	// of arguments...or both. First, start with the selector
-	if selector != "" {
-		cl, err := apiserver.Clientset.
-			CrunchydataV1().Pgclusters(namespace).
-			List(metav1.ListOptions{LabelSelector: selector})
-
-		// if there is an error, return here with an empty cluster list
-		if err != nil {
-			return crv1.PgclusterList{}, err
-		}
-		clusterList = *cl
-	}
-
-	// now try to get clusters based on specific cluster names
-	for _, clusterName := range clusterNames {
-		cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{})
-
-		// if there is an error, capture it here and return here with an empty list
-		if err != nil {
-			return crv1.PgclusterList{}, err
-		}
-
-		// if successful, append to the cluster list
-		clusterList.Items = append(clusterList.Items, *cluster)
-	}
-
-	log.Debugf("clusters found: [%d]", len(clusterList.Items))
-
-	// if after all this, there are no clusters found, return an error
-	if len(clusterList.Items) == 0 {
-		err := fmt.Errorf("no clusters found")
-		return clusterList, err
-	}
-
-	// all set! return the cluster list with no error
-	return clusterList, nil
-}
diff --git a/internal/apiserver/pgadminservice/pgadminservice.go b/internal/apiserver/pgadminservice/pgadminservice.go
deleted file mode 100644
index 90378868ca..0000000000
--- a/internal/apiserver/pgadminservice/pgadminservice.go
+++ /dev/null
@@ -1,193 +0,0 @@
-package pgadminservice
-
-/*
-Copyright 2020 Crunchy Data Solutions, Inc.
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "net/http" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" -) - -// CreatePgAdminHandler ... -// pgo create pgadmin -func CreatePgAdminHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgadmin pgadminservice pgadmin-post - /*``` - Create a pgAdmin instance - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create PgAdmin Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreatePgAdminRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreatePgAdminResponse" - var ns string - log.Debug("pgadminservice.CreatePgAdminHandler called") - username, err := apiserver.Authn(apiserver.CREATE_PGADMIN_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - var request msgs.CreatePgAdminRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - resp := msgs.CreatePgAdminResponse{} - - if request.ClientVersion != msgs.PGO_VERSION { - resp.SetError(apiserver.VERSION_MISMATCH_ERROR) - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.SetError(err.Error()) - json.NewEncoder(w).Encode(resp) - return - } - - resp = CreatePgAdmin(&request, ns, username) - json.NewEncoder(w).Encode(resp) - -} - -// DeletePgAdminHandler ... 
-// pgo delete pgadmin
-func DeletePgAdminHandler(w http.ResponseWriter, r *http.Request) {
-	// swagger:operation DELETE /pgadmin pgadminservice pgadmin-delete
-	/*```
-	Delete pgadmin from a cluster
-	*/
-	// ---
-	// produces:
-	// - application/json
-	// parameters:
-	// - name: "Delete PgAdmin Request"
-	//   in: "body"
-	//   schema:
-	//     "$ref": "#/definitions/DeletePgAdminRequest"
-	// responses:
-	//   '200':
-	//     description: Output
-	//     schema:
-	//       "$ref": "#/definitions/DeletePgAdminResponse"
-	var ns string
-	log.Debug("pgadminservice.DeletePgAdminHandler called")
-	username, err := apiserver.Authn(apiserver.DELETE_PGADMIN_PERM, w, r)
-	if err != nil {
-		return
-	}
-
-	w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`)
-	w.Header().Set("Content-Type", "application/json")
-	w.WriteHeader(http.StatusOK)
-
-	var request msgs.DeletePgAdminRequest
-	_ = json.NewDecoder(r.Body).Decode(&request)
-
-	resp := msgs.DeletePgAdminResponse{}
-
-	if request.ClientVersion != msgs.PGO_VERSION {
-		resp.SetError(apiserver.VERSION_MISMATCH_ERROR)
-		json.NewEncoder(w).Encode(resp)
-		return
-	}
-
-	ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
-	if err != nil {
-		resp.SetError(err.Error())
-		json.NewEncoder(w).Encode(resp)
-		return
-	}
-
-	resp = DeletePgAdmin(&request, ns)
-	json.NewEncoder(w).Encode(resp)
-
-}
-
-// ShowPgAdminHandler is the HTTP handler to get information about a pgAdmin
-// deployment, aka `pgo show pgadmin`
-func ShowPgAdminHandler(w http.ResponseWriter, r *http.Request) {
-	// swagger:operation POST /pgadmin/show pgadminservice pgadmin-post
-	/*```
-	Show information about a pgAdmin deployment
-	*/
-	// ---
-	// produces:
-	// - application/json
-	// parameters:
-	// - name: "Show PgAdmin Information"
-	//   in: "body"
-	//   schema:
-	//     "$ref": "#/definitions/ShowPgAdminRequest"
-	// responses:
-	//   '200':
-	//     description: Output
-	//     schema:
-	//       "$ref": "#/definitions/ShowPgAdminResponse"
-	log.Debug("pgadminservice.ShowPgAdminHandler called")
-
-	// first, determine if the user is authorized to access this resource
-	username, err := apiserver.Authn(apiserver.SHOW_PGADMIN_PERM, w, r)
-	if err != nil {
-		return
-	}
-
-	w.Header().Set("Content-Type", "application/json")
-	w.WriteHeader(http.StatusOK)
-
-	// get the information that is in the request
-	var request msgs.ShowPgAdminRequest
-	_ = json.NewDecoder(r.Body).Decode(&request)
-
-	resp := msgs.ShowPgAdminResponse{}
-
-	// ensure the versions align...
-	if request.ClientVersion != msgs.PGO_VERSION {
-		resp.SetError(apiserver.VERSION_MISMATCH_ERROR)
-		json.NewEncoder(w).Encode(resp)
-		return
-	}
-
-	// ensure the namespace being used exists
-	namespace, err := apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace)
-
-	if err != nil {
-		resp.SetError(err.Error())
-		json.NewEncoder(w).Encode(resp)
-		return
-	}
-
-	// get the information about a pgAdmin deployment(s)
-	resp = ShowPgAdmin(&request, namespace)
-	json.NewEncoder(w).Encode(resp)
-
-}
diff --git a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go b/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
deleted file mode 100644
index bab470b00a..0000000000
--- a/internal/apiserver/pgbouncerservice/pgbouncerimpl.go
+++ /dev/null
@@ -1,527 +0,0 @@
-package pgbouncerservice
-
-/*
-Copyright 2018 - 2020 Crunchy Data Solutions, Inc.
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - "strings" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/api/resource" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -const pgBouncerServiceSuffix = "-pgbouncer" - -// CreatePgbouncer ... -// pgo create pgbouncer mycluster -// pgo create pgbouncer --selector=name=mycluster -func CreatePgbouncer(request *msgs.CreatePgbouncerRequest, ns, pgouser string) msgs.CreatePgbouncerResponse { - var err error - resp := msgs.CreatePgbouncerResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - // validate the CPU/Memory request parameters, if they are passed in - if err := apiserver.ValidateResourceRequestLimit(request.CPURequest, request.CPULimit, resource.Quantity{}); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - if err := apiserver.ValidateResourceRequestLimit(request.MemoryRequest, request.MemoryLimit, - apiserver.Pgo.Cluster.DefaultPgBouncerResourceMemory); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // validate the number of replicas being requested - if request.Replicas < 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf(apiserver.ErrMessageReplicas, 1) - return resp - } - - log.Debugf("createPgbouncer selector is [%s]", request.Selector) - - // try to get the list of clusters. if there is an error, put it into the - // status and return - clusterList, err := getClusterList(request.Namespace, request.Args, request.Selector) - - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - for _, cluster := range clusterList.Items { - - // check if the current cluster is not upgraded to the deployed - // Operator version. 
If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - resp.Status.Code = msgs.Error - resp.Status.Msg = cluster.Name + msgs.UpgradeError - return resp - } - - log.Debugf("adding pgbouncer to cluster [%s]", cluster.Name) - - resources := v1.ResourceList{} - limits := v1.ResourceList{} - - // Set the value that enables the pgBouncer, which is the replicas - // Set the default value, and if there is a custom number of replicas - // provided, set it to that - cluster.Spec.PgBouncer.Replicas = config.DefaultPgBouncerReplicas - - if request.Replicas > 0 { - cluster.Spec.PgBouncer.Replicas = request.Replicas - } - - // if the request has overriding CPU/memory parameters, - // these will take precedence over the defaults - if request.CPULimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.CPULimit) - limits[v1.ResourceCPU] = quantity - } - - if request.CPURequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.CPURequest) - resources[v1.ResourceCPU] = quantity - } - - if request.MemoryLimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.MemoryLimit) - limits[v1.ResourceMemory] = quantity - } - - if request.MemoryRequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.MemoryRequest) - resources[v1.ResourceMemory] = quantity - } else { - resources[v1.ResourceMemory] = apiserver.Pgo.Cluster.DefaultPgBouncerResourceMemory - } - - cluster.Spec.PgBouncer.Resources = resources - - // update the cluster CRD with these udpates. If there is an error - if _, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).Update(&cluster); err != nil { - log.Error(err) - resp.Results = append(resp.Results, err.Error()) - continue - } - - resp.Results = append(resp.Results, fmt.Sprintf("%s pgbouncer added", cluster.Name)) - } - - return resp -} - -// DeletePgbouncer ... -// pgo delete pgbouncer mycluster -// pgo delete pgbouncer --selector=name=mycluster -func DeletePgbouncer(request *msgs.DeletePgbouncerRequest, ns string) msgs.DeletePgbouncerResponse { - var err error - resp := msgs.DeletePgbouncerResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - log.Debugf("deletePgbouncer selector is [%s]", request.Selector) - - // try to get the list of clusters. if there is an error, put it into the - // status and return - clusterList, err := getClusterList(request.Namespace, request.Args, request.Selector) - - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // Return an error if any clusters identified to have pgbouncer fully deleted (as specified - // using the uninstall parameter) have standby mode enabled and the 'uninstall' option selected. - // This because while in standby mode the cluster is read-only, preventing the execution of the - // SQL required to remove pgBouncer. 
- if hasStandby, standbyClusters := apiserver.PGClusterListHasStandby(clusterList); hasStandby && - request.Uninstall { - - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("Request rejected, unable to delete pgbouncer using the "+ - "'uninstall' parameter for clusters %s: %s.", strings.Join(standbyClusters, ","), - apiserver.ErrStandbyNotAllowed.Error()) - return resp - } - - for _, cluster := range clusterList.Items { - log.Debugf("deleting pgbouncer from cluster [%s]", cluster.Name) - - // check to see if the uninstall flag was set. If it was, apply the update - // inline - if request.Uninstall { - if err := clusteroperator.UninstallPgBouncer(apiserver.Clientset, apiserver.RESTConfig, &cluster); err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Results = append(resp.Results, err.Error()) - return resp - } - } - - // Disable the pgBouncer Deploymnet, which means setting Replicas to 0 - cluster.Spec.PgBouncer.Replicas = 0 - // Set the resources/limits to their default values - cluster.Spec.PgBouncer.Resources = v1.ResourceList{} - cluster.Spec.PgBouncer.Limits = v1.ResourceList{} - - // update the cluster CRD with these udpates. If there is an error - if _, err := apiserver.Clientset.CrunchydataV1().Pgclusters(request.Namespace).Update(&cluster); err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Results = append(resp.Results, err.Error()) - return resp - } - - // follow the legacy format for returning this information - result := fmt.Sprintf("%s pgbouncer deleted", cluster.Name) - resp.Results = append(resp.Results, result) - } - - return resp - -} - -// ShowPgBouncer gets information about a PostgreSQL cluster's pgBouncer -// deployment -// -// pgo show pgbouncer -// pgo show pgbouncer --selector -func ShowPgBouncer(request *msgs.ShowPgBouncerRequest, namespace string) msgs.ShowPgBouncerResponse { - // set up a dummy response - response := msgs.ShowPgBouncerResponse{ - Results: []msgs.ShowPgBouncerDetail{}, - Status: msgs.Status{ - Code: msgs.Ok, - Msg: "", - }, - } - - log.Debugf("show pgbouncer called, cluster [%v], selector [%s]", request.ClusterNames, request.Selector) - - // try to get the list of clusters. if there is an error, put it into the - // status and return - clusterList, err := getClusterList(request.Namespace, request.ClusterNames, request.Selector) - - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // iterate through the list of clusters to get the relevant pgBouncer - // information about them - for _, cluster := range clusterList.Items { - result := msgs.ShowPgBouncerDetail{ - ClusterName: cluster.Spec.Name, - HasPgBouncer: true, - } - // first, check if the cluster has pgBouncer enabled - if !cluster.Spec.PgBouncer.Enabled() { - result.HasPgBouncer = false - response.Results = append(response.Results, result) - continue - } - - // only set the pgBouncer user if we know this is a pgBouncer enabled - // cluster...even though, yes, this is a constant - result.Username = crv1.PGUserPgBouncer - - // set the pgBouncer service information on this record - setPgBouncerServiceDetail(cluster, &result) - - // get the user information about the pgBouncer deployment - setPgBouncerPasswordDetail(cluster, &result) - - // append the result to the list - response.Results = append(response.Results, result) - } - - return response -} - -// UpdatePgBouncer updates a cluster's pgBouncer deployment based on the -// parameters passed in. 
This includes: -// -// - password rotation -// - updating CPU/memory resources -func UpdatePgBouncer(request *msgs.UpdatePgBouncerRequest, namespace, pgouser string) msgs.UpdatePgBouncerResponse { - // set up a dummy response - response := msgs.UpdatePgBouncerResponse{ - // Results: []msgs.ShowPgBouncerDetail{}, - Status: msgs.Status{ - Code: msgs.Ok, - Msg: "", - }, - } - - // validate the CPU/Memory parameters, if they are passed in - zeroQuantity := resource.Quantity{} - - if err := apiserver.ValidateResourceRequestLimit(request.CPURequest, request.CPULimit, zeroQuantity); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // Don't check the default value as pgBouncer is already deployed - if err := apiserver.ValidateResourceRequestLimit(request.MemoryRequest, request.MemoryLimit, zeroQuantity); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // validate the number of replicas being requested - if request.Replicas < 0 { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf(apiserver.ErrMessageReplicas, 1) - return response - } - - log.Debugf("update pgbouncer called, cluster [%v], selector [%s]", request.ClusterNames, request.Selector) - - // try to get the list of clusters. if there is an error, put it into the - // status and return - clusterList, err := getClusterList(request.Namespace, request.ClusterNames, request.Selector) - - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // Return an error if any clusters selected to have pgbouncer updated have standby mode enabled. - // This is because while in standby mode the cluster is read-only, preventing the execution of the - // SQL required to update pgbouncer. 
- if hasStandby, standbyClusters := apiserver.PGClusterListHasStandby(clusterList); hasStandby { - - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Request rejected, unable to update pgbouncer for "+ - "clusters %s: %s.", strings.Join(standbyClusters, ","), - apiserver.ErrStandbyNotAllowed.Error()) - return response - } - - // iterate through the list of clusters to get the relevant pgBouncer - // information about them - for _, cluster := range clusterList.Items { - result := msgs.UpdatePgBouncerDetail{ - ClusterName: cluster.Spec.Name, - HasPgBouncer: true, - } - - // first, check if the cluster has pgBouncer enabled - if !cluster.Spec.PgBouncer.Enabled() { - result.HasPgBouncer = false - response.Results = append(response.Results, result) - continue - } - - // if we are rotating the password, perform the request inline - if request.RotatePassword { - if err := clusteroperator.RotatePgBouncerPassword(apiserver.Clientset, apiserver.RESTConfig, &cluster); err != nil { - log.Error(err) - result.Error = true - result.ErrorMessage = err.Error() - response.Results = append(response.Results, result) - } - } - - // ensure the Resources/Limits are non-nil - if cluster.Spec.PgBouncer.Resources == nil { - cluster.Spec.PgBouncer.Resources = v1.ResourceList{} - } - - if cluster.Spec.PgBouncer.Limits == nil { - cluster.Spec.PgBouncer.Limits = v1.ResourceList{} - } - - // if the request has overriding CPU/Memory parameters, - // add them to the cluster's pgbouncer resource list - if request.CPULimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.CPULimit) - cluster.Spec.PgBouncer.Limits[v1.ResourceCPU] = quantity - } - - if request.CPURequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.CPURequest) - cluster.Spec.PgBouncer.Resources[v1.ResourceCPU] = quantity - } - - if request.MemoryLimit != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.MemoryLimit) - cluster.Spec.PgBouncer.Limits[v1.ResourceMemory] = quantity - } - - if request.MemoryRequest != "" { - // as this was already validated, we can ignore the error - quantity, _ := resource.ParseQuantity(request.MemoryRequest) - cluster.Spec.PgBouncer.Resources[v1.ResourceMemory] = quantity - } - - // apply the replica count number if there is a change, i.e. 
replicas is not - // 0 - if request.Replicas > 0 { - cluster.Spec.PgBouncer.Replicas = request.Replicas - } - - if _, err := apiserver.Clientset.CrunchydataV1().Pgclusters(cluster.Namespace).Update(&cluster); err != nil { - log.Error(err) - result.Error = true - result.ErrorMessage = err.Error() - response.Results = append(response.Results, result) - continue - } - - // append the result to the list - response.Results = append(response.Results, result) - } - - return response -} - -// getClusterList tries to return a list of clusters based on either having an -// argument list of cluster names, or a Kubernetes selector -func getClusterList(namespace string, clusterNames []string, selector string) (crv1.PgclusterList, error) { - clusterList := crv1.PgclusterList{} - - // see if there are any values in the cluster name list or in the selector - // if nothing exists, return an error - if len(clusterNames) == 0 && selector == "" { - err := fmt.Errorf("either a list of cluster names or a selector needs to be supplied for this comment") - return clusterList, err - } - - // try to build the cluster list based on either the selector or the list - // of arguments...or both. First, start with the selector - if selector != "" { - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).List(metav1.ListOptions{LabelSelector: selector}) - - // if there is an error, return here with an empty cluster list - if err != nil { - return crv1.PgclusterList{}, err - } - clusterList = *cl - } - - // now try to get clusters based specific cluster names - for _, clusterName := range clusterNames { - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - - // if there is an error, capture it here and return here with an empty list - if err != nil { - return crv1.PgclusterList{}, err - } - - // if successful, append to the cluster list - clusterList.Items = append(clusterList.Items, *cluster) - } - - log.Debugf("clusters founds: [%d]", len(clusterList.Items)) - - // if after all this, there are no clusters found, return an error - if len(clusterList.Items) == 0 { - err := fmt.Errorf("no clusters found") - return clusterList, err - } - - // all set! return the cluster list with error - return clusterList, nil -} - -// setPgBouncerPasswordDetail applies the password that is used by the pgbouncer -// service account -func setPgBouncerPasswordDetail(cluster crv1.Pgcluster, result *msgs.ShowPgBouncerDetail) { - pgBouncerSecretName := util.GeneratePgBouncerSecretName(cluster.Spec.Name) - - // attempt to get the secret, but only get the password - password, err := util.GetPasswordFromSecret(apiserver.Clientset, - cluster.Spec.Namespace, pgBouncerSecretName) - - if err != nil { - log.Warn(err) - } - - // and set the password. Easy! - result.Password = password -} - -// setPgBouncerServiceDetail applies the information about the pgBouncer service -// to the result for the pgBouncer show -func setPgBouncerServiceDetail(cluster crv1.Pgcluster, result *msgs.ShowPgBouncerDetail) { - // get the service information about the pgBouncer deployment - selector := fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, cluster.Spec.Name) - - // have to go through a bunch of services because "current design" - services, err := apiserver.Clientset. - CoreV1().Services(cluster.Spec.Namespace). 
- List(metav1.ListOptions{LabelSelector: selector}) - - // if there is an error, return without making any adjustments - if err != nil { - log.Warn(err) - return - } - - log.Debugf("cluster [%s] has [%d] services", cluster.Spec.Name, len(services.Items)) - - // adding the service information was borrowed from the ShowCluster - // resource - for _, service := range services.Items { - // if this service is not for pgBouncer, then skip - if !strings.HasSuffix(service.Name, pgBouncerServiceSuffix) { - continue - } - - // this is the pgBouncer service! - result.ServiceClusterIP = service.Spec.ClusterIP - result.ServiceName = service.Name - - // try to get the exterinal IP based on the formula used in show cluster - if len(service.Spec.ExternalIPs) > 0 { - result.ServiceExternalIP = service.Spec.ExternalIPs[0] - } - - if len(service.Status.LoadBalancer.Ingress) > 0 { - result.ServiceExternalIP = service.Status.LoadBalancer.Ingress[0].IP - } - } -} diff --git a/internal/apiserver/pgbouncerservice/pgbouncerservice.go b/internal/apiserver/pgbouncerservice/pgbouncerservice.go deleted file mode 100644 index 969aabd205..0000000000 --- a/internal/apiserver/pgbouncerservice/pgbouncerservice.go +++ /dev/null @@ -1,294 +0,0 @@ -package pgbouncerservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -// CreatePgbouncerHandler ... 
-// pgo create pgbouncer -func CreatePgbouncerHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgbouncer pgbouncerservice pgbouncer-post - /*``` - Create a pgbouncer - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create Pgbouncer Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreatePgbouncerRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreatePgbouncerResponse" - var ns string - log.Debug("pgbouncerservice.CreatePgbouncerHandler called") - username, err := apiserver.Authn(apiserver.CREATE_PGBOUNCER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - var request msgs.CreatePgbouncerRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - resp := msgs.CreatePgbouncerResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - resp = CreatePgbouncer(&request, ns, username) - json.NewEncoder(w).Encode(resp) - -} - -/* The delete pgboucner handler is setup to be used by two different routes. To keep -the documentation consistent with the API this endpoint is documented along with the -/pgbouncer (DELETE) enpoint. This endpoint should be deprecated in future API versions. -*/ -// swagger:operation DELETE /pgbouncer pgbouncerservice pgbouncer-delete -/*``` -Delete a pgbouncer from a cluster -*/ -// --- -// produces: -// - application/json -// parameters: -// - name: "Delete PgBouncer Request" -// in: "body" -// schema: -// "$ref": "#/definitions/DeletePgbouncerRequest" -// responses: -// '200': -// description: Output -// schema: -// "$ref": "#/definitions/DeletePgbouncerResponse" -// DeletePgbouncerHandler ... 
-// pgo delete pgbouncer -func DeletePgbouncerHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation DELETE /pgbouncer pgbouncerservice pgbouncer-delete - /*``` - Delete a pgbouncer from a cluster - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Delete PgBouncer Request" - // in: "body" - // schema: - // "$ref": "#/definitions/DeletePgbouncerRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/DeletePgbouncerResponse" - var ns string - log.Debug("pgbouncerservice.DeletePgbouncerHandler called") - username, err := apiserver.Authn(apiserver.DELETE_PGBOUNCER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - var request msgs.DeletePgbouncerRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - resp := msgs.DeletePgbouncerResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - resp = DeletePgbouncer(&request, ns) - json.NewEncoder(w).Encode(resp) - -} - -// ShowPgBouncerHandler is the HTTP handler to get information about a pgBouncer -// deployment, aka `pgo show pgbouncer` -func ShowPgBouncerHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgbouncer/show pgbouncerservice pgbouncer-post - /*``` - Show information about a pgBouncer deployment - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Show PGBouncer Information" - // in: "body" - // schema: - // "$ref": "#/definitions/ShowPgBouncerRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowPgBouncerResponse" - log.Debug("pgbouncerservice.ShowPgbouncerHandler called") - - // first, determine if the user is authorized to access this resource - username, err := apiserver.Authn(apiserver.SHOW_PGBOUNCER_PERM, w, r) - - if err != nil { - return - } - - w.WriteHeader(http.StatusOK) - w.Header().Set("Content-Type", "application/json") - - // get the information that is in the request - var request msgs.ShowPgBouncerRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - // ensure the versions align... 
- if request.ClientVersion != msgs.PGO_VERSION { - response := msgs.ShowPgBouncerResponse{ - Status: msgs.Status{ - Code: msgs.Error, - Msg: apiserver.VERSION_MISMATCH_ERROR, - }, - } - json.NewEncoder(w).Encode(response) - return - } - - // ensure the namespace being used exists - namespace, err := apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - - if err != nil { - response := msgs.ShowPgBouncerResponse{ - Status: msgs.Status{ - Code: msgs.Error, - Msg: err.Error(), - }, - } - json.NewEncoder(w).Encode(response) - return - } - - // get the information about a pgbouncer deployment(s) - response := ShowPgBouncer(&request, namespace) - json.NewEncoder(w).Encode(response) - -} - -// UpdatePgBouncerHandler is the HTTP handler to perform update tasks on a -// pgbouncer instance, such as rotating the password -func UpdatePgBouncerHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation PUT /pgbouncer pgbouncerservice pgbouncer-put - /*``` - Update a pgBouncer cluster, e.g. rotate the password - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Update PGBouncer" - // in: "body" - // schema: - // "$ref": "#/definitions/UpdatePgBouncerRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/UpdatePgBouncerResponse" - log.Debug("pgbouncerservice.UpdatePgbouncerHandler called") - - // first, determine if the user is authorized to access this resource - username, err := apiserver.Authn(apiserver.UPDATE_PGBOUNCER_PERM, w, r) - - if err != nil { - return - } - - w.WriteHeader(http.StatusOK) - w.Header().Set("Content-Type", "application/json") - - // get the information that is in the request - var request msgs.UpdatePgBouncerRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - // ensure the versions align... - if request.ClientVersion != msgs.PGO_VERSION { - response := msgs.UpdatePgBouncerResponse{ - Status: msgs.Status{ - Code: msgs.Error, - Msg: apiserver.VERSION_MISMATCH_ERROR, - }, - } - json.NewEncoder(w).Encode(response) - return - } - - // ensure the namespace being used exists - namespace, err := apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - - if err != nil { - response := msgs.UpdatePgBouncerResponse{ - Status: msgs.Status{ - Code: msgs.Error, - Msg: err.Error(), - }, - } - json.NewEncoder(w).Encode(response) - return - } - - // get the information about a pgbouncer deployment(s) - response := UpdatePgBouncer(&request, namespace, username) - json.NewEncoder(w).Encode(response) -} diff --git a/internal/apiserver/pgdumpservice/pgdumpimpl.go b/internal/apiserver/pgdumpservice/pgdumpimpl.go deleted file mode 100644 index ecad14f7fd..0000000000 --- a/internal/apiserver/pgdumpservice/pgdumpimpl.go +++ /dev/null @@ -1,554 +0,0 @@ -package pgdumpservice - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "errors" - "fmt" - "strconv" - "strings" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/apiserver/backupoptions" - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -const pgDumpCommand = "pgdump" -const pgDumpInfoCommand = "info" -const pgDumpTaskExtension = "-pgdump" -const pgDumpJobExtension = "-pgdump-job" - -// CreateBackup ... -// pgo backup mycluster -// pgo backup --selector=name=mycluster -func CreatepgDump(request *msgs.CreatepgDumpBackupRequest, ns string) msgs.CreatepgDumpBackupResponse { - - resp := msgs.CreatepgDumpBackupResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - // var newInstance *crv1.Pgtask - - log.Debug("CreatePgDump storage config... " + request.StorageConfig) - if request.StorageConfig != "" { - if apiserver.IsValidStorageName(request.StorageConfig) == false { - log.Debug("CreateBackup sc error is found " + request.StorageConfig) - resp.Status.Code = msgs.Error - resp.Status.Msg = request.StorageConfig + " Storage config was not found " - return resp - } - } - - if request.BackupOpts != "" { - err := backupoptions.ValidateBackupOpts(request.BackupOpts, request) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - if request.Selector != "" { - //use the selector instead of an argument list to filter on - - clusterList, err := apiserver.Clientset. - CrunchydataV1().Pgclusters(ns). - List(metav1.ListOptions{LabelSelector: request.Selector}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - if len(clusterList.Items) == 0 { - log.Debug("no clusters found") - resp.Results = append(resp.Results, "no clusters found with that selector") - return resp - } else { - newargs := make([]string, 0) - for _, cluster := range clusterList.Items { - newargs = append(newargs, cluster.Spec.Name) - } - request.Args = newargs - } - - } - - for _, clusterName := range request.Args { - log.Debugf("create pgdump called for %s", clusterName) - taskName := "backup-" + clusterName + pgDumpTaskExtension - - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(clusterName, metav1.GetOptions{}) - if kerrors.IsNotFound(err) { - resp.Status.Code = msgs.Error - resp.Status.Msg = clusterName + " was not found, verify cluster name" - return resp - } else if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // check if the current cluster is not upgraded to the deployed - // Operator version. If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - resp.Status.Code = msgs.Error - resp.Status.Msg = cluster.Name + msgs.UpgradeError - return resp - } - - deletePropagation := metav1.DeletePropagationForeground - apiserver.Clientset. - BatchV1().Jobs(ns). 
- Delete(clusterName+pgDumpJobExtension, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - - // error if the task already exists - _, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Get(taskName, metav1.GetOptions{}) - if kerrors.IsNotFound(err) { - log.Debugf("pgdump pgtask %s was not found so we will create it", taskName) - } else if err != nil { - - resp.Results = append(resp.Results, "error getting pgtask for "+taskName) - break - } else { - - log.Debugf("pgtask %s was found so we will recreate it", taskName) - //remove the existing pgtask - err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(taskName, &metav1.DeleteOptions{}) - - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - //get pod name from cluster - // var podname, deployName string - var podname string - podname, err = getPrimaryPodName(cluster, ns) - - if err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // where all the magic happens about the task. - // TODO: Needs error handling for invalid parameters in the request - theTask := buildPgTaskForDump(clusterName, taskName, crv1.PgtaskpgDump, podname, "database", request) - - _, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(theTask) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - resp.Results = append(resp.Results, "created Pgtask "+taskName) - - } - - return resp -} - -// ShowpgDump ... -func ShowpgDump(clusterName string, selector string, ns string) msgs.ShowBackupResponse { - var err error - - response := msgs.ShowBackupResponse{ - BackupList: msgs.PgbackupList{ - Items: []msgs.Pgbackup{}, - }, - Status: msgs.Status{ - Code: msgs.Ok, - Msg: "", - }, - } - - if selector == "" && clusterName == "all" { - // leave selector empty, retrieves all clusters. - } else { - if selector == "" { - selector = "name=" + clusterName - } - } - - //get a list of all clusters - clusterList, err := apiserver.Clientset. - CrunchydataV1().Pgclusters(ns). 
- List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - log.Debugf("clusters found len is %d\n", len(clusterList.Items)) - - for _, c := range clusterList.Items { - - if err != nil { - log.Error(err) - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - pgTaskName := "backup-" + c.Name + pgDumpTaskExtension - - backupItem, error := getPgBackupForTask(c.Name, pgTaskName, ns) - - if backupItem != nil { - log.Debugf("pgTask %s was found", pgTaskName) - response.BackupList.Items = append(response.BackupList.Items, *backupItem) - - } else if error != nil { - log.Debugf("pgTask %s was not found, error", pgTaskName) - response.Status.Code = msgs.Error - response.Status.Msg = error.Error() - - } else { - // nothing found, no error - log.Debugf("pgTask %s not found, no erros", pgTaskName) - response.Status.Code = msgs.Ok - response.Status.Msg = fmt.Sprintf("pgDump %s not found.", pgTaskName) - } - - } - - return response - -} - -// builds out a pgTask structure that can be handed to kube -func buildPgTaskForDump(clusterName, taskName, action, podName, containerName string, - request *msgs.CreatepgDumpBackupRequest) *crv1.Pgtask { - - var newInstance *crv1.Pgtask - var storageSpec crv1.PgStorageSpec - var pvcName string - - backupUser := clusterName + "-postgres-secret" - - if request.StorageConfig != "" { - storageSpec, _ = apiserver.Pgo.GetStorageSpec(request.StorageConfig) - } else { - storageSpec, _ = apiserver.Pgo.GetStorageSpec(apiserver.Pgo.BackupStorage) - } - - // specify PVC name if not set by user. - if len(request.PVCName) > 0 { - pvcName = request.PVCName - } else { - // Set the default PVC name using the pgcluster name and the - // database name. For example, a pgcluster 'mycluster' with - // a databsae 'postgres' would have a PVC named - // backup-mycluster-pgdump-postgres-pvc - pvcName = taskName + "-" + request.PGDumpDB + "-pvc" - } - - // get dumpall flag, separate from dumpOpts, validate options - dumpAllFlag, dumpOpts := parseOptionFlags(request.BackupOpts) - - spec := crv1.PgtaskSpec{} - - spec.Name = taskName - spec.TaskType = crv1.PgtaskpgDump - spec.Parameters = make(map[string]string) - spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName - spec.Parameters[config.LABEL_PGDUMP_HOST] = clusterName // same name as service - spec.Parameters[config.LABEL_CONTAINER_NAME] = containerName // ?? - spec.Parameters[config.LABEL_PGDUMP_COMMAND] = action - spec.Parameters[config.LABEL_PGDUMP_OPTS] = dumpOpts - spec.Parameters[config.LABEL_PGDUMP_DB] = request.PGDumpDB - spec.Parameters[config.LABEL_PGDUMP_USER] = backupUser - spec.Parameters[config.LABEL_PGDUMP_PORT] = apiserver.Pgo.Cluster.Port - spec.Parameters[config.LABEL_PGDUMP_ALL] = strconv.FormatBool(dumpAllFlag) - spec.Parameters[config.LABEL_PVC_NAME] = pvcName - spec.Parameters[config.LABEL_CCP_IMAGE_TAG_KEY] = apiserver.Pgo.Cluster.CCPImageTag - spec.StorageSpec = storageSpec - - newInstance = &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: taskName, - }, - Spec: spec, - } - return newInstance -} - -func getDeployName(cluster *crv1.Pgcluster, ns string) (string, error) { - var depName string - - selector := config.LABEL_PG_CLUSTER + "=" + cluster.Spec.Name + "," + config.LABEL_SERVICE_NAME + "=" + cluster.Spec.Name - - deps, err := apiserver.Clientset. - AppsV1().Deployments(ns). 
- List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return depName, err - } - - if len(deps.Items) != 1 { - return depName, errors.New("error: deployment count is wrong for pgdump backup " + cluster.Spec.Name) - } - for _, d := range deps.Items { - return d.Name, err - } - - return depName, errors.New("unknown error in pgdump backup") -} - -func getPrimaryPodName(cluster *crv1.Pgcluster, ns string) (string, error) { - var podname string - - selector := config.LABEL_SERVICE_NAME + "=" + cluster.Spec.Name - - pods, err := apiserver.Clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return podname, err - } - - for _, p := range pods.Items { - if isPrimary(&p, cluster.Spec.Name) && isReady(&p) { - return p.Name, err - } - } - - return podname, errors.New("primary pod is not in Ready state") -} - -func isPrimary(pod *v1.Pod, clusterName string) bool { - if pod.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] == clusterName { - return true - } - return false - -} - -func isReady(pod *v1.Pod) bool { - readyCount := 0 - containerCount := 0 - for _, stat := range pod.Status.ContainerStatuses { - containerCount++ - if stat.Ready { - readyCount++ - } - } - if readyCount != containerCount { - return false - } - return true - -} - -// dumpAllFlag, dumpOpts = parseOptionFlags(request.BackupOpt) -func parseOptionFlags(allFlags string) (bool, string) { - dumpFlag := false - - // error = - - parsedOptions := []string{} - - options := strings.Split(allFlags, " ") - - for _, token := range options { - - // handle dump flag - if strings.Contains(token, "--dump-all") { - dumpFlag = true - } else { - parsedOptions = append(parsedOptions, token) - } - - } - - optionString := strings.Join(parsedOptions, " ") - - log.Debugf("pgdump optionFlags: %s, dumpAll: %t", optionString, dumpFlag) - - return dumpFlag, optionString - -} - -// if backup && err are nil, it simply wasn't found. Otherwise found or an error -func getPgBackupForTask(clusterName string, taskName string, ns string) (*msgs.Pgbackup, error) { - task, err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Get(taskName, metav1.GetOptions{}) - - if err == nil { - return buildPgBackupFrompgTask(task), nil - } else if kerrors.IsNotFound(err) { - // keeping in this weird old logic for the moment - return nil, nil - } - - return nil, err -} - -// converts pgTask to a pgBackup structure -func buildPgBackupFrompgTask(dumpTask *crv1.Pgtask) *msgs.Pgbackup { - - backup := msgs.Pgbackup{} - - spec := dumpTask.Spec - - backup.Name = spec.Name - backup.CreationTimestamp = dumpTask.ObjectMeta.CreationTimestamp.String() - backup.BackupStatus = spec.Status - backup.CCPImageTag = spec.Parameters[config.LABEL_CCP_IMAGE_TAG_KEY] - backup.BackupHost = spec.Parameters[config.LABEL_PGDUMP_HOST] - backup.BackupUserSecret = spec.Parameters[config.LABEL_PGDUMP_USER] - backup.BackupPort = spec.Parameters[config.LABEL_PGDUMP_PORT] - backup.BackupPVC = spec.Parameters[config.LABEL_PVC_NAME] - backup.StorageSpec.Size = dumpTask.Spec.StorageSpec.Size - backup.StorageSpec.AccessMode = dumpTask.Spec.StorageSpec.AccessMode - - // if dump-all flag is set, prepend it to options string since it was separated out before processing. - if spec.Parameters[config.LABEL_PGDUMP_ALL] == "true" { - backup.BackupOpts = "--dump-all " + spec.Parameters[config.LABEL_PGDUMP_OPTS] - } else { - backup.BackupOpts = spec.Parameters[config.LABEL_PGDUMP_OPTS] - } - - return &backup -} - -// Restore ... 
-// pgo restore mycluster --to-cluster=restored -func Restore(request *msgs.PgRestoreRequest, ns string) msgs.PgRestoreResponse { - resp := msgs.PgRestoreResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "Restore Not Implemented" - resp.Results = make([]string, 0) - - taskName := "restore-" + request.FromCluster + pgDumpTaskExtension - - log.Debugf("Restore %v\n", request) - - if request.RestoreOpts != "" { - err := backupoptions.ValidateBackupOpts(request.RestoreOpts, request) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - _, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(request.FromCluster, metav1.GetOptions{}) - if kerrors.IsNotFound(err) { - resp.Status.Code = msgs.Error - resp.Status.Msg = request.FromCluster + " was not found, verify cluster name" - return resp - } else if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - _, err = apiserver.Clientset.CoreV1().PersistentVolumeClaims(ns).Get(request.FromPVC, metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - pgtask, err := buildPgTaskForRestore(taskName, crv1.PgtaskpgRestore, request) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - //delete any existing pgtask with the same name - err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Delete(pgtask.Name, &metav1.DeleteOptions{}) - if err != nil && !kerrors.IsNotFound(err) { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - //create a pgtask for the restore workflow - _, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(pgtask) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - resp.Results = append(resp.Results, "restore performed on "+request.FromCluster+" to "+request.FromPVC+" opts="+request.RestoreOpts+" pitr-target="+request.PITRTarget) - - return resp -} - -// builds out a pgTask structure that can be handed to kube -func buildPgTaskForRestore(taskName string, action string, request *msgs.PgRestoreRequest) (*crv1.Pgtask, error) { - - var newInstance *crv1.Pgtask - var storageSpec crv1.PgStorageSpec - - backupUser := request.FromCluster + "-postgres-secret" - - spec := crv1.PgtaskSpec{} - - spec.Name = taskName - spec.Namespace = request.Namespace - spec.TaskType = crv1.PgtaskpgRestore - spec.Parameters = make(map[string]string) - spec.Parameters[config.LABEL_PGRESTORE_DB] = request.PGDumpDB - spec.Parameters[config.LABEL_PGRESTORE_HOST] = request.FromCluster - spec.Parameters[config.LABEL_PGRESTORE_FROM_CLUSTER] = request.FromCluster - spec.Parameters[config.LABEL_PGRESTORE_FROM_PVC] = request.FromPVC - spec.Parameters[config.LABEL_PGRESTORE_PITR_TARGET] = request.PITRTarget - spec.Parameters[config.LABEL_PGRESTORE_OPTS] = request.RestoreOpts - spec.Parameters[config.LABEL_PGRESTORE_USER] = backupUser - spec.Parameters[config.LABEL_PGRESTORE_PITR_TARGET] = request.PITRTarget - - spec.Parameters[config.LABEL_PGRESTORE_COMMAND] = action - - spec.Parameters[config.LABEL_PGRESTORE_PORT] = apiserver.Pgo.Cluster.Port - spec.Parameters[config.LABEL_CCP_IMAGE_TAG_KEY] = apiserver.Pgo.Cluster.CCPImageTag - - // validate & parse nodeLabel if exists - if request.NodeLabel != "" { - if err := apiserver.ValidateNodeLabel(request.NodeLabel); err != nil { - return nil, err - } - - parts := strings.Split(request.NodeLabel, "=") - 
spec.Parameters[config.LABEL_NODE_LABEL_KEY] = parts[0] - spec.Parameters[config.LABEL_NODE_LABEL_VALUE] = parts[1] - - log.Debug("Restore node labels used from user entered flag") - } - - spec.StorageSpec = storageSpec - - newInstance = &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: taskName, - }, - Spec: spec, - } - return newInstance, nil -} diff --git a/internal/apiserver/pgdumpservice/pgdumpservice.go b/internal/apiserver/pgdumpservice/pgdumpservice.go deleted file mode 100644 index 755a9bbd98..0000000000 --- a/internal/apiserver/pgdumpservice/pgdumpservice.go +++ /dev/null @@ -1,209 +0,0 @@ -package pgdumpservice - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/gorilla/mux" - log "github.com/sirupsen/logrus" - "net/http" -) - -// BackupHandler ... -// pgo backup --backup-type=pgdump mycluster -func BackupHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgdumpbackup pgdumpservice pgdumpbackup - /*``` - Backup a cluster using pgdump - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create pgDump Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreatepgDumpBackupRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreatepgDumpBackupResponse" - var ns string - log.Debug("pgdumpservice.CreatepgDumpHandlerBackupHandler called") - - var request msgs.CreatepgDumpBackupRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.CREATE_DUMP_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.CreatepgDumpBackupResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = CreatepgDump(&request, ns) - json.NewEncoder(w).Encode(resp) -} - -// ShowpgDumpHandler ... 
-// returns a ShowpgDumpResponse -func ShowDumpHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /pgdump/{name} pgdumpservice pgdump-name - /*``` - Show backups taken with pgdump - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "name" - // description: "Cluster Name" - // in: "path" - // type: "string" - // required: true - // - name: "version" - // description: "Client Version" - // in: "path" - // type: "string" - // required: true - // - name: "namespace" - // description: "Namespace" - // in: "path" - // type: "string" - // required: true - // - name: "selector" - // description: "Selector" - // in: "path" - // type: "string" - // required: true - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowBackupResponse" - var ns string - vars := mux.Vars(r) - - clustername := vars[config.LABEL_NAME] - - clientVersion := r.URL.Query().Get(config.LABEL_VERSION) - namespace := r.URL.Query().Get(config.LABEL_NAMESPACE) - selector := r.URL.Query().Get(config.LABEL_SELECTOR) - - log.Debugf("ShowDumpHandler parameters version [%s] namespace [%s] selector [%s] name [%s]", clientVersion, namespace, selector, clustername) - - username, err := apiserver.Authn(apiserver.SHOW_BACKUP_PERM, w, r) - if err != nil { - return - } - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debug("pgdumpservice.pgdumpHandler GET called") - resp := msgs.ShowBackupResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = ShowpgDump(clustername, selector, ns) - json.NewEncoder(w).Encode(resp) - -} - -// RestoreHandler ... -// pgo restore mycluster --restore-type=pgdump --to-cluster=restored -func RestoreHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /restore pgdumpservice restore - /*``` - Restore a cluster with pgrestore. 
This endpoint is used to restore backups taken with pgdump - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Restore Request" - // in: "body" - // schema: - // "$ref": "#/definitions/PgRestoreRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/PgRestoreResponse" - var ns string - - log.Debug("pgdumpservice.RestoreHandler called") - - var request msgs.PgRestoreRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.RESTORE_DUMP_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.PgRestoreResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - resp = Restore(&request, ns) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - } - - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/pgoroleservice/pgoroleimpl.go b/internal/apiserver/pgoroleservice/pgoroleimpl.go deleted file mode 100644 index 633d0c3660..0000000000 --- a/internal/apiserver/pgoroleservice/pgoroleimpl.go +++ /dev/null @@ -1,313 +0,0 @@ -package pgoroleservice - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "errors" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/apiserver/pgouserservice" - "github.com/crunchydata/postgres-operator/internal/config" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/crunchydata/postgres-operator/pkg/events" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" -) - -// CreatePgorole ... 
-func CreatePgorole(clientset kubernetes.Interface, createdBy string, request *msgs.CreatePgoroleRequest) msgs.CreatePgoroleResponse { - - log.Debugf("CreatePgorole %v", request) - resp := msgs.CreatePgoroleResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - - err := validPermissions(request.PgorolePermissions) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - err = createSecret(clientset, createdBy, request.PgoroleName, request.PgorolePermissions) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - //publish event - topics := make([]string, 1) - topics[0] = events.EventTopicPGOUser - - f := events.EventPGOCreateRoleFormat{ - EventHeader: events.EventHeader{ - Namespace: apiserver.PgoNamespace, - Username: createdBy, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventPGOCreateRole, - }, - CreatedRolename: request.PgoroleName, - } - - err = events.Publish(f) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - return resp - -} - -// ShowPgorole ... -func ShowPgorole(clientset kubernetes.Interface, request *msgs.ShowPgoroleRequest) msgs.ShowPgoroleResponse { - resp := msgs.ShowPgoroleResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.RoleInfo = make([]msgs.PgoroleInfo, 0) - - selector := config.LABEL_PGO_PGOROLE + "=true" - if request.AllFlag { - secrets, err := clientset. - CoreV1().Secrets(apiserver.PgoNamespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - for _, s := range secrets.Items { - info := msgs.PgoroleInfo{} - info.Name = s.ObjectMeta.Labels[config.LABEL_ROLENAME] - info.Permissions = string(s.Data["permissions"]) - resp.RoleInfo = append(resp.RoleInfo, info) - } - } else { - for _, v := range request.PgoroleName { - info := msgs.PgoroleInfo{} - secretName := "pgorole-" + v - s, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Get(secretName, metav1.GetOptions{}) - - if err != nil { - info.Name = v + " was not found" - info.Permissions = "" - } else { - info.Name = v - info.Permissions = string(s.Data["permissions"]) - } - resp.RoleInfo = append(resp.RoleInfo, info) - } - } - - return resp - -} - -// DeletePgorole ... -func DeletePgorole(clientset kubernetes.Interface, deletedBy string, request *msgs.DeletePgoroleRequest) msgs.DeletePgoroleResponse { - resp := msgs.DeletePgoroleResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - for _, v := range request.PgoroleName { - secretName := "pgorole-" + v - log.Debugf("DeletePgorole %s deleted by %s", secretName, deletedBy) - - // try to see if a secret exists for this pgorole. If it does not, continue - // on - if _, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Get(secretName, metav1.GetOptions{}); err != nil { - resp.Results = append(resp.Results, secretName+" not found") - continue - } - - // attempt to delete the pgorole secret. if it cannot be deleted, move on - if err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Delete(secretName, &metav1.DeleteOptions{}); err != nil { - resp.Results = append(resp.Results, "error deleting secret "+secretName) - continue - } - - // this was successful - resp.Results = append(resp.Results, "deleted role "+v) - - // ensure the pgorole is deleted from the various users that may have this - // role. 
Though it may be odd to return at this point, this is part of the - // legacy of this function and is kept in for those purposes - if err := deleteRoleFromUsers(clientset, v); err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - } - - return resp - -} - -func UpdatePgorole(clientset kubernetes.Interface, updatedBy string, request *msgs.UpdatePgoroleRequest) msgs.UpdatePgoroleResponse { - - resp := msgs.UpdatePgoroleResponse{} - resp.Status.Msg = "" - resp.Status.Code = msgs.Ok - - err := validPermissions(request.PgorolePermissions) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - secretName := "pgorole-" + request.PgoroleName - - secret, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Get(secretName, metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - secret.ObjectMeta.Labels[config.LABEL_PGO_UPDATED_BY] = updatedBy - secret.Data["rolename"] = []byte(request.PgoroleName) - secret.Data["permissions"] = []byte(request.PgorolePermissions) - - _, err = clientset.CoreV1().Secrets(apiserver.PgoNamespace).Update(secret) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - //publish event - topics := make([]string, 1) - topics[0] = events.EventTopicPGOUser - - f := events.EventPGOUpdateRoleFormat{ - EventHeader: events.EventHeader{ - Namespace: apiserver.PgoNamespace, - Username: updatedBy, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventPGOUpdateRole, - }, - UpdatedRolename: request.PgoroleName, - } - - err = events.Publish(f) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - return resp - -} - -func createSecret(clientset kubernetes.Interface, createdBy, pgorolename, permissions string) error { - - var enRolename = pgorolename - - secretName := "pgorole-" + pgorolename - - // if this secret is found (i.e. no errors returned) return here - if _, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Get(secretName, metav1.GetOptions{}); err == nil { - return nil - } - - secret := v1.Secret{} - secret.Name = secretName - secret.ObjectMeta.Labels = make(map[string]string) - secret.ObjectMeta.Labels[config.LABEL_PGO_CREATED_BY] = createdBy - secret.ObjectMeta.Labels[config.LABEL_ROLENAME] = pgorolename - secret.ObjectMeta.Labels[config.LABEL_PGO_PGOROLE] = "true" - secret.ObjectMeta.Labels[config.LABEL_VENDOR] = "crunchydata" - secret.Data = make(map[string][]byte) - secret.Data["rolename"] = []byte(enRolename) - secret.Data["permissions"] = []byte(permissions) - - _, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Create(&secret) - return err -} - -func validPermissions(perms string) error { - var err error - fields := strings.Split(perms, ",") - - for _, v := range fields { - if apiserver.PermMap[strings.TrimSpace(v)] == "" && strings.TrimSpace(v) != "*" { - return errors.New(v + " not a valid Permission") - } - } - - return err -} - -func deleteRoleFromUsers(clientset kubernetes.Interface, roleName string) error { - - //get pgouser Secrets - - selector := config.LABEL_PGO_PGOUSER + "=true" - pgouserSecrets, err := clientset. - CoreV1().Secrets(apiserver.PgoNamespace). 
- List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error("could not get pgouser Secrets") - return err - } - - for _, s := range pgouserSecrets.Items { - rolesString := string(s.Data[pgouserservice.MAP_KEY_ROLES]) - roles := strings.Split(rolesString, ",") - resultRoles := make([]string, 0) - - var rolesUpdated bool - for _, r := range roles { - if r != roleName { - resultRoles = append(resultRoles, r) - } else { - rolesUpdated = true - } - } - - //update the pgouser Secret removing any roles as necessary - if rolesUpdated { - var resultingRoleString string - - for i := 0; i < len(resultRoles); i++ { - if i == len(resultRoles)-1 { - resultingRoleString = resultingRoleString + resultRoles[i] - } else { - resultingRoleString = resultingRoleString + resultRoles[i] + "," - } - } - - s.Data[pgouserservice.MAP_KEY_ROLES] = []byte(resultingRoleString) - _, err = clientset.CoreV1().Secrets(apiserver.PgoNamespace).Update(&s) - if err != nil { - return err - } - - } - } - return err -} diff --git a/internal/apiserver/pgoroleservice/pgoroleimpl_test.go b/internal/apiserver/pgoroleservice/pgoroleimpl_test.go deleted file mode 100644 index 98ad0c61f2..0000000000 --- a/internal/apiserver/pgoroleservice/pgoroleimpl_test.go +++ /dev/null @@ -1,55 +0,0 @@ -package pgoroleservice - -import ( - "fmt" - "testing" - - "github.com/crunchydata/postgres-operator/internal/apiserver" -) - -func TestValidPermissions(t *testing.T) { - apiserver.PermMap = map[string]string{ - apiserver.CREATE_CLUSTER_PERM: "yes", - apiserver.CREATE_PGBOUNCER_PERM: "yes", - } - - t.Run("with valid permission", func(t *testing.T) { - perms := apiserver.CREATE_CLUSTER_PERM - - if err := validPermissions(perms); err != nil { - t.Errorf("%q should be a valid permission", perms) - } - }) - - t.Run("with multiple valid permissions", func(t *testing.T) { - perms := fmt.Sprintf("%s,%s", apiserver.CREATE_CLUSTER_PERM, apiserver.CREATE_PGBOUNCER_PERM) - - if err := validPermissions(perms); err != nil { - t.Errorf("%v should be a valid permission", perms) - } - }) - - t.Run("with an invalid permission", func(t *testing.T) { - perms := "bogus" - - if err := validPermissions(perms); err == nil { - t.Errorf("%q should raise an error", perms) - } - }) - - t.Run("with a mix of valid and invalid permissions", func(t *testing.T) { - perms := fmt.Sprintf("%s,%s", apiserver.CREATE_CLUSTER_PERM, "bogus") - - if err := validPermissions(perms); err == nil { - t.Errorf("%q should raise an error", perms) - } - }) - - t.Run("with *", func(t *testing.T) { - perms := "*" - - if err := validPermissions(perms); err != nil { - t.Errorf("%q should be a valid permission", perms) - } - }) -} diff --git a/internal/apiserver/pgoroleservice/pgoroleservice.go b/internal/apiserver/pgoroleservice/pgoroleservice.go deleted file mode 100644 index b3e3413e09..0000000000 --- a/internal/apiserver/pgoroleservice/pgoroleservice.go +++ /dev/null @@ -1,217 +0,0 @@ -package pgoroleservice - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - apiserver "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "k8s.io/apimachinery/pkg/util/validation" - "net/http" -) - -func CreatePgoroleHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgorolecreate pgoroleservice pgorolecreate - /*``` - Create a pgorole - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create pgorole Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreatePgoroleRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreatePgoroleResponse" - resp := msgs.CreatePgoroleResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - log.Debug("pgoroleservice.CreatePgoroleHandler called") - - var request msgs.CreatePgoroleRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - rolename, err := apiserver.Authn(apiserver.CREATE_PGOUSER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debugf("pgoroleservice.CreatePgoroleHandler got request %v", request) - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - errs := validation.IsDNS1035Label(request.PgoroleName) - if len(errs) > 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "invalid pgorole name format " + errs[0] - } else { - resp = CreatePgorole(apiserver.Clientset, rolename, &request) - } - - json.NewEncoder(w).Encode(resp) -} - -func DeletePgoroleHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgoroledelete pgoroleservice pgoroledelete - /*``` - Delete a pgorole - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Delete pgorole Request" - // in: "body" - // schema: - // "$ref": "#/definitions/DeletePgoroleRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/DeletePgoroleResponse" - var request msgs.DeletePgoroleRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - log.Debugf("DeletePgoroleHandler parameters [%v]", request) - - rolename, err := apiserver.Authn(apiserver.DELETE_PGOUSER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.DeletePgoroleResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - resp = DeletePgorole(apiserver.Clientset, rolename, &request) - - json.NewEncoder(w).Encode(resp) - -} - -func ShowPgoroleHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgoroleshow pgoroleservice pgoroleshow - /*``` - Show pgorole information - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Show pgorole Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ShowPgoroleRequest" - // responses: - // '200': - // description: 
Output - // schema: - // "$ref": "#/definitions/ShowPgoroleResponse" - var request msgs.ShowPgoroleRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - log.Debugf("ShowPgoroleHandler parameters [%v]", request) - - _, err := apiserver.Authn(apiserver.SHOW_PGOUSER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debug("pgoroleservice.ShowPgoroleHandler POST called") - resp := msgs.ShowPgoroleResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - resp = ShowPgorole(apiserver.Clientset, &request) - - json.NewEncoder(w).Encode(resp) - -} - -func UpdatePgoroleHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgoroleupdate pgoroleservice pgoroleupdate - /*``` - Delete a pgorole - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Update pgorole Request" - // in: "body" - // schema: - // "$ref": "#/definitions/UpdatePgoroleRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/UpdatePgoroleResponse" - log.Debug("pgoroleservice.UpdatePgoroleHandler called") - - var request msgs.UpdatePgoroleRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - rolename, err := apiserver.Authn(apiserver.UPDATE_PGOUSER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.UpdatePgoroleResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - resp = UpdatePgorole(apiserver.Clientset, rolename, &request) - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/pgouserservice/pgouserimpl.go b/internal/apiserver/pgouserservice/pgouserimpl.go deleted file mode 100644 index 6e0c061467..0000000000 --- a/internal/apiserver/pgouserservice/pgouserimpl.go +++ /dev/null @@ -1,338 +0,0 @@ -package pgouserservice - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "errors" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/ns" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/crunchydata/postgres-operator/pkg/events" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" -) - -const MAP_KEY_USERNAME = "username" -const MAP_KEY_PASSWORD = "password" -const MAP_KEY_ROLES = "roles" -const MAP_KEY_NAMESPACES = "namespaces" - -// CreatePgouser ... 
-func CreatePgouser(clientset kubernetes.Interface, createdBy string, request *msgs.CreatePgouserRequest) msgs.CreatePgouserResponse { - - log.Debugf("CreatePgouser %v", request) - resp := msgs.CreatePgouserResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - - err := validRoles(clientset, request.PgouserRoles) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - err = validNamespaces(request.PgouserNamespaces, request.AllNamespaces) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - err = createSecret(clientset, createdBy, request) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - if request.AllNamespaces && request.PgouserNamespaces != "" { - resp.Status.Code = msgs.Error - resp.Status.Msg = "--all-namespaces and --pgouser-namespaces are mutually exclusive." - return resp - } - - //publish event - topics := make([]string, 1) - topics[0] = events.EventTopicPGOUser - - f := events.EventPGOCreateUserFormat{ - EventHeader: events.EventHeader{ - Namespace: apiserver.PgoNamespace, - Username: createdBy, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventPGOCreateUser, - }, - CreatedUsername: request.PgouserName, - } - - err = events.Publish(f) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - return resp - -} - -// ShowPgouser ... -func ShowPgouser(clientset kubernetes.Interface, request *msgs.ShowPgouserRequest) msgs.ShowPgouserResponse { - resp := msgs.ShowPgouserResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - - selector := config.LABEL_PGO_PGOUSER + "=true" - if request.AllFlag { - secrets, err := clientset. - CoreV1().Secrets(apiserver.PgoNamespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - for _, s := range secrets.Items { - info := msgs.PgouserInfo{} - info.Username = s.ObjectMeta.Labels[config.LABEL_USERNAME] - info.Role = make([]string, 0) - info.Role = append(info.Role, string(s.Data[MAP_KEY_ROLES])) - info.Namespace = make([]string, 0) - info.Namespace = append(info.Namespace, string(s.Data[MAP_KEY_NAMESPACES])) - - resp.UserInfo = append(resp.UserInfo, info) - } - } else { - for _, v := range request.PgouserName { - secretName := "pgouser-" + v - - info := msgs.PgouserInfo{} - info.Username = v - info.Role = make([]string, 0) - info.Namespace = make([]string, 0) - - s, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Get(secretName, metav1.GetOptions{}) - - if err != nil { - info.Username = v + " was not found" - } else { - info.Username = v - info.Role = append(info.Role, string(s.Data[MAP_KEY_ROLES])) - info.Namespace = append(info.Namespace, string(s.Data[MAP_KEY_NAMESPACES])) - } - resp.UserInfo = append(resp.UserInfo, info) - } - } - - return resp - -} - -// DeletePgouser ... 
-func DeletePgouser(clientset kubernetes.Interface, deletedBy string, request *msgs.DeletePgouserRequest) msgs.DeletePgouserResponse { - resp := msgs.DeletePgouserResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - for _, v := range request.PgouserName { - secretName := "pgouser-" + v - log.Debugf("DeletePgouser %s deleted by %s", secretName, deletedBy) - - if _, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Get(secretName, metav1.GetOptions{}); err != nil { - resp.Results = append(resp.Results, secretName+" not found") - } else { - err = clientset.CoreV1().Secrets(apiserver.PgoNamespace).Delete(secretName, &metav1.DeleteOptions{}) - if err != nil { - resp.Results = append(resp.Results, "error deleting secret "+secretName) - } else { - resp.Results = append(resp.Results, "deleted pgouser "+v) - //publish event - topics := make([]string, 1) - topics[0] = events.EventTopicPGOUser - - f := events.EventPGODeleteUserFormat{ - EventHeader: events.EventHeader{ - Namespace: apiserver.PgoNamespace, - Username: deletedBy, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventPGODeleteUser, - }, - DeletedUsername: v, - } - - err = events.Publish(f) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - } - - } - } - - return resp - -} - -// UpdatePgouser - update the pgouser secret -func UpdatePgouser(clientset kubernetes.Interface, updatedBy string, request *msgs.UpdatePgouserRequest) msgs.UpdatePgouserResponse { - - resp := msgs.UpdatePgouserResponse{} - resp.Status.Msg = "" - resp.Status.Code = msgs.Ok - - secretName := "pgouser-" + request.PgouserName - - secret, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Get(secretName, metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - secret.ObjectMeta.Labels[config.LABEL_PGO_UPDATED_BY] = updatedBy - secret.Data[MAP_KEY_USERNAME] = []byte(request.PgouserName) - - if request.PgouserPassword != "" { - secret.Data[MAP_KEY_PASSWORD] = []byte(request.PgouserPassword) - } - if request.PgouserRoles != "" { - err = validRoles(clientset, request.PgouserRoles) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - secret.Data[MAP_KEY_ROLES] = []byte(request.PgouserRoles) - } - if request.PgouserNamespaces != "" { - err = validNamespaces(request.PgouserNamespaces, request.AllNamespaces) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - secret.Data[MAP_KEY_NAMESPACES] = []byte(request.PgouserNamespaces) - } else if request.AllNamespaces { - secret.Data[MAP_KEY_NAMESPACES] = []byte("") - } - - log.Info("Updating secret for: ", request.PgouserName) - _, err = clientset.CoreV1().Secrets(apiserver.PgoNamespace).Update(secret) - if err != nil { - log.Debug("Error updating pgouser secret: ", err.Error()) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - //publish event - topics := make([]string, 1) - topics[0] = events.EventTopicPGOUser - - f := events.EventPGOUpdateUserFormat{ - EventHeader: events.EventHeader{ - Namespace: apiserver.PgoNamespace, - Username: updatedBy, - Topic: topics, - EventType: events.EventPGOUpdateUser, - }, - UpdatedUsername: request.PgouserName, - } - - err = events.Publish(f) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - return resp - -} - -func 
createSecret(clientset kubernetes.Interface, createdBy string, request *msgs.CreatePgouserRequest) error { - - secretName := "pgouser-" + request.PgouserName - - // if this secret is found (no errors returned), returned here - if _, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Get(secretName, metav1.GetOptions{}); err == nil { - return nil - } - - secret := v1.Secret{} - secret.Name = secretName - secret.ObjectMeta.Labels = make(map[string]string) - secret.ObjectMeta.Labels[config.LABEL_PGO_CREATED_BY] = createdBy - secret.ObjectMeta.Labels[config.LABEL_USERNAME] = request.PgouserName - secret.ObjectMeta.Labels[config.LABEL_PGO_PGOUSER] = "true" - secret.ObjectMeta.Labels[config.LABEL_VENDOR] = "crunchydata" - secret.Data = make(map[string][]byte) - secret.Data[MAP_KEY_USERNAME] = []byte(request.PgouserName) - secret.Data[MAP_KEY_ROLES] = []byte(request.PgouserRoles) - secret.Data[MAP_KEY_NAMESPACES] = []byte(request.PgouserNamespaces) - secret.Data[MAP_KEY_PASSWORD] = []byte(request.PgouserPassword) - - _, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Create(&secret) - return err -} - -func validRoles(clientset kubernetes.Interface, roles string) error { - var err error - fields := strings.Split(roles, ",") - for _, v := range fields { - r := strings.TrimSpace(v) - secretName := "pgorole-" + r - - if _, err := clientset.CoreV1().Secrets(apiserver.PgoNamespace).Get(secretName, metav1.GetOptions{}); err != nil { - return errors.New(v + " pgorole was not found") - } - - } - - return err -} - -func validNamespaces(namespaces string, allnamespaces bool) error { - - if allnamespaces { - return nil - } - - nsSlice := strings.Split(namespaces, ",") - for i := range nsSlice { - nsSlice[i] = strings.TrimSpace(nsSlice[i]) - } - - err := ns.ValidateNamespacesWatched(apiserver.Clientset, apiserver.NamespaceOperatingMode(), - apiserver.InstallationName, nsSlice...) - if err != nil { - return err - } - - return nil -} diff --git a/internal/apiserver/pgouserservice/pgouserservice.go b/internal/apiserver/pgouserservice/pgouserservice.go deleted file mode 100644 index ccf1b1ce8f..0000000000 --- a/internal/apiserver/pgouserservice/pgouserservice.go +++ /dev/null @@ -1,217 +0,0 @@ -package pgouserservice - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "encoding/json" - apiserver "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "k8s.io/apimachinery/pkg/util/validation" - "net/http" -) - -func CreatePgouserHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgousercreate pgouserservice pgousercreate - /*``` - Create a pgouser - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create Pgouser Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreatePgouserRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreatePgouserResponse" - resp := msgs.CreatePgouserResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - log.Debug("pgouserservice.CreatePgouserHandler called") - - var request msgs.CreatePgouserRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.CREATE_PGOUSER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debugf("pgouserservice.CreatePgouserHandler got request %v", request) - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - errs := validation.IsDNS1035Label(request.PgouserName) - if len(errs) > 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "invalid pgouser name format " + errs[0] - } else { - resp = CreatePgouser(apiserver.Clientset, username, &request) - } - - json.NewEncoder(w).Encode(resp) -} - -func DeletePgouserHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgouserdelete pgouserservice pgouserdelete - /*``` - Delete a pgouser - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Delete Pgouser Request" - // in: "body" - // schema: - // "$ref": "#/definitions/DeletePgouserRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/DeletePgouserResponse" - var request msgs.DeletePgouserRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - log.Debugf("DeletePgouserHandler parameters [%v]", request) - - username, err := apiserver.Authn(apiserver.DELETE_PGOUSER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.DeletePgouserResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - resp = DeletePgouser(apiserver.Clientset, username, &request) - - json.NewEncoder(w).Encode(resp) - -} - -func ShowPgouserHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgousershow pgouserservice pgousershow - /*``` - Show pgouser information - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Show Pgouser Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ShowPgouserRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowPgouserResponse" - var request 
msgs.ShowPgouserRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - log.Debugf("ShowPgouserHandler parameters [%v]", request) - - _, err := apiserver.Authn(apiserver.SHOW_PGOUSER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - log.Debug("pgouserservice.ShowPgouserHandler POST called") - resp := msgs.ShowPgouserResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - resp = ShowPgouser(apiserver.Clientset, &request) - - json.NewEncoder(w).Encode(resp) - -} - -func UpdatePgouserHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /pgouserupdate pgouserservice pgouserupdate - /*``` - Update a pgouser - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Update Pgouser Request" - // in: "body" - // schema: - // "$ref": "#/definitions/UpdatePgouserRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/UpdatePgouserResponse" - log.Debug("pgouserservice.UpdatePgouserHandler called") - - var request msgs.UpdatePgouserRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.UPDATE_PGOUSER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.UpdatePgouserResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - resp = UpdatePgouser(apiserver.Clientset, username, &request) - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/policyservice/policyimpl.go b/internal/apiserver/policyservice/policyimpl.go deleted file mode 100644 index aff08b0e36..0000000000 --- a/internal/apiserver/policyservice/policyimpl.go +++ /dev/null @@ -1,278 +0,0 @@ -package policyservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/apiserver/labelservice" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/crunchydata/postgres-operator/pkg/events" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/apps/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// CreatePolicy ... 
-func CreatePolicy(client pgo.Interface, policyName, policyURL, policyFile, ns, pgouser string) (bool, error) { - - log.Debugf("create policy called for %s", policyName) - - // Create an instance of our CRD - spec := crv1.PgpolicySpec{} - spec.Namespace = ns - spec.Name = policyName - spec.URL = policyURL - spec.SQL = policyFile - - myLabels := make(map[string]string) - myLabels[config.LABEL_PGOUSER] = pgouser - - newInstance := &crv1.Pgpolicy{ - ObjectMeta: metav1.ObjectMeta{ - Name: policyName, - Labels: myLabels, - }, - Spec: spec, - } - - _, err := client.CrunchydataV1().Pgpolicies(ns).Create(newInstance) - - if kerrors.IsAlreadyExists(err) { - log.Debugf("pgpolicy %s was found so we will not create it", policyName) - return true, nil - } - - return false, err - -} - -// ShowPolicy ... -func ShowPolicy(client pgo.Interface, name string, allflags bool, ns string) crv1.PgpolicyList { - policyList := crv1.PgpolicyList{} - - if allflags { - //get a list of all policies - list, err := client.CrunchydataV1().Pgpolicies(ns).List(metav1.ListOptions{}) - if list != nil && err == nil { - policyList = *list - } - } else { - policy, err := client.CrunchydataV1().Pgpolicies(ns).Get(name, metav1.GetOptions{}) - if policy != nil && err == nil { - policyList.Items = []crv1.Pgpolicy{*policy} - } - } - - return policyList - -} - -// DeletePolicy ... -func DeletePolicy(client pgo.Interface, policyName, ns, pgouser string) msgs.DeletePolicyResponse { - resp := msgs.DeletePolicyResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - resp.Results = make([]string, 0) - - policyList, err := client.CrunchydataV1().Pgpolicies(ns).List(metav1.ListOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - policyFound := false - log.Debugf("deleting policy %s", policyName) - for _, policy := range policyList.Items { - if policyName == "all" || policyName == policy.Spec.Name { - //update pgpolicy with current pgouser so that - //we can create an event holding the pgouser - //that deleted the policy - policy.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser - _, err = client.CrunchydataV1().Pgpolicies(ns).Update(&policy) - - //ok, now delete the pgpolicy - policyFound = true - err = client.CrunchydataV1().Pgpolicies(ns).Delete(policy.Spec.Name, &metav1.DeleteOptions{}) - if err == nil { - msg := "deleted policy " + policy.Spec.Name - log.Debug(msg) - resp.Results = append(resp.Results, msg) - } else { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - } - } - - if !policyFound { - log.Debugf("policy %s not found", policyName) - resp.Status.Code = msgs.Error - resp.Status.Msg = "policy " + policyName + " not found" - return resp - } - - return resp - -} - -// ApplyPolicy ... -// pgo apply mypolicy --selector=name=mycluster -func ApplyPolicy(request *msgs.ApplyPolicyRequest, ns, pgouser string) msgs.ApplyPolicyResponse { - var err error - - resp := msgs.ApplyPolicyResponse{} - resp.Name = make([]string, 0) - resp.Status.Msg = "" - resp.Status.Code = msgs.Ok - - //validate policy - err = util.ValidatePolicy(apiserver.Clientset, ns, request.Name) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = "policy " + request.Name + " is not found, cancelling request" - return resp - } - - //get filtered list of Deployments - selector := request.Selector - log.Debugf("apply policy selector string=[%s]", selector) - - //get a list of all clusters - clusterList, err := apiserver.Clientset. 
- CrunchydataV1().Pgclusters(ns). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - log.Debugf("apply policy clusters found len is %d", len(clusterList.Items)) - - // Return an error if any clusters identified for the policy are in standby mode. Standby - // clusters are in read-only mode, and therefore cannot have policies applied to them - // until standby mode has been disabled. - if hasStandby, standbyClusters := apiserver.PGClusterListHasStandby(*clusterList); hasStandby { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("Request rejected, unable to load clusters %s: %s."+ - strings.Join(standbyClusters, ","), apiserver.ErrStandbyNotAllowed.Error()) - return resp - } - - var allDeployments []v1.Deployment - for _, c := range clusterList.Items { - depSelector := config.LABEL_SERVICE_NAME + "=" + c.Name - deployments, err := apiserver.Clientset. - AppsV1().Deployments(ns). - List(metav1.ListOptions{LabelSelector: depSelector}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - if len(deployments.Items) < 1 { - log.Errorf("%s did not have a deployment for some reason", c.Name) - } else { - allDeployments = append(allDeployments, deployments.Items[0]) - } - } - - if request.DryRun { - for _, d := range allDeployments { - log.Debugf("deployment : %s", d.ObjectMeta.Name) - resp.Name = append(resp.Name, d.ObjectMeta.Name) - } - return resp - } - - labels := make(map[string]string) - labels[request.Name] = "pgpolicy" - - for _, d := range allDeployments { - if d.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] != d.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] { - log.Debugf("skipping apply policy on deployment %s", d.Name) - continue - //skip non primary deployments - } - - log.Debugf("apply policy %s on deployment %s based on selector %s", request.Name, d.ObjectMeta.Name, selector) - - cl, err := apiserver.Clientset. - CrunchydataV1().Pgclusters(ns). 
- Get(d.ObjectMeta.Labels[config.LABEL_SERVICE_NAME], metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - if err := util.ExecPolicy(apiserver.Clientset, apiserver.RESTConfig, - ns, request.Name, d.ObjectMeta.Labels[config.LABEL_SERVICE_NAME], cl.Spec.Port); err != nil { - log.Error(err) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - err = util.UpdatePolicyLabels(apiserver.Clientset, d.ObjectMeta.Name, ns, labels) - if err != nil { - log.Error(err) - } - - //update the pgcluster crd labels with the new policy - err = labelservice.PatchPgcluster(map[string]string{request.Name: config.LABEL_PGPOLICY}, *cl, ns) - if err != nil { - log.Error(err) - } - - resp.Name = append(resp.Name, d.ObjectMeta.Name) - - //publish event - topics := make([]string, 1) - topics[0] = events.EventTopicPolicy - - f := events.EventApplyPolicyFormat{ - EventHeader: events.EventHeader{ - Namespace: ns, - Username: pgouser, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventApplyPolicy, - }, - Clustername: d.ObjectMeta.Labels[config.LABEL_PG_CLUSTER], - Policyname: request.Name, - } - - err = events.Publish(f) - if err != nil { - log.Error(err.Error()) - } - - } - return resp - -} diff --git a/internal/apiserver/policyservice/policyservice.go b/internal/apiserver/policyservice/policyservice.go deleted file mode 100644 index d2a3d6234f..0000000000 --- a/internal/apiserver/policyservice/policyservice.go +++ /dev/null @@ -1,280 +0,0 @@ -package policyservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "net/http" - - apiserver "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "k8s.io/apimachinery/pkg/util/validation" -) - -// CreatePolicyHandler ... 
-// pgo create policy -// parameters secretfrom -func CreatePolicyHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /policies policyservice policies - /*``` - Create a SQL policy - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create Policy Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreatePolicyRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreatePolicyResponse" - var ns string - - resp := msgs.CreatePolicyResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - log.Debug("policyservice.CreatePolicyHandler called") - - var request msgs.CreatePolicyRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.CREATE_POLICY_PERM, w, r) - if err != nil { - return - } - - log.Debugf("policyservice.CreatePolicyHandler got request %s", request.Name) - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - errs := validation.IsDNS1035Label(request.Name) - if len(errs) > 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = "invalid policy name format " + errs[0] - } else { - - found, err := CreatePolicy(apiserver.Clientset, request.Name, request.URL, request.SQL, ns, username) - if err != nil { - log.Error(err.Error()) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - } - if found { - resp.Status.Code = msgs.Error - resp.Status.Msg = "policy already exists with that name" - } - } - - json.NewEncoder(w).Encode(resp) -} - -// DeletePolicyHandler ... 
-// returns a DeletePolicyResponse -func DeletePolicyHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /policiesdelete policyservice policiesdelete - /*``` - Delete a SQL policy - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Delete Policy Request" - // in: "body" - // schema: - // "$ref": "#/definitions/DeletePolicyRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/DeletePolicyResponse" - var ns string - - var request msgs.DeletePolicyRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - policyname := request.PolicyName - clientVersion := request.ClientVersion - namespace := request.Namespace - - log.Debugf("DeletePolicyHandler parameters version [%s] name [%s] namespace [%s]", clientVersion, policyname, namespace) - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - username, err := apiserver.Authn(apiserver.DELETE_POLICY_PERM, w, r) - if err != nil { - return - } - log.Debug("policyservice.DeletePolicyHandler GET called") - resp := msgs.DeletePolicyResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - - if clientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - resp = DeletePolicy(apiserver.Clientset, policyname, ns, username) - - json.NewEncoder(w).Encode(resp) - -} - -// ShowPolicyHandler ... -// returns a ShowPolicyResponse -func ShowPolicyHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /showpolicies policyservice showpolicies - /*``` - Show policy information - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Show Policy Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ShowPolicyRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowPolicyResponse" - var ns string - - var request msgs.ShowPolicyRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - policyname := request.Policyname - - clientVersion := request.ClientVersion - namespace := request.Namespace - - log.Debugf("ShowPolicyHandler parameters version [%s] namespace [%s] name [%s]", clientVersion, namespace, policyname) - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - username, err := apiserver.Authn(apiserver.SHOW_POLICY_PERM, w, r) - if err != nil { - return - } - - log.Debug("policyservice.ShowPolicyHandler POST called") - resp := msgs.ShowPolicyResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "" - - if clientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - resp.PolicyList = ShowPolicy(apiserver.Clientset, policyname, request.AllFlag, ns) - - json.NewEncoder(w).Encode(resp) - -} - -// ApplyPolicyHandler ... 
-// pgo apply mypolicy --selector=name=mycluster -func ApplyPolicyHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /policies/apply policyservice policies-apply - /*``` - APPLY allows you to apply a Policy to a set of clusters. - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create Policy Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ApplyPolicyRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ApplyPolicyResponse" - var ns string - log.Debug("policyservice.ApplyPolicyHandler called") - - var request msgs.ApplyPolicyRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.APPLY_POLICY_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.ApplyPolicyResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = ApplyPolicy(&request, ns, username) - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/pvcservice/pvcimpl.go b/internal/apiserver/pvcservice/pvcimpl.go deleted file mode 100644 index 734c21fa4c..0000000000 --- a/internal/apiserver/pvcservice/pvcimpl.go +++ /dev/null @@ -1,58 +0,0 @@ -package pvcservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// ShowPVC ... -func ShowPVC(allflag bool, clusterName, ns string) ([]msgs.ShowPVCResponseResult, error) { - pvcList := []msgs.ShowPVCResponseResult{} - // note to a future editor...all of our managed PVCs have a label called - // called "pgremove" - selector := fmt.Sprintf("%s=%s", config.LABEL_PGREMOVE, "true") - - // if allflag is not set to true, then update the selector to target the - // specific PVCs for a specific cluster - if !allflag { - selector += fmt.Sprintf(",%s=%s", config.LABEL_PG_CLUSTER, clusterName) - } - - pvcs, err := apiserver.Clientset. - CoreV1().PersistentVolumeClaims(ns). 
- List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return pvcList, err - } - - log.Debugf("got %d PVCs from ShowPVC query", len(pvcs.Items)) - for _, p := range pvcs.Items { - pvcResult := msgs.ShowPVCResponseResult{ - ClusterName: p.ObjectMeta.Labels[config.LABEL_PG_CLUSTER], - PVCName: p.Name, - } - pvcList = append(pvcList, pvcResult) - } - - return pvcList, nil -} diff --git a/internal/apiserver/pvcservice/pvcservice.go b/internal/apiserver/pvcservice/pvcservice.go deleted file mode 100644 index a12979cb3c..0000000000 --- a/internal/apiserver/pvcservice/pvcservice.go +++ /dev/null @@ -1,98 +0,0 @@ -package pvcservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -// ShowPVCHandler ... -// returns a ShowPVCResponse -func ShowPVCHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /showpvc pvcservice showpvc - /*``` - Show PVC information - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Show PVC Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ShowPVCRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowPVCResponse" - var err error - var username, ns string - - var request msgs.ShowPVCRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - clusterName := request.ClusterName - clientVersion := request.ClientVersion - namespace := request.Namespace - - log.Debugf("ShowPVCHandler parameters version [%s] namespace [%s] pvcname [%s]", clientVersion, namespace, clusterName) - - switch r.Method { - case "GET": - log.Debug("pvcservice.ShowPVCHandler GET called") - case "DELETE": - log.Debug("pvcservice.ShowPVCHandler DELETE called") - } - - username, err = apiserver.Authn(apiserver.SHOW_PVC_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.ShowPVCResponse{} - - if clientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - resp.Results, err = ShowPVC(request.AllFlag, clusterName, ns) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - } - - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/reloadservice/reloadimpl.go b/internal/apiserver/reloadservice/reloadimpl.go deleted file mode 100644 index 08bc21430e..0000000000 --- a/internal/apiserver/reloadservice/reloadimpl.go +++ 
/dev/null @@ -1,137 +0,0 @@ -package reloadservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/patroni" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/crunchydata/postgres-operator/pkg/events" - log "github.com/sirupsen/logrus" - - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// Reload ... -// pgo reload mycluster -// pgo reload all -// pgo reload --selector=name=mycluster -func Reload(request *msgs.ReloadRequest, ns, username string) msgs.ReloadResponse { - - log.Debugf("Reload %v", request) - - var clusterNames []string - var errorMsgs []string - - resp := msgs.ReloadResponse{ - Status: msgs.Status{ - Code: msgs.Ok, - }, - } - - if request.Selector != "" { - clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } else if len(clusterList.Items) == 0 { - resp.Results = append(resp.Results, "no clusters found with that selector") - return resp - } - - for _, cluster := range clusterList.Items { - clusterNames = append(clusterNames, cluster.Spec.Name) - } - } else { - clusterNames = request.Args - } - - for _, clusterName := range clusterNames { - - log.Debugf("reload requested for cluster %s", clusterName) - - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(clusterName, - metav1.GetOptions{}) - // maintain same "is not found" error message for backwards compatibility - if kerrors.IsNotFound(err) { - errorMsgs = append(errorMsgs, fmt.Sprintf("%s was not found, verify cluster name", clusterName)) - continue - } else if err != nil { - errorMsgs = append(errorMsgs, err.Error()) - continue - } - - // check if the current cluster is not upgraded to the deployed - // Operator version. 
If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - errorMsgs = append(errorMsgs, fmt.Sprintf("%s %s", clusterName, msgs.UpgradeError)) - continue - } - - // now reload the cluster, providing any targets specified - patroniClient := patroni.NewPatroniClient(apiserver.RESTConfig, apiserver.Clientset, - cluster.GetName(), ns) - if err := patroniClient.ReloadCluster(); err != nil { - errorMsgs = append(errorMsgs, err.Error()) - continue - } - - resp.Results = append(resp.Results, fmt.Sprintf("reload performed on %s", clusterName)) - - if err := publishReloadClusterEvent(cluster.GetName(), ns, username); err != nil { - log.Error(err.Error()) - errorMsgs = append(errorMsgs, err.Error()) - } - } - - if len(errorMsgs) > 0 { - resp.Status.Code = msgs.Error - resp.Status.Msg = strings.Join(errorMsgs, "\n") - } - - return resp -} - -// publishReloadClusterEvent publishes an event when a cluster is reloaded -func publishReloadClusterEvent(clusterName, username, namespace string) error { - - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventReloadClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: username, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventReloadCluster, - }, - Clustername: clusterName, - } - - if err := events.Publish(f); err != nil { - return err - } - - return nil -} diff --git a/internal/apiserver/reloadservice/reloadservice.go b/internal/apiserver/reloadservice/reloadservice.go deleted file mode 100644 index 9d1096c3c9..0000000000 --- a/internal/apiserver/reloadservice/reloadservice.go +++ /dev/null @@ -1,84 +0,0 @@ -package reloadservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -// ReloadHandler ... -// pgo reload all -// pgo reload --selector=name=mycluster -// pgo reload mycluster -func ReloadHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /reload reloadservice reload - /*``` - RELOAD performs a PostgreSQL reload on a cluster or set of clusters. 
- */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Reload Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ReloadRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ReloadResponse" - var err error - var username, ns string - - log.Debug("reloadservice.ReloadHandler called") - - var request msgs.ReloadRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err = apiserver.Authn(apiserver.RELOAD_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp := msgs.ReloadResponse{} - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - reloadResponse := Reload(&request, ns, username) - if err != nil { - resp := msgs.ReloadResponse{} - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - json.NewEncoder(w).Encode(reloadResponse) -} diff --git a/internal/apiserver/restartservice/restartimpl.go b/internal/apiserver/restartservice/restartimpl.go deleted file mode 100644 index 51c172e91f..0000000000 --- a/internal/apiserver/restartservice/restartimpl.go +++ /dev/null @@ -1,158 +0,0 @@ -package restartservice - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/patroni" - "github.com/crunchydata/postgres-operator/internal/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// Restart restarts either all PostgreSQL databases within a PostgreSQL cluster (i.e. the primary -// and all replicas) or if targets are specified, just those targets. -// pgo restart mycluster -// pgo restart mycluster --target=mycluster-abcd -func Restart(request *msgs.RestartRequest, pgouser string) msgs.RestartResponse { - - log.Debugf("restart called for %s", request.ClusterName) - - resp := msgs.RestartResponse{ - Status: msgs.Status{ - Code: msgs.Ok, - }, - } - - clusterName := request.ClusterName - namespace := request.Namespace - - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, - metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // check if the current cluster is not upgraded to the deployed - // Operator version. 
If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - resp.Status.Code = msgs.Error - resp.Status.Msg = fmt.Sprintf("%s %s", cluster.Name, msgs.UpgradeError) - return resp - } - - var restartResults []patroni.RestartResult - // restart either the whole cluster, or just any targets specified - patroniClient := patroni.NewPatroniClient(apiserver.RESTConfig, apiserver.Clientset, - cluster.GetName(), namespace) - if len(request.Targets) > 0 { - restartResults, err = patroniClient.RestartInstances(request.Targets...) - } else { - restartResults, err = patroniClient.RestartCluster() - } - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - restartDetails := msgs.RestartDetail{ClusterName: clusterName} - for _, restartResult := range restartResults { - - instanceDetail := msgs.InstanceDetail{InstanceName: restartResult.Instance} - if restartResult.Error != nil { - instanceDetail.Error = true - instanceDetail.ErrorMessage = restartResult.Error.Error() - } - - restartDetails.Instances = append(restartDetails.Instances, instanceDetail) - } - - resp.Result = restartDetails - - return resp -} - -// QueryRestart queries a cluster for instances available to use as as targets for a PostgreSQL restart. -// pgo restart mycluster --query -func QueryRestart(clusterName, namespace string) msgs.QueryRestartResponse { - - log.Debugf("query restart called for %s", clusterName) - - resp := msgs.QueryRestartResponse{ - Status: msgs.Status{ - Code: msgs.Ok, - }, - } - - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, - metav1.GetOptions{}) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // Get information about the current status of all of all cluster instances. This is - // handled by a helper function, that will return the information in a struct with the - // key elements to help the user understand the current state of the instances in a cluster - replicationStatusRequest := util.ReplicationStatusRequest{ - RESTConfig: apiserver.RESTConfig, - Clientset: apiserver.Clientset, - Namespace: namespace, - ClusterName: clusterName, - } - - // get a list of all the Pods...note that we can included "busted" pods as - // by including the primary, we're getting all of the database pods anyway. 
- replicationStatusResponse, err := util.ReplicationStatus(replicationStatusRequest, true, true) - if err != nil { - log.Error(err.Error()) - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - return resp - } - - // if there are no results, return the response as is - if len(replicationStatusResponse.Instances) == 0 { - return resp - } - - // iterate through response results to create the API response - for _, instance := range replicationStatusResponse.Instances { - // create an result for the response - resp.Results = append(resp.Results, msgs.RestartTargetSpec{ - Name: instance.Name, - Node: instance.Node, - Status: instance.Status, - ReplicationLag: instance.ReplicationLag, - Timeline: instance.Timeline, - PendingRestart: instance.PendingRestart, - Role: instance.Role, - }) - } - - resp.Standby = cluster.Spec.Standby - - return resp -} diff --git a/internal/apiserver/restartservice/restartservice.go b/internal/apiserver/restartservice/restartservice.go deleted file mode 100644 index a1bfb97194..0000000000 --- a/internal/apiserver/restartservice/restartservice.go +++ /dev/null @@ -1,154 +0,0 @@ -package restartservice - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "net/http" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/gorilla/mux" - log "github.com/sirupsen/logrus" -) - -// RestartHandler handles requests to the "restart" endpoint. -// pgo restart mycluster -// pgo restart mycluster --target=mycluster-abcd -func RestartHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /restart restartservice restart - /*``` - RESTART performs a PostgreSQL restart on a PostgreSQL cluster. If no targets are specified, - then all instances (the primary and all replicas) within the cluster will be restarted. - Otherwise, only those targets specified will be restarted. 
- */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Restart Request" - // in: "body" - // schema: - // "$ref": "#/definitions/RestartRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/RestartResponse" - - log.Debug("restartservice.RestartHandler called") - - resp := msgs.RestartResponse{} - - var request msgs.RestartRequest - if err := json.NewDecoder(r.Body).Decode(&request); err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - username, err := apiserver.Authn(apiserver.RESTART_PERM, w, r) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - } - - if _, err := apiserver.GetNamespace(apiserver.Clientset, username, - request.Namespace); err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - json.NewEncoder(w).Encode(Restart(&request, username)) -} - -// QueryRestartHandler handles requests to query a cluster for instances available to use as -// as targets for a PostgreSQL restart. -// pgo restart mycluster --query -func QueryRestartHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /restart/{name} restartservice restart-service - /*``` - Prints the list of restart candidates. - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "name" - // description: "Cluster Name" - // in: "path" - // type: "string" - // required: true - // - name: "version" - // description: "Client Version" - // in: "path" - // type: "string" - // required: true - // - name: "namespace" - // description: "Namespace" - // in: "path" - // type: "string" - // required: true - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/QueryRestartResponse" - - resp := msgs.QueryRestartResponse{} - - clusterName := mux.Vars(r)["name"] - clientVersion := r.URL.Query().Get("version") - namespace := r.URL.Query().Get("namespace") - - log.Debugf("QueryRestartHandler parameters version[%s] namespace [%s] name [%s]", clientVersion, - namespace, clusterName) - - username, err := apiserver.Authn(apiserver.RESTART_PERM, w, r) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - if clientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - } - - if _, err := apiserver.GetNamespace(apiserver.Clientset, username, namespace); err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - json.NewEncoder(w).Encode(QueryRestart(clusterName, namespace)) -} diff --git a/internal/apiserver/root.go b/internal/apiserver/root.go deleted file mode 100644 index 2ced3eaca7..0000000000 --- a/internal/apiserver/root.go +++ /dev/null @@ -1,507 +0,0 @@ 
-package apiserver - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "crypto/rsa" - "crypto/x509" - "errors" - "fmt" - "io/ioutil" - "net/http" - "os" - "strconv" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/ns" - "github.com/crunchydata/postgres-operator/internal/tlsutil" - log "github.com/sirupsen/logrus" - corev1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -const rsaKeySize = 2048 -const duration365d = time.Hour * 24 * 365 -const PGOSecretName = "pgo.tls" - -const VERSION_MISMATCH_ERROR = "pgo client and server version mismatch" - -var ( - // Clientset is a client for Kubernetes resources - Clientset kubeapi.Interface - // RESTConfig holds the REST configuration for a Kube client - RESTConfig *rest.Config -) - -// MetricsFlag if set to true will cause crunchy-postgres-exporter to be added into new clusters -var MetricsFlag, BadgerFlag bool - -// AuditFlag if set to true will cause auditing to occur in the logs -var AuditFlag bool - -// DebugFlag is the debug flag value -var DebugFlag bool - -// BasicAuth comes from the apiserver config -var BasicAuth bool - -// Namespace comes from the apiserver config in this version -var PgoNamespace string -var InstallationName string - -// namespaceList is the list of namespaces identified at install time -var namespaceList []string - -var CRUNCHY_DEBUG bool - -// TreeTrunk is for debugging only in this context -const TreeTrunk = "└── " - -// TreeBranch is for debugging only in this context -const TreeBranch = "├── " - -type CredentialDetail struct { - Username string - Password string - Role string - Namespaces []string -} - -var Pgo config.PgoConfig - -// NamespaceOperatingMode defines the namespace operating mode for the cluster, -// e.g. "dynamic", "readonly" or "disabled". See type NamespaceOperatingMode -// for detailed explanations of each mode available. 
-var namespaceOperatingMode ns.NamespaceOperatingMode - -func Initialize() { - - PgoNamespace = os.Getenv("PGO_OPERATOR_NAMESPACE") - if PgoNamespace == "" { - log.Info("PGO_OPERATOR_NAMESPACE environment variable is not set and is required, this is the namespace that the Operator is to run within.") - os.Exit(2) - } - log.Info("Pgo Namespace is [" + PgoNamespace + "]") - - InstallationName = os.Getenv("PGO_INSTALLATION_NAME") - if InstallationName == "" { - log.Error("PGO_INSTALLATION_NAME environment variable is missng") - os.Exit(2) - } - log.Info("InstallationName is [" + InstallationName + "]") - - tmp := os.Getenv("CRUNCHY_DEBUG") - CRUNCHY_DEBUG = false - if tmp == "true" { - CRUNCHY_DEBUG = true - } - BasicAuth = true - MetricsFlag = false - BadgerFlag = false - AuditFlag = false - - log.Infoln("apiserver starts") - - connectToKube() - - initializePerms() - - err := Pgo.GetConfig(Clientset, PgoNamespace) - if err != nil { - log.Error(err) - log.Error("error in Pgo configuration") - os.Exit(2) - } - - initConfig() - - if err := setNamespaceOperatingMode(); err != nil { - log.Error(err) - os.Exit(2) - } - - namespaceList, err = ns.GetInitialNamespaceList(Clientset, NamespaceOperatingMode(), - InstallationName, PgoNamespace) - if err != nil { - log.Error(err) - os.Exit(2) - } - - log.Infof("Namespace operating mode is '%s'", NamespaceOperatingMode()) -} - -func connectToKube() { - - client, err := kubeapi.NewClient() - if err != nil { - panic(err) - } - - Clientset = client - RESTConfig = client.Config -} - -func initConfig() { - - AuditFlag = Pgo.Pgo.Audit - if AuditFlag { - log.Info("audit flag is set to true") - } - - MetricsFlag = Pgo.Cluster.Metrics - if MetricsFlag { - log.Info("metrics flag is set to true") - } - BadgerFlag = Pgo.Cluster.Badger - if BadgerFlag { - log.Info("badger flag is set to true") - } - - tmp := Pgo.BasicAuth - if tmp == "" { - BasicAuth = true - } else { - var err error - BasicAuth, err = strconv.ParseBool(tmp) - if err != nil { - log.Error("BasicAuth config value is not valid") - os.Exit(2) - } - } - log.Infof("BasicAuth is %v", BasicAuth) -} - -func BasicAuthCheck(username, password string) bool { - - if BasicAuth == false { - return true - } - - //see if there is a pgouser Secret for this username - secretName := "pgouser-" + username - secret, err := Clientset.CoreV1().Secrets(PgoNamespace).Get(secretName, metav1.GetOptions{}) - - if err != nil { - log.Errorf("could not get pgouser secret %s: %s", username, err.Error()) - return false - } - - return password == string(secret.Data["password"]) -} - -func BasicAuthzCheck(username, perm string) bool { - - secretName := "pgouser-" + username - secret, err := Clientset.CoreV1().Secrets(PgoNamespace).Get(secretName, metav1.GetOptions{}) - - if err != nil { - log.Errorf("could not get pgouser secret %s: %s", username, err.Error()) - return false - } - - //get the roles for this user - rolesString := string(secret.Data["roles"]) - roles := strings.Split(rolesString, ",") - if len(roles) == 0 { - log.Errorf("%s user has no roles ", username) - return false - } - - //venture thru each role this user has looking for a perm match - for _, r := range roles { - - //get the pgorole - roleSecretName := "pgorole-" + r - rolesecret, err := Clientset.CoreV1().Secrets(PgoNamespace).Get(roleSecretName, metav1.GetOptions{}) - - if err != nil { - log.Errorf("could not get pgorole secret %s: %s", r, err.Error()) - return false - } - - permsString := strings.TrimSpace(string(rolesecret.Data["permissions"])) - - // first a 
special case. If this is a solitary "*" indicating that this - // encompasses every permission, then we can exit here as true - if permsString == "*" { - return true - } - - // otherwise, blow up the permission string and see if the user has explicit - // permission (i.e. is authorized) to access this resource - perms := strings.Split(permsString, ",") - - for _, p := range perms { - pp := strings.TrimSpace(p) - if pp == perm { - log.Debugf("%s perm found in role %s for username %s", pp, r, username) - return true - } - } - - } - - return false - -} - -//GetNamespace determines if a user has permission for -//a namespace they are requesting -//a valid requested namespace is required -func GetNamespace(clientset kubernetes.Interface, username, requestedNS string) (string, error) { - - log.Debugf("GetNamespace username [%s] ns [%s]", username, requestedNS) - - if requestedNS == "" { - return requestedNS, errors.New("empty namespace is not valid from pgo clients") - } - - iAccess, uAccess, err := UserIsPermittedInNamespace(username, requestedNS) - if err != nil { - return requestedNS, fmt.Errorf("Error when determining whether user [%s] is allowed access to "+ - "namespace [%s]: %s", username, requestedNS, err.Error()) - } - if iAccess == false { - errMsg := fmt.Sprintf("namespace [%s] is not part of the Operator installation", requestedNS) - return requestedNS, errors.New(errMsg) - } - if uAccess == false { - errMsg := fmt.Sprintf("user [%s] is not allowed access to namespace [%s]", username, requestedNS) - return requestedNS, errors.New(errMsg) - } - - return requestedNS, nil -} - -// Authn performs HTTP Basic Authentication against a user if "BasicAuth" is set -// to "true" (which it is by default). -// -// ...it also performs Authorization (Authz) against the user that is attempting -// to authenticate, and as such, to truly "authenticate/authorize," one needs -// at least a valid Operator User account. -func Authn(perm string, w http.ResponseWriter, r *http.Request) (string, error) { - var err error - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - - // Need to run the HTTP library `BasicAuth` even if `BasicAuth == false`, as - // this function currently encapsulates authorization as well, and this is - // the call where we get the username to check the RBAC settings - username, password, authOK := r.BasicAuth() - if AuditFlag { - log.Infof("[audit] %s username=[%s] method=[%s] ip=[%s] ok=[%t] ", perm, username, r.Method, r.RemoteAddr, authOK) - } - - // Check to see if this user is authenticated - // If BasicAuth is "disabled", skip the authentication; o/w/ check if the - // authentication passed - if !BasicAuth { - log.Debugf("BasicAuth disabled, Skipping Authentication %s username=[%s]", perm, username) - } else { - log.Debugf("Authentication Attempt %s username=[%s]", perm, username) - if !authOK { - http.Error(w, "Not Authorized. 
Basic Authentication credentials must be provided according to RFC 7617, Section 2.", 401)
-			return "", errors.New("Not Authorized: Credentials do not comply with RFC 7617")
-		}
-	}
-
-	if !BasicAuthCheck(username, password) {
-		log.Errorf("Authentication Failed %s username=[%s]", perm, username)
-		http.Error(w, "Not authenticated in apiserver", 401)
-		return "", errors.New("Not Authenticated")
-	}
-
-	if !BasicAuthzCheck(username, perm) {
-		log.Errorf("Authorization Failed %s username=[%s]", perm, username)
-		http.Error(w, "Not authorized for this apiserver action", 403)
-		return "", errors.New("Not authorized for this apiserver action")
-	}
-
-	log.Debug("Authentication Success")
-	return username, err
-
-}
-
-func IsValidStorageName(name string) bool {
-	_, ok := Pgo.Storage[name]
-	return ok
-}
-
-// ValidateNodeLabel
-// returns an error if the node label is invalid based on its format
-func ValidateNodeLabel(nodeLabel string) error {
-	parts := strings.Split(nodeLabel, "=")
-	if len(parts) != 2 {
-		return errors.New(nodeLabel + " node label does not follow key=value format")
-	}
-
-	return nil
-}
-
-// UserIsPermittedInNamespace returns installation access and user access.
-// Installation access means a namespace belongs to this Operator installation.
-// User access means this user has access to a namespace.
-func UserIsPermittedInNamespace(username, requestedNS string) (bool, bool, error) {
-
-	var iAccess, uAccess bool
-
-	if err := ns.ValidateNamespacesWatched(Clientset, NamespaceOperatingMode(), InstallationName,
-		requestedNS); err != nil {
-		if !errors.Is(err, ns.ErrNamespaceNotWatched) {
-			return false, false, err
-		}
-	} else {
-		iAccess = true
-	}
-
-	if iAccess {
-		//get the pgouser Secret for this username
-		userSecretName := "pgouser-" + username
-		userSecret, err := Clientset.CoreV1().Secrets(PgoNamespace).Get(userSecretName, metav1.GetOptions{})
-		if err != nil {
-			log.Errorf("could not get pgouser secret %s: %s", username, err.Error())
-			return false, false, err
-		}
-
-		// handle the case of a user in pgouser with "" (all) namespaces, otherwise check the
-		// namespaces config in the user secret
-		nsstring := string(userSecret.Data["namespaces"])
-		if nsstring == "" {
-			uAccess = true
-		} else {
-			nsList := strings.Split(nsstring, ",")
-			for _, v := range nsList {
-				ns := strings.TrimSpace(v)
-				if ns == requestedNS {
-					uAccess = true
-				}
-			}
-		}
-	}
-
-	return iAccess, uAccess, nil
-}
-
-// WriteTLSCert is a legacy method that writes the server certificate and key to
-// files from the PGOSecretName secret or generates a new key (writing to both
-// the secret and the expected files)
-func WriteTLSCert(certPath, keyPath string) error {
-	pgoSecret, err := Clientset.CoreV1().Secrets(PgoNamespace).Get(PGOSecretName, metav1.GetOptions{})
-
-	// if the TLS certificate secret is not found, attempt to generate one
-	if err != nil {
-		log.Infof("%s Secret NOT found in namespace %s", PGOSecretName, PgoNamespace)
-
-		if err := generateTLSCert(certPath, keyPath); err != nil {
-			log.Error("error generating pgo.tls Secret")
-			return err
-		}
-
-		return nil
-	}
-
-	// otherwise, write the TLS certificate to the certificate and key path
-	log.Infof("%s Secret found in namespace %s", PGOSecretName, PgoNamespace)
-	log.Infof("cert key data len is %d", len(pgoSecret.Data[corev1.TLSCertKey]))
-
-	if err := ioutil.WriteFile(certPath, pgoSecret.Data[corev1.TLSCertKey], 0644); err != nil {
-		return err
-	}
-
-	log.Infof("private key data len is %d", len(pgoSecret.Data[corev1.TLSPrivateKeyKey]))
-
-	
if err := ioutil.WriteFile(keyPath, pgoSecret.Data[corev1.TLSPrivateKeyKey], 0644); err != nil { - return err - } - - return nil -} - -// generateTLSCert generates a self signed cert and stores it in both -// the PGOSecretName Secret and certPath, keyPath files -func generateTLSCert(certPath, keyPath string) error { - var err error - - //generate private key - var privateKey *rsa.PrivateKey - privateKey, err = tlsutil.NewPrivateKey() - if err != nil { - fmt.Println(err.Error()) - os.Exit(2) - } - - privateKeyBytes := tlsutil.EncodePrivateKeyPEM(privateKey) - log.Debugf("generated privateKeyBytes len %d", len(privateKeyBytes)) - - var caCert *x509.Certificate - caCert, err = tlsutil.NewSelfSignedCACertificate(privateKey) - if err != nil { - fmt.Println(err.Error()) - os.Exit(2) - } - - caCertBytes := tlsutil.EncodeCertificatePEM(caCert) - log.Debugf("generated caCertBytes len %d", len(caCertBytes)) - - // CreateSecret - newSecret := corev1.Secret{} - newSecret.Name = PGOSecretName - newSecret.ObjectMeta.Labels = make(map[string]string) - newSecret.ObjectMeta.Labels[config.LABEL_VENDOR] = "crunchydata" - newSecret.Data = make(map[string][]byte) - newSecret.Data[corev1.TLSCertKey] = caCertBytes - newSecret.Data[corev1.TLSPrivateKeyKey] = privateKeyBytes - newSecret.Type = corev1.SecretTypeTLS - - _, err = Clientset.CoreV1().Secrets(PgoNamespace).Create(&newSecret) - if err != nil { - fmt.Println(err.Error()) - os.Exit(2) - } - - if err := ioutil.WriteFile(certPath, newSecret.Data[corev1.TLSCertKey], 0644); err != nil { - return err - } - if err := ioutil.WriteFile(keyPath, newSecret.Data[corev1.TLSPrivateKeyKey], 0644); err != nil { - return err - } - - return err - -} - -// setNamespaceOperatingMode set the namespace operating mode for the Operator by calling the -// proper utility function to determine which mode is applicable based on the current -// permissions assigned to the Operator Service Account. -func setNamespaceOperatingMode() error { - nsOpMode, err := ns.GetNamespaceOperatingMode(Clientset) - if err != nil { - return err - } - namespaceOperatingMode = nsOpMode - - return nil -} - -// NamespaceOperatingMode returns the namespace operating mode for the current Operator -// installation, which is stored in the "namespaceOperatingMode" variable -func NamespaceOperatingMode() ns.NamespaceOperatingMode { - return namespaceOperatingMode -} diff --git a/internal/apiserver/routing/doc.go b/internal/apiserver/routing/doc.go deleted file mode 100644 index e985fd4280..0000000000 --- a/internal/apiserver/routing/doc.go +++ /dev/null @@ -1,34 +0,0 @@ -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -/* Package routing temporarily breaks circular dependencies within the -structure of the apiserver package - -The apiserver package contains a mix of package content (used by external -code) and refactored functionality from the *service folders. The -refactored functionality of the *service folders causes import dependencies -on the apiserver package. 
- -Strictly speaking, the *service folders are an organizational element and -their dependencies could be resolved via dot-import. Idiomatic Go -guidelines point out that using a dot-import outside of testing scenarios is -a sign that package structure needs to be reconsidered and should not be -used outside of the *_test.go scenarios. - -Creating this package is preferable to pushing all service-common code into -a 'junk-drawer' package to resolve the circular dependency. - -*/ -package routing diff --git a/internal/apiserver/routing/routes.go b/internal/apiserver/routing/routes.go deleted file mode 100644 index 0198efaaf8..0000000000 --- a/internal/apiserver/routing/routes.go +++ /dev/null @@ -1,233 +0,0 @@ -package routing - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "github.com/crunchydata/postgres-operator/internal/apiserver/backrestservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/catservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/cloneservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/clusterservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/configservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/dfservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/failoverservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/labelservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/namespaceservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/pgadminservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/pgbouncerservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/pgdumpservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/pgoroleservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/pgouserservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/policyservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/pvcservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/reloadservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/restartservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/scheduleservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/statusservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/upgradeservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/userservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/versionservice" - "github.com/crunchydata/postgres-operator/internal/apiserver/workflowservice" - - "github.com/gorilla/mux" -) - -// RegisterAllRoutes adds all routes supported by the apiserver to the -// provided router -func RegisterAllRoutes(r *mux.Router) { - RegisterBackrestSvcRoutes(r) - RegisterCatSvcRoutes(r) - RegisterCloneSvcRoutes(r) - RegisterClusterSvcRoutes(r) - RegisterConfigSvcRoutes(r) - 
RegisterDfSvcRoutes(r) - RegisterFailoverSvcRoutes(r) - RegisterLabelSvcRoutes(r) - RegisterNamespaceSvcRoutes(r) - RegisterPGAdminSvcRoutes(r) - RegisterPGBouncerSvcRoutes(r) - RegisterPGDumpSvcRoutes(r) - RegisterPGORoleSvcRoutes(r) - RegisterPGOUserSvcRoutes(r) - RegisterPolicySvcRoutes(r) - RegisterPVCSvcRoutes(r) - RegisterReloadSvcRoutes(r) - RegisterRestartSvcRoutes(r) - RegisterScheduleSvcRoutes(r) - RegisterStatusSvcRoutes(r) - RegisterUpgradeSvcRoutes(r) - RegisterUserSvcRoutes(r) - RegisterVersionSvcRoutes(r) - RegisterWorkflowSvcRoutes(r) -} - -// RegisterBackrestSvcRoutes registers all routes from the Backrest Service -func RegisterBackrestSvcRoutes(r *mux.Router) { - r.HandleFunc("/backrestbackup", backrestservice.CreateBackupHandler).Methods("POST") - r.HandleFunc("/backrest/{name}", backrestservice.ShowBackrestHandler).Methods("GET") - r.HandleFunc("/restore", backrestservice.RestoreHandler).Methods("POST") -} - -// RegisterCatSvcRoutes registers all routes from the Cat Service -func RegisterCatSvcRoutes(r *mux.Router) { - r.HandleFunc("/cat", catservice.CatHandler).Methods("POST") -} - -// RegisterCloneSvcRoutes registers all routes from the Clone Service -func RegisterCloneSvcRoutes(r *mux.Router) { - r.HandleFunc("/clone", cloneservice.CloneHandler).Methods("POST") -} - -// RegisterClusterSvcRoutes registers all routes from the Cluster Service -func RegisterClusterSvcRoutes(r *mux.Router) { - r.HandleFunc("/clusters", clusterservice.CreateClusterHandler).Methods("POST") - r.HandleFunc("/showclusters", clusterservice.ShowClusterHandler).Methods("POST") - r.HandleFunc("/clustersdelete", clusterservice.DeleteClusterHandler).Methods("POST") - r.HandleFunc("/clustersupdate", clusterservice.UpdateClusterHandler).Methods("POST") - r.HandleFunc("/testclusters", clusterservice.TestClusterHandler).Methods("POST") - r.HandleFunc("/clusters/scale/{name}", clusterservice.ScaleClusterHandler) - r.HandleFunc("/scale/{name}", clusterservice.ScaleQueryHandler).Methods("GET") - r.HandleFunc("/scaledown/{name}", clusterservice.ScaleDownHandler).Methods("GET") -} - -// RegisterConfigSvcRoutes registers all routes from the Config Service -func RegisterConfigSvcRoutes(r *mux.Router) { - r.HandleFunc("/config", configservice.ShowConfigHandler) -} - -// RegisterDfSvcRoutes registers all routes from the Df Service -func RegisterDfSvcRoutes(r *mux.Router) { - r.HandleFunc("/df", dfservice.DfHandler).Methods("POST") -} - -// RegisterFailoverSvcRoutes registers all routes from the Failover Service -func RegisterFailoverSvcRoutes(r *mux.Router) { - r.HandleFunc("/failover", failoverservice.CreateFailoverHandler).Methods("POST") - r.HandleFunc("/failover/{name}", failoverservice.QueryFailoverHandler).Methods("GET") -} - -// RegisterLabelSvcRoutes registers all routes from the Label Service -func RegisterLabelSvcRoutes(r *mux.Router) { - r.HandleFunc("/label", labelservice.LabelHandler).Methods("POST") - r.HandleFunc("/labeldelete", labelservice.DeleteLabelHandler).Methods("POST") -} - -// RegisterNamespaceSvcRoutes registers all routes from the Namespace Service -func RegisterNamespaceSvcRoutes(r *mux.Router) { - r.HandleFunc("/namespace", namespaceservice.ShowNamespaceHandler).Methods("POST") - r.HandleFunc("/namespacedelete", namespaceservice.DeleteNamespaceHandler).Methods("POST") - r.HandleFunc("/namespacecreate", namespaceservice.CreateNamespaceHandler).Methods("POST") - r.HandleFunc("/namespaceupdate", namespaceservice.UpdateNamespaceHandler).Methods("POST") -} - -// RegisterPGAdminSvcRoutes 
registers all routes from the PGAdmin Service -func RegisterPGAdminSvcRoutes(r *mux.Router) { - r.HandleFunc("/pgadmin", pgadminservice.CreatePgAdminHandler).Methods("POST") - r.HandleFunc("/pgadmin", pgadminservice.DeletePgAdminHandler).Methods("DELETE") - r.HandleFunc("/pgadmin/show", pgadminservice.ShowPgAdminHandler).Methods("POST") -} - -// RegisterPGBouncerSvcRoutes registers all routes from the PGBouncer Service -func RegisterPGBouncerSvcRoutes(r *mux.Router) { - r.HandleFunc("/pgbouncer", pgbouncerservice.CreatePgbouncerHandler).Methods("POST") - r.HandleFunc("/pgbouncer", pgbouncerservice.UpdatePgBouncerHandler).Methods("PUT") - r.HandleFunc("/pgbouncer", pgbouncerservice.DeletePgbouncerHandler).Methods("DELETE") - r.HandleFunc("/pgbouncer/show", pgbouncerservice.ShowPgBouncerHandler).Methods("POST") - r.HandleFunc("/pgbouncerdelete", pgbouncerservice.DeletePgbouncerHandler).Methods("POST") -} - -// RegisterPGDumpSvcRoutes registers all routes from the PGDump Service -func RegisterPGDumpSvcRoutes(r *mux.Router) { - r.HandleFunc("/pgdumpbackup", pgdumpservice.BackupHandler).Methods("POST") - r.HandleFunc("/pgdump/{name}", pgdumpservice.ShowDumpHandler).Methods("GET") - r.HandleFunc("/pgdumprestore", pgdumpservice.RestoreHandler).Methods("POST") -} - -// RegisterPGORoleSvcRoutes registers all routes from the PGORole Service -func RegisterPGORoleSvcRoutes(r *mux.Router) { - r.HandleFunc("/pgoroleupdate", pgoroleservice.UpdatePgoroleHandler).Methods("POST") - r.HandleFunc("/pgoroledelete", pgoroleservice.DeletePgoroleHandler).Methods("POST") - r.HandleFunc("/pgorolecreate", pgoroleservice.CreatePgoroleHandler).Methods("POST") - r.HandleFunc("/pgoroleshow", pgoroleservice.ShowPgoroleHandler).Methods("POST") -} - -// RegisterPGOUserSvcRoutes registers all routes from the PGOUser Service -func RegisterPGOUserSvcRoutes(r *mux.Router) { - r.HandleFunc("/pgouserupdate", pgouserservice.UpdatePgouserHandler).Methods("POST") - r.HandleFunc("/pgouserdelete", pgouserservice.DeletePgouserHandler).Methods("POST") - r.HandleFunc("/pgousercreate", pgouserservice.CreatePgouserHandler).Methods("POST") - r.HandleFunc("/pgousershow", pgouserservice.ShowPgouserHandler).Methods("POST") -} - -// RegisterPolicySvcRoutes registers all routes from the Policy Service -func RegisterPolicySvcRoutes(r *mux.Router) { - r.HandleFunc("/policies", policyservice.CreatePolicyHandler) - r.HandleFunc("/showpolicies", policyservice.ShowPolicyHandler).Methods("POST") - r.HandleFunc("/policiesdelete", policyservice.DeletePolicyHandler).Methods("POST") - r.HandleFunc("/policies/apply", policyservice.ApplyPolicyHandler).Methods("POST") -} - -// RegisterPVCSvcRoutes registers all routes from the PVC Service -func RegisterPVCSvcRoutes(r *mux.Router) { - r.HandleFunc("/showpvc", pvcservice.ShowPVCHandler).Methods("POST") -} - -// RegisterReloadSvcRoutes registers all routes from the Reload Service -func RegisterReloadSvcRoutes(r *mux.Router) { - r.HandleFunc("/reload", reloadservice.ReloadHandler).Methods("POST") -} - -// RegisterRestartSvcRoutes registers all routes from the Restart Service -func RegisterRestartSvcRoutes(r *mux.Router) { - r.HandleFunc("/restart", restartservice.RestartHandler).Methods("POST") - r.HandleFunc("/restart/{name}", restartservice.QueryRestartHandler).Methods("GET") -} - -// RegisterScheduleSvcRoutes registers all routes from the Schedule Service -func RegisterScheduleSvcRoutes(r *mux.Router) { - r.HandleFunc("/schedule", scheduleservice.CreateScheduleHandler).Methods("POST") - 
r.HandleFunc("/scheduledelete", scheduleservice.DeleteScheduleHandler).Methods("POST") - r.HandleFunc("/scheduleshow", scheduleservice.ShowScheduleHandler).Methods("POST") -} - -// RegisterStatusSvcRoutes registers all routes from the Status Service -func RegisterStatusSvcRoutes(r *mux.Router) { - r.HandleFunc("/status", statusservice.StatusHandler) -} - -// RegisterUpgradeSvcRoutes registers all routes from the Upgrade Service -func RegisterUpgradeSvcRoutes(r *mux.Router) { - r.HandleFunc("/upgrades", upgradeservice.CreateUpgradeHandler).Methods("POST") -} - -// RegisterUserSvcRoutes registers all routes from the User Service -func RegisterUserSvcRoutes(r *mux.Router) { - r.HandleFunc("/userupdate", userservice.UpdateUserHandler).Methods("POST") - r.HandleFunc("/usercreate", userservice.CreateUserHandler).Methods("POST") - r.HandleFunc("/usershow", userservice.ShowUserHandler).Methods("POST") - r.HandleFunc("/userdelete", userservice.DeleteUserHandler).Methods("POST") -} - -// RegisterVersionSvcRoutes registers all routes from the Version Service -func RegisterVersionSvcRoutes(r *mux.Router) { - r.HandleFunc("/version", versionservice.VersionHandler) - r.HandleFunc("/health", versionservice.HealthHandler) - r.HandleFunc("/healthz", versionservice.HealthyHandler) -} - -// RegisterWorkflowSvcRoutes registers all routes from the Workflow Service -func RegisterWorkflowSvcRoutes(r *mux.Router) { - r.HandleFunc("/workflow/{id}", workflowservice.ShowWorkflowHandler).Methods("GET") -} diff --git a/internal/apiserver/scheduleservice/scheduleimpl.go b/internal/apiserver/scheduleservice/scheduleimpl.go deleted file mode 100644 index 96e134949c..0000000000 --- a/internal/apiserver/scheduleservice/scheduleimpl.go +++ /dev/null @@ -1,342 +0,0 @@ -package scheduleservice - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "fmt" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/apiserver/backupoptions" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -type scheduleRequest struct { - Request *msgs.CreateScheduleRequest - Response *msgs.CreateScheduleResponse -} - -func (s scheduleRequest) createBackRestSchedule(cluster *crv1.Pgcluster, ns string) *PgScheduleSpec { - name := fmt.Sprintf("%s-%s-%s", cluster.Name, s.Request.ScheduleType, s.Request.PGBackRestType) - - err := util.ValidateBackrestStorageTypeOnBackupRestore(s.Request.BackrestStorageType, - cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], false) - if err != nil { - s.Response.Status.Code = msgs.Error - s.Response.Status.Msg = err.Error() - return &PgScheduleSpec{} - } - - schedule := &PgScheduleSpec{ - Name: name, - Cluster: cluster.Name, - Version: "v1", - Created: time.Now().Format(time.RFC3339), - Schedule: s.Request.Schedule, - Type: s.Request.ScheduleType, - Namespace: ns, - PGBackRest: PGBackRest{ - Label: fmt.Sprintf("pg-cluster=%s,name=%s,deployment-name=%s", cluster.Name, cluster.Name, cluster.Name), - Container: "database", - Type: s.Request.PGBackRestType, - StorageType: s.Request.BackrestStorageType, - Options: s.Request.ScheduleOptions, - }, - } - return schedule -} - -func (s scheduleRequest) createPolicySchedule(cluster *crv1.Pgcluster, ns string) *PgScheduleSpec { - name := fmt.Sprintf("%s-%s-%s", cluster.Name, s.Request.ScheduleType, s.Request.PolicyName) - - err := util.ValidatePolicy(apiserver.Clientset, ns, s.Request.PolicyName) - if err != nil { - s.Response.Status.Code = msgs.Error - s.Response.Status.Msg = fmt.Sprintf("policy %s not found", s.Request.PolicyName) - return &PgScheduleSpec{} - } - - if s.Request.Secret == "" { - s.Request.Secret = cluster.Spec.PrimarySecretName - } - schedule := &PgScheduleSpec{ - Name: name, - Cluster: cluster.Name, - Version: "v1", - Created: time.Now().Format(time.RFC3339), - Schedule: s.Request.Schedule, - Type: s.Request.ScheduleType, - Namespace: ns, - Policy: Policy{ - Name: s.Request.PolicyName, - Database: s.Request.Database, - Secret: s.Request.Secret, - ImagePrefix: util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, apiserver.Pgo.Pgo.PGOImagePrefix), - ImageTag: apiserver.Pgo.Pgo.PGOImageTag, - }, - } - return schedule -} - -// CreateSchedule -func CreateSchedule(request *msgs.CreateScheduleRequest, ns string) msgs.CreateScheduleResponse { - log.Debugf("Create schedule called: %s", request.ClusterName) - sr := &scheduleRequest{ - Request: request, - Response: &msgs.CreateScheduleResponse{ - Status: msgs.Status{ - Code: msgs.Ok, - Msg: "", - }, - Results: make([]string, 0), - }, - } - - log.Debug("Getting cluster") - var selector string - if sr.Request.ClusterName != "" { - selector = fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, sr.Request.ClusterName) - } else if sr.Request.Selector != "" { - selector = sr.Request.Selector - } - - clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - sr.Response.Status.Code = msgs.Error - sr.Response.Status.Msg = 
fmt.Sprintf("Could not get cluster via selector: %s", err) - return *sr.Response - } - - // validate schedule options - if sr.Request.ScheduleOptions != "" { - err := backupoptions.ValidateBackupOpts(sr.Request.ScheduleOptions, request) - if err != nil { - sr.Response.Status.Code = msgs.Error - sr.Response.Status.Msg = err.Error() - return *sr.Response - } - } - - log.Debug("Making schedules") - var schedules []*PgScheduleSpec - for _, cluster := range clusterList.Items { - - // check if the current cluster is not upgraded to the deployed - // Operator version. If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - sr.Response.Status.Code = msgs.Error - sr.Response.Status.Msg = cluster.Name + msgs.UpgradeError - return *sr.Response - } - switch sr.Request.ScheduleType { - case "pgbackrest": - schedule := sr.createBackRestSchedule(&cluster, ns) - schedules = append(schedules, schedule) - case "policy": - schedule := sr.createPolicySchedule(&cluster, ns) - schedules = append(schedules, schedule) - default: - sr.Response.Status.Code = msgs.Error - sr.Response.Status.Msg = fmt.Sprintf("Schedule type unknown: %s", sr.Request.ScheduleType) - return *sr.Response - } - - if sr.Response.Status.Code == msgs.Error { - return *sr.Response - } - } - - log.Debug("Marshalling schedules") - for _, schedule := range schedules { - log.Debug(schedule.Name, schedule.Cluster) - blob, err := json.Marshal(schedule) - if err != nil { - sr.Response.Status.Code = msgs.Error - sr.Response.Status.Msg = err.Error() - } - - log.Debug("Getting configmap..") - _, err = apiserver.Clientset.CoreV1().ConfigMaps(schedule.Namespace).Get(schedule.Name, metav1.GetOptions{}) - if err == nil { - sr.Response.Status.Code = msgs.Error - sr.Response.Status.Msg = fmt.Sprintf("Schedule %s already exists", schedule.Name) - return *sr.Response - } - - labels := make(map[string]string) - labels["pg-cluster"] = schedule.Cluster - labels["crunchy-scheduler"] = "true" - - data := make(map[string]string) - data[schedule.Name] = string(blob) - - configmap := &v1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{ - Name: schedule.Name, - Labels: labels, - }, - Data: data, - } - - log.Debug("Creating configmap..") - _, err = apiserver.Clientset.CoreV1().ConfigMaps(schedule.Namespace).Create(configmap) - if err != nil { - sr.Response.Status.Code = msgs.Error - sr.Response.Status.Msg = err.Error() - return *sr.Response - } - - msg := fmt.Sprintf("created schedule %s for cluster %s", configmap.ObjectMeta.Name, schedule.Cluster) - sr.Response.Results = append(sr.Response.Results, msg) - } - return *sr.Response -} - -// DeleteSchedule ... 
-func DeleteSchedule(request *msgs.DeleteScheduleRequest, ns string) msgs.DeleteScheduleResponse {
-	log.Debug("Delete schedule called")
-
-	sr := &msgs.DeleteScheduleResponse{
-		Status: msgs.Status{
-			Code: msgs.Ok,
-			Msg:  "",
-		},
-		Results: make([]string, 0),
-	}
-
-	if request.ScheduleName == "" && request.ClusterName == "" && request.Selector == "" {
-		sr.Status.Code = msgs.Error
-		sr.Status.Msg = fmt.Sprintf("Cluster name, schedule name or selector must be provided")
-		return *sr
-	}
-
-	schedules := []string{}
-	var err error
-	if request.ScheduleName != "" {
-		schedules = append(schedules, request.ScheduleName)
-	} else {
-		schedules, err = getSchedules(request.ClusterName, request.Selector, ns)
-		if err != nil {
-			sr.Status.Code = msgs.Error
-			sr.Status.Msg = err.Error()
-			return *sr
-		}
-	}
-
-	log.Debug("Deleting configMaps")
-	for _, schedule := range schedules {
-		err := apiserver.Clientset.CoreV1().ConfigMaps(ns).Delete(schedule, &metav1.DeleteOptions{})
-		if err != nil {
-			sr.Status.Code = msgs.Error
-			sr.Status.Msg = fmt.Sprintf("Could not delete ConfigMap %s: %s", schedule, err)
-			return *sr
-		}
-		msg := fmt.Sprintf("deleted schedule %s", schedule)
-		sr.Results = append(sr.Results, msg)
-	}
-
-	return *sr
-}
-
-// ShowSchedule ...
-func ShowSchedule(request *msgs.ShowScheduleRequest, ns string) msgs.ShowScheduleResponse {
-	log.Debug("Show schedule called")
-
-	sr := &msgs.ShowScheduleResponse{
-		Status: msgs.Status{
-			Code: msgs.Ok,
-			Msg:  "",
-		},
-		Results: make([]string, 0),
-	}
-
-	if request.ScheduleName == "" && request.ClusterName == "" && request.Selector == "" {
-		sr.Status.Code = msgs.Error
-		sr.Status.Msg = fmt.Sprintf("Cluster name, schedule name or selector must be provided")
-		return *sr
-	}
-
-	schedules := []string{}
-	var err error
-	if request.ScheduleName != "" {
-		schedules = append(schedules, request.ScheduleName)
-	} else {
-		schedules, err = getSchedules(request.ClusterName, request.Selector, ns)
-		if err != nil {
-			sr.Status.Code = msgs.Error
-			sr.Status.Msg = err.Error()
-			return *sr
-		}
-	}
-
-	log.Debug("Parsing configMaps")
-	for _, schedule := range schedules {
-		cm, err := apiserver.Clientset.CoreV1().ConfigMaps(ns).Get(schedule, metav1.GetOptions{})
-		if err != nil {
-			sr.Status.Code = msgs.Error
-			sr.Status.Msg = fmt.Sprintf("Could not get ConfigMap %s: %s", schedule, err)
-			return *sr
-		}
-
-		var blob PgScheduleSpec
-		log.Debug(cm.Data[schedule])
-		if err := json.Unmarshal([]byte(cm.Data[schedule]), &blob); err != nil {
-			sr.Status.Code = msgs.Error
-			sr.Status.Msg = fmt.Sprintf("Could not parse schedule json %s: %s", schedule, err)
-			return *sr
-		}
-
-		results := fmt.Sprintf("%s:\n\tschedule: %s\n\tschedule-type: %s", blob.Name, blob.Schedule, blob.Type)
-		if blob.Type == "pgbackrest" {
-			results += fmt.Sprintf("\n\tbackup-type: %s", blob.PGBackRest.Type)
-		}
-		sr.Results = append(sr.Results, results)
-	}
-	return *sr
-}
-
-func getSchedules(clusterName, selector, ns string) ([]string, error) {
-	schedules := []string{}
-	label := "crunchy-scheduler=true"
-	if clusterName == "all" {
-	} else if clusterName != "" {
-		label += fmt.Sprintf(",pg-cluster=%s", clusterName)
-	}
-
-	if selector != "" {
-		label += fmt.Sprintf(",%s", selector)
-	}
-
-	log.Debugf("Finding configMaps with selector: %s", label)
-	list, err := apiserver.Clientset.CoreV1().ConfigMaps(ns).List(metav1.ListOptions{LabelSelector: label})
-	if err != nil {
-		return nil, fmt.Errorf("No schedules found for selector: %s", label)
-	}
-
-	for _, cm := range list.Items {
-		
schedules = append(schedules, cm.Name) - } - - return schedules, nil -} diff --git a/internal/apiserver/scheduleservice/scheduleservice.go b/internal/apiserver/scheduleservice/scheduleservice.go deleted file mode 100644 index b88fa16d7e..0000000000 --- a/internal/apiserver/scheduleservice/scheduleservice.go +++ /dev/null @@ -1,213 +0,0 @@ -package scheduleservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "net/http" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -type PgScheduleSpec struct { - Version string `json:"version"` - Name string `json:"name"` - Cluster string `json:"cluster"` - Created string `json:"created"` - Schedule string `json:"schedule"` - Namespace string `json:"namespace"` - Type string `json:"type"` - PGBackRest `json:"pgbackrest,omitempty"` - Policy `json:"policy,omitempty"` -} - -type Policy struct { - Name string `json:"name,omitempty"` - Database string `json:"database,omitempty"` - Secret string `json:"secret,omitempty"` - ImagePrefix string `json:"imagePrefix,omitempty"` - ImageTag string `json:"imageTag,omitempty"` -} - -type PGBackRest struct { - Deployment string `json:"deployment,omitempty"` - Label string `json:"label,omitempty"` - Container string `json:"container,omitempty"` - Type string `json:"type,omitempty"` - StorageType string `json:"storageType,omitempty"` - Options string `json:"options,omitempty"` -} - -// CreateScheduleHandler ... 
-func CreateScheduleHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /schedule scheduleservice schedule - /*``` - Schedule creates a cron-like scheduled task - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create Schedule Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreateScheduleRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreateScheduleResponse" - var err error - var username, ns string - - log.Debug("scheduleservice.CreateScheduleHandler called") - - var request msgs.CreateScheduleRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err = apiserver.Authn(apiserver.CREATE_SCHEDULE_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp := msgs.CreateScheduleResponse{ - Status: msgs.Status{ - Code: msgs.Error, - Msg: err.Error(), - }, - Results: make([]string, 0), - } - json.NewEncoder(w).Encode(resp) - return - } - - resp := CreateSchedule(&request, ns) - json.NewEncoder(w).Encode(resp) -} - -func DeleteScheduleHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /scheduledelete scheduleservice scheduledelete - /*``` - Delete a cron-like schedule - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Delete Schedule Request" - // in: "body" - // schema: - // "$ref": "#/definitions/DeleteScheduleRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/DeleteScheduleResponse" - var err error - var username, ns string - - log.Debug("scheduleservice.DeleteScheduleHandler called") - - var request msgs.DeleteScheduleRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err = apiserver.Authn(apiserver.DELETE_SCHEDULE_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp := &msgs.DeleteScheduleResponse{ - Status: msgs.Status{ - Code: msgs.Error, - Msg: err.Error(), - }, - Results: make([]string, 0), - } - json.NewEncoder(w).Encode(resp) - return - - } - - resp := DeleteSchedule(&request, ns) - json.NewEncoder(w).Encode(resp) -} - -func ShowScheduleHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /scheduleshow scheduleservice scheduleshow - /*``` - Show cron-like schedules - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Show Schedule Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ShowScheduleRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowScheduleResponse" - var err error - var username, ns string - - log.Debug("scheduleservice.ShowScheduleHandler called") - - var request msgs.ShowScheduleRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err = apiserver.Authn(apiserver.SHOW_SCHEDULE_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - 
- ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp := &msgs.ShowScheduleResponse{ - Status: msgs.Status{ - Code: msgs.Error, - Msg: err.Error(), - }, - Results: make([]string, 0), - } - - json.NewEncoder(w).Encode(resp) - return - } - - resp := ShowSchedule(&request, ns) - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/statusservice/statusimpl.go b/internal/apiserver/statusservice/statusimpl.go deleted file mode 100644 index 64c55f62e7..0000000000 --- a/internal/apiserver/statusservice/statusimpl.go +++ /dev/null @@ -1,186 +0,0 @@ -package statusservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - "sort" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/api/resource" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -func Status(ns string) msgs.StatusResponse { - return msgs.StatusResponse{ - Result: msgs.StatusDetail{ - DbTags: getDBTags(ns), - NotReady: getNotReady(ns), - NumClaims: getNumClaims(ns), - NumDatabases: getNumDatabases(ns), - Labels: getLabels(ns), - VolumeCap: getVolumeCap(ns), - }, - Status: msgs.Status{Code: msgs.Ok, Msg: ""}, - } -} - -func getNumClaims(ns string) int { - //count number of PVCs with pgremove=true - pvcs, err := apiserver.Clientset. - CoreV1().PersistentVolumeClaims(ns). - List(metav1.ListOptions{LabelSelector: config.LABEL_PGREMOVE}) - if err != nil { - log.Error(err) - return 0 - } - return len(pvcs.Items) -} - -func getNumDatabases(ns string) int { - //count number of Deployments with pg-cluster - deps, err := apiserver.Clientset. - AppsV1().Deployments(ns). - List(metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER}) - if err != nil { - log.Error(err) - return 0 - } - return len(deps.Items) -} - -func getVolumeCap(ns string) string { - //sum all PVCs storage capacity - pvcs, err := apiserver.Clientset. - CoreV1().PersistentVolumeClaims(ns). 
- List(metav1.ListOptions{LabelSelector: config.LABEL_PGREMOVE}) - if err != nil { - log.Error(err) - return "error" - } - - var capTotal int64 - capTotal = 0 - for _, p := range pvcs.Items { - capTotal = capTotal + getClaimCapacity(&p) - } - q := resource.NewQuantity(capTotal, resource.BinarySI) - //log.Infof("capTotal string is %s\n", q.String()) - return q.String() -} - -func getDBTags(ns string) map[string]int { - results := make(map[string]int) - //count all pods with pg-cluster, sum by image tag value - pods, err := apiserver.Clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER}) - if err != nil { - log.Error(err) - return results - } - for _, p := range pods.Items { - for _, c := range p.Spec.Containers { - results[c.Image]++ - } - } - - return results -} - -func getNotReady(ns string) []string { - //show all database pods for each pgcluster that are not yet running - agg := make([]string, 0) - clusterList, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{}) - if err != nil { - clusterList = &crv1.PgclusterList{} - } - - for _, cluster := range clusterList.Items { - - selector := fmt.Sprintf("%s=crunchydata,name=%s", config.LABEL_VENDOR, cluster.Spec.ClusterName) - pods, err := apiserver.Clientset.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - return agg - } else if len(pods.Items) > 1 { - log.Error(fmt.Errorf("Multiple database pods found with the same name using selector while searching for "+ - "databases that are not ready using selector %s", selector)) - return agg - } else if len(pods.Items) == 0 { - log.Error(fmt.Errorf("No database pods found while searching for database pods that are not ready using "+ - "selector %s", selector)) - return agg - } - - pod := pods.Items[0] - for _, stat := range pod.Status.ContainerStatuses { - if !stat.Ready { - agg = append(agg, pod.ObjectMeta.Name) - } - } - } - - return agg -} - -func getClaimCapacity(pvc *v1.PersistentVolumeClaim) int64 { - qty := pvc.Status.Capacity[v1.ResourceStorage] - diskSize := resource.MustParse(qty.String()) - diskSizeInt64, _ := diskSize.AsInt64() - - return diskSizeInt64 - -} - -func getLabels(ns string) []msgs.KeyValue { - var ss []msgs.KeyValue - results := make(map[string]int) - deps, err := apiserver.Clientset. - AppsV1().Deployments(ns). - List(metav1.ListOptions{}) - if err != nil { - log.Error(err) - return ss - } - - for _, dep := range deps.Items { - - for k, v := range dep.ObjectMeta.Labels { - lv := k + "=" + v - if results[lv] == 0 { - results[lv] = 1 - } else { - results[lv] = results[lv] + 1 - } - } - - } - - for k, v := range results { - ss = append(ss, msgs.KeyValue{Key: k, Value: v}) - } - - sort.Slice(ss, func(i, j int) bool { - return ss[i].Value > ss[j].Value - }) - - return ss - -} diff --git a/internal/apiserver/statusservice/statusservice.go b/internal/apiserver/statusservice/statusservice.go deleted file mode 100644 index ecab0047c2..0000000000 --- a/internal/apiserver/statusservice/statusservice.go +++ /dev/null @@ -1,89 +0,0 @@ -package statusservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - //"github.com/gorilla/mux" - "net/http" -) - -// StatusHandler ... -// pgo status mycluster -// pgo status --selector=env=research -func StatusHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /status statusservice status - /*``` - Display namespace wide information for PostgreSQL clusters. - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "version" - // description: "Client Version" - // in: "path" - // type: "string" - // required: true - // - name: "namespace" - // description: "Namespace" - // in: "path" - // type: "string" - // required: true - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/StatusResponse" - var username, ns string - - clientVersion := r.URL.Query().Get("version") - - namespace := r.URL.Query().Get("namespace") - log.Debugf("StatusHandler parameters version [%s] namespace [%s]", clientVersion, namespace) - - username, err := apiserver.Authn(apiserver.STATUS_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - var resp msgs.StatusResponse - if clientVersion != msgs.PGO_VERSION { - resp = msgs.StatusResponse{} - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp = msgs.StatusResponse{} - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = Status(ns) - - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/upgradeservice/upgradeimpl.go b/internal/apiserver/upgradeservice/upgradeimpl.go deleted file mode 100644 index 04758d2d34..0000000000 --- a/internal/apiserver/upgradeservice/upgradeimpl.go +++ /dev/null @@ -1,282 +0,0 @@ -package upgradeservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "fmt" - "io/ioutil" - "regexp" - "strconv" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/labels" -) - -// Currently supported version information for upgrades -const ( - REQUIRED_MAJOR_PGO_VERSION = 4 - MAXIMUM_MINOR_PGO_VERSION = 5 - MINIMUM_MINOR_PGO_VERSION = 1 -) - -// CreateUpgrade accepts the CreateUpgradeRequest performs the necessary validation checks and -// organizes the needed upgrade information before creating the required pgtask -// Command format: pgo upgrade mycluster -func CreateUpgrade(request *msgs.CreateUpgradeRequest, ns, pgouser string) msgs.CreateUpgradeResponse { - response := msgs.CreateUpgradeResponse{} - response.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - response.Results = make([]string, 0) - - log.Debugf("createUpgrade called %v", request) - - if request.Selector != "" { - // use the selector instead of an argument list to filter on - - myselector, err := labels.Parse(request.Selector) - if err != nil { - log.Error("could not parse selector flag") - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - log.Debugf("myselector is %s", myselector.String()) - - // get the clusters list - - clusterList, err := apiserver.Clientset. - CrunchydataV1().Pgclusters(ns). - List(metav1.ListOptions{LabelSelector: request.Selector}) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // check that the cluster can be found - if len(clusterList.Items) == 0 { - log.Debug("no clusters found") - response.Status.Msg = "no clusters found" - return response - } else { - newargs := make([]string, 0) - for _, cluster := range clusterList.Items { - newargs = append(newargs, cluster.Spec.Name) - } - request.Args = newargs - } - } - - for _, clusterName := range request.Args { - log.Debugf("create upgrade called for %s", clusterName) - - // build the pgtask for the upgrade - spec := crv1.PgtaskSpec{} - spec.TaskType = crv1.PgtaskUpgrade - // set the status as created - spec.Status = crv1.PgtaskUpgradeCreated - spec.Parameters = make(map[string]string) - spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName - spec.Parameters[crv1.PgtaskWorkflowSubmittedStatus] = time.Now().Format(time.RFC3339) - - u, err := ioutil.ReadFile("/proc/sys/kernel/random/uuid") - if err != nil { - log.Error(err) - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Could not generate UUID for upgrade task. 
Error: %s", err.Error()) - return response - } - spec.Parameters[crv1.PgtaskWorkflowID] = string(u[:len(u)-1]) - - if request.UpgradeCCPImageTag != "" { - // pass the PostGIS CCP Image Tag provided with the upgrade command - spec.Parameters[config.LABEL_CCP_IMAGE_KEY] = request.UpgradeCCPImageTag - } else { - // pass the CCP Image Tag from the apiserver - spec.Parameters[config.LABEL_CCP_IMAGE_KEY] = apiserver.Pgo.Cluster.CCPImageTag - } - // pass the PGO version for the upgrade - spec.Parameters[config.LABEL_PGO_VERSION] = msgs.PGO_VERSION - // pass the PGO username for use in the updated CR if missing - spec.Parameters[config.LABEL_PGOUSER] = pgouser - - spec.Name = clusterName + "-" + config.LABEL_UPGRADE - spec.Namespace = ns - labels := make(map[string]string) - labels[config.LABEL_PG_CLUSTER] = clusterName - labels[config.LABEL_PGOUSER] = pgouser - labels[crv1.PgtaskWorkflowID] = spec.Parameters[crv1.PgtaskWorkflowID] - - newInstance := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: spec.Name, - Labels: labels, - }, - Spec: spec, - } - - // remove any existing pgtask for this upgrade - task, err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Get(spec.Name, metav1.GetOptions{}) - - if err == nil && task.Spec.Status != crv1.CompletedStatus { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Could not upgrade cluster: there exists an ongoing upgrade task: [%s]. If you believe this is an error, try deleting this pgtask CR.", task.Spec.Name) - return response - } - - // validate the cluster name and ensure autofail is turned off for each cluster. - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(ns).Get(clusterName, metav1.GetOptions{}) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = clusterName + " is not a valid pgcluster" - return response - } - - // for the upgrade procedure, we only upgrade to the current image used by the - // Postgres Operator. As such, we will validate that the Postgres Operator version is - // is supported by the upgrade, unless the --ignore-validation flag is set. - if !supportedOperatorVersion(cl.ObjectMeta.Labels[config.LABEL_PGO_VERSION]) && !request.IgnoreValidation { - response.Status.Code = msgs.Error - response.Status.Msg = "Cannot upgrade " + clusterName + " from Postgres Operator version " + cl.ObjectMeta.Labels[config.LABEL_PGO_VERSION] - return response - } - - // for the upgrade procedure, we only upgrade to the current image used by the - // Postgres Operator. As such, we will validate that the Postgres Operator's configured - // image tag (first value) is compatible (i.e. is the same Major PostgreSQL version) as the - // existing cluster's PG value, unless the --ignore-validation flag is set or the --post-gis-image-tag - // flag is used - if !upgradeTagValid(cl.Spec.CCPImageTag, apiserver.Pgo.Cluster.CCPImageTag) && !request.IgnoreValidation && request.UpgradeCCPImageTag != "" { - log.Debugf("Cannot upgrade from %s to %s. 
Image must be the same base OS and the upgrade must be within the same major PG version.", cl.Spec.CCPImageTag, apiserver.Pgo.Cluster.CCPImageTag) - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("cannot upgrade from %s to %s, upgrade task failed.", cl.Spec.CCPImageTag, apiserver.Pgo.Cluster.CCPImageTag) - return response - } - - // Create an instance of our CRD - _, err = apiserver.Clientset.CrunchydataV1().Pgtasks(ns).Create(newInstance) - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - response.WorkflowID = spec.Parameters[crv1.PgtaskWorkflowID] - return response - } - - msg := "created upgrade task for " + clusterName - response.Results = append(response.Results, msg) - response.WorkflowID = spec.Parameters[crv1.PgtaskWorkflowID] - } - - return response -} - -// supportedOperatorVersion validates the Postgres Operator version -// information for the candidate pgcluster. If this value is in the -// required range, return true so that the upgrade may continue. Otherwise, -// return false. -func supportedOperatorVersion(version string) bool { - // get the Operator version - operatorVersionRegex := regexp.MustCompile(`^(\d)\.(\d)\.(\d)`) - operatorVersion := operatorVersionRegex.FindStringSubmatch(version) - - // if this regex passes, the returned array should always contain - // 4 values. At 0, the full match, then 1-3 are the three defined groups - // If this is not true, the upgrade cannot continue (and we won't want to - // reference potentially missing array items). - if len(operatorVersion) != 4 { - return false - } - - // if the first group does not equal the current major version - // then the upgrade cannot continue - if major, err := strconv.Atoi(operatorVersion[1]); err != nil { - log.Error(err) - return false - } else if major != REQUIRED_MAJOR_PGO_VERSION { - return false - } - - // if the second group does is not in the supported range, - // then the upgrade cannot continue - minor, err := strconv.Atoi(operatorVersion[2]) - if err != nil { - log.Errorf("Cannot convert Postgres Operator's minor version to an integer. Error: %v", err) - return false - } - if minor < MINIMUM_MINOR_PGO_VERSION || minor > MAXIMUM_MINOR_PGO_VERSION { - return false - } - - // If none of the above is true, the upgrade can continue - return true - -} - -// upgradeTagValid compares and validates the PostgreSQL version values stored -// in the image tag of the existing pgcluster CR against the values set in the -// Postgres Operator's configuration -func upgradeTagValid(upgradeFrom, upgradeTo string) bool { - - log.Debugf("Validating upgrade from %s to %s", upgradeFrom, upgradeTo) - - versionRegex := regexp.MustCompile(`-(\d+)\.(\d+)(\.\d+)?-`) - - // get the PostgreSQL version values - upgradeFromValue := versionRegex.FindStringSubmatch(upgradeFrom) - upgradeToValue := versionRegex.FindStringSubmatch(upgradeTo) - - // if this regex passes, the returned array should always contain - // 4 values. At 0, the full match, then 1-3 are the three defined groups - // If this is not true, the upgrade cannot continue (and we won't want to - // reference potentially missing array items). 
- if len(upgradeFromValue) != 4 || len(upgradeToValue) != 4 { - return false - } - - // if the first group does not match (PG version 9, 10, 11, 12 etc), or if a value is - // missing, then the upgrade cannot continue - if upgradeFromValue[1] != upgradeToValue[1] && upgradeToValue[1] != "" { - return false - } - - // if the above check passed, and there is no fourth value, then the PG - // version has only two digits (e.g. PG 10, 11 or 12), meaning this is a minor upgrade. - // After validating the second value is at least equal (this is to allow for multiple executions of the - // upgrade in case an error occurs), the upgrade can continue - if upgradeFromValue[3] == "" && upgradeToValue[3] == "" && upgradeFromValue[2] <= upgradeToValue[2] { - return true - } - - // finally, if the second group matches and is not empty, then, based on the - // possibilities remaining for Operator container image tags, this is either PG 9.5 or 9.6. - // if the second group value matches, and the third group was already validated as not - // empty, check that the third value is at least equal (this is to allow for multiple executions of the - // upgrade in case an error occurs). If so, the upgrade can continue. - if upgradeFromValue[2] == upgradeToValue[2] && upgradeToValue[2] != "" && upgradeFromValue[3] <= upgradeToValue[3] { - return true - } - - // if none of the above conditions are met, a two digit Major version upgrade is likely being - // attempted, or a tag value or general error occurred, so we cannot continue - return false - -} diff --git a/internal/apiserver/upgradeservice/upgradeservice.go b/internal/apiserver/upgradeservice/upgradeservice.go deleted file mode 100644 index dee9c68dc2..0000000000 --- a/internal/apiserver/upgradeservice/upgradeservice.go +++ /dev/null @@ -1,87 +0,0 @@ -package upgradeservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "net/http" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -// CreateUpgradeHandler ... -// pgo upgrade mycluster -func CreateUpgradeHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /upgrades upgradeservice upgrades - /*``` - UPGRADE performs an upgrade on a PostgreSQL cluster from an earlier version - of the Postgres Operator to the current version. - - OTHER UPGRADE DESCRIPTION: - This upgrade will update the scale down any existing replicas while saving the primary - and pgbackrest repo PVCs, then update the existing pgcluster CR and resubmit it for - re-creation. 
- - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create Upgrade Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreateUpgradeRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreateUpgradeResponse" - var ns string - - log.Debug("upgradeservice.CreateUpgradeHandler called") - var request msgs.CreateUpgradeRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - username, err := apiserver.Authn(apiserver.CREATE_UPGRADE_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.CreateUpgradeResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = CreateUpgrade(&request, ns, username) - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/apiserver/userservice/userimpl.go b/internal/apiserver/userservice/userimpl.go deleted file mode 100644 index f3bad44677..0000000000 --- a/internal/apiserver/userservice/userimpl.go +++ /dev/null @@ -1,1201 +0,0 @@ -package userservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "bufio" - "fmt" - "regexp" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/pgadmin" - pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -const ( - errParsingExpiredUsernames = "Error parsing usernames for expired passwords." - errSystemAccountFormat = `"%s" is a system account and cannot be modified.` -) - -const ( - // sqlAlterRole is SQL that allows for the management of a PostgreSQL user - // this is really just the clause and effectively does nothing without - // additional options being supplied to it, but allows for the user to be - // supplied in. Note that the user must be escape to avoid SQL injections - sqlAlterRole = `ALTER ROLE %s` - // sqlCreateRole is SQL that allows a new PostgreSQL user to be created. 
To - // safely use this function, the role name and passsword must be escaped to - // avoid SQL injections, which is handled in the SetPostgreSQLPassword - // function - sqlCreateRole = `CREATE ROLE %s PASSWORD %s LOGIN` - // sqlDisableLoginClause allows a user to disable login to a PostgreSQL - // account - sqlDisableLoginClause = `NOLOGIN` - // sqlDropOwnedBy drops all the objects owned by a PostgreSQL user in a - // specific **database**, not a cluster. As such, this needs to be executed - // multiple times when trying to drop a user from a PostgreSQL cluster. The - // value must be escaped with SQLQuoteIdentifier - sqlDropOwnedBy = "DROP OWNED BY %s CASCADE" - // sqlDropRole drops a PostgreSQL user from a PostgreSQL cluster. This must - // be escaped with SQLQuoteIdentifier - sqlDropRole = "DROP ROLE %s" - // sqlEnableLoginClause allows a user to enable login to a PostgreSQL account - sqlEnableLoginClause = `LOGIN` - // sqlExpiredPasswordClause is the clause that is used to query a set of - // PostgreSQL users that have an expired passwords, regardless of if they can - // log in or not. Note that the value definitely needs to be escaped using - // SQLQuoteLiteral - sqlExpiredPasswordClause = `CURRENT_TIMESTAMP + %s::interval >= rolvaliduntil` - // sqlFindDatabases finds all the database a user can connect to. This is used - // to ensure we can drop all objects for a particular role. Amazingly, we do - // not need to do an escaping here - sqlFindDatabases = `SELECT datname FROM pg_catalog.pg_database WHERE datallowconn;` - // sqlFindUsers returns information about PostgreSQL users that will be in - // a format that we need to parse - sqlFindUsers = `SELECT rolname, rolvaliduntil -FROM pg_catalog.pg_authid -WHERE rolcanlogin` - // sqlOrderByUsername allows one to order a list from pg_authid by the - // username - sqlOrderByUsername = "ORDER BY rolname" - // sqlPasswordClause is the clause that allows on to set the password. This - // needs to be escaped to avoid SQL injections using the SQLQuoteLiteral - // function - sqlPasswordClause = `PASSWORD %s` - // sqlSetDatestyle will ensure consistent date formats as we force the - // datestyle to ISO...which differs from Golang's RFC3339, bu we handle this - // with sqlTimeFormat. - // This should be inserted as part of an instructions sent to PostgreSQL, and - // is only active for that particular query session - sqlSetDatestyle = `SET datestyle TO 'ISO'` - // sqlValidUntilClause is a clause that allows one to pass in a valid until - // timestamp. The value must be escaped to avoid SQL injections, using the - // util.SQLQuoteLiteral function - sqlValidUntilClause = `VALID UNTIL %s` -) - -const ( - // sqlDelimiter is just a pipe - sqlDelimiter = "|" - // sqlTimeFormat is the defauly time format that is used - sqlTimeFormat = "2006-01-02 15:04:05.999999999Z07" -) - -var ( - // sqlCommand is the command that needs to be executed for running SQL - sqlCommand = []string{"psql", "-A", "-t"} -) - -// connInfo .... 
-type connInfo struct { - Username string - Hostip string - Port string - Database string - Password string -} - -// CreatueUser allows one to create a PostgreSQL user in one of more PostgreSQL -// clusters, and provides the abilit to do the following: -// -// - set a password or have one automatically generated -// - set a valid period where the account/password is activ// - setting password expirations -// - and more -// -// This corresponds to the `pgo update user` command -func CreateUser(request *msgs.CreateUserRequest, pgouser string) msgs.CreateUserResponse { - response := msgs.CreateUserResponse{ - Results: []msgs.UserResponseDetail{}, - Status: msgs.Status{ - Code: msgs.Ok, - Msg: "", - }, - } - - log.Debugf("create user called, cluster [%v], selector [%s], all [%t]", - request.Clusters, request.Selector, request.AllFlag) - - // if the username is one of the PostgreSQL system accounts, return here - if util.IsPostgreSQLUserSystemAccount(request.Username) { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf(errSystemAccountFormat, request.Username) - return response - } - - // try to get a list of clusters. if there is an error, return - clusterList, err := getClusterList(request.Namespace, request.Clusters, request.Selector, request.AllFlag) - - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // NOTE: this is a legacy requirement as the uesrname is kept in the name of - // the secret, which requires RFC 1035 compliance. We could probably update - // this check as well to be more accurate, and even more the MustCompile - // statement to being a file-level constant, but for now this is just going - // to sit here and changed in a planned later commit. - re := regexp.MustCompile("^[a-z0-9.-]*$") - if !re.MatchString(request.Username) { - response.Status.Code = msgs.Error - response.Status.Msg = "user name is required to contain lowercase letters, numbers, '.' and '-' only." - return response - } - - // determine if the user passed in a valid password type - passwordType, err := msgs.GetPasswordType(request.PasswordType) - - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // as the password age is uniform throughout the request, we can check for the - // user supplied value and the defaults here - validUntil := generateValidUntilDateString(request.PasswordAgeDays) - sqlValidUntil := fmt.Sprintf(sqlValidUntilClause, util.SQLQuoteLiteral(validUntil)) - - // Return an error if any clusters identified for user creation are in standby mode. Users - // cannot be created in standby clusters because the database is in read-only mode while the - // cluster replicates from a remote primary. 
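To make the escaping step above concrete, here is a minimal sketch of how the VALID UNTIL clause can be assembled from a timestamp string. quoteLiteral is a simplified stand-in for util.SQLQuoteLiteral (it only doubles single quotes, which is the core of PostgreSQL literal escaping), and the timestamp is a made-up value:

package main

import (
	"fmt"
	"strings"
)

// quoteLiteral is a simplified stand-in for util.SQLQuoteLiteral: it doubles
// any embedded single quotes and wraps the value in single quotes.
func quoteLiteral(s string) string {
	return "'" + strings.ReplaceAll(s, "'", "''") + "'"
}

func main() {
	validUntil := "2021-06-01T00:00:00Z" // hypothetical expiration timestamp
	clause := fmt.Sprintf(`VALID UNTIL %s`, quoteLiteral(validUntil))
	fmt.Println(clause) // VALID UNTIL '2021-06-01T00:00:00Z'
}
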
- if hasStandby, standbyClusters := apiserver.PGClusterListHasStandby(clusterList); hasStandby { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Request rejected, unable to create users for clusters "+ - "%s: %s.", strings.Join(standbyClusters, ","), apiserver.ErrStandbyNotAllowed.Error()) - return response - } - - // iterate through each cluster and add the new PostgreSQL role to each pod - for _, cluster := range clusterList.Items { - result := msgs.UserResponseDetail{ - ClusterName: cluster.Spec.ClusterName, - Username: request.Username, - ValidUntil: validUntil, - } - - log.Debugf("creating user [%s] on cluster [%s]", result.Username, cluster.Spec.ClusterName) - - // first, find the primary Pod - pod, err := util.GetPrimaryPod(apiserver.Clientset, &cluster) - - // if the primary Pod cannot be found, we're going to continue on for the - // other clusters, but provide some sort of error message in the response - if err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue - } - - // check if the current cluster is not upgraded to the deployed - // Operator version. If not, do not allow the command to complete - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - response.Status.Code = msgs.Error - response.Status.Msg = cluster.Spec.ClusterName + msgs.UpgradeError - return response - } - - // build up the SQL clause that will be executed. - sql := sqlCreateRole - - // determine if there is a password expiration set. The SQL clause - // is already generated and has its injectable input escaped - if sqlValidUntil != "" { - sql = fmt.Sprintf("%s %s", sql, sqlValidUntil) - } - - // Set the password. We want a password to be generated if the user did not - // set a password - _, password, hashedPassword, err := generatePassword(result.Username, request.Password, passwordType, true, request.PasswordLength) - - // on the off-chance there is an error, record it and continue - if err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue - } - - result.Password = password - - // attempt to set the password! - if err := util.SetPostgreSQLPassword(apiserver.Clientset, apiserver.RESTConfig, pod, - cluster.Spec.Port, result.Username, hashedPassword, sql); err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue - } - - // if this user is "managed" by the Operator, add a secret. 
If there is an - // error, we can fall through as the next step is appending the results - if request.ManagedUser { - if err := util.CreateUserSecret(apiserver.Clientset, cluster.Spec.ClusterName, result.Username, - result.Password, cluster.Spec.Namespace); err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue - } - } - - // if a pgAdmin deployment exists, attempt to add the user to it - if err := updatePgAdmin(&cluster, result.Username, result.Password); err != nil { - log.Error(err) - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue - } - - // append to the results - response.Results = append(response.Results, result) - } - - return response -} - -// DeleteUser deletes a PostgreSQL user from clusters -func DeleteUser(request *msgs.DeleteUserRequest, pgouser string) msgs.DeleteUserResponse { - response := msgs.DeleteUserResponse{ - Results: []msgs.UserResponseDetail{}, - Status: msgs.Status{ - Code: msgs.Ok, - Msg: "", - }, - } - - log.Debugf("delete user called, cluster [%v], selector [%s], all [%t]", - request.Clusters, request.Selector, request.AllFlag) - - // if the username is one of the PostgreSQL system accounts, return here - if util.IsPostgreSQLUserSystemAccount(request.Username) { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf(errSystemAccountFormat, request.Username) - return response - } - - // try to get a list of clusters. if there is an error, return - clusterList, err := getClusterList(request.Namespace, request.Clusters, request.Selector, request.AllFlag) - - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // iterate through each cluster and try to delete the user! -loop: - for _, cluster := range clusterList.Items { - result := msgs.UserResponseDetail{ - ClusterName: cluster.Spec.ClusterName, - Username: request.Username, - } - - log.Debugf("dropping user [%s] from cluster [%s]", result.Username, cluster.Spec.ClusterName) - - // first, find the primary Pod - pod, err := util.GetPrimaryPod(apiserver.Clientset, &cluster) - - // if the primary Pod cannot be found, we're going to continue on for the - // other clusters, but provide some sort of error message in the response - if err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue - } - - // first, get a list of all the databases in the cluster. 
We will need to - // go through each database and drop any object that the user owns - output, err := executeSQL(pod, cluster.Spec.Port, sqlFindDatabases, []string{}) - - // if there is an error, record it and move on as we cannot actually deleted - // the user - if err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue - } - - // create the buffer of all the databases, and iterate through them so - // we can drop individuale objects in them - databases := bufio.NewScanner(strings.NewReader(output)) - - // so we need to parse each of these...and then determine if these are - // managed accounts and make a call to the secret to get...the password - for databases.Scan() { - database := strings.TrimSpace(databases.Text()) - - // set up the sql to drop the user object from the database - sql := fmt.Sprintf(sqlDropOwnedBy, util.SQLQuoteIdentifier(result.Username)) - - // and use the one instance where we need to pass in additional argments - // to the execteSQL function - // if there is an error, we'll make a note of it here, but we have to - // continue in the outer loop - if _, err := executeSQL(pod, cluster.Spec.Port, sql, []string{database}); err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue loop - } - } - - // and if we survie that unscathed, we can now delete the user, which we - // have to escape to avoid SQL injections - sql := fmt.Sprintf(sqlDropRole, util.SQLQuoteIdentifier(result.Username)) - - // exceute the SQL. if there is an error, make note and continue - if _, err := executeSQL(pod, cluster.Spec.Port, sql, []string{}); err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue - } - - // alright, final step: try to delete the user secret. if it does not exist, - // or it fails to delete, we don't care - deleteUserSecret(cluster, result.Username) - - // remove user from pgAdmin, if enabled - qr, err := pgadmin.GetPgAdminQueryRunner(apiserver.Clientset, apiserver.RESTConfig, &cluster) - if err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue - } else if qr != nil { - err = pgadmin.DeleteUser(qr, result.Username) - if err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - response.Results = append(response.Results, result) - continue - } - } - - response.Results = append(response.Results, result) - } - - return response -} - -// ShowUser lets the caller view details about PostgreSQL users across the -// PostgreSQL clusters that are queried. This includes details such as: -// -// - when the password expires -// - if the user is active or not -// -// etc. -func ShowUser(request *msgs.ShowUserRequest) msgs.ShowUserResponse { - response := msgs.ShowUserResponse{ - Results: []msgs.UserResponseDetail{}, - Status: msgs.Status{ - Code: msgs.Ok, - Msg: "", - }, - } - - log.Debugf("show user called, cluster [%v], selector [%s], all [%t]", - request.Clusters, request.Selector, request.AllFlag) - - // first try to get a list of clusters based on the various ways one can get - // them. 
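The deletion flow described above boils down to running DROP OWNED BY in every connectable database and only then dropping the role itself. A hedged sketch of that sequence follows; the database names are illustrative, and quoteIdent is a simplified stand-in for util.SQLQuoteIdentifier:

package main

import (
	"fmt"
	"strings"
)

// quoteIdent approximates util.SQLQuoteIdentifier: it doubles embedded
// double quotes and wraps the name in double quotes.
func quoteIdent(s string) string {
	return `"` + strings.ReplaceAll(s, `"`, `""`) + `"`
}

func main() {
	username := "app_user"                     // hypothetical role
	databases := []string{"postgres", "hippo"} // hypothetical sqlFindDatabases result

	// Objects are owned per database, so DROP OWNED BY has to run in each one...
	for _, db := range databases {
		fmt.Printf("-- connected to %s:\n", db)
		fmt.Printf("DROP OWNED BY %s CASCADE;\n", quoteIdent(username))
	}

	// ...and only afterwards can the role be dropped, once, for the whole cluster.
	fmt.Printf("DROP ROLE %s;\n", quoteIdent(username))
}
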
If if this returns an error, exit here - clusterList, err := getClusterList(request.Namespace, - request.Clusters, request.Selector, request.AllFlag) - - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // to save some computing power, we can determine if the caller is looking - // up if passwords are expiring for users. This value is passed in days, so - // we can get the expiration mark that we are looking for - expirationInterval := "" - - if request.Expired > 0 { - // we need to find a set of user passwords that need to be updated - // set the expiration interval - expirationInterval = fmt.Sprintf("%d days", request.Expired) - } - - // iterate through each cluster and look up information about each user - for _, cluster := range clusterList.Items { - // first, find the primary Pod - pod, err := util.GetPrimaryPod(apiserver.Clientset, &cluster) - - // if the primary Pod cannot be found, we're going to continue on for the - // other clusters, but provide some sort of error message in the response - if err != nil { - log.Error(err) - - result := msgs.UserResponseDetail{ - Error: true, - ErrorMessage: err.Error(), - } - - response.Results = append(response.Results, result) - continue - } - - // we need to build out some SQL. Start with the base - sql := fmt.Sprintf("%s; %s", sqlSetDatestyle, sqlFindUsers) - - // determine if we only want to find the users that have expiring passwords - if expirationInterval != "" { - sql = fmt.Sprintf("%s AND %s", sql, - fmt.Sprintf(sqlExpiredPasswordClause, util.SQLQuoteLiteral(expirationInterval))) - } - - // being a bit cute here, but ordering by the role name - sql = fmt.Sprintf("%s %s", sql, sqlOrderByUsername) - - // great, now we can perform the user lookup - output, err := executeSQL(pod, cluster.Spec.Port, sql, []string{}) - - // if there is an error, record it and move on to the next cluster - if err != nil { - log.Error(err) - - result := msgs.UserResponseDetail{ - Error: true, - ErrorMessage: err.Error(), - } - - response.Results = append(response.Results, result) - continue - } - - // get the rows into a buffer and start scanning - rows := bufio.NewScanner(strings.NewReader(output)) - - // the output corresponds to the following pattern: - // "username|validuntil" which corresponds to: - // string|sqlTimeFormat - // - // so we need to parse each of these...and then determine if these are - // managed accounts and make a call to the secret to get...the password - for rows.Scan() { - row := strings.TrimSpace(rows.Text()) - - // split aong the "sqlDelimiter" ("|") to get the 3 values - values := strings.Split(row, sqlDelimiter) - - // if there are not two values, continue on, as this means this is not - // the row we are interested in - if len(values) != 2 { - continue - } - - // before continuing, check to see if this is a system account. - // If it is, check to see that the user requested to view system accounts - if !request.ShowSystemAccounts && util.IsPostgreSQLUserSystemAccount(values[0]) { - continue - } - - // start building a result - result := msgs.UserResponseDetail{ - ClusterName: cluster.Spec.ClusterName, - Username: values[0], - ValidUntil: values[1], - } - - // alright, attempt to get the password if it is "managed"...sigh - // as we are in a loop, this is costly as there are a lot of network calls - // so we may want to either add some concurrency or rethink how the - // managed passwords are stored - // - // We ignore any errors...if the password get set, we add it. 
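Because psql is run with -A -t, each result row arrives as a bare pipe-delimited line, so the parsing described above is a scan-and-split loop. A small self-contained sketch (the sample output is made up):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Hypothetical query output: one "rolname|rolvaliduntil" pair per line.
	output := "hippo|2021-06-01 00:00:00+00\nrhino|infinity\n"

	rows := bufio.NewScanner(strings.NewReader(output))
	for rows.Scan() {
		values := strings.Split(strings.TrimSpace(rows.Text()), "|")
		if len(values) != 2 {
			continue // not a row we are interested in
		}
		fmt.Printf("user=%s validUntil=%s\n", values[0], values[1])
	}
}
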
If not, we - // don't - secretName := fmt.Sprintf(util.UserSecretFormat, result.ClusterName, result.Username) - password, _ := util.GetPasswordFromSecret(apiserver.Clientset, pod.Namespace, secretName) - - if password != "" { - result.Password = password - } - - // add the result - response.Results = append(response.Results, result) - } - } - - return response -} - -// UpdateUser allows one to update a PostgreSQL user across PostgreSQL clusters, -// and provides the ability to perform inline various updates, including: -// -// - resetting passwords -// - disabling accounts -// - setting password expirations -// - and more -// -// This corresponds to the `pgo update user` command -func UpdateUser(request *msgs.UpdateUserRequest, pgouser string) msgs.UpdateUserResponse { - response := msgs.UpdateUserResponse{ - Results: []msgs.UserResponseDetail{}, - Status: msgs.Status{ - Code: msgs.Ok, - }, - } - - log.Debugf("update user called, cluster [%v], selector [%s], all [%t]", - request.Clusters, request.Selector, request.AllFlag) - - // either a username must be set, or the user is updating the passwords for - // accounts that are about to expire - if request.Username == "" && request.Expired == 0 { - response.Status.Code = msgs.Error - response.Status.Msg = "Either --username or --expired or must be set." - return response - } - - // if this involes updating a specific PostgreSQL account, and it is a system - // account, return here - if request.Username != "" && util.IsPostgreSQLUserSystemAccount(request.Username) { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf(errSystemAccountFormat, request.Username) - return response - } - - // determine if the user passed in a valid password type - if _, err := msgs.GetPasswordType(request.PasswordType); err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // try to get a list of clusters. if there is an error, return - clusterList, err := getClusterList(request.Namespace, request.Clusters, request.Selector, request.AllFlag) - - if err != nil { - response.Status.Code = msgs.Error - response.Status.Msg = err.Error() - return response - } - - // Return an error if any clusters identified for the user updare are in standby mode. Users - // cannot be updated in standby clusters because the database is in read-only mode while the - // cluster replicates from a remote primary - if hasStandby, standbyClusters := apiserver.PGClusterListHasStandby(clusterList); hasStandby { - response.Status.Code = msgs.Error - response.Status.Msg = fmt.Sprintf("Request rejected, unable to update users for clusters "+ - "%s: %s.", strings.Join(standbyClusters, ", "), apiserver.ErrStandbyNotAllowed.Error()) - return response - } - - for _, cluster := range clusterList.Items { - var result msgs.UserResponseDetail - - // determine which update user actions needs to be performed - switch { - // determine if any passwords expiring in X days should be updated - // it returns a slice of results, which are then append to the list - case request.Expired > 0: - results := rotateExpiredPasswords(request, &cluster) - response.Results = append(response.Results, results...) - // otherwise, perform a regular "update user" request which covers all the - // other "regular" cases. 
It returns a result, which is append to the list - default: - result = updateUser(request, &cluster) - response.Results = append(response.Results, result) - } - } - - return response -} - -// deleteUserSecret deletes the user secret that stores information like the -// user's password. -// For the purposes of this module, we don't care if this fails. We'll log the -// error in here, but do nothing with it -func deleteUserSecret(cluster crv1.Pgcluster, username string) { - secretName := fmt.Sprintf(util.UserSecretFormat, cluster.Spec.ClusterName, username) - - err := apiserver.Clientset.CoreV1().Secrets(cluster.Spec.Namespace).Delete(secretName, nil) - - if err != nil { - log.Error(err) - } -} - -// executeSQL executes SQL on the primary PostgreSQL Pod. This occurs using the -// Kubernetes exec function, which allows us to perform the request over -// a PostgreSQL connection that's authenticated with peer authentication -func executeSQL(pod *v1.Pod, port, sql string, extraCommandArgs []string) (string, error) { - command := sqlCommand - - // add the port - command = append(command, "-p", port) - - // add any extra arguments - command = append(command, extraCommandArgs...) - - // execute into the primary pod to run the query - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(apiserver.RESTConfig, - apiserver.Clientset, command, - "database", pod.Name, pod.ObjectMeta.Namespace, strings.NewReader(sql)) - - // if there is an error executing the command, which includes the stderr, - // return the error - if err != nil { - return "", err - } else if stderr != "" { - return "", fmt.Errorf(stderr) - } - - return stdout, nil -} - -// generatePassword will return a password that is either set by the user or -// generated based upon a length that is passed in. Additionally, it will return -// the password in a hashed format so it can be saved safely by the PostgreSQL -// server. There is also a boolean parameter that indicates whether or not a -// password was updated: it's set to true if it is -// -// It also includes a boolean parameter to determine whether or not a password -// should be generated, which is helpful in the "update user" workflow. -// -// If both parameters return empty, then this means that no action should be -// taken on updating the password. -// -// A set password takes precedence over a password being generated. if -// "password" is empty, then a password will be generated. If both are set, -// then "password" is used. -// -// Finally, one can specify the "password type" to be generated, which right now -// is either one of MD5 of SCRAM, the two PostgreSQL password authentication -// methods. 
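For context on the two formats mentioned here: PostgreSQL's md5 scheme stores the string "md5" followed by the hex MD5 digest of the password concatenated with the username, while a SCRAM-SHA-256 verifier begins with "SCRAM-SHA-256$". A minimal sketch of the md5 form, using the same sample credentials as the tests further below (this illustrates standard PostgreSQL behavior, not code from this repository):

package main

import (
	"crypto/md5"
	"fmt"
)

func main() {
	username, password := "hippo", "datalake"

	// PostgreSQL md5 authentication stores: "md5" + hex(md5(password + username)).
	sum := md5.Sum([]byte(password + username))
	hashed := fmt.Sprintf("md5%x", sum)

	fmt.Println(hashed) // "md5" prefix plus 32 hex digits
}
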
This will return a hash / verifier that is stored in PostgreSQL -func generatePassword(username, password string, passwordType pgpassword.PasswordType, - generatePassword bool, generatedPasswordLength int) (bool, string, string, error) { - // first, an early exit: nothing is updated - if password == "" && !generatePassword { - return false, "", "", nil - } - - // give precedence to the user customized password - if password == "" && generatePassword { - // Determine if the user passed in a password length, otherwise us the - // default - passwordLength := generatedPasswordLength - - if passwordLength == 0 { - passwordLength = util.GeneratedPasswordLength(apiserver.Pgo.Cluster.PasswordLength) - } - - // generate the password - generatedPassword, err := util.GeneratePassword(passwordLength) - - // if there is an error, return - if err != nil { - return false, "", "", err - } - - password = generatedPassword - } - - // finally, hash the password - postgresPassword, err := pgpassword.NewPostgresPassword(passwordType, username, password) - - if err != nil { - return false, "", "", err - } - - hashedPassword, err := postgresPassword.Build() - - if err != nil { - return false, "", "", err - } - - // return! - return true, password, hashedPassword, nil -} - -// generateValidUntilDateString returns a RFC3339 string that is computed by -// adding the current time on the Operator server with the integer number of -// days that are passed in. If the total number of days passed in is <= 0, then -// it also checks the server configured value. -// -// If it's still less than 0, then the password is considered to be always -// valid and a value of "infinity" is returned -// -// otherwise, it computes the password expiration from the total number of days -func generateValidUntilDateString(validUntilDays int) string { - // if validUntilDays is zero (or less than zero), attempt to set the value - // supplied by the server. If it's still zero, then the user can create a - // password without expiration - if validUntilDays <= 0 { - validUntilDays = util.GeneratedPasswordValidUntilDays(apiserver.Pgo.Cluster.PasswordAgeDays) - - if validUntilDays <= 0 { - return util.SQLValidUntilAlways - } - } - - // okay, this is slightly annoying. 
So to get the total duration in days, we - // need to set up validUntilDays * # hours in the time.Duration function, and then - // multiple it by the value for hours - duration := time.Duration(validUntilDays*24) * time.Hour - - // ok, set the validUntil time and return the correct format - validUntil := time.Now().Add(duration) - - return validUntil.Format(time.RFC3339) -} - -// getClusterList tries to return a list of clusters based on either having an -// argument list of cluster names, a Kubernetes selector, or set to "all" -func getClusterList(namespace string, clusterNames []string, selector string, all bool) (crv1.PgclusterList, error) { - clusterList := crv1.PgclusterList{} - - // see if there are any in one of the three parametes used to return everything - if len(clusterNames) == 0 && selector == "" && !all { - err := fmt.Errorf("either a list of cluster names, a selector, or the all flag needs to be supplied for this comment") - return clusterList, err - } - - // if the all flag is set, let's return all the clusters here and return - if all { - // return the value of cluster list or that of the error here - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).List(metav1.ListOptions{}) - if err == nil { - clusterList = *cl - } - return clusterList, err - } - - // try to build the cluster list based on either the selector or the list - // of arguments...or both. First, start with the selector - if selector != "" { - cl, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).List(metav1.ListOptions{LabelSelector: selector}) - - // if there is an error, return here with an empty cluster list - if err != nil { - return crv1.PgclusterList{}, err - } - clusterList = *cl - } - - // now try to get clusters based specific cluster names - for _, clusterName := range clusterNames { - cluster, err := apiserver.Clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - - // if there is an error, capture it here and return here with an empty list - if err != nil { - return crv1.PgclusterList{}, err - } - - // if successful, append to the cluster list - clusterList.Items = append(clusterList.Items, *cluster) - } - - log.Debugf("clusters founds: [%d]", len(clusterList.Items)) - - // if after all this, there are no clusters found, return an error - if len(clusterList.Items) == 0 { - err := fmt.Errorf("no clusters found") - return clusterList, err - } - - // all set! return the cluster list with error - return clusterList, nil -} - -// rotateExpiredPasswords finds all of the PostgreSQL users in a cluster that can -// login but have their passwords expired or are expring in X days and rotates -// the passwords. This is accomplish in two steps: -// -// 1. Finding all of the non-system accounts and checking for expirations -// 2. Generating a new password and updating each account -func rotateExpiredPasswords(request *msgs.UpdateUserRequest, cluster *crv1.Pgcluster) []msgs.UserResponseDetail { - results := []msgs.UserResponseDetail{} - - log.Debugf("rotate expired passwords on cluster [%s]", cluster.Spec.ClusterName) - - // first, find the primary Pod. 
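The date arithmetic in generateValidUntilDateString above deserves a tiny standalone example, since time.Duration has no day unit and the days therefore have to be converted to hours first; the number of days here is a hypothetical value:

package main

import (
	"fmt"
	"time"
)

func main() {
	validUntilDays := 30 // hypothetical password age in days

	// Convert days to hours, then add the duration to the current time.
	duration := time.Duration(validUntilDays*24) * time.Hour
	validUntil := time.Now().Add(duration)

	fmt.Println(validUntil.Format(time.RFC3339))
}
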
If we can't do that, no rense in continuing - pod, err := util.GetPrimaryPod(apiserver.Clientset, cluster) - - if err != nil { - result := msgs.UserResponseDetail{ - ClusterName: cluster.Spec.ClusterName, - Error: true, - ErrorMessage: err.Error(), - } - results = append(results, result) - return results - } - - // start building the sql, which is the clause for finding users that can - // login - sql := sqlFindUsers - - // we need to find a set of user passwords that need to be updated - // set the expiration interval - expirationInterval := fmt.Sprintf("%d days", request.Expired) - // and then immediately put it into SQL, with appropriate SQL injection - // escaping - sql = fmt.Sprintf("%s AND %s", sql, - fmt.Sprintf(sqlExpiredPasswordClause, util.SQLQuoteLiteral(expirationInterval))) - - // alright, time to find if there are any expired accounts. If this errors, - // then we will abort here - output, err := executeSQL(pod, cluster.Spec.Port, sql, []string{}) - - if err != nil { - result := msgs.UserResponseDetail{ - ClusterName: cluster.Spec.ClusterName, - Error: true, - ErrorMessage: err.Error(), - } - results = append(results, result) - return results - } - - // put the list of usernames into a buffer that we will iterate through - usernames := bufio.NewScanner(strings.NewReader(output)) - - // before we start the loop, prepare for the update to the expiration time. - // We do need to update the expiration time, otherwise these passwords will - // still expire :) - // - // check to see if the user passedin the "never expire" flag, otherwise try - // to update either from the user generated value or the default value (which - // may very well be to not expire) - validUntil := "" - - switch { - case request.PasswordValidAlways: - validUntil = util.SQLValidUntilAlways - default: - validUntil = generateValidUntilDateString(request.PasswordAgeDays) - } - - // iterate through each user name, which will then be used to go through and - // update the password for each user - // Note that the query has the format "username|sqlTimeFormat" so we need - // to parse that below - for usernames.Scan() { - // get the values out of the query - values := strings.Split(strings.TrimSpace(usernames.Text()), "|") - - // if there is not at least one value, just abort here - if len(values) < 1 { - result := msgs.UserResponseDetail{ - Error: true, - ErrorMessage: errParsingExpiredUsernames, - } - results = append(results, result) - continue - } - - // otherwise, we can safely set the username - username := values[0] - - // start building a result. The Username call strips off the newlines and - // other garbage and returns the actual username - result := msgs.UserResponseDetail{ - ClusterName: cluster.Spec.ClusterName, - Username: username, - ValidUntil: validUntil, - } - - // start building the SQL - sql := fmt.Sprintf(sqlAlterRole, util.SQLQuoteIdentifier(result.Username)) - - // get the password type. the error is already evaluated in a called - // function - passwordType, _ := msgs.GetPasswordType(request.PasswordType) - - // generate a new password. Check to see if the user passed in a particular - // length of the password, or passed in a password to rotate (though that - // is not advised...). 
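Putting the pieces above together, the expired-password lookup amounts to a query along these lines; the 7-day window is hypothetical and quoteLiteral is again a simplified stand-in for util.SQLQuoteLiteral:

package main

import (
	"fmt"
	"strings"
)

// quoteLiteral is a simplified stand-in for util.SQLQuoteLiteral.
func quoteLiteral(s string) string {
	return "'" + strings.ReplaceAll(s, "'", "''") + "'"
}

func main() {
	expired := 7 // hypothetical number of days passed via --expired
	interval := fmt.Sprintf("%d days", expired)

	sql := `SELECT rolname, rolvaliduntil FROM pg_catalog.pg_authid WHERE rolcanlogin`
	sql = fmt.Sprintf("%s AND CURRENT_TIMESTAMP + %s::interval >= rolvaliduntil",
		sql, quoteLiteral(interval))

	fmt.Println(sql)
}
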
This forced the password to change - _, password, hashedPassword, err := generatePassword(result.Username, request.Password, passwordType, true, request.PasswordLength) - - // on the off-chance there's an error in generating the password, record it - // and continue - if err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - results = append(results, result) - continue - } - - result.Password = password - sql = fmt.Sprintf("%s %s", sql, - fmt.Sprintf(sqlPasswordClause, util.SQLQuoteLiteral(hashedPassword))) - - // build the "valid until" value into the SQL string - sql = fmt.Sprintf("%s %s", sql, - fmt.Sprintf(sqlValidUntilClause, util.SQLQuoteLiteral(result.ValidUntil))) - - // and this is enough to execute - // if there is an error, record it here. The next step is to continue - // iterating through the loop, and we will continue to do so - if _, err := executeSQL(pod, cluster.Spec.Port, sql, []string{}); err != nil { - result.Error = true - result.ErrorMessage = err.Error() - } - - results = append(results, result) - } - - return results -} - -// updatePgAdmin will attempt to synchronize information in a pgAdmin -// deployment, should one exist. Basically, it adds or updates the credentials -// of a user should there be a pgadmin deploymen associated with this PostgreSQL -// cluster. Returns an error if anything goes wrong -func updatePgAdmin(cluster *crv1.Pgcluster, username, password string) error { - // Sync user to pgAdmin, if enabled - qr, err := pgadmin.GetPgAdminQueryRunner(apiserver.Clientset, apiserver.RESTConfig, cluster) - - // if there is an error, return as such - if err != nil { - return err - } - - // likewise, if there is no pgAdmin associated this cluster, return no error - if qr == nil { - return nil - } - - // proceed onward - // Get service details and prep connection metadata - service, err := apiserver.Clientset.CoreV1().Services(cluster.Namespace).Get(cluster.Name, metav1.GetOptions{}) - if err != nil { - return err - } - - // set up the server entry data - dbService := pgadmin.ServerEntryFromPgService(service, cluster.Name) - dbService.Password = password - - // attempt to set the username/password for this user in the pgadmin - // deployment - if err := pgadmin.SetLoginPassword(qr, username, password); err != nil { - return err - } - - // if the service name for the database is present, also set the cluster - // if it's not set, early exit - if dbService.Name == "" { - return nil - } - - if err := pgadmin.SetClusterConnection(qr, username, dbService); err != nil { - return err - } - - return nil -} - -// updateUser, though perhaps poorly named in context, performs the standard -// "ALTER ROLE" type functionality on a user, which is just updating a single -// user account on a single PostgreSQL cluster. This is in contrast with some -// of the bulk updates that can occur with updating a user (e.g. 
resetting -// expired passwords), which is why it's broken out into its own function -func updateUser(request *msgs.UpdateUserRequest, cluster *crv1.Pgcluster) msgs.UserResponseDetail { - result := msgs.UserResponseDetail{ - ClusterName: cluster.Spec.ClusterName, - Username: request.Username, - } - - log.Debugf("updating user [%s] on cluster [%s]", result.Username, cluster.Spec.ClusterName) - - // first, find the primary Pod - pod, err := util.GetPrimaryPod(apiserver.Clientset, cluster) - - // if the primary Pod cannot be found, we're going to continue on for the - // other clusters, but provide some sort of error message in the response - if err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - return result - } - - // alright, so we can start building up some SQL now, as the other commands - // here can all occur within ALTER ROLE! - // - // We first build it up with the username, being careful to escape the - // identifier to avoid SQL injections :) - sql := fmt.Sprintf(sqlAlterRole, util.SQLQuoteIdentifier(request.Username)) - - // Though we do have an awesome function for setting a PostgreSQL password - // (SetPostgreSQLPassword) the problem is we are going to be adding too much - // to the string here, and we don't always know if the password is being - // updated, which is one of the requirements of the function. So we will - // perform any query execution here in this module - - // Speaking of passwords...let's first determine if the user updated their - // password. See generatePassword for how precedence is given for password - // updates - passwordType, _ := msgs.GetPasswordType(request.PasswordType) - isChanged, password, hashedPassword, err := generatePassword(result.Username, - request.Password, passwordType, request.RotatePassword, request.PasswordLength) - - // in the off-chance there is an error generating the password, record it - // and return - if err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - return result - } - - if isChanged { - result.Password = password - sql = fmt.Sprintf("%s %s", sql, - fmt.Sprintf(sqlPasswordClause, util.SQLQuoteLiteral(hashedPassword))) - - // Sync user to pgAdmin, if enabled - if err := updatePgAdmin(cluster, result.Username, result.Password); err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - return result - } - } - - // now, check to see if the request wants to expire the user's password - // this will leverage the PostgreSQL ability to set a date as "-infinity" - // so that the password is 100% expired - // - // Expiring the user also takes precedence over trying to move the update - // password timeline, which we check for next - // - // Next we check to ensure the user wants to explicitly un-expire a - // password, and/or ensure that the expiration time is unlimited. 
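As background for the expire and never-expire branches below: PostgreSQL treats VALID UNTIL '-infinity' as an already-expired password and VALID UNTIL 'infinity' as one that never expires, which is presumably what util.SQLValidUntilNever and util.SQLValidUntilAlways resolve to. A sketch of the resulting statements, with an illustrative role name:

package main

import "fmt"

func main() {
	role := `"app_user"` // hypothetical role, already identifier-quoted

	// request.ExpireUser: force the password to be expired immediately.
	fmt.Printf("ALTER ROLE %s VALID UNTIL '-infinity';\n", role)

	// request.PasswordValidAlways: make the password valid forever.
	fmt.Printf("ALTER ROLE %s VALID UNTIL 'infinity';\n", role)
}
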
This takes - // precednece over setting an explicitly expiration period, which we check - // for last - switch { - case request.ExpireUser: - // append the VALID UNTIL special clause here for explicitly disallowing - // the user of a password - result.ValidUntil = util.SQLValidUntilNever - case request.PasswordValidAlways: - // append the VALID UNTIL special clause here for explicitly always - // allowing a password - result.ValidUntil = util.SQLValidUntilAlways - case request.PasswordAgeDays > 0: - // Move the user's password expiration date - result.ValidUntil = generateValidUntilDateString(request.PasswordAgeDays) - } - - // if ValidUntil is updated, continue to build the SQL - if result.ValidUntil != "" { - sql = fmt.Sprintf("%s %s", sql, - fmt.Sprintf(sqlValidUntilClause, util.SQLQuoteLiteral(result.ValidUntil))) - } - - // Now, determine if we want to enable or disable the login. Enable takes - // precedence over disable - // None of these have SQL injectionsas they are fixed constants - switch request.LoginState { - case msgs.UpdateUserLoginEnable: - sql = fmt.Sprintf("%s %s", sql, sqlEnableLoginClause) - case msgs.UpdateUserLoginDisable: - sql = fmt.Sprintf("%s %s", sql, sqlDisableLoginClause) - } - - // execute the SQL! if there is an error, return the results - if _, err := executeSQL(pod, cluster.Spec.Port, sql, []string{}); err != nil { - log.Error(err) - - result.Error = true - result.ErrorMessage = err.Error() - - // even though we return in the next line, having an explicit return here - // in case we add any additional logic beyond this point - return result - } - - // If the password did change, it is not updated in the database. If the user - // has a "managed" account (i.e. there is a secret for this user account"), - // we can now updated the value of that password in the secret - if isChanged { - secretName := fmt.Sprintf(util.UserSecretFormat, cluster.Spec.ClusterName, result.Username) - - // only call update user secret if the secret exists - if _, err := apiserver.Clientset.CoreV1().Secrets(cluster.Namespace).Get(secretName, metav1.GetOptions{}); err == nil { - // if we cannot update the user secret, only warn that we cannot do so - if err := util.UpdateUserSecret(apiserver.Clientset, cluster.Spec.ClusterName, - result.Username, result.Password, cluster.Namespace); err != nil { - log.Warn(err) - } - } - } - - return result -} diff --git a/internal/apiserver/userservice/userimpl_test.go b/internal/apiserver/userservice/userimpl_test.go deleted file mode 100644 index 71d3aa5fcf..0000000000 --- a/internal/apiserver/userservice/userimpl_test.go +++ /dev/null @@ -1,163 +0,0 @@ -package userservice - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "strings" - "testing" - - pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password" -) - -func TestGeneratePassword(t *testing.T) { - username := "" - password := "" - passwordType := pgpassword.MD5 - generateNewPassword := false - generatedPasswordLength := 32 - - t.Run("no changes", func(t *testing.T) { - - changed, _, _, err := generatePassword(username, password, passwordType, generateNewPassword, generatedPasswordLength) - - if err != nil { - t.Error(err) - return - } - - if changed { - t.Errorf("password should not be generated if password is empty and generate password is false") - } - }) - - t.Run("generate password", func(t *testing.T) { - generateNewPassword := true - - t.Run("valid", func(t *testing.T) { - changed, newPassword, _, err := generatePassword(username, password, passwordType, generateNewPassword, generatedPasswordLength) - - if err != nil { - t.Error(err) - return - } - - if !changed { - t.Errorf("new password should be returned") - } - - if len(newPassword) != generatedPasswordLength { - t.Errorf("generated password length expected %d actual %d", generatedPasswordLength, len(newPassword)) - } - }) - - t.Run("does not override custom password", func(t *testing.T) { - password := "custom" - changed, newPassword, _, err := generatePassword(username, password, passwordType, generateNewPassword, generatedPasswordLength) - - if err != nil { - t.Error(err) - return - } - - if !changed { - t.Errorf("new password should be returned") - } - - if password != newPassword { - t.Errorf("password should be %q but instead is %q", password, newPassword) - } - }) - - t.Run("password length can be adjusted", func(t *testing.T) { - generatedPasswordLength := 16 - changed, newPassword, _, err := generatePassword(username, password, passwordType, generateNewPassword, generatedPasswordLength) - - if err != nil { - t.Error(err) - return - } - - if !changed { - t.Errorf("new password should be returned") - } - - if len(newPassword) != generatedPasswordLength { - t.Errorf("generated password length expected %d actual %d", generatedPasswordLength, len(newPassword)) - } - }) - - t.Run("should be nonzero length", func(t *testing.T) { - generatedPasswordLength := 0 - changed, newPassword, _, err := generatePassword(username, password, passwordType, generateNewPassword, generatedPasswordLength) - - if err != nil { - t.Error(err) - return - } - - if !changed { - t.Errorf("new password should be returned") - } - - if len(newPassword) == 0 { - t.Error("password length should be greater than 0") - } - }) - }) - - t.Run("hashing", func(t *testing.T) { - username := "hippo" - password := "datalake" - - t.Run("md5", func(t *testing.T) { - changed, _, hashedPassword, err := generatePassword(username, password, - passwordType, generateNewPassword, generatedPasswordLength) - - if err != nil { - t.Error(err) - return - } - - if !changed { - t.Errorf("new password should be returned") - } - - if !strings.HasPrefix(hashedPassword, "md5") && len(hashedPassword) != 32 { - t.Errorf("not a valid md5 hash: %q", hashedPassword) - } - }) - - t.Run("scram-sha-256", func(t *testing.T) { - passwordType := pgpassword.SCRAM - changed, _, hashedPassword, err := generatePassword(username, password, - passwordType, generateNewPassword, generatedPasswordLength) - - if err != nil { - t.Error(err) - return - } - - if !changed { - t.Errorf("new password should be returned") - } - - if !strings.HasPrefix(hashedPassword, "SCRAM-SHA-256$") { - t.Errorf("not a valid scram-sha-256 verifier: 
%q", hashedPassword) - } - }) - }) - -} diff --git a/internal/apiserver/userservice/userservice.go b/internal/apiserver/userservice/userservice.go deleted file mode 100644 index 83994c90fa..0000000000 --- a/internal/apiserver/userservice/userservice.go +++ /dev/null @@ -1,254 +0,0 @@ -package userservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -// UserHandler provides a means to update a PostgreSQL user -// pgo update user -func UpdateUserHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /userupdate userservice userupdate - /*``` - Update a postgres user - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Update User Request" - // in: "body" - // schema: - // "$ref": "#/definitions/UpdateUserRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/UpdateUserResponse" - log.Debug("userservice.UpdateUserHandler called") - - var request msgs.UpdateUserRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - resp := msgs.UpdateUserResponse{} - username, err := apiserver.Authn(apiserver.UPDATE_USER_PERM, w, r) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - _, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = UpdateUser(&request, username) - - json.NewEncoder(w).Encode(resp) -} - -// CreateUserHandler ... 
-// pgo create user -func CreateUserHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /usercreate userservice usercreate - /*``` - Create PostgreSQL user - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Create User Request" - // in: "body" - // schema: - // "$ref": "#/definitions/CreateUserRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreateUserResponse" - log.Debug("userservice.CreateUserHandler called") - username, err := apiserver.Authn(apiserver.CREATE_USER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - var request msgs.CreateUserRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - resp := msgs.CreateUserResponse{} - - _, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - resp = CreateUser(&request, username) - json.NewEncoder(w).Encode(resp) - -} - -// DeleteUserHandler ... -// pgo delete user someuser -// parameters name -// parameters selector -// returns a DeleteUserResponse -func DeleteUserHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /userdelete userservice userdelete - /*``` - Delete PostgreSQL user - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Delete User Request" - // in: "body" - // schema: - // "$ref": "#/definitions/DeleteUserRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/DeleteUserResponse" - var request msgs.DeleteUserRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - resp := msgs.DeleteUserResponse{} - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - log.Debugf("DeleteUserHandler parameters %v", request) - - pgouser, err := apiserver.Authn(apiserver.DELETE_USER_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - _, err = apiserver.GetNamespace(apiserver.Clientset, pgouser, request.Namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - resp = DeleteUser(&request, pgouser) - json.NewEncoder(w).Encode(resp) - -} - -// ShowUserHandler allows one to display information about PostgreSQL uesrs that -// are in a PostgreSQL cluster -func ShowUserHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation POST /usershow userservice usershow - /*``` - Show PostgreSQL user(s) - */ - // --- - // produces: - // - application/json - // parameters: - // - name: "Show User Request" - // in: "body" - // schema: - // "$ref": "#/definitions/ShowUserRequest" - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/ShowUserResponse" - var request msgs.ShowUserRequest - _ = json.NewDecoder(r.Body).Decode(&request) - - 
log.Debugf("ShowUserHandler parameters [%v]", request) - - username, err := apiserver.Authn(apiserver.SHOW_SECRETS_PERM, w, r) - if err != nil { - return - } - - // a special authz check here: if the ShowSystemAccounts flag is set, ensure - // the user is authorized to show system accounts - if request.ShowSystemAccounts && - !apiserver.BasicAuthzCheck(username, apiserver.SHOW_SYSTEM_ACCOUNTS_PERM) { - log.Errorf("Authorization Failed %s username=[%s]", apiserver.SHOW_SYSTEM_ACCOUNTS_PERM, username) - http.Error(w, "Not authorized for this apiserver action", 403) - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.ShowUserResponse{} - if request.ClientVersion != msgs.PGO_VERSION { - resp.Status = msgs.Status{Code: msgs.Error, Msg: apiserver.VERSION_MISMATCH_ERROR} - json.NewEncoder(w).Encode(resp) - return - } - - _, err = apiserver.GetNamespace(apiserver.Clientset, username, request.Namespace) - if err != nil { - resp.Status = msgs.Status{Code: msgs.Error, Msg: err.Error()} - json.NewEncoder(w).Encode(resp) - return - } - - resp = ShowUser(&request) - json.NewEncoder(w).Encode(resp) - -} diff --git a/internal/apiserver/versionservice/versionimpl.go b/internal/apiserver/versionservice/versionimpl.go deleted file mode 100644 index d2341d4e93..0000000000 --- a/internal/apiserver/versionservice/versionimpl.go +++ /dev/null @@ -1,40 +0,0 @@ -package versionservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" -) - -// Version ... -// pgo version -func Version() msgs.VersionResponse { - resp := msgs.VersionResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "apiserver version" - resp.Version = msgs.PGO_VERSION - - return resp -} - -func Health() msgs.VersionResponse { - resp := msgs.VersionResponse{} - resp.Status.Code = msgs.Ok - resp.Status.Msg = "healthy" - resp.Version = "healthy" - - return resp -} diff --git a/internal/apiserver/versionservice/versionservice.go b/internal/apiserver/versionservice/versionservice.go deleted file mode 100644 index 49735dff1a..0000000000 --- a/internal/apiserver/versionservice/versionservice.go +++ /dev/null @@ -1,92 +0,0 @@ -package versionservice - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - log "github.com/sirupsen/logrus" - "net/http" -) - -// VersionHandler ... -// pgo version -func VersionHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /version versionservice version - /*``` - - */ - // --- - // produces: - // - application/json - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/VersionResponse" - log.Debug("versionservice.VersionHandler called") - - _, err := apiserver.Authn(apiserver.VERSION_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := Version() - - json.NewEncoder(w).Encode(resp) -} - -// HealthHandler ... - -func HealthHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /health versionservice health - /*``` - - */ - // --- - // produces: - // - application/json - // responses: - // '200': - // description: Output - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := Health() - - json.NewEncoder(w).Encode(resp) -} - -// HealthyHandler follows the health endpoint convention of HTTP/200 and -// body "ok" used by other cloud services, typically on /healthz -func HealthyHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /healthz versionservice healthz - /*``` - - */ - // --- - // produces: - // - text/plain - // responses: - // '200': - // description: "Healthy: server is responding as expected" - w.WriteHeader(http.StatusOK) - w.Write([]byte("ok")) -} diff --git a/internal/apiserver/workflowservice/workflowimpl.go b/internal/apiserver/workflowservice/workflowimpl.go deleted file mode 100644 index 13c5f342a8..0000000000 --- a/internal/apiserver/workflowservice/workflowimpl.go +++ /dev/null @@ -1,55 +0,0 @@ -package workflowservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "errors" - - "github.com/crunchydata/postgres-operator/internal/apiserver" - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// ShowWorkflow ... 
-func ShowWorkflow(id, ns string) (msgs.ShowWorkflowDetail, error) { - - log.Debugf("ShowWorkflow called with id %s", id) - detail := msgs.ShowWorkflowDetail{} - - //get the pgtask for this workflow - - selector := crv1.PgtaskWorkflowID + "=" + id - - taskList, err := apiserver.Clientset.CrunchydataV1().Pgtasks(ns).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return detail, err - } - if len(taskList.Items) > 1 { - return detail, errors.New("more than 1 workflow id found for id " + id) - } - if len(taskList.Items) == 0 { - return detail, errors.New("workflow id NOT found for id " + id) - } - t := taskList.Items[0] - detail.ClusterName = t.Spec.Parameters[config.LABEL_PG_CLUSTER] - detail.Parameters = t.Spec.Parameters - - return detail, err - -} diff --git a/internal/apiserver/workflowservice/workflowservice.go b/internal/apiserver/workflowservice/workflowservice.go deleted file mode 100644 index 81aea9ff98..0000000000 --- a/internal/apiserver/workflowservice/workflowservice.go +++ /dev/null @@ -1,101 +0,0 @@ -package workflowservice - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "github.com/crunchydata/postgres-operator/internal/apiserver" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/gorilla/mux" - log "github.com/sirupsen/logrus" - "net/http" -) - -// ShowWorkflowHandler ... 
-// returns a ShowWorkflowResponse -func ShowWorkflowHandler(w http.ResponseWriter, r *http.Request) { - // swagger:operation GET /workflow/{id} workflowservice workflow - // - // --- - // produces: - // - application/json - // parameters: - // - name: "id" - // description: "Workflow ID" - // in: "path" - // type: "string" - // required: true - // - name: "version" - // description: "Client Version" - // in: "path" - // type: "string" - // required: true - // - name: "namespace" - // description: "Namespace" - // in: "path" - // type: "string" - // required: true - // responses: - // '200': - // description: Output - // schema: - // "$ref": "#/definitions/CreatePolicyResponse" - var err error - var username, ns string - - vars := mux.Vars(r) - - workflowID := vars["id"] - clientVersion := r.URL.Query().Get("version") - namespace := r.URL.Query().Get("namespace") - - log.Debugf("ShowWorkflowHandler parameters version [%s] namespace [%s] id [%s]", clientVersion, namespace, workflowID) - - username, err = apiserver.Authn(apiserver.SHOW_WORKFLOW_PERM, w, r) - if err != nil { - return - } - - w.Header().Set("WWW-Authenticate", `Basic realm="Restricted"`) - w.Header().Set("Content-Type", "application/json") - w.WriteHeader(http.StatusOK) - - resp := msgs.ShowWorkflowResponse{} - resp.Status = msgs.Status{Code: msgs.Ok, Msg: ""} - - if clientVersion != msgs.PGO_VERSION { - resp.Status.Code = msgs.Error - resp.Status.Msg = apiserver.VERSION_MISMATCH_ERROR - json.NewEncoder(w).Encode(resp) - return - } - - ns, err = apiserver.GetNamespace(apiserver.Clientset, username, namespace) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - json.NewEncoder(w).Encode(resp) - return - } - - resp.Results, err = ShowWorkflow(workflowID, ns) - if err != nil { - resp.Status.Code = msgs.Error - resp.Status.Msg = err.Error() - } - - json.NewEncoder(w).Encode(resp) -} diff --git a/internal/bridge/client.go b/internal/bridge/client.go new file mode 100644 index 0000000000..d5ad8470f7 --- /dev/null +++ b/internal/bridge/client.go @@ -0,0 +1,819 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package bridge + +import ( + "bytes" + "context" + "encoding/json" + "errors" + "fmt" + "io" + "net/http" + "net/url" + "strconv" + "time" + + "k8s.io/apimachinery/pkg/util/intstr" + "k8s.io/apimachinery/pkg/util/uuid" + "k8s.io/apimachinery/pkg/util/wait" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const defaultAPI = "https://api.crunchybridge.com" + +var errAuthentication = errors.New("authentication failed") + +type ClientInterface interface { + ListClusters(ctx context.Context, apiKey, teamId string) ([]*ClusterApiResource, error) + CreateCluster(ctx context.Context, apiKey string, clusterRequestPayload *PostClustersRequestPayload) (*ClusterApiResource, error) + DeleteCluster(ctx context.Context, apiKey, id string) (*ClusterApiResource, bool, error) + GetCluster(ctx context.Context, apiKey, id string) (*ClusterApiResource, error) + GetClusterStatus(ctx context.Context, apiKey, id string) (*ClusterStatusApiResource, error) + GetClusterUpgrade(ctx context.Context, apiKey, id string) (*ClusterUpgradeApiResource, error) + UpgradeCluster(ctx context.Context, apiKey, id string, clusterRequestPayload *PostClustersUpgradeRequestPayload) (*ClusterUpgradeApiResource, error) + UpgradeClusterHA(ctx context.Context, apiKey, id, action string) (*ClusterUpgradeApiResource, error) + UpdateCluster(ctx context.Context, apiKey, id string, clusterRequestPayload *PatchClustersRequestPayload) (*ClusterApiResource, error) + GetClusterRole(ctx context.Context, apiKey, clusterId, roleName string) (*ClusterRoleApiResource, error) +} + +type Client struct { + http.Client + wait.Backoff + + BaseURL url.URL + Version string +} + +// BRIDGE API RESPONSE OBJECTS + +// ClusterApiResource is used to hold cluster information received in Bridge API response. 
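+// The json tags follow the fields of the Bridge clusters API, and the raw response
+// body is additionally kept in ResponsePayload so that fields not modeled here are
+// still recorded on the CrunchyBridgeCluster status.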
+type ClusterApiResource struct { + ID string `json:"id,omitempty"` + ClusterGroup *ClusterGroupApiResource `json:"cluster_group,omitempty"` + PrimaryClusterID string `json:"cluster_id,omitempty"` + CPU int64 `json:"cpu,omitempty"` + CreatedAt string `json:"created_at,omitempty"` + DiskUsage *ClusterDiskUsageApiResource `json:"disk_usage,omitempty"` + Environment string `json:"environment,omitempty"` + Host string `json:"host,omitempty"` + IsHA *bool `json:"is_ha,omitempty"` + IsProtected *bool `json:"is_protected,omitempty"` + IsSuspended *bool `json:"is_suspended,omitempty"` + Keychain string `json:"keychain_id,omitempty"` + MaintenanceWindowStart int64 `json:"maintenance_window_start,omitempty"` + MajorVersion int `json:"major_version,omitempty"` + Memory float64 `json:"memory,omitempty"` + ClusterName string `json:"name,omitempty"` + Network string `json:"network_id,omitempty"` + Parent string `json:"parent_id,omitempty"` + Plan string `json:"plan_id,omitempty"` + PostgresVersion intstr.IntOrString `json:"postgres_version_id,omitempty"` + Provider string `json:"provider_id,omitempty"` + Region string `json:"region_id,omitempty"` + Replicas []*ClusterApiResource `json:"replicas,omitempty"` + Storage int64 `json:"storage,omitempty"` + Tailscale *bool `json:"tailscale_active,omitempty"` + Team string `json:"team_id,omitempty"` + LastUpdate string `json:"updated_at,omitempty"` + ResponsePayload v1beta1.SchemalessObject `json:""` +} + +func (c *ClusterApiResource) AddDataToClusterStatus(cluster *v1beta1.CrunchyBridgeCluster) { + cluster.Status.ClusterName = c.ClusterName + cluster.Status.Host = c.Host + cluster.Status.ID = c.ID + cluster.Status.IsHA = c.IsHA + cluster.Status.IsProtected = c.IsProtected + cluster.Status.MajorVersion = c.MajorVersion + cluster.Status.Plan = c.Plan + cluster.Status.Storage = FromGibibytes(c.Storage) + cluster.Status.Responses.Cluster = c.ResponsePayload +} + +type ClusterList struct { + Clusters []*ClusterApiResource `json:"clusters"` +} + +// ClusterDiskUsageApiResource hold information on disk usage for a particular cluster. 
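+// Values are sizes in MB, per the _mb suffixes on the JSON field names.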
+type ClusterDiskUsageApiResource struct { + DiskAvailableMB int64 `json:"disk_available_mb,omitempty"` + DiskTotalSizeMB int64 `json:"disk_total_size_mb,omitempty"` + DiskUsedMB int64 `json:"disk_used_mb,omitempty"` +} + +// ClusterGroupApiResource holds information on a ClusterGroup +type ClusterGroupApiResource struct { + ID string `json:"id,omitempty"` + Clusters []*ClusterApiResource `json:"clusters,omitempty"` + Kind string `json:"kind,omitempty"` + Name string `json:"name,omitempty"` + Network string `json:"network_id,omitempty"` + Provider string `json:"provider_id,omitempty"` + Region string `json:"region_id,omitempty"` + Team string `json:"team_id,omitempty"` +} + +type ClusterStatusApiResource struct { + DiskUsage *ClusterDiskUsageApiResource `json:"disk_usage,omitempty"` + OldestBackup string `json:"oldest_backup_at,omitempty"` + OngoingUpgrade *ClusterUpgradeApiResource `json:"ongoing_upgrade,omitempty"` + State string `json:"state,omitempty"` + ResponsePayload v1beta1.SchemalessObject `json:""` +} + +func (c *ClusterStatusApiResource) AddDataToClusterStatus(cluster *v1beta1.CrunchyBridgeCluster) { + cluster.Status.State = c.State + cluster.Status.Responses.Status = c.ResponsePayload +} + +type ClusterUpgradeApiResource struct { + ClusterID string `json:"cluster_id,omitempty"` + Operations []*v1beta1.UpgradeOperation `json:"operations,omitempty"` + Team string `json:"team_id,omitempty"` + ResponsePayload v1beta1.SchemalessObject `json:""` +} + +func (c *ClusterUpgradeApiResource) AddDataToClusterStatus(cluster *v1beta1.CrunchyBridgeCluster) { + cluster.Status.OngoingUpgrade = c.Operations + cluster.Status.Responses.Upgrade = c.ResponsePayload +} + +type ClusterUpgradeOperationApiResource struct { + Flavor string `json:"flavor,omitempty"` + StartingFrom string `json:"starting_from,omitempty"` + State string `json:"state,omitempty"` +} + +// ClusterRoleApiResource is used for retrieving details on ClusterRole from the Bridge API +type ClusterRoleApiResource struct { + AccountEmail string `json:"account_email"` + AccountId string `json:"account_id"` + ClusterId string `json:"cluster_id"` + Flavor string `json:"flavor"` + Name string `json:"name"` + Password string `json:"password"` + Team string `json:"team_id"` + URI string `json:"uri"` +} + +// ClusterRoleList holds a slice of ClusterRoleApiResource +type ClusterRoleList struct { + Roles []*ClusterRoleApiResource `json:"roles"` +} + +// BRIDGE API REQUEST PAYLOADS + +// PatchClustersRequestPayload is used for updating various properties of an existing cluster. +type PatchClustersRequestPayload struct { + ClusterGroup string `json:"cluster_group_id,omitempty"` + // DashboardSettings *ClusterDashboardSettings `json:"dashboard_settings,omitempty"` + // TODO (dsessler7): Find docs for DashboardSettings and create appropriate struct + Environment string `json:"environment,omitempty"` + IsProtected *bool `json:"is_protected,omitempty"` + MaintenanceWindowStart int64 `json:"maintenance_window_start,omitempty"` + Name string `json:"name,omitempty"` +} + +// PostClustersRequestPayload is used for creating a new cluster. 
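+// Name, Plan, and Team are always serialized (they carry no omitempty tag); the
+// remaining fields are optional and omitted from the request body when empty.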
+type PostClustersRequestPayload struct { + Name string `json:"name"` + Plan string `json:"plan_id"` + Team string `json:"team_id"` + ClusterGroup string `json:"cluster_group_id,omitempty"` + Environment string `json:"environment,omitempty"` + IsHA bool `json:"is_ha,omitempty"` + Keychain string `json:"keychain_id,omitempty"` + Network string `json:"network_id,omitempty"` + PostgresVersion intstr.IntOrString `json:"postgres_version_id,omitempty"` + Provider string `json:"provider_id,omitempty"` + Region string `json:"region_id,omitempty"` + Storage int64 `json:"storage,omitempty"` +} + +// PostClustersUpgradeRequestPayload is used for creating a new cluster upgrade which may include +// changing its plan, upgrading its major version, or increasing its storage size. +type PostClustersUpgradeRequestPayload struct { + Plan string `json:"plan_id,omitempty"` + PostgresVersion intstr.IntOrString `json:"postgres_version_id,omitempty"` + UpgradeStartTime string `json:"starting_from,omitempty"` + Storage int64 `json:"storage,omitempty"` +} + +// PutClustersUpgradeRequestPayload is used for updating an ongoing or scheduled upgrade. +// TODO: Implement the ability to update an upgrade (this isn't currently being used) +type PutClustersUpgradeRequestPayload struct { + Plan string `json:"plan_id,omitempty"` + PostgresVersion intstr.IntOrString `json:"postgres_version_id,omitempty"` + UpgradeStartTime string `json:"starting_from,omitempty"` + Storage int64 `json:"storage,omitempty"` + UseMaintenanceWindow *bool `json:"use_cluster_maintenance_window,omitempty"` +} + +// BRIDGE CLIENT FUNCTIONS AND METHODS + +// NewClient creates a Client with backoff settings that amount to +// ~10 attempts over ~2 minutes. A default is used when apiURL is not +// an acceptable URL. +func NewClient(apiURL, version string) *Client { + // Use the default URL when the argument (1) does not parse at all, or + // (2) has the wrong scheme, or (3) has no hostname. + base, err := url.Parse(apiURL) + if err != nil || (base.Scheme != "http" && base.Scheme != "https") || base.Hostname() == "" { + base, _ = url.Parse(defaultAPI) + } + + return &Client{ + Backoff: wait.Backoff{ + Duration: time.Second, + Factor: 1.6, + Jitter: 0.2, + Steps: 10, + Cap: time.Minute, + }, + BaseURL: *base, + Version: version, + } +} + +// doWithBackoff performs HTTP requests until: +// 1. ctx is cancelled, +// 2. the server returns a status code below 500, "Internal Server Error", or +// 3. the backoff is exhausted. +// +// Be sure to close the [http.Response] Body when the returned error is nil. +// See [http.Client.Do] for more details. +func (c *Client) doWithBackoff( + ctx context.Context, method, path string, params url.Values, body []byte, headers http.Header, +) ( + *http.Response, error, +) { + var response *http.Response + + // Prepare a copy of the passed in headers so we can manipulate them. + if headers = headers.Clone(); headers == nil { + headers = make(http.Header) + } + + // Send a value that identifies this PATCH or POST request so it is safe to + // retry when the server does not respond. 
+ // - https://docs.crunchybridge.com/api-concepts/idempotency/ + if method == http.MethodPatch || method == http.MethodPost { + headers.Set("Idempotency-Key", string(uuid.NewUUID())) + } + + headers.Set("User-Agent", "PGO/"+c.Version) + url := c.BaseURL.JoinPath(path) + if params != nil { + url.RawQuery = params.Encode() + } + urlString := url.String() + + err := wait.ExponentialBackoff(c.Backoff, func() (bool, error) { + // NOTE: The [net/http] package treats an empty [bytes.Reader] the same as nil. + request, err := http.NewRequestWithContext(ctx, method, urlString, bytes.NewReader(body)) + + if err == nil { + request.Header = headers.Clone() + + //nolint:bodyclose // This response is returned to the caller. + response, err = c.Client.Do(request) + } + + // An error indicates there was no response from the server, and the + // request may not have finished. The "Idempotency-Key" header above + // makes it safe to retry in this case. + finished := err == nil + + // When the request finishes with a server error, discard the body and retry. + // - https://docs.crunchybridge.com/api-concepts/getting-started/#status-codes + if finished && response.StatusCode >= 500 { + _ = response.Body.Close() + finished = false + } + + // Stop when the context is cancelled. + return finished, ctx.Err() + }) + + // Discard the response body when there is a timeout from backoff. + if response != nil && err != nil { + _ = response.Body.Close() + } + + // Return the last response, if any. + // Return the cancellation or timeout from backoff, if any. + return response, err +} + +// doWithRetry performs HTTP requests until: +// 1. ctx is cancelled, +// 2. the server returns a status code below 500, "Internal Server Error", +// that is not 429, "Too many requests", or +// 3. the backoff is exhausted. +// +// Be sure to close the [http.Response] Body when the returned error is nil. +// See [http.Client.Do] for more details. +func (c *Client) doWithRetry( + ctx context.Context, method, path string, params url.Values, body []byte, headers http.Header, +) ( + *http.Response, error, +) { + response, err := c.doWithBackoff(ctx, method, path, params, body, headers) + + // Retry the request when the server responds with "Too many requests". + // - https://docs.crunchybridge.com/api-concepts/getting-started/#status-codes + // - https://docs.crunchybridge.com/api-concepts/getting-started/#rate-limiting + for err == nil && response.StatusCode == 429 { + seconds, _ := strconv.Atoi(response.Header.Get("Retry-After")) + + // Only retry when the response indicates how long to wait. + if seconds <= 0 { + break + } + + // Discard the "Too many requests" response body, and retry. + _ = response.Body.Close() + + // Create a channel that sends after the delay indicated by the API. + timer := time.NewTimer(time.Duration(seconds) * time.Second) + defer timer.Stop() + + // Wait for the delay or context cancellation, whichever comes first. + select { + case <-timer.C: + // Try the request again. Check it in the loop condition. + response, err = c.doWithBackoff(ctx, method, path, params, body, headers) + timer.Stop() + + case <-ctx.Done(): + // Exit the loop and return the context cancellation. 
+ err = ctx.Err() + } + } + + return response, err +} + +func (c *Client) CreateAuthObject(ctx context.Context, authn AuthObject) (AuthObject, error) { + var result AuthObject + + response, err := c.doWithRetry(ctx, "POST", "/vendor/operator/auth-objects", nil, nil, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + authn.Secret}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + // 401, Unauthorized + case response.StatusCode == 401: + err = fmt.Errorf("%w: %s", errAuthentication, body) + + default: + //nolint:goerr113 // This is intentionally dynamic. + err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, err +} + +func (c *Client) CreateInstallation(ctx context.Context) (Installation, error) { + var result Installation + + response, err := c.doWithRetry(ctx, "POST", "/vendor/operator/installations", nil, nil, http.Header{ + "Accept": []string{"application/json"}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. + err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, err +} + +// CRUNCHYBRIDGECLUSTER CRUD METHODS + +// ListClusters makes a GET request to the "/clusters" endpoint to retrieve a list of all clusters +// in Bridge that are owned by the team specified by the provided team id. +func (c *Client) ListClusters(ctx context.Context, apiKey, teamId string) ([]*ClusterApiResource, error) { + result := &ClusterList{} + + params := url.Values{} + if len(teamId) > 0 { + params.Add("team_id", teamId) + } + response, err := c.doWithRetry(ctx, "GET", "/clusters", params, nil, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. + err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result.Clusters, err +} + +// CreateCluster makes a POST request to the "/clusters" endpoint thereby creating a cluster +// in Bridge with the settings specified in the request payload. 
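+//
+// A minimal usage sketch; the plan, team id, API key, and context shown here are
+// illustrative assumptions, not values defined by this package:
+//
+//	payload := &PostClustersRequestPayload{Name: "hippo", Plan: "standard-8", Team: teamId}
+//	cluster, err := client.CreateCluster(ctx, apiKey, payload)
+//	if err == nil {
+//		fmt.Println("created Bridge cluster", cluster.ID)
+//	}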
+func (c *Client) CreateCluster( + ctx context.Context, apiKey string, clusterRequestPayload *PostClustersRequestPayload, +) (*ClusterApiResource, error) { + result := &ClusterApiResource{} + + clusterbyte, err := json.Marshal(clusterRequestPayload) + if err != nil { + return result, err + } + + response, err := c.doWithRetry(ctx, "POST", "/clusters", nil, clusterbyte, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + return result, err + } + if err = json.Unmarshal(body, &result.ResponsePayload); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. + err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, err +} + +// DeleteCluster calls the delete endpoint, returning +// +// the cluster, +// whether the cluster is deleted already, +// and an error. +func (c *Client) DeleteCluster(ctx context.Context, apiKey, id string) (*ClusterApiResource, bool, error) { + result := &ClusterApiResource{} + var deletedAlready bool + + response, err := c.doWithRetry(ctx, "DELETE", "/clusters/"+id, nil, nil, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + // Already deleted + // Bridge API returns 410 Gone for previously deleted clusters + // --https://docs.crunchybridge.com/api-concepts/idempotency#delete-semantics + // But also, if we can't find it... + // Maybe if no ID we return already deleted? + case response.StatusCode == 410: + fallthrough + case response.StatusCode == 404: + deletedAlready = true + err = nil + + default: + //nolint:goerr113 // This is intentionally dynamic. + err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, deletedAlready, err +} + +// GetCluster makes a GET request to the "/clusters/" endpoint, thereby retrieving details +// for a given cluster in Bridge specified by the provided cluster id. +func (c *Client) GetCluster(ctx context.Context, apiKey, id string) (*ClusterApiResource, error) { + result := &ClusterApiResource{} + + response, err := c.doWithRetry(ctx, "GET", "/clusters/"+id, nil, nil, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + return result, err + } + if err = json.Unmarshal(body, &result.ResponsePayload); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. 
+ err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, err +} + +// GetClusterStatus makes a GET request to the "/clusters//status" endpoint, thereby retrieving details +// for a given cluster's status in Bridge, specified by the provided cluster id. +func (c *Client) GetClusterStatus(ctx context.Context, apiKey, id string) (*ClusterStatusApiResource, error) { + result := &ClusterStatusApiResource{} + + response, err := c.doWithRetry(ctx, "GET", "/clusters/"+id+"/status", nil, nil, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + return result, err + } + if err = json.Unmarshal(body, &result.ResponsePayload); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. + err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, err +} + +// GetClusterUpgrade makes a GET request to the "/clusters//upgrade" endpoint, thereby retrieving details +// for a given cluster's upgrade status in Bridge, specified by the provided cluster id. +func (c *Client) GetClusterUpgrade(ctx context.Context, apiKey, id string) (*ClusterUpgradeApiResource, error) { + result := &ClusterUpgradeApiResource{} + + response, err := c.doWithRetry(ctx, "GET", "/clusters/"+id+"/upgrade", nil, nil, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + return result, err + } + if err = json.Unmarshal(body, &result.ResponsePayload); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. + err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, err +} + +// UpgradeCluster makes a POST request to the "/clusters//upgrade" endpoint, thereby attempting +// to upgrade certain settings for a given cluster in Bridge. +func (c *Client) UpgradeCluster( + ctx context.Context, apiKey, id string, clusterRequestPayload *PostClustersUpgradeRequestPayload, +) (*ClusterUpgradeApiResource, error) { + result := &ClusterUpgradeApiResource{} + + clusterbyte, err := json.Marshal(clusterRequestPayload) + if err != nil { + return result, err + } + + response, err := c.doWithRetry(ctx, "POST", "/clusters/"+id+"/upgrade", nil, clusterbyte, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + return result, err + } + if err = json.Unmarshal(body, &result.ResponsePayload); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. 
+ err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, err +} + +// UpgradeClusterHA makes a PUT request to the "/clusters//actions/" endpoint, +// where is either "enable-ha" or "disable-ha", thereby attempting to change the +// HA setting for a given cluster in Bridge. +func (c *Client) UpgradeClusterHA(ctx context.Context, apiKey, id, action string) (*ClusterUpgradeApiResource, error) { + result := &ClusterUpgradeApiResource{} + + response, err := c.doWithRetry(ctx, "PUT", "/clusters/"+id+"/actions/"+action, nil, nil, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + return result, err + } + if err = json.Unmarshal(body, &result.ResponsePayload); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. + err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, err +} + +// UpdateCluster makes a PATCH request to the "/clusters/" endpoint, thereby attempting to +// update certain settings for a given cluster in Bridge. +func (c *Client) UpdateCluster( + ctx context.Context, apiKey, id string, clusterRequestPayload *PatchClustersRequestPayload, +) (*ClusterApiResource, error) { + result := &ClusterApiResource{} + + clusterbyte, err := json.Marshal(clusterRequestPayload) + if err != nil { + return result, err + } + + response, err := c.doWithRetry(ctx, "PATCH", "/clusters/"+id, nil, clusterbyte, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + return result, err + } + if err = json.Unmarshal(body, &result.ResponsePayload); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. + err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, err +} + +// GetClusterRole sends a GET request to the "/clusters//roles/" endpoint, thereby retrieving +// Role information for a specific role from a specific cluster in Bridge. +func (c *Client) GetClusterRole(ctx context.Context, apiKey, clusterId, roleName string) (*ClusterRoleApiResource, error) { + result := &ClusterRoleApiResource{} + + response, err := c.doWithRetry(ctx, "GET", "/clusters/"+clusterId+"/roles/"+roleName, nil, nil, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. 
+ err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result, err +} + +// ListClusterRoles sends a GET request to the "/clusters//roles" endpoint thereby retrieving +// a list of all cluster roles for a specific cluster in Bridge. +func (c *Client) ListClusterRoles(ctx context.Context, apiKey, id string) ([]*ClusterRoleApiResource, error) { + result := ClusterRoleList{} + + response, err := c.doWithRetry(ctx, "GET", "/clusters/"+id+"/roles", nil, nil, http.Header{ + "Accept": []string{"application/json"}, + "Authorization": []string{"Bearer " + apiKey}, + }) + + if err == nil { + defer response.Body.Close() + body, _ := io.ReadAll(response.Body) + + switch { + // 2xx, Successful + case response.StatusCode >= 200 && response.StatusCode < 300: + if err = json.Unmarshal(body, &result); err != nil { + err = fmt.Errorf("%w: %s", err, body) + } + + default: + //nolint:goerr113 // This is intentionally dynamic. + err = fmt.Errorf("%v: %s", response.Status, body) + } + } + + return result.Roles, err +} diff --git a/internal/bridge/client_test.go b/internal/bridge/client_test.go new file mode 100644 index 0000000000..28728c701c --- /dev/null +++ b/internal/bridge/client_test.go @@ -0,0 +1,1355 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package bridge + +import ( + "context" + "encoding/json" + "io" + "net/http" + "net/http/httptest" + "net/url" + "testing" + "time" + + gocmp "github.com/google/go-cmp/cmp" + gocmpopts "github.com/google/go-cmp/cmp/cmpopts" + "gotest.tools/v3/assert" + "k8s.io/apimachinery/pkg/util/intstr" + + "github.com/crunchydata/postgres-operator/internal/initialize" +) + +var testApiKey = "9012" +var testTeamId = "5678" + +// TestClientBackoff logs the backoff timing chosen by [NewClient] for use +// with `go test -v`. +func TestClientBackoff(t *testing.T) { + client := NewClient("", "") + var total time.Duration + + for i := 1; i <= 50 && client.Backoff.Steps > 0; i++ { + step := client.Backoff.Step() + total += step + + t.Logf("%02d:%20v%20v", i, step, total) + } +} + +func TestClientURL(t *testing.T) { + assert.Equal(t, defaultAPI, NewClient("", "").BaseURL.String(), + "expected the API constant to parse correctly") + + assert.Equal(t, defaultAPI, NewClient("/path", "").BaseURL.String()) + assert.Equal(t, defaultAPI, NewClient("http://:9999", "").BaseURL.String()) + assert.Equal(t, defaultAPI, NewClient("postgres://localhost", "").BaseURL.String()) + assert.Equal(t, defaultAPI, NewClient("postgres://localhost:5432", "").BaseURL.String()) + + assert.Equal(t, + "http://localhost:12345", NewClient("http://localhost:12345", "").BaseURL.String()) +} + +func TestClientDoWithBackoff(t *testing.T) { + t.Run("Arguments", func(t *testing.T) { + var bodies []string + var requests []http.Request + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + body, _ := io.ReadAll(r.Body) + bodies = append(bodies, string(body)) + requests = append(requests, *r) + + w.WriteHeader(http.StatusOK) + _, _ = w.Write([]byte(`some-response`)) + })) + t.Cleanup(server.Close) + + // Client with one attempt, i.e. no backoff. 
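+		// Steps of 1 disables retries, so exactly one request reaches the test server.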
+ client := NewClient(server.URL, "xyz") + client.Backoff.Steps = 1 + assert.Equal(t, client.BaseURL.String(), server.URL) + + ctx := context.Background() + params := url.Values{} + params.Add("foo", "bar") + response, err := client.doWithBackoff(ctx, + "ANY", "/some/path", params, []byte(`the-body`), + http.Header{"Some": []string{"header"}}) + + assert.NilError(t, err) + assert.Assert(t, response != nil) + t.Cleanup(func() { _ = response.Body.Close() }) + + // Arguments became Request fields, including the client version. + assert.Equal(t, len(requests), 1) + assert.Equal(t, bodies[0], "the-body") + assert.Equal(t, requests[0].Method, "ANY") + assert.Equal(t, requests[0].URL.String(), "/some/path?foo=bar") + assert.DeepEqual(t, requests[0].Header.Values("Some"), []string{"header"}) + assert.DeepEqual(t, requests[0].Header.Values("User-Agent"), []string{"PGO/xyz"}) + + body, _ := io.ReadAll(response.Body) + assert.Equal(t, string(body), "some-response") + }) + + t.Run("Idempotency", func(t *testing.T) { + var bodies []string + var requests []http.Request + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + body, _ := io.ReadAll(r.Body) + bodies = append(bodies, string(body)) + requests = append(requests, *r) + + switch len(requests) { + case 1, 2: + w.WriteHeader(http.StatusBadGateway) + default: + w.WriteHeader(http.StatusNotAcceptable) + } + })) + t.Cleanup(server.Close) + + // Client with brief backoff. + client := NewClient(server.URL, "") + client.Backoff.Duration = time.Millisecond + client.Backoff.Steps = 5 + assert.Equal(t, client.BaseURL.String(), server.URL) + + ctx := context.Background() + response, err := client.doWithBackoff(ctx, + "POST", "/anything", nil, []byte(`any-body`), + http.Header{"Any": []string{"thing"}}) + + assert.NilError(t, err) + assert.Assert(t, response != nil) + assert.NilError(t, response.Body.Close()) + + assert.Equal(t, len(requests), 3, "expected multiple requests") + + // Headers include an Idempotency-Key. + assert.Equal(t, bodies[0], "any-body") + assert.Equal(t, requests[0].Header.Get("Any"), "thing") + assert.Assert(t, requests[0].Header.Get("Idempotency-Key") != "") + + // Requests are identical, including the Idempotency-Key. + assert.Equal(t, bodies[0], bodies[1]) + assert.DeepEqual(t, requests[0], requests[1], + gocmpopts.IgnoreFields(http.Request{}, "Body"), + gocmpopts.IgnoreUnexported(http.Request{})) + + assert.Equal(t, bodies[1], bodies[2]) + assert.DeepEqual(t, requests[1], requests[2], + gocmpopts.IgnoreFields(http.Request{}, "Body"), + gocmpopts.IgnoreUnexported(http.Request{})) + + // Another, identical request gets a new Idempotency-Key. + response, err = client.doWithBackoff(ctx, + "POST", "/anything", nil, []byte(`any-body`), + http.Header{"Any": []string{"thing"}}) + + assert.NilError(t, err) + assert.Assert(t, response != nil) + assert.NilError(t, response.Body.Close()) + + prior := requests[0].Header.Get("Idempotency-Key") + assert.Assert(t, len(requests) > 3) + assert.Assert(t, requests[3].Header.Get("Idempotency-Key") != "") + assert.Assert(t, requests[3].Header.Get("Idempotency-Key") != prior, + "expected a new idempotency key") + }) + + t.Run("Backoff", func(t *testing.T) { + requests := 0 + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + requests++ + w.WriteHeader(http.StatusInternalServerError) + })) + t.Cleanup(server.Close) + + // Client with brief backoff. 
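+		// Millisecond-scale steps let the backoff exhaust quickly against a server
+		// that always returns 500.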
+ client := NewClient(server.URL, "") + client.Backoff.Duration = time.Millisecond + client.Backoff.Steps = 5 + assert.Equal(t, client.BaseURL.String(), server.URL) + + ctx := context.Background() + _, err := client.doWithBackoff(ctx, "POST", "/any", nil, nil, nil) //nolint:bodyclose + assert.ErrorContains(t, err, "timed out waiting") + assert.Assert(t, requests > 0, "expected multiple requests") + }) + + t.Run("Cancellation", func(t *testing.T) { + requests := 0 + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + requests++ + w.WriteHeader(http.StatusServiceUnavailable) + })) + t.Cleanup(server.Close) + + // Client with lots of brief backoff. + client := NewClient(server.URL, "") + client.Backoff.Duration = time.Millisecond + client.Backoff.Steps = 100 + assert.Equal(t, client.BaseURL.String(), server.URL) + + ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond) + t.Cleanup(cancel) + + _, err := client.doWithBackoff(ctx, "POST", "/any", nil, nil, nil) //nolint:bodyclose + assert.ErrorIs(t, err, context.DeadlineExceeded) + assert.Assert(t, requests > 0, "expected multiple requests") + }) +} + +func TestClientDoWithRetry(t *testing.T) { + t.Run("Arguments", func(t *testing.T) { + var bodies []string + var requests []http.Request + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + body, _ := io.ReadAll(r.Body) + bodies = append(bodies, string(body)) + requests = append(requests, *r) + + w.WriteHeader(http.StatusOK) + _, _ = w.Write([]byte(`some-response`)) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "xyz") + assert.Equal(t, client.BaseURL.String(), server.URL) + + ctx := context.Background() + params := url.Values{} + params.Add("foo", "bar") + response, err := client.doWithRetry(ctx, + "ANY", "/some/path", params, []byte(`the-body`), + http.Header{"Some": []string{"header"}}) + + assert.NilError(t, err) + assert.Assert(t, response != nil) + t.Cleanup(func() { _ = response.Body.Close() }) + + // Arguments became Request fields, including the client version. 
+ assert.Equal(t, len(requests), 1) + assert.Equal(t, bodies[0], "the-body") + assert.Equal(t, requests[0].Method, "ANY") + assert.Equal(t, requests[0].URL.String(), "/some/path?foo=bar") + assert.DeepEqual(t, requests[0].Header.Values("Some"), []string{"header"}) + assert.DeepEqual(t, requests[0].Header.Values("User-Agent"), []string{"PGO/xyz"}) + + body, _ := io.ReadAll(response.Body) + assert.Equal(t, string(body), "some-response") + }) + + t.Run("Throttling", func(t *testing.T) { + var bodies []string + var requests []http.Request + var times []time.Time + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + body, _ := io.ReadAll(r.Body) + bodies = append(bodies, string(body)) + requests = append(requests, *r) + times = append(times, time.Now()) + + switch len(requests) { + case 1: + w.Header().Set("Retry-After", "1") + w.WriteHeader(http.StatusTooManyRequests) + default: + w.WriteHeader(http.StatusOK) + } + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + ctx := context.Background() + response, err := client.doWithRetry(ctx, + "POST", "/anything", nil, []byte(`any-body`), + http.Header{"Any": []string{"thing"}}) + + assert.NilError(t, err) + assert.Assert(t, response != nil) + assert.NilError(t, response.Body.Close()) + + assert.Equal(t, len(requests), 2, "expected multiple requests") + + // Headers include an Idempotency-Key. + assert.Equal(t, bodies[0], "any-body") + assert.Equal(t, requests[0].Header.Get("Any"), "thing") + assert.Assert(t, requests[0].Header.Get("Idempotency-Key") != "") + + // Requests are identical, except for the Idempotency-Key. + assert.Equal(t, bodies[0], bodies[1]) + assert.DeepEqual(t, requests[0], requests[1], + gocmpopts.IgnoreFields(http.Request{}, "Body"), + gocmpopts.IgnoreUnexported(http.Request{}), + gocmp.FilterPath( + func(p gocmp.Path) bool { return p.String() == "Header" }, + gocmpopts.IgnoreMapEntries( + func(k string, v []string) bool { return k == "Idempotency-Key" }, + ), + ), + ) + + prior := requests[0].Header.Get("Idempotency-Key") + assert.Assert(t, requests[1].Header.Get("Idempotency-Key") != "") + assert.Assert(t, requests[1].Header.Get("Idempotency-Key") != prior, + "expected a new idempotency key") + + // Requests are delayed according the server's response. + // TODO: Mock the clock for faster tests. 
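+		// The first response carried "Retry-After: 1", so the retried request should
+		// arrive at least one second after the original.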
+ assert.Assert(t, times[0].Add(time.Second).Before(times[1]), + "expected the second request over 1sec after the first") + }) + + t.Run("Cancellation", func(t *testing.T) { + requests := 0 + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + requests++ + w.Header().Set("Retry-After", "5") + w.WriteHeader(http.StatusTooManyRequests) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond) + t.Cleanup(cancel) + + start := time.Now() + _, err := client.doWithRetry(ctx, "POST", "/any", nil, nil, nil) //nolint:bodyclose + assert.ErrorIs(t, err, context.DeadlineExceeded) + assert.Assert(t, time.Since(start) < time.Second) + assert.Equal(t, requests, 1, "expected one request") + }) + + t.Run("UnexpectedResponse", func(t *testing.T) { + for _, tt := range []struct { + Name string + Send func(http.ResponseWriter) + Expect func(testing.TB, http.Response) + }{ + { + Name: "NoHeader", + Send: func(w http.ResponseWriter) { + w.WriteHeader(http.StatusTooManyRequests) + }, + Expect: func(t testing.TB, r http.Response) { + t.Helper() + assert.Equal(t, r.StatusCode, http.StatusTooManyRequests) + }, + }, + { + Name: "ZeroHeader", + Send: func(w http.ResponseWriter) { + w.Header().Set("Retry-After", "0") + w.WriteHeader(http.StatusTooManyRequests) + }, + Expect: func(t testing.TB, r http.Response) { + t.Helper() + assert.Equal(t, r.Header.Get("Retry-After"), "0") + assert.Equal(t, r.StatusCode, http.StatusTooManyRequests) + }, + }, + { + Name: "NegativeHeader", + Send: func(w http.ResponseWriter) { + w.Header().Set("Retry-After", "-10") + w.WriteHeader(http.StatusTooManyRequests) + }, + Expect: func(t testing.TB, r http.Response) { + t.Helper() + assert.Equal(t, r.Header.Get("Retry-After"), "-10") + assert.Equal(t, r.StatusCode, http.StatusTooManyRequests) + }, + }, + { + Name: "TextHeader", + Send: func(w http.ResponseWriter) { + w.Header().Set("Retry-After", "bogus") + w.WriteHeader(http.StatusTooManyRequests) + }, + Expect: func(t testing.TB, r http.Response) { + t.Helper() + assert.Equal(t, r.Header.Get("Retry-After"), "bogus") + assert.Equal(t, r.StatusCode, http.StatusTooManyRequests) + }, + }, + } { + t.Run(tt.Name, func(t *testing.T) { + requests := 0 + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + requests++ + tt.Send(w) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + ctx := context.Background() + response, err := client.doWithRetry(ctx, "POST", "/any", nil, nil, nil) + assert.NilError(t, err) + assert.Assert(t, response != nil) + t.Cleanup(func() { _ = response.Body.Close() }) + + tt.Expect(t, *response) + + assert.Equal(t, requests, 1, "expected no retries") + }) + } + }) +} + +func TestClientCreateAuthObject(t *testing.T) { + t.Run("Arguments", func(t *testing.T) { + var requests []http.Request + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + body, _ := io.ReadAll(r.Body) + assert.Equal(t, len(body), 0) + requests = append(requests, *r) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + ctx := context.Background() + _, _ = client.CreateAuthObject(ctx, AuthObject{Secret: "sesame"}) + + assert.Equal(t, len(requests), 1) + assert.Equal(t, 
requests[0].Header.Get("Authorization"), "Bearer sesame") + }) + + t.Run("Unauthorized", func(t *testing.T) { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusUnauthorized) + _, _ = w.Write([]byte(`some info`)) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err := client.CreateAuthObject(context.Background(), AuthObject{}) + assert.ErrorContains(t, err, "authentication") + assert.ErrorContains(t, err, "some info") + assert.ErrorIs(t, err, errAuthentication) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusNotFound) + _, _ = w.Write([]byte(`some message`)) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err := client.CreateAuthObject(context.Background(), AuthObject{}) + assert.ErrorContains(t, err, "404 Not Found") + assert.ErrorContains(t, err, "some message") + }) + + t.Run("NoResponseBody", func(t *testing.T) { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err := client.CreateAuthObject(context.Background(), AuthObject{}) + assert.ErrorContains(t, err, "unexpected end") + assert.ErrorContains(t, err, "JSON") + }) + + t.Run("ResponseNotJSON", func(t *testing.T) { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write([]byte(`asdf`)) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err := client.CreateAuthObject(context.Background(), AuthObject{}) + assert.ErrorContains(t, err, "invalid") + assert.ErrorContains(t, err, "asdf") + }) +} + +func TestClientCreateInstallation(t *testing.T) { + t.Run("ErrorResponse", func(t *testing.T) { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusNotFound) + _, _ = w.Write([]byte(`any content, any format`)) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err := client.CreateInstallation(context.Background()) + assert.ErrorContains(t, err, "404 Not Found") + assert.ErrorContains(t, err, "any content, any format") + }) + + t.Run("NoResponseBody", func(t *testing.T) { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err := client.CreateInstallation(context.Background()) + assert.ErrorContains(t, err, "unexpected end") + assert.ErrorContains(t, err, "JSON") + }) + + t.Run("ResponseNotJSON", func(t *testing.T) { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write([]byte(`asdf`)) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err := client.CreateInstallation(context.Background()) 
+ assert.ErrorContains(t, err, "invalid") + assert.ErrorContains(t, err, "asdf") + }) +} + +func TestListClusters(t *testing.T) { + responsePayload := &ClusterList{ + Clusters: []*ClusterApiResource{}, + } + firstClusterApiResource := &ClusterApiResource{ + ID: "1234", + } + secondClusterApiResource := &ClusterApiResource{ + ID: "2345", + } + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(responsePayload) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + assert.Equal(t, r.Method, "GET", "Expected GET method") + assert.Equal(t, r.URL.Path, "/clusters", "Expected path to be '/clusters'") + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, "Expected Authorization header to contain api key.") + assert.Equal(t, r.URL.Query()["team_id"][0], testTeamId, "Expected query params to contain team id.") + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.ListClusters(context.Background(), testApiKey, testTeamId) + assert.NilError(t, err) + }) + + t.Run("OkResponseNoClusters", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(responsePayload) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + clusters, err := client.ListClusters(context.Background(), testApiKey, testTeamId) + assert.NilError(t, err) + assert.Equal(t, len(clusters), 0) + }) + + t.Run("OkResponseOneCluster", func(t *testing.T) { + responsePayload.Clusters = append(responsePayload.Clusters, firstClusterApiResource) + responsePayloadJson, err := json.Marshal(responsePayload) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + clusters, err := client.ListClusters(context.Background(), testApiKey, testTeamId) + assert.NilError(t, err) + assert.Equal(t, len(clusters), 1) + assert.Equal(t, clusters[0].ID, responsePayload.Clusters[0].ID) + }) + + t.Run("OkResponseTwoClusters", func(t *testing.T) { + responsePayload.Clusters = append(responsePayload.Clusters, secondClusterApiResource) + responsePayloadJson, err := json.Marshal(responsePayload) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + clusters, err := client.ListClusters(context.Background(), testApiKey, testTeamId) + assert.NilError(t, err) + assert.Equal(t, len(clusters), 2) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(responsePayload) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = 
w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.ListClusters(context.Background(), testApiKey, testTeamId) + assert.Check(t, err != nil) + assert.ErrorContains(t, err, "400 Bad Request") + }) +} + +func TestCreateCluster(t *testing.T) { + clusterApiResource := &ClusterApiResource{ + ClusterName: "test-cluster1", + } + clusterRequestPayload := &PostClustersRequestPayload{ + Name: "test-cluster1", + } + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + var receivedPayload PostClustersRequestPayload + dec := json.NewDecoder(r.Body) + err = dec.Decode(&receivedPayload) + assert.NilError(t, err) + assert.Equal(t, r.Method, "POST", "Expected POST method") + assert.Equal(t, r.URL.Path, "/clusters", "Expected path to be '/clusters'") + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, "Expected Authorization header to contain api key.") + assert.Equal(t, receivedPayload, *clusterRequestPayload) + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.CreateCluster(context.Background(), testApiKey, clusterRequestPayload) + assert.NilError(t, err) + }) + + t.Run("OkResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + newCluster, err := client.CreateCluster(context.Background(), testApiKey, clusterRequestPayload) + assert.NilError(t, err) + assert.Equal(t, newCluster.ClusterName, clusterApiResource.ClusterName) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.CreateCluster(context.Background(), testApiKey, clusterRequestPayload) + assert.Check(t, err != nil) + assert.ErrorContains(t, err, "400 Bad Request") + }) +} + +func TestDeleteCluster(t *testing.T) { + clusterId := "1234" + clusterApiResource := &ClusterApiResource{ + ClusterName: "test-cluster1", + } + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + assert.Equal(t, r.Method, "DELETE", "Expected DELETE method") + assert.Equal(t, r.URL.Path, "/clusters/"+clusterId, "Expected path to be /clusters/"+clusterId) + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, "Expected Authorization header to contain api key.") + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + 
t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, _, err = client.DeleteCluster(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + }) + + t.Run("OkResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + deletedCluster, deletedAlready, err := client.DeleteCluster(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + assert.Equal(t, deletedCluster.ClusterName, clusterApiResource.ClusterName) + assert.Equal(t, deletedAlready, false) + }) + + t.Run("GoneResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusGone) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, deletedAlready, err := client.DeleteCluster(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + assert.Equal(t, deletedAlready, true) + }) + + t.Run("NotFoundResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusNotFound) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, deletedAlready, err := client.DeleteCluster(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + assert.Equal(t, deletedAlready, true) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, _, err = client.DeleteCluster(context.Background(), testApiKey, clusterId) + assert.Check(t, err != nil) + assert.ErrorContains(t, err, "400 Bad Request") + }) +} + +func TestGetCluster(t *testing.T) { + clusterId := "1234" + clusterApiResource := &ClusterApiResource{ + ClusterName: "test-cluster1", + } + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + assert.Equal(t, r.Method, "GET", "Expected GET method") + assert.Equal(t, r.URL.Path, "/clusters/"+clusterId, "Expected path to be /clusters/"+clusterId) + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, "Expected Authorization header to contain api key.") + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, 
client.BaseURL.String(), server.URL) + + _, err = client.GetCluster(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + }) + + t.Run("OkResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + cluster, err := client.GetCluster(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + assert.Equal(t, cluster.ClusterName, clusterApiResource.ClusterName) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.GetCluster(context.Background(), testApiKey, clusterId) + assert.Check(t, err != nil) + assert.ErrorContains(t, err, "400 Bad Request") + }) +} + +func TestGetClusterStatus(t *testing.T) { + clusterId := "1234" + state := "Ready" + + clusterStatusApiResource := &ClusterStatusApiResource{ + State: state, + } + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterStatusApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + assert.Equal(t, r.Method, "GET", "Expected GET method") + assert.Equal(t, r.URL.Path, "/clusters/"+clusterId+"/status", "Expected path to be /clusters/"+clusterId+"/status") + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, "Expected Authorization header to contain api key.") + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.GetClusterStatus(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + }) + + t.Run("OkResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterStatusApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + clusterStatus, err := client.GetClusterStatus(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + assert.Equal(t, clusterStatus.State, state) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterStatusApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.GetClusterStatus(context.Background(), testApiKey, clusterId) + assert.Check(t, err != nil) + 
assert.ErrorContains(t, err, "400 Bad Request") + }) +} + +func TestGetClusterUpgrade(t *testing.T) { + clusterId := "1234" + clusterUpgradeApiResource := &ClusterUpgradeApiResource{ + ClusterID: clusterId, + } + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterUpgradeApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + assert.Equal(t, r.Method, "GET", "Expected GET method") + assert.Equal(t, r.URL.Path, "/clusters/"+clusterId+"/upgrade", "Expected path to be /clusters/"+clusterId+"/upgrade") + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, "Expected Authorization header to contain api key.") + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.GetClusterUpgrade(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + }) + + t.Run("OkResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterUpgradeApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + clusterUpgrade, err := client.GetClusterUpgrade(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + assert.Equal(t, clusterUpgrade.ClusterID, clusterId) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterUpgradeApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.GetClusterUpgrade(context.Background(), testApiKey, clusterId) + assert.Check(t, err != nil) + assert.ErrorContains(t, err, "400 Bad Request") + }) +} + +func TestUpgradeCluster(t *testing.T) { + clusterId := "1234" + clusterUpgradeApiResource := &ClusterUpgradeApiResource{ + ClusterID: clusterId, + } + clusterUpgradeRequestPayload := &PostClustersUpgradeRequestPayload{ + Plan: "standard-8", + PostgresVersion: intstr.FromInt(15), + UpgradeStartTime: "start-time", + Storage: 10, + } + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterUpgradeApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + var receivedPayload PostClustersUpgradeRequestPayload + dec := json.NewDecoder(r.Body) + err = dec.Decode(&receivedPayload) + assert.NilError(t, err) + assert.Equal(t, r.Method, "POST", "Expected POST method") + assert.Equal(t, r.URL.Path, "/clusters/"+clusterId+"/upgrade", "Expected path to be /clusters/"+clusterId+"/upgrade") + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, "Expected Authorization header to contain api key.") + assert.Equal(t, receivedPayload, *clusterUpgradeRequestPayload) + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + 
assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.UpgradeCluster(context.Background(), testApiKey, clusterId, clusterUpgradeRequestPayload) + assert.NilError(t, err) + }) + + t.Run("OkResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterUpgradeApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + clusterUpgrade, err := client.UpgradeCluster(context.Background(), testApiKey, clusterId, clusterUpgradeRequestPayload) + assert.NilError(t, err) + assert.Equal(t, clusterUpgrade.ClusterID, clusterId) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterUpgradeApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.UpgradeCluster(context.Background(), testApiKey, clusterId, clusterUpgradeRequestPayload) + assert.Check(t, err != nil) + assert.ErrorContains(t, err, "400 Bad Request") + }) +} + +func TestUpgradeClusterHA(t *testing.T) { + clusterId := "1234" + action := "enable-ha" + clusterUpgradeApiResource := &ClusterUpgradeApiResource{ + ClusterID: clusterId, + } + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterUpgradeApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + assert.Equal(t, r.Method, "PUT", "Expected PUT method") + assert.Equal(t, r.URL.Path, "/clusters/"+clusterId+"/actions/"+action, + "Expected path to be /clusters/"+clusterId+"/actions/"+action) + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, "Expected Authorization header to contain api key.") + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.UpgradeClusterHA(context.Background(), testApiKey, clusterId, action) + assert.NilError(t, err) + }) + + t.Run("OkResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterUpgradeApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + clusterUpgrade, err := client.UpgradeClusterHA(context.Background(), testApiKey, clusterId, action) + assert.NilError(t, err) + assert.Equal(t, clusterUpgrade.ClusterID, clusterId) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterUpgradeApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, 
"") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.UpgradeClusterHA(context.Background(), testApiKey, clusterId, action) + assert.Check(t, err != nil) + assert.ErrorContains(t, err, "400 Bad Request") + }) +} + +func TestUpdateCluster(t *testing.T) { + clusterId := "1234" + clusterApiResource := &ClusterApiResource{ + ClusterName: "new-cluster-name", + } + clusterUpdateRequestPayload := &PatchClustersRequestPayload{ + IsProtected: initialize.Bool(true), + } + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + var receivedPayload PatchClustersRequestPayload + dec := json.NewDecoder(r.Body) + err = dec.Decode(&receivedPayload) + assert.NilError(t, err) + assert.Equal(t, r.Method, "PATCH", "Expected PATCH method") + assert.Equal(t, r.URL.Path, "/clusters/"+clusterId, "Expected path to be /clusters/"+clusterId) + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, "Expected Authorization header to contain api key.") + assert.Equal(t, *receivedPayload.IsProtected, *clusterUpdateRequestPayload.IsProtected) + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.UpdateCluster(context.Background(), testApiKey, clusterId, clusterUpdateRequestPayload) + assert.NilError(t, err) + }) + + t.Run("OkResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + clusterUpdate, err := client.UpdateCluster(context.Background(), testApiKey, clusterId, clusterUpdateRequestPayload) + assert.NilError(t, err) + assert.Equal(t, clusterUpdate.ClusterName, clusterApiResource.ClusterName) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.UpdateCluster(context.Background(), testApiKey, clusterId, clusterUpdateRequestPayload) + assert.Check(t, err != nil) + assert.ErrorContains(t, err, "400 Bad Request") + }) +} + +func TestGetClusterRole(t *testing.T) { + clusterId := "1234" + roleName := "application" + clusterRoleApiResource := &ClusterRoleApiResource{ + Name: roleName, + } + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterRoleApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + assert.Equal(t, r.Method, "GET", "Expected GET method") + assert.Equal(t, r.URL.Path, "/clusters/"+clusterId+"/roles/"+roleName, + "Expected path to be /clusters/"+clusterId+"/roles/"+roleName) + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, 
"Expected Authorization header to contain api key.") + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.GetClusterRole(context.Background(), testApiKey, clusterId, roleName) + assert.NilError(t, err) + }) + + t.Run("OkResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterRoleApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + clusterRole, err := client.GetClusterRole(context.Background(), testApiKey, clusterId, roleName) + assert.NilError(t, err) + assert.Equal(t, clusterRole.Name, roleName) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(clusterRoleApiResource) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.GetClusterRole(context.Background(), testApiKey, clusterId, roleName) + assert.Check(t, err != nil) + assert.ErrorContains(t, err, "400 Bad Request") + }) +} + +func TestListClusterRoles(t *testing.T) { + clusterId := "1234" + responsePayload := &ClusterRoleList{ + Roles: []*ClusterRoleApiResource{}, + } + applicationClusterRoleApiResource := &ClusterRoleApiResource{} + postgresClusterRoleApiResource := &ClusterRoleApiResource{} + + t.Run("WeSendCorrectData", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(responsePayload) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + assert.Equal(t, r.Method, "GET", "Expected GET method") + assert.Equal(t, r.URL.Path, "/clusters/"+clusterId+"/roles", "Expected path to be '/clusters/%s/roles'") + assert.Equal(t, r.Header.Get("Authorization"), "Bearer "+testApiKey, "Expected Authorization header to contain api key.") + + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.ListClusterRoles(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + }) + + t.Run("OkResponse", func(t *testing.T) { + responsePayload.Roles = append(responsePayload.Roles, applicationClusterRoleApiResource, postgresClusterRoleApiResource) + responsePayloadJson, err := json.Marshal(responsePayload) + assert.NilError(t, err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusOK) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + clusterRoles, err := client.ListClusterRoles(context.Background(), testApiKey, clusterId) + assert.NilError(t, err) + assert.Equal(t, len(clusterRoles), 2) + }) + + t.Run("ErrorResponse", func(t *testing.T) { + responsePayloadJson, err := json.Marshal(responsePayload) + assert.NilError(t, 
err) + + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusBadRequest) + _, _ = w.Write(responsePayloadJson) + })) + t.Cleanup(server.Close) + + client := NewClient(server.URL, "") + assert.Equal(t, client.BaseURL.String(), server.URL) + + _, err = client.ListClusterRoles(context.Background(), testApiKey, clusterId) + assert.Check(t, err != nil) + assert.ErrorContains(t, err, "400 Bad Request") + }) +} diff --git a/internal/bridge/crunchybridgecluster/apply.go b/internal/bridge/crunchybridgecluster/apply.go new file mode 100644 index 0000000000..d77d719d6a --- /dev/null +++ b/internal/bridge/crunchybridgecluster/apply.go @@ -0,0 +1,47 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + "reflect" + + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// patch sends patch to object's endpoint in the Kubernetes API and updates +// object with any returned content. The fieldManager is set to r.Owner, but +// can be overridden in options. +// - https://docs.k8s.io/reference/using-api/server-side-apply/#managers +// +// NOTE: This function is duplicated from a version in the postgrescluster package +func (r *CrunchyBridgeClusterReconciler) patch( + ctx context.Context, object client.Object, + patch client.Patch, options ...client.PatchOption, +) error { + options = append([]client.PatchOption{r.Owner}, options...) + return r.Client.Patch(ctx, object, patch, options...) +} + +// apply sends an apply patch to object's endpoint in the Kubernetes API and +// updates object with any returned content. The fieldManager is set to +// r.Owner and the force parameter is true. +// - https://docs.k8s.io/reference/using-api/server-side-apply/#managers +// - https://docs.k8s.io/reference/using-api/server-side-apply/#conflicts +// +// NOTE: This function is duplicated from a version in the postgrescluster package +func (r *CrunchyBridgeClusterReconciler) apply(ctx context.Context, object client.Object) error { + // Generate an apply-patch by comparing the object to its zero value. + zero := reflect.New(reflect.TypeOf(object).Elem()).Interface() + data, err := client.MergeFrom(zero.(client.Object)).Data(object) + apply := client.RawPatch(client.Apply.Type(), data) + + // Send the apply-patch with force=true. + if err == nil { + err = r.patch(ctx, object, apply, client.ForceOwnership) + } + + return err +} diff --git a/internal/bridge/crunchybridgecluster/crunchybridgecluster_controller.go b/internal/bridge/crunchybridgecluster/crunchybridgecluster_controller.go new file mode 100644 index 0000000000..03d67442be --- /dev/null +++ b/internal/bridge/crunchybridgecluster/crunchybridgecluster_controller.go @@ -0,0 +1,701 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
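A usage sketch for the apply helper above: server-side apply needs apiVersion and kind in the patch body, so a caller populates TypeMeta on the object before handing it to the helper. The method below is hypothetical and only illustrates the shape of such a call (it assumes the usual corev1/metav1 imports in the same package; the Secret name and data are placeholders, not anything the operator actually creates).

// Illustrative only: how a reconciler method might use the apply helper above.
// TypeMeta must be set so the generated apply-patch carries apiVersion and kind.
func (r *CrunchyBridgeClusterReconciler) applyExampleSecret(ctx context.Context, namespace string) error {
	secret := &corev1.Secret{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Secret"},
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: "example-role-secret"},
		StringData: map[string]string{"password": "placeholder"},
	}
	return r.apply(ctx, secret)
}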
+// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + "fmt" + "strings" + "time" + + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/equality" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + "sigs.k8s.io/controller-runtime/pkg/event" + + "github.com/crunchydata/postgres-operator/internal/bridge" + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + pgoRuntime "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// CrunchyBridgeClusterReconciler reconciles a CrunchyBridgeCluster object +type CrunchyBridgeClusterReconciler struct { + client.Client + + Owner client.FieldOwner + + // For this iteration, we will only be setting conditions rather than + // setting conditions and emitting events. That may change in the future, + // so we're leaving this EventRecorder here for now. + // record.EventRecorder + + // NewClient is called each time a new Client is needed. + NewClient func() bridge.ClientInterface +} + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="crunchybridgeclusters",verbs={list,watch} +//+kubebuilder:rbac:groups="",resources="secrets",verbs={list,watch} + +// SetupWithManager sets up the controller with the Manager. +func (r *CrunchyBridgeClusterReconciler) SetupWithManager( + mgr ctrl.Manager, +) error { + return ctrl.NewControllerManagedBy(mgr). + For(&v1beta1.CrunchyBridgeCluster{}). + Owns(&corev1.Secret{}). + // Wake periodically to check Bridge API for all CrunchyBridgeClusters. + // Potentially replace with different requeue times, remove the Watch function + // Smarter: retry after a certain time for each cluster: https://gist.github.com/cbandy/a5a604e3026630c5b08cfbcdfffd2a13 + WatchesRawSource( + pgoRuntime.NewTickerImmediate(5*time.Minute, event.GenericEvent{}, r.Watch()), + ). + // Watch secrets and filter for secrets mentioned by CrunchyBridgeClusters + Watches( + &corev1.Secret{}, + r.watchForRelatedSecret(), + ). + Complete(r) +} + +// The owner reference created by controllerutil.SetControllerReference blocks +// deletion. The OwnerReferencesPermissionEnforcement plugin requires that the +// creator of such a reference have either "delete" permission on the owner or +// "update" permission on the owner's "finalizers" subresource. +// - https://docs.k8s.io/reference/access-authn-authz/admission-controllers/ +// +kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="crunchybridgeclusters/finalizers",verbs={update} + +// setControllerReference sets owner as a Controller OwnerReference on controlled. +// Only one OwnerReference can be a controller, so it returns an error if another +// is already set. 
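A sketch of how this reconciler might be registered with a manager. The function name, Bridge API URL, and the second argument to bridge.NewClient are placeholders, and it is an assumption here that the concrete client returned by bridge.NewClient satisfies bridge.ClientInterface; only fields and methods shown in this file are otherwise relied on.

// Hypothetical wiring from an operator entrypoint.
func setupBridgeClusterController(mgr ctrl.Manager) error {
	reconciler := &CrunchyBridgeClusterReconciler{
		Client: mgr.GetClient(),
		Owner:  "crunchybridgecluster-controller",
		NewClient: func() bridge.ClientInterface {
			// Placeholder URL and version string; the tests later in this patch
			// substitute a TestBridgeClient instead.
			return bridge.NewClient("https://api.crunchybridge.com", "")
		},
	}
	return reconciler.SetupWithManager(mgr)
}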
+func (r *CrunchyBridgeClusterReconciler) setControllerReference( + owner *v1beta1.CrunchyBridgeCluster, controlled client.Object, +) error { + return controllerutil.SetControllerReference(owner, controlled, r.Client.Scheme()) +} + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="crunchybridgeclusters",verbs={get,patch,update} +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="crunchybridgeclusters/status",verbs={patch,update} +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="crunchybridgeclusters/finalizers",verbs={patch,update} +//+kubebuilder:rbac:groups="",resources="secrets",verbs={get} + +// Reconcile does the work to move the current state of the world toward the +// desired state described in a [v1beta1.CrunchyBridgeCluster] identified by req. +func (r *CrunchyBridgeClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { + log := ctrl.LoggerFrom(ctx) + + // Retrieve the crunchybridgecluster from the client cache, if it exists. A deferred + // function below will send any changes to its Status field. + // + // NOTE: No DeepCopy is necessary here because controller-runtime makes a + // copy before returning from its cache. + // - https://github.com/kubernetes-sigs/controller-runtime/issues/1235 + crunchybridgecluster := &v1beta1.CrunchyBridgeCluster{} + err := r.Get(ctx, req.NamespacedName, crunchybridgecluster) + + if err == nil { + // Write any changes to the crunchybridgecluster status on the way out. + before := crunchybridgecluster.DeepCopy() + defer func() { + if !equality.Semantic.DeepEqual(before.Status, crunchybridgecluster.Status) { + status := r.Status().Patch(ctx, crunchybridgecluster, client.MergeFrom(before), r.Owner) + + if err == nil && status != nil { + err = status + } else if status != nil { + log.Error(status, "Patching CrunchyBridgeCluster status") + } + } + }() + } else { + // NotFound cannot be fixed by requeuing so ignore it. During background + // deletion, we receive delete events from crunchybridgecluster's dependents after + // crunchybridgecluster is deleted. + return ctrl.Result{}, client.IgnoreNotFound(err) + } + + // Get and validate connection secret for requests + key, team, err := r.reconcileBridgeConnectionSecret(ctx, crunchybridgecluster) + if err != nil { + log.Error(err, "issue reconciling bridge connection secret") + + // Don't automatically requeue Secret issues. We are watching for + // related secrets, so will requeue when a related secret is touched. + // lint:ignore nilerr Return err as status, no requeue needed + return ctrl.Result{}, nil + } + + // Check for and handle deletion of cluster. Return early if it is being + // deleted or there was an error. Make sure finalizer is added if cluster + // is not being deleted. 
+ if result, err := r.handleDelete(ctx, crunchybridgecluster, key); err != nil { + log.Error(err, "deleting") + return ctrl.Result{}, err + } else if result != nil { + if log := log.V(1); log.Enabled() { + log.Info("deleting", "result", fmt.Sprintf("%+v", *result)) + } + return *result, err + } + + // Wonder if there's a better way to handle adding/checking/removing statuses + // We did something in the upgrade controller + // Exit early if we can't create from this K8s object + // unless this K8s object has been changed (compare ObservedGeneration) + invalid := meta.FindStatusCondition(crunchybridgecluster.Status.Conditions, + v1beta1.ConditionReady) + if invalid != nil && + invalid.Status == metav1.ConditionFalse && + invalid.Reason == "ClusterInvalid" && + invalid.ObservedGeneration == crunchybridgecluster.GetGeneration() { + return ctrl.Result{}, nil + } + + // check for an upgrade error and return until observedGeneration has + // been incremented. + invalidUpgrade := meta.FindStatusCondition(crunchybridgecluster.Status.Conditions, + v1beta1.ConditionUpgrading) + if invalidUpgrade != nil && + invalidUpgrade.Status == metav1.ConditionFalse && + invalidUpgrade.Reason == "UpgradeError" && + invalidUpgrade.ObservedGeneration == crunchybridgecluster.GetGeneration() { + return ctrl.Result{}, nil + } + + // We should only be missing the ID if no create has been issued + // or the create was interrupted and we haven't received the ID. + if crunchybridgecluster.Status.ID == "" { + // Check if a cluster with the same name already exists + controllerResult, err := r.handleDuplicateClusterName(ctx, key, team, crunchybridgecluster) + if err != nil || controllerResult != nil { + return *controllerResult, err + } + + // if we've gotten here then no cluster exists with that name and we're missing the ID, ergo, create cluster + return r.handleCreateCluster(ctx, key, team, crunchybridgecluster), nil + } + + // If we reach this point, our CrunchyBridgeCluster object has an ID, so we want + // to fill in the details for the cluster, cluster status, and cluster upgrades + // from the Bridge API. + + // Get Cluster + err = r.handleGetCluster(ctx, key, crunchybridgecluster) + if err != nil { + return ctrl.Result{}, err + } + + // Get Cluster Status + err = r.handleGetClusterStatus(ctx, key, crunchybridgecluster) + if err != nil { + return ctrl.Result{}, err + } + + // Get Cluster Upgrade + err = r.handleGetClusterUpgrade(ctx, key, crunchybridgecluster) + if err != nil { + return ctrl.Result{}, err + } + + // Reconcile roles and their secrets + err = r.reconcilePostgresRoles(ctx, key, crunchybridgecluster) + if err != nil { + log.Error(err, "issue reconciling postgres user roles/secrets") + return ctrl.Result{}, err + } + + // For now, we skip updating until the upgrade status is cleared. + // For the future, we may want to update in-progress upgrades, + // and for that we will need a way tell that an upgrade in progress + // is the one we want to update. + // Consider: Perhaps add `generation` field to upgrade status? + // Checking this here also means that if an upgrade is requested through the GUI/API + // then we will requeue and wait for it to be done. + // TODO(crunchybridgecluster): Do we want the operator to interrupt + // upgrades created through the GUI/API? 
+ if len(crunchybridgecluster.Status.OngoingUpgrade) != 0 { + return runtime.RequeueWithoutBackoff(3 * time.Minute), nil + } + + // Check if there's an upgrade difference for the three upgradeable fields that hit the upgrade endpoint + // Why PostgresVersion and MajorVersion? Because MajorVersion in the Status is sure to be + // an int of the major version, whereas Status.Responses.Cluster.PostgresVersion might be the ID + if (crunchybridgecluster.Spec.Storage != *crunchybridgecluster.Status.Storage) || + crunchybridgecluster.Spec.Plan != crunchybridgecluster.Status.Plan || + crunchybridgecluster.Spec.PostgresVersion != crunchybridgecluster.Status.MajorVersion { + return r.handleUpgrade(ctx, key, crunchybridgecluster), nil + } + + // Are there diffs between the cluster response from the Bridge API and the spec? + // HA diffs are sent to /clusters/{cluster_id}/actions/[enable|disable]-ha + // so have to know (a) to send and (b) which to send to + if crunchybridgecluster.Spec.IsHA != *crunchybridgecluster.Status.IsHA { + return r.handleUpgradeHA(ctx, key, crunchybridgecluster), nil + } + + // Check if there's a difference in is_protected, name, maintenance_window_start, etc. + // see https://docs.crunchybridge.com/api/cluster#update-cluster + // updates to these fields that hit the PATCH `clusters/` endpoint + if crunchybridgecluster.Spec.IsProtected != *crunchybridgecluster.Status.IsProtected || + crunchybridgecluster.Spec.ClusterName != crunchybridgecluster.Status.ClusterName { + return r.handleUpdate(ctx, key, crunchybridgecluster), nil + } + + log.Info("Reconciled") + // TODO(crunchybridgecluster): do we always want to requeue? Does the Watch mean we + // don't need this, or do we want both? + return runtime.RequeueWithoutBackoff(3 * time.Minute), nil +} + +// reconcileBridgeConnectionSecret looks for the Bridge connection secret specified by the cluster, +// and returns the API key and Team ID found in the secret, or sets conditions and returns an error +// if the secret is invalid. +func (r *CrunchyBridgeClusterReconciler) reconcileBridgeConnectionSecret( + ctx context.Context, crunchybridgecluster *v1beta1.CrunchyBridgeCluster, +) (string, string, error) { + key, team, err := r.GetSecretKeys(ctx, crunchybridgecluster) + if err != nil { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionReady, + Status: metav1.ConditionUnknown, + Reason: "SecretInvalid", + Message: fmt.Sprintf( + "The condition of the cluster is unknown because the secret is invalid: %v", err), + }) + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionUnknown, + ObservedGeneration: crunchybridgecluster.GetGeneration(), + LastTransitionTime: metav1.Time{}, + Reason: "SecretInvalid", + Message: fmt.Sprintf( + "The condition of the upgrade(s) is unknown because the secret is invalid: %v", err), + }) + + return "", "", err + } + + return key, team, err +} + +// handleDuplicateClusterName checks Bridge for any already existing clusters that +// have the same name. It returns (nil, nil) when no cluster is found with the same +// name. It returns a controller result, indicating we should exit the reconcile loop, +// if a cluster with a duplicate name is found. The caller is responsible for +// returning controller result objects and errors to controller-runtime. 
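The duplicate-name handling below hinges on an adoption annotation on the CrunchyBridgeCluster resource. As a hypothetical sketch of the user-side contract (the annotation key is the one referenced in the condition message below; the value is whatever ID the existing Bridge cluster has):

// markForAdoption is illustrative only: a CrunchyBridgeCluster CR adopts an
// existing Bridge cluster with the same name when this annotation carries that
// cluster's Bridge ID.
func markForAdoption(cluster *v1beta1.CrunchyBridgeCluster, bridgeClusterID string) {
	if cluster.Annotations == nil {
		cluster.Annotations = map[string]string{}
	}
	cluster.Annotations[naming.CrunchyBridgeClusterAdoptionAnnotation] = bridgeClusterID
}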
+func (r *CrunchyBridgeClusterReconciler) handleDuplicateClusterName(ctx context.Context, + apiKey, teamId string, crunchybridgecluster *v1beta1.CrunchyBridgeCluster, +) (*ctrl.Result, error) { + log := ctrl.LoggerFrom(ctx) + + clusters, err := r.NewClient().ListClusters(ctx, apiKey, teamId) + if err != nil { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionReady, + Status: metav1.ConditionUnknown, + Reason: "UnknownClusterState", + Message: fmt.Sprintf("Issue listing existing clusters in Bridge: %v", err), + }) + log.Error(err, "issue listing existing clusters in Bridge") + return &ctrl.Result{}, err + } + + for _, cluster := range clusters { + if crunchybridgecluster.Spec.ClusterName == cluster.ClusterName { + // Cluster with the same name exists so check for adoption annotation + adoptionID, annotationExists := crunchybridgecluster.Annotations[naming.CrunchyBridgeClusterAdoptionAnnotation] + if annotationExists && strings.EqualFold(adoptionID, cluster.ID) { + // Annotation is present with correct ID value; adopt cluster by assigning ID to status. + crunchybridgecluster.Status.ID = cluster.ID + // Requeue now that we have a cluster ID assigned + return &ctrl.Result{Requeue: true}, nil + } + + // If we made it here, the adoption annotation either doesn't exist or its value is incorrect. + // The user must either add it or change the name on the CR. + + // Set invalid status condition and create log message. + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionReady, + Status: metav1.ConditionFalse, + Reason: "DuplicateClusterName", + Message: fmt.Sprintf("A cluster with the same name already exists for this team (Team ID: %v). "+ + "Give the CrunchyBridgeCluster CR a unique name, or if you would like to take control of the "+ + "existing cluster, add the 'postgres-operator.crunchydata.com/adopt-bridge-cluster' "+ + "annotation and set its value to the existing cluster's ID (Cluster ID: %v).", teamId, cluster.ID), + }) + + log.Info(fmt.Sprintf("A cluster with the same name already exists for this team (Team ID: %v). 
"+ + "Give the CrunchyBridgeCluster CR a unique name, or if you would like to take control "+ + "of the existing cluster, add the 'postgres-operator.crunchydata.com/adopt-bridge-cluster' "+ + "annotation and set its value to the existing cluster's ID (Cluster ID: %v).", teamId, cluster.ID)) + + // We have an invalid cluster spec so we don't want to requeue + return &ctrl.Result{}, nil + } + } + + return nil, nil +} + +// handleCreateCluster handles creating new Crunchy Bridge Clusters +func (r *CrunchyBridgeClusterReconciler) handleCreateCluster(ctx context.Context, + apiKey, teamId string, crunchybridgecluster *v1beta1.CrunchyBridgeCluster, +) ctrl.Result { + log := ctrl.LoggerFrom(ctx) + + createClusterRequestPayload := &bridge.PostClustersRequestPayload{ + IsHA: crunchybridgecluster.Spec.IsHA, + Name: crunchybridgecluster.Spec.ClusterName, + Plan: crunchybridgecluster.Spec.Plan, + PostgresVersion: intstr.FromInt(crunchybridgecluster.Spec.PostgresVersion), + Provider: crunchybridgecluster.Spec.Provider, + Region: crunchybridgecluster.Spec.Region, + Storage: bridge.ToGibibytes(crunchybridgecluster.Spec.Storage), + Team: teamId, + } + cluster, err := r.NewClient().CreateCluster(ctx, apiKey, createClusterRequestPayload) + if err != nil { + log.Error(err, "issue creating cluster in Bridge") + // TODO(crunchybridgecluster): probably shouldn't set this condition unless response from Bridge + // indicates the payload is wrong + // Otherwise want a different condition + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionReady, + Status: metav1.ConditionFalse, + Reason: "ClusterInvalid", + Message: fmt.Sprintf( + "Cannot create from spec: %v", err), + }) + + // TODO(crunchybridgecluster): If the payload is wrong, we don't want to requeue, so pass nil error + // If the transmission hit a transient problem, we do want to requeue + return ctrl.Result{} + } + crunchybridgecluster.Status.ID = cluster.ID + + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionReady, + Status: metav1.ConditionUnknown, + Reason: "UnknownClusterState", + Message: "The condition of the cluster is unknown.", + }) + + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionUnknown, + Reason: "UnknownUpgradeState", + Message: "The condition of the upgrade(s) is unknown.", + }) + + return runtime.RequeueWithoutBackoff(3 * time.Minute) +} + +// handleGetCluster handles getting the cluster details from Bridge and +// updating the cluster CR's Status accordingly +func (r *CrunchyBridgeClusterReconciler) handleGetCluster(ctx context.Context, + apiKey string, crunchybridgecluster *v1beta1.CrunchyBridgeCluster, +) error { + log := ctrl.LoggerFrom(ctx) + + clusterDetails, err := r.NewClient().GetCluster(ctx, apiKey, crunchybridgecluster.Status.ID) + if err != nil { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionReady, + Status: metav1.ConditionUnknown, + Reason: "UnknownClusterState", + Message: fmt.Sprintf("Issue getting cluster information from Bridge: %v", err), + }) + log.Error(err, "issue getting cluster information from Bridge") + 
return err + } + clusterDetails.AddDataToClusterStatus(crunchybridgecluster) + + return nil +} + +// handleGetClusterStatus handles getting the cluster status from Bridge and +// updating the cluster CR's Status accordingly +func (r *CrunchyBridgeClusterReconciler) handleGetClusterStatus(ctx context.Context, + apiKey string, crunchybridgecluster *v1beta1.CrunchyBridgeCluster, +) error { + log := ctrl.LoggerFrom(ctx) + + clusterStatus, err := r.NewClient().GetClusterStatus(ctx, apiKey, crunchybridgecluster.Status.ID) + if err != nil { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionReady, + Status: metav1.ConditionUnknown, + Reason: "UnknownClusterState", + Message: fmt.Sprintf("Issue getting cluster status from Bridge: %v", err), + }) + crunchybridgecluster.Status.State = "unknown" + log.Error(err, "issue getting cluster status from Bridge") + return err + } + clusterStatus.AddDataToClusterStatus(crunchybridgecluster) + + if clusterStatus.State == "ready" { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionReady, + Status: metav1.ConditionTrue, + Reason: clusterStatus.State, + Message: fmt.Sprintf("Bridge cluster state is %v.", clusterStatus.State), + }) + } else { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionReady, + Status: metav1.ConditionFalse, + Reason: clusterStatus.State, + Message: fmt.Sprintf("Bridge cluster state is %v.", clusterStatus.State), + }) + } + + return nil +} + +// handleGetClusterUpgrade handles getting the ongoing upgrade operations from Bridge and +// updating the cluster CR's Status accordingly +func (r *CrunchyBridgeClusterReconciler) handleGetClusterUpgrade(ctx context.Context, + apiKey string, + crunchybridgecluster *v1beta1.CrunchyBridgeCluster, +) error { + log := ctrl.LoggerFrom(ctx) + + clusterUpgradeDetails, err := r.NewClient().GetClusterUpgrade(ctx, apiKey, crunchybridgecluster.Status.ID) + if err != nil { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionUnknown, + Reason: "UnknownUpgradeState", + Message: fmt.Sprintf("Issue getting cluster upgrade from Bridge: %v", err), + }) + log.Error(err, "issue getting cluster upgrade from Bridge") + return err + } + clusterUpgradeDetails.AddDataToClusterStatus(crunchybridgecluster) + + if len(clusterUpgradeDetails.Operations) != 0 { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionTrue, + Reason: clusterUpgradeDetails.Operations[0].Flavor, + Message: fmt.Sprintf( + "Performing an upgrade of type %v with a state of %v.", + clusterUpgradeDetails.Operations[0].Flavor, clusterUpgradeDetails.Operations[0].State), + }) + } else { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionFalse, + Reason: "NoUpgradesInProgress", + Message: "No upgrades being performed", + }) + } + + return nil +} + +// 
handleUpgrade handles upgrades that hit the "POST /clusters//upgrade" endpoint +func (r *CrunchyBridgeClusterReconciler) handleUpgrade(ctx context.Context, + apiKey string, + crunchybridgecluster *v1beta1.CrunchyBridgeCluster, +) ctrl.Result { + log := ctrl.LoggerFrom(ctx) + + log.Info("Handling upgrade request") + + upgradeRequest := &bridge.PostClustersUpgradeRequestPayload{ + Plan: crunchybridgecluster.Spec.Plan, + PostgresVersion: intstr.FromInt(crunchybridgecluster.Spec.PostgresVersion), + Storage: bridge.ToGibibytes(crunchybridgecluster.Spec.Storage), + } + + clusterUpgrade, err := r.NewClient().UpgradeCluster(ctx, apiKey, + crunchybridgecluster.Status.ID, upgradeRequest) + if err != nil { + // TODO(crunchybridgecluster): consider what errors we might get + // and what different results/requeue times we want to return. + // Currently: don't requeue and wait for user to change spec. + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionFalse, + Reason: "UpgradeError", + Message: fmt.Sprintf( + "Error performing an upgrade: %s", err), + }) + log.Error(err, "Error while attempting cluster upgrade") + return ctrl.Result{} + } + clusterUpgrade.AddDataToClusterStatus(crunchybridgecluster) + + if len(clusterUpgrade.Operations) != 0 { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionTrue, + Reason: clusterUpgrade.Operations[0].Flavor, + Message: fmt.Sprintf( + "Performing an upgrade of type %v with a state of %v.", + clusterUpgrade.Operations[0].Flavor, clusterUpgrade.Operations[0].State), + }) + } + + return runtime.RequeueWithoutBackoff(3 * time.Minute) +} + +// handleUpgradeHA handles upgrades that hit the +// "PUT /clusters//actions/[enable|disable]-ha" endpoint +func (r *CrunchyBridgeClusterReconciler) handleUpgradeHA(ctx context.Context, + apiKey string, + crunchybridgecluster *v1beta1.CrunchyBridgeCluster, +) ctrl.Result { + log := ctrl.LoggerFrom(ctx) + + log.Info("Handling HA change request") + + action := "enable-ha" + if !crunchybridgecluster.Spec.IsHA { + action = "disable-ha" + } + + clusterUpgrade, err := r.NewClient().UpgradeClusterHA(ctx, apiKey, crunchybridgecluster.Status.ID, action) + if err != nil { + // TODO(crunchybridgecluster): consider what errors we might get + // and what different results/requeue times we want to return. + // Currently: don't requeue and wait for user to change spec. 
+ meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionFalse, + Reason: "UpgradeError", + Message: fmt.Sprintf( + "Error performing an HA upgrade: %s", err), + }) + log.Error(err, "Error while attempting cluster HA change") + return ctrl.Result{} + } + clusterUpgrade.AddDataToClusterStatus(crunchybridgecluster) + if len(clusterUpgrade.Operations) != 0 { + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionTrue, + Reason: clusterUpgrade.Operations[0].Flavor, + Message: fmt.Sprintf( + "Performing an upgrade of type %v with a state of %v.", + clusterUpgrade.Operations[0].Flavor, clusterUpgrade.Operations[0].State), + }) + } + + return runtime.RequeueWithoutBackoff(3 * time.Minute) +} + +// handleUpdate handles upgrades that hit the "PATCH /clusters/" endpoint +func (r *CrunchyBridgeClusterReconciler) handleUpdate(ctx context.Context, + apiKey string, + crunchybridgecluster *v1beta1.CrunchyBridgeCluster, +) ctrl.Result { + log := ctrl.LoggerFrom(ctx) + + log.Info("Handling update request") + + updateRequest := &bridge.PatchClustersRequestPayload{ + IsProtected: &crunchybridgecluster.Spec.IsProtected, + Name: crunchybridgecluster.Spec.ClusterName, + } + + clusterUpdate, err := r.NewClient().UpdateCluster(ctx, apiKey, + crunchybridgecluster.Status.ID, updateRequest) + if err != nil { + // TODO(crunchybridgecluster): consider what errors we might get + // and what different results/requeue times we want to return. + // Currently: don't requeue and wait for user to change spec. 
+ meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionFalse, + Reason: "UpgradeError", + Message: fmt.Sprintf( + "Error performing an upgrade: %s", err), + }) + log.Error(err, "Error while attempting cluster update") + return ctrl.Result{} + } + clusterUpdate.AddDataToClusterStatus(crunchybridgecluster) + meta.SetStatusCondition(&crunchybridgecluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: crunchybridgecluster.GetGeneration(), + Type: v1beta1.ConditionUpgrading, + Status: metav1.ConditionTrue, + Reason: "ClusterUpgrade", + Message: fmt.Sprintf( + "An upgrade is occurring, the clusters name is %v and the cluster is protected is %v.", + clusterUpdate.ClusterName, *clusterUpdate.IsProtected), + }) + + return runtime.RequeueWithoutBackoff(3 * time.Minute) +} + +// GetSecretKeys gets the secret and returns the expected API key and team id +// or an error if either of those fields or the Secret are missing +func (r *CrunchyBridgeClusterReconciler) GetSecretKeys( + ctx context.Context, crunchyBridgeCluster *v1beta1.CrunchyBridgeCluster, +) (string, string, error) { + + existing := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{ + Namespace: crunchyBridgeCluster.GetNamespace(), + Name: crunchyBridgeCluster.Spec.Secret, + }} + + err := errors.WithStack( + r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing)) + + if err == nil { + if existing.Data["key"] != nil && existing.Data["team"] != nil { + return string(existing.Data["key"]), string(existing.Data["team"]), nil + } + err = fmt.Errorf("error handling secret; expected to find a key and a team: found key %t, found team %t", + existing.Data["key"] != nil, + existing.Data["team"] != nil) + } + + return "", "", err +} + +// deleteControlled safely deletes object when it is controlled by cluster. +func (r *CrunchyBridgeClusterReconciler) deleteControlled( + ctx context.Context, crunchyBridgeCluster *v1beta1.CrunchyBridgeCluster, object client.Object, +) error { + if metav1.IsControlledBy(object, crunchyBridgeCluster) { + uid := object.GetUID() + version := object.GetResourceVersion() + exactly := client.Preconditions{UID: &uid, ResourceVersion: &version} + + return r.Client.Delete(ctx, object, exactly) + } + + return nil +} diff --git a/internal/bridge/crunchybridgecluster/crunchybridgecluster_controller_test.go b/internal/bridge/crunchybridgecluster/crunchybridgecluster_controller_test.go new file mode 100644 index 0000000000..92d6b58d0e --- /dev/null +++ b/internal/bridge/crunchybridgecluster/crunchybridgecluster_controller_test.go @@ -0,0 +1,834 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
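GetSecretKeys above requires the Secret named by the cluster spec's Secret field to contain both a "key" entry (the Bridge API key) and a "team" entry (the team ID). A self-contained sketch of creating such a Secret with a controller-runtime client follows; all names and values are placeholders, and the test below builds an equivalent Secret in-cluster.

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// createConnectionSecret creates the kind of Secret the reconciler's
// GetSecretKeys expects: the "key" and "team" data entries must both be present.
func createConnectionSecret(ctx context.Context, c client.Client, namespace string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: namespace,
			Name:      "crunchy-bridge-api-key", // referenced by the CR's spec Secret field
		},
		StringData: map[string]string{
			"key":  "bridge-api-key-value", // placeholder API key
			"team": "bridge-team-id",       // placeholder team ID
		},
	}
	return c.Create(ctx, secret)
}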
+// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + "strings" + "testing" + "time" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/bridge" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +var testTeamId = "5678" +var testApiKey = "9012" + +func TestReconcileBridgeConnectionSecret(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + + ns := setupNamespace(t, tClient).Name + cluster := testCluster() + cluster.Namespace = ns + + t.Run("Failure", func(t *testing.T) { + key, team, err := reconciler.reconcileBridgeConnectionSecret(ctx, cluster) + assert.Equal(t, key, "") + assert.Equal(t, team, "") + assert.Check(t, err != nil) + readyCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionReady) + if assert.Check(t, readyCondition != nil) { + assert.Equal(t, readyCondition.Status, metav1.ConditionUnknown) + assert.Equal(t, readyCondition.Reason, "SecretInvalid") + assert.Check(t, cmp.Contains(readyCondition.Message, + "The condition of the cluster is unknown because the secret is invalid:")) + } + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionUnknown) + assert.Equal(t, upgradingCondition.Reason, "SecretInvalid") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "The condition of the upgrade(s) is unknown because the secret is invalid:")) + } + }) + + t.Run("ValidSecretFound", func(t *testing.T) { + secret := &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "crunchy-bridge-api-key", + Namespace: ns, + }, + Data: map[string][]byte{ + "key": []byte(`asdf`), + "team": []byte(`jkl;`), + }, + } + assert.NilError(t, tClient.Create(ctx, secret)) + + key, team, err := reconciler.reconcileBridgeConnectionSecret(ctx, cluster) + assert.Equal(t, key, "asdf") + assert.Equal(t, team, "jkl;") + assert.NilError(t, err) + }) +} + +func TestHandleDuplicateClusterName(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + clusterInBridge := testClusterApiResource() + clusterInBridge.ClusterName = "bridge-cluster-1" // originally "hippo-cluster" + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + reconciler.NewClient = func() bridge.ClientInterface { + return &TestBridgeClient{ + ApiKey: testApiKey, + TeamId: testTeamId, + Clusters: []*bridge.ClusterApiResource{clusterInBridge}, + } + } + + ns := setupNamespace(t, tClient).Name + + t.Run("FailureToListClusters", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = 
ns + + controllerResult, err := reconciler.handleDuplicateClusterName(ctx, "bad_api_key", testTeamId, cluster) + assert.Check(t, err != nil) + assert.Equal(t, *controllerResult, ctrl.Result{}) + readyCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionReady) + if assert.Check(t, readyCondition != nil) { + assert.Equal(t, readyCondition.Status, metav1.ConditionUnknown) + assert.Equal(t, readyCondition.Reason, "UnknownClusterState") + assert.Check(t, cmp.Contains(readyCondition.Message, + "Issue listing existing clusters in Bridge:")) + } + }) + + t.Run("NoDuplicateFound", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + + controllerResult, err := reconciler.handleDuplicateClusterName(ctx, testApiKey, testTeamId, cluster) + assert.NilError(t, err) + assert.Check(t, controllerResult == nil) + }) + + t.Run("DuplicateFoundAdoptionAnnotationNotPresent", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Spec.ClusterName = "bridge-cluster-1" // originally "hippo-cluster" + + controllerResult, err := reconciler.handleDuplicateClusterName(ctx, testApiKey, testTeamId, cluster) + assert.NilError(t, err) + assert.Equal(t, *controllerResult, ctrl.Result{}) + readyCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionReady) + if assert.Check(t, readyCondition != nil) { + assert.Equal(t, readyCondition.Status, metav1.ConditionFalse) + assert.Equal(t, readyCondition.Reason, "DuplicateClusterName") + assert.Check(t, cmp.Contains(readyCondition.Message, + "A cluster with the same name already exists for this team (Team ID: ")) + } + }) + + t.Run("DuplicateFoundAdoptionAnnotationPresent", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Spec.ClusterName = "bridge-cluster-1" // originally "hippo-cluster" + cluster.Annotations = map[string]string{} + cluster.Annotations[naming.CrunchyBridgeClusterAdoptionAnnotation] = "1234" + + controllerResult, err := reconciler.handleDuplicateClusterName(ctx, testApiKey, testTeamId, cluster) + assert.NilError(t, err) + assert.Equal(t, *controllerResult, ctrl.Result{Requeue: true}) + assert.Equal(t, cluster.Status.ID, "1234") + }) +} + +func TestHandleCreateCluster(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient).Name + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + reconciler.NewClient = func() bridge.ClientInterface { + return &TestBridgeClient{ + ApiKey: testApiKey, + TeamId: testTeamId, + Clusters: []*bridge.ClusterApiResource{}, + } + } + + t.Run("SuccessfulCreate", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + + controllerResult := reconciler.handleCreateCluster(ctx, testApiKey, testTeamId, cluster) + assert.Equal(t, controllerResult.RequeueAfter, 3*time.Minute) + assert.Equal(t, cluster.Status.ID, "0") + + readyCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionReady) + if assert.Check(t, readyCondition != nil) { + assert.Equal(t, readyCondition.Status, metav1.ConditionUnknown) + assert.Equal(t, readyCondition.Reason, "UnknownClusterState") + assert.Check(t, cmp.Contains(readyCondition.Message, + "The condition of the cluster is unknown.")) + } + + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) 
{ + assert.Equal(t, upgradingCondition.Status, metav1.ConditionUnknown) + assert.Equal(t, upgradingCondition.Reason, "UnknownUpgradeState") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "The condition of the upgrade(s) is unknown.")) + } + }) + + t.Run("UnsuccessfulCreate", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + + controllerResult := reconciler.handleCreateCluster(ctx, "bad_api_key", testTeamId, cluster) + assert.Equal(t, controllerResult, ctrl.Result{}) + assert.Equal(t, cluster.Status.ID, "") + + readyCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionReady) + if assert.Check(t, readyCondition != nil) { + assert.Equal(t, readyCondition.Status, metav1.ConditionFalse) + assert.Equal(t, readyCondition.Reason, "ClusterInvalid") + assert.Check(t, cmp.Contains(readyCondition.Message, + "Cannot create from spec:")) + } + + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + assert.Check(t, upgradingCondition == nil) + }) +} + +func TestHandleGetCluster(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient).Name + firstClusterInBridge := testClusterApiResource() + secondClusterInBridge := testClusterApiResource() + secondClusterInBridge.ID = "2345" // originally "1234" + secondClusterInBridge.ClusterName = "hippo-cluster-2" // originally "hippo-cluster" + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + reconciler.NewClient = func() bridge.ClientInterface { + return &TestBridgeClient{ + ApiKey: testApiKey, + TeamId: testTeamId, + Clusters: []*bridge.ClusterApiResource{firstClusterInBridge, secondClusterInBridge}, + } + } + + t.Run("SuccessfulGet", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + + err := reconciler.handleGetCluster(ctx, testApiKey, cluster) + assert.NilError(t, err) + assert.Equal(t, cluster.Status.ClusterName, firstClusterInBridge.ClusterName) + assert.Equal(t, cluster.Status.Host, firstClusterInBridge.Host) + assert.Equal(t, cluster.Status.ID, firstClusterInBridge.ID) + assert.Equal(t, cluster.Status.IsHA, firstClusterInBridge.IsHA) + assert.Equal(t, cluster.Status.IsProtected, firstClusterInBridge.IsProtected) + assert.Equal(t, cluster.Status.MajorVersion, firstClusterInBridge.MajorVersion) + assert.Equal(t, cluster.Status.Plan, firstClusterInBridge.Plan) + assert.Equal(t, *cluster.Status.Storage, *bridge.FromGibibytes(firstClusterInBridge.Storage)) + }) + + t.Run("UnsuccessfulGet", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "bad_cluster_id" + + err := reconciler.handleGetCluster(ctx, testApiKey, cluster) + assert.Check(t, err != nil) + + readyCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionReady) + if assert.Check(t, readyCondition != nil) { + assert.Equal(t, readyCondition.Status, metav1.ConditionUnknown) + assert.Equal(t, readyCondition.Reason, "UnknownClusterState") + assert.Check(t, cmp.Contains(readyCondition.Message, + "Issue getting cluster information from Bridge:")) + } + }) +} + +func TestHandleGetClusterStatus(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient).Name + readyClusterId := "1234" + creatingClusterId := "7890" + 
readyClusterStatusInBridge := testClusterStatusApiResource(readyClusterId) + creatingClusterStatusInBridge := testClusterStatusApiResource(creatingClusterId) + creatingClusterStatusInBridge.State = "creating" // originally "ready" + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + reconciler.NewClient = func() bridge.ClientInterface { + return &TestBridgeClient{ + ApiKey: testApiKey, + TeamId: testTeamId, + ClusterStatuses: map[string]*bridge.ClusterStatusApiResource{ + readyClusterId: readyClusterStatusInBridge, + creatingClusterId: creatingClusterStatusInBridge, + }, + } + } + + t.Run("SuccessReadyState", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = readyClusterId + + err := reconciler.handleGetClusterStatus(ctx, testApiKey, cluster) + assert.NilError(t, err) + assert.Equal(t, cluster.Status.State, "ready") + readyCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionReady) + if assert.Check(t, readyCondition != nil) { + assert.Equal(t, readyCondition.Status, metav1.ConditionTrue) + assert.Equal(t, readyCondition.Reason, "ready") + assert.Check(t, cmp.Contains(readyCondition.Message, + "Bridge cluster state is ready")) + } + }) + + t.Run("SuccessNonReadyState", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = creatingClusterId + + err := reconciler.handleGetClusterStatus(ctx, testApiKey, cluster) + assert.NilError(t, err) + assert.Equal(t, cluster.Status.State, "creating") + readyCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionReady) + if assert.Check(t, readyCondition != nil) { + assert.Equal(t, readyCondition.Status, metav1.ConditionFalse) + assert.Equal(t, readyCondition.Reason, "creating") + assert.Check(t, cmp.Contains(readyCondition.Message, + "Bridge cluster state is creating")) + } + }) + + t.Run("UnsuccessfulGet", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = creatingClusterId + + err := reconciler.handleGetClusterStatus(ctx, "bad_api_key", cluster) + assert.Check(t, err != nil) + assert.Equal(t, cluster.Status.State, "unknown") + readyCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionReady) + if assert.Check(t, readyCondition != nil) { + assert.Equal(t, readyCondition.Status, metav1.ConditionUnknown) + assert.Equal(t, readyCondition.Reason, "UnknownClusterState") + assert.Check(t, cmp.Contains(readyCondition.Message, + "Issue getting cluster status from Bridge:")) + } + }) +} + +func TestHandleGetClusterUpgrade(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient).Name + upgradingClusterId := "1234" + notUpgradingClusterId := "7890" + upgradingClusterUpgradeInBridge := testClusterUpgradeApiResource(upgradingClusterId) + notUpgradingClusterUpgradeInBridge := testClusterUpgradeApiResource(notUpgradingClusterId) + notUpgradingClusterUpgradeInBridge.Operations = []*v1beta1.UpgradeOperation{} + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + reconciler.NewClient = func() bridge.ClientInterface { + return &TestBridgeClient{ + ApiKey: testApiKey, + TeamId: testTeamId, + ClusterUpgrades: map[string]*bridge.ClusterUpgradeApiResource{ + upgradingClusterId: upgradingClusterUpgradeInBridge, + notUpgradingClusterId: 
notUpgradingClusterUpgradeInBridge, + }, + } + } + + t.Run("SuccessUpgrading", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = upgradingClusterId + + err := reconciler.handleGetClusterUpgrade(ctx, testApiKey, cluster) + assert.NilError(t, err) + assert.Equal(t, *cluster.Status.OngoingUpgrade[0], v1beta1.UpgradeOperation{ + Flavor: "resize", + StartingFrom: "", + State: "in_progress", + }) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionTrue) + assert.Equal(t, upgradingCondition.Reason, "resize") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "Performing an upgrade of type resize with a state of in_progress.")) + } + }) + + t.Run("SuccessNotUpgrading", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = notUpgradingClusterId + + err := reconciler.handleGetClusterUpgrade(ctx, testApiKey, cluster) + assert.NilError(t, err) + assert.Equal(t, len(cluster.Status.OngoingUpgrade), 0) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionFalse) + assert.Equal(t, upgradingCondition.Reason, "NoUpgradesInProgress") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "No upgrades being performed")) + } + }) + + t.Run("UnsuccessfulGet", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = notUpgradingClusterId + + err := reconciler.handleGetClusterUpgrade(ctx, "bad_api_key", cluster) + assert.Check(t, err != nil) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionUnknown) + assert.Equal(t, upgradingCondition.Reason, "UnknownUpgradeState") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "Issue getting cluster upgrade from Bridge:")) + } + }) +} + +func TestHandleUpgrade(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient).Name + clusterInBridge := testClusterApiResource() + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + reconciler.NewClient = func() bridge.ClientInterface { + return &TestBridgeClient{ + ApiKey: testApiKey, + TeamId: testTeamId, + Clusters: []*bridge.ClusterApiResource{clusterInBridge}, + } + } + + t.Run("UpgradePlan", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + cluster.Spec.Plan = "standard-16" // originally "standard-8" + + controllerResult := reconciler.handleUpgrade(ctx, testApiKey, cluster) + assert.Equal(t, controllerResult.RequeueAfter, 3*time.Minute) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionTrue) + assert.Equal(t, upgradingCondition.Reason, "maintenance") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "Performing an upgrade of type maintenance with a state of in_progress.")) + assert.Equal(t, *cluster.Status.OngoingUpgrade[0], 
v1beta1.UpgradeOperation{ + Flavor: "maintenance", + StartingFrom: "", + State: "in_progress", + }) + } + }) + + t.Run("UpgradePostgres", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + cluster.Spec.PostgresVersion = 16 // originally "15" + + controllerResult := reconciler.handleUpgrade(ctx, testApiKey, cluster) + assert.Equal(t, controllerResult.RequeueAfter, 3*time.Minute) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionTrue) + assert.Equal(t, upgradingCondition.Reason, "major_version_upgrade") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "Performing an upgrade of type major_version_upgrade with a state of in_progress.")) + assert.Equal(t, *cluster.Status.OngoingUpgrade[0], v1beta1.UpgradeOperation{ + Flavor: "major_version_upgrade", + StartingFrom: "", + State: "in_progress", + }) + } + }) + + t.Run("UpgradeStorage", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + cluster.Spec.Storage = resource.MustParse("15Gi") // originally "10Gi" + + controllerResult := reconciler.handleUpgrade(ctx, testApiKey, cluster) + assert.Equal(t, controllerResult.RequeueAfter, 3*time.Minute) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionTrue) + assert.Equal(t, upgradingCondition.Reason, "resize") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "Performing an upgrade of type resize with a state of in_progress.")) + assert.Equal(t, *cluster.Status.OngoingUpgrade[0], v1beta1.UpgradeOperation{ + Flavor: "resize", + StartingFrom: "", + State: "in_progress", + }) + } + }) + + t.Run("UpgradeFailure", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + cluster.Spec.Storage = resource.MustParse("15Gi") // originally "10Gi" + + controllerResult := reconciler.handleUpgrade(ctx, "bad_api_key", cluster) + assert.Equal(t, controllerResult, ctrl.Result{}) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionFalse) + assert.Equal(t, upgradingCondition.Reason, "UpgradeError") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "Error performing an upgrade: boom")) + } + }) +} + +func TestHandleUpgradeHA(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient).Name + clusterInBridgeWithHaDisabled := testClusterApiResource() + clusterInBridgeWithHaEnabled := testClusterApiResource() + clusterInBridgeWithHaEnabled.ID = "2345" // originally "1234" + clusterInBridgeWithHaEnabled.IsHA = initialize.Bool(true) // originally "false" + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + reconciler.NewClient = func() bridge.ClientInterface { + return &TestBridgeClient{ + ApiKey: testApiKey, + TeamId: testTeamId, + Clusters: []*bridge.ClusterApiResource{clusterInBridgeWithHaDisabled, + clusterInBridgeWithHaEnabled}, + } + } + + t.Run("EnableHA", func(t *testing.T) { + cluster := testCluster() + 
cluster.Namespace = ns + cluster.Status.ID = "1234" + cluster.Spec.IsHA = true // originally "false" + + controllerResult := reconciler.handleUpgradeHA(ctx, testApiKey, cluster) + assert.Equal(t, controllerResult.RequeueAfter, 3*time.Minute) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionTrue) + assert.Equal(t, upgradingCondition.Reason, "ha_change") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "Performing an upgrade of type ha_change with a state of enabling_ha.")) + assert.Equal(t, *cluster.Status.OngoingUpgrade[0], v1beta1.UpgradeOperation{ + Flavor: "ha_change", + StartingFrom: "", + State: "enabling_ha", + }) + } + }) + + t.Run("DisableHA", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "2345" + + controllerResult := reconciler.handleUpgradeHA(ctx, testApiKey, cluster) + assert.Equal(t, controllerResult.RequeueAfter, 3*time.Minute) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionTrue) + assert.Equal(t, upgradingCondition.Reason, "ha_change") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "Performing an upgrade of type ha_change with a state of disabling_ha.")) + assert.Equal(t, *cluster.Status.OngoingUpgrade[0], v1beta1.UpgradeOperation{ + Flavor: "ha_change", + StartingFrom: "", + State: "disabling_ha", + }) + } + }) + + t.Run("UpgradeFailure", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + + controllerResult := reconciler.handleUpgradeHA(ctx, "bad_api_key", cluster) + assert.Equal(t, controllerResult, ctrl.Result{}) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionFalse) + assert.Equal(t, upgradingCondition.Reason, "UpgradeError") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "Error performing an HA upgrade: boom")) + } + }) +} + +func TestHandleUpdate(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient).Name + clusterInBridge := testClusterApiResource() + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + reconciler.NewClient = func() bridge.ClientInterface { + return &TestBridgeClient{ + ApiKey: testApiKey, + TeamId: testTeamId, + Clusters: []*bridge.ClusterApiResource{clusterInBridge}, + } + } + + t.Run("UpdateName", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + cluster.Spec.ClusterName = "new-cluster-name" // originally "hippo-cluster" + + controllerResult := reconciler.handleUpdate(ctx, testApiKey, cluster) + assert.Equal(t, controllerResult.RequeueAfter, 3*time.Minute) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionTrue) + assert.Equal(t, upgradingCondition.Reason, "ClusterUpgrade") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "An upgrade is 
occurring, the clusters name is new-cluster-name and the cluster is protected is false.")) + } + assert.Equal(t, cluster.Status.ClusterName, "new-cluster-name") + }) + + t.Run("UpdateIsProtected", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + cluster.Spec.IsProtected = true // originally "false" + + controllerResult := reconciler.handleUpdate(ctx, testApiKey, cluster) + assert.Equal(t, controllerResult.RequeueAfter, 3*time.Minute) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionTrue) + assert.Equal(t, upgradingCondition.Reason, "ClusterUpgrade") + assert.Check(t, cmp.Contains(upgradingCondition.Message, + "An upgrade is occurring, the clusters name is hippo-cluster and the cluster is protected is true.")) + } + assert.Equal(t, *cluster.Status.IsProtected, true) + }) + + t.Run("UpgradeFailure", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + cluster.Spec.IsProtected = true // originally "false" + + controllerResult := reconciler.handleUpdate(ctx, "bad_api_key", cluster) + assert.Equal(t, controllerResult, ctrl.Result{}) + upgradingCondition := meta.FindStatusCondition(cluster.Status.Conditions, v1beta1.ConditionUpgrading) + if assert.Check(t, upgradingCondition != nil) { + assert.Equal(t, upgradingCondition.Status, metav1.ConditionFalse) + assert.Equal(t, upgradingCondition.Reason, "UpgradeError") + assert.Check(t, cmp.Contains(upgradingCondition.Message, "Error performing an upgrade: boom")) + } + }) +} + +func TestGetSecretKeys(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + + ns := setupNamespace(t, tClient).Name + cluster := testCluster() + cluster.Namespace = ns + + t.Run("NoSecret", func(t *testing.T) { + apiKey, team, err := reconciler.GetSecretKeys(ctx, cluster) + assert.Equal(t, apiKey, "") + assert.Equal(t, team, "") + assert.ErrorContains(t, err, "secrets \"crunchy-bridge-api-key\" not found") + }) + + t.Run("SecretMissingApiKey", func(t *testing.T) { + cluster.Spec.Secret = "secret-missing-api-key" // originally "crunchy-bridge-api-key" + secret := &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "secret-missing-api-key", + Namespace: ns, + }, + Data: map[string][]byte{ + "team": []byte(`jkl;`), + }, + } + assert.NilError(t, tClient.Create(ctx, secret)) + + apiKey, team, err := reconciler.GetSecretKeys(ctx, cluster) + assert.Equal(t, apiKey, "") + assert.Equal(t, team, "") + assert.ErrorContains(t, err, "error handling secret; expected to find a key and a team: found key false, found team true") + + assert.NilError(t, tClient.Delete(ctx, secret)) + }) + + t.Run("SecretMissingTeamId", func(t *testing.T) { + cluster.Spec.Secret = "secret-missing-team-id" + secret := &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "secret-missing-team-id", + Namespace: ns, + }, + Data: map[string][]byte{ + "key": []byte(`asdf`), + }, + } + assert.NilError(t, tClient.Create(ctx, secret)) + + apiKey, team, err := reconciler.GetSecretKeys(ctx, cluster) + assert.Equal(t, apiKey, "") + assert.Equal(t, team, "") + assert.ErrorContains(t, err, "error handling secret; expected to find a key and a team: found key true, found team 
false") + }) + + t.Run("GoodSecret", func(t *testing.T) { + cluster.Spec.Secret = "crunchy-bridge-api-key" + secret := &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "crunchy-bridge-api-key", + Namespace: ns, + }, + Data: map[string][]byte{ + "key": []byte(`asdf`), + "team": []byte(`jkl;`), + }, + } + assert.NilError(t, tClient.Create(ctx, secret)) + + apiKey, team, err := reconciler.GetSecretKeys(ctx, cluster) + assert.Equal(t, apiKey, "asdf") + assert.Equal(t, team, "jkl;") + assert.NilError(t, err) + }) +} + +func TestDeleteControlled(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + ns := setupNamespace(t, tClient) + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.Name = strings.ToLower(t.Name()) // originally "hippo-cr" + assert.NilError(t, tClient.Create(ctx, cluster)) + + t.Run("NotControlled", func(t *testing.T) { + secret := &corev1.Secret{} + secret.Namespace = ns.Name + secret.Name = "solo" + + assert.NilError(t, tClient.Create(ctx, secret)) + + // No-op when there's no ownership + assert.NilError(t, reconciler.deleteControlled(ctx, cluster, secret)) + assert.NilError(t, tClient.Get(ctx, client.ObjectKeyFromObject(secret), secret)) + }) + + t.Run("Controlled", func(t *testing.T) { + secret := &corev1.Secret{} + secret.Namespace = ns.Name + secret.Name = "controlled" + + assert.NilError(t, reconciler.setControllerReference(cluster, secret)) + assert.NilError(t, tClient.Create(ctx, secret)) + + // Deletes when controlled by cluster. + assert.NilError(t, reconciler.deleteControlled(ctx, cluster, secret)) + + err := tClient.Get(ctx, client.ObjectKeyFromObject(secret), secret) + assert.Assert(t, apierrors.IsNotFound(err), "expected NotFound, got %#v", err) + }) +} diff --git a/internal/bridge/crunchybridgecluster/delete.go b/internal/bridge/crunchybridgecluster/delete.go new file mode 100644 index 0000000000..8dcada31cf --- /dev/null +++ b/internal/bridge/crunchybridgecluster/delete.go @@ -0,0 +1,70 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + "time" + + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const finalizer = "crunchybridgecluster.postgres-operator.crunchydata.com/finalizer" + +// handleDelete sets a finalizer on cluster and performs the finalization of +// cluster when it is being deleted. It returns (nil, nil) when cluster is +// not being deleted and there are no errors patching the CrunchyBridgeCluster. +// The caller is responsible for returning other values to controller-runtime. 
+func (r *CrunchyBridgeClusterReconciler) handleDelete( + ctx context.Context, crunchybridgecluster *v1beta1.CrunchyBridgeCluster, key string, +) (*ctrl.Result, error) { + log := ctrl.LoggerFrom(ctx) + + // If the CrunchyBridgeCluster isn't being deleted, add the finalizer + if crunchybridgecluster.ObjectMeta.DeletionTimestamp.IsZero() { + if !controllerutil.ContainsFinalizer(crunchybridgecluster, finalizer) { + controllerutil.AddFinalizer(crunchybridgecluster, finalizer) + if err := r.Update(ctx, crunchybridgecluster); err != nil { + return nil, err + } + } + // If the CrunchyBridgeCluster is being deleted, + // handle the deletion, and remove the finalizer + } else { + if controllerutil.ContainsFinalizer(crunchybridgecluster, finalizer) { + log.Info("deleting cluster", "clusterName", crunchybridgecluster.Spec.ClusterName) + + // TODO(crunchybridgecluster): If is_protected is true, maybe skip this call, but allow the deletion of the K8s object? + _, deletedAlready, err := r.NewClient().DeleteCluster(ctx, key, crunchybridgecluster.Status.ID) + // Requeue if error + if err != nil { + return &ctrl.Result{}, err + } + + if !deletedAlready { + return initialize.Pointer(runtime.RequeueWithoutBackoff(time.Second)), err + } + + // Remove finalizer if deleted already + if deletedAlready { + log.Info("cluster deleted", "clusterName", crunchybridgecluster.Spec.ClusterName) + + controllerutil.RemoveFinalizer(crunchybridgecluster, finalizer) + if err := r.Update(ctx, crunchybridgecluster); err != nil { + return &ctrl.Result{}, err + } + } + } + // Stop reconciliation as the item is being deleted + return &ctrl.Result{}, nil + } + + return nil, nil +} diff --git a/internal/bridge/crunchybridgecluster/delete_test.go b/internal/bridge/crunchybridgecluster/delete_test.go new file mode 100644 index 0000000000..28e6feb1f8 --- /dev/null +++ b/internal/bridge/crunchybridgecluster/delete_test.go @@ -0,0 +1,133 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + "testing" + "time" + + "gotest.tools/v3/assert" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + + "github.com/crunchydata/postgres-operator/internal/bridge" + "github.com/crunchydata/postgres-operator/internal/testing/require" +) + +func TestHandleDeleteCluster(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient).Name + + firstClusterInBridge := testClusterApiResource() + firstClusterInBridge.ClusterName = "bridge-cluster-1" + secondClusterInBridge := testClusterApiResource() + secondClusterInBridge.ClusterName = "bridge-cluster-2" + secondClusterInBridge.ID = "2345" + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + testBridgeClient := &TestBridgeClient{ + ApiKey: "9012", + TeamId: "5678", + Clusters: []*bridge.ClusterApiResource{firstClusterInBridge, secondClusterInBridge}, + } + reconciler.NewClient = func() bridge.ClientInterface { + return testBridgeClient + } + + t.Run("SuccessfulDeletion", func(t *testing.T) { + // Create test cluster in kubernetes + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + cluster.Spec.ClusterName = "bridge-cluster-1" + assert.NilError(t, tClient.Create(ctx, cluster)) + + // Run handleDelete + controllerResult, err := reconciler.handleDelete(ctx, cluster, "9012") + assert.NilError(t, err) + assert.Check(t, controllerResult == nil) + + // Make sure that finalizer was added + assert.Check(t, controllerutil.ContainsFinalizer(cluster, finalizer)) + + // Send delete request to kubernetes + assert.NilError(t, tClient.Delete(ctx, cluster)) + + // Get cluster from kubernetes and assert that the deletion timestamp was added + assert.NilError(t, tClient.Get(ctx, client.ObjectKeyFromObject(cluster), cluster)) + assert.Check(t, !cluster.ObjectMeta.DeletionTimestamp.IsZero()) + + // Note: We must run handleDelete multiple times because we don't want to remove the + // finalizer until we're sure that the cluster has been deleted from Bridge, so we + // have to do multiple calls/reconcile loops. 
+ // Run handleDelete again to delete from Bridge + cluster.Status.ID = "1234" + controllerResult, err = reconciler.handleDelete(ctx, cluster, "9012") + assert.NilError(t, err) + assert.Equal(t, controllerResult.RequeueAfter, 1*time.Second) + assert.Equal(t, len(testBridgeClient.Clusters), 1) + assert.Equal(t, testBridgeClient.Clusters[0].ClusterName, "bridge-cluster-2") + + // Run handleDelete one last time to remove finalizer + controllerResult, err = reconciler.handleDelete(ctx, cluster, "9012") + assert.NilError(t, err) + assert.Equal(t, *controllerResult, ctrl.Result{}) + + // Make sure that finalizer was removed + assert.Check(t, !controllerutil.ContainsFinalizer(cluster, finalizer)) + }) + + t.Run("UnsuccessfulDeletion", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "2345" + cluster.Spec.ClusterName = "bridge-cluster-2" + assert.NilError(t, tClient.Create(ctx, cluster)) + + // Run handleDelete + controllerResult, err := reconciler.handleDelete(ctx, cluster, "9012") + assert.NilError(t, err) + assert.Check(t, controllerResult == nil) + + // Make sure that finalizer was added + assert.Check(t, controllerutil.ContainsFinalizer(cluster, finalizer)) + + // Send delete request to kubernetes + assert.NilError(t, tClient.Delete(ctx, cluster)) + + // Get cluster from kubernetes and assert that the deletion timestamp was added + assert.NilError(t, tClient.Get(ctx, client.ObjectKeyFromObject(cluster), cluster)) + assert.Check(t, !cluster.ObjectMeta.DeletionTimestamp.IsZero()) + + // Run handleDelete again to attempt to delete from Bridge, but provide bad api key + cluster.Status.ID = "2345" + controllerResult, err = reconciler.handleDelete(ctx, cluster, "bad_api_key") + assert.ErrorContains(t, err, "boom") + assert.Equal(t, *controllerResult, ctrl.Result{}) + + // Run handleDelete a couple times with good api key so test can cleanup properly. + // Note: We must run handleDelete multiple times because we don't want to remove the + // finalizer until we're sure that the cluster has been deleted from Bridge, so we + // have to do multiple calls/reconcile loops. + // delete from bridge + _, err = reconciler.handleDelete(ctx, cluster, "9012") + assert.NilError(t, err) + + // remove finalizer + _, err = reconciler.handleDelete(ctx, cluster, "9012") + assert.NilError(t, err) + + // Make sure that finalizer was removed + assert.Check(t, !controllerutil.ContainsFinalizer(cluster, finalizer)) + }) +} diff --git a/internal/bridge/crunchybridgecluster/helpers_test.go b/internal/bridge/crunchybridgecluster/helpers_test.go new file mode 100644 index 0000000000..f40ad3d054 --- /dev/null +++ b/internal/bridge/crunchybridgecluster/helpers_test.go @@ -0,0 +1,178 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + "os" + "strconv" + "testing" + "time" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/bridge" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// Scale extends d according to PGO_TEST_TIMEOUT_SCALE. 
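+// For example, with PGO_TEST_TIMEOUT_SCALE=2 set in the environment,
+// Scale(time.Second) yields two seconds; when the variable is unset,
+// durations pass through unchanged.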
+var Scale = func(d time.Duration) time.Duration { return d } + +// This function was duplicated from the postgrescluster package. +// TODO: Pull these duplicated functions out into a separate, shared package. +func init() { + setting := os.Getenv("PGO_TEST_TIMEOUT_SCALE") + factor, _ := strconv.ParseFloat(setting, 64) + + if setting != "" { + if factor <= 0 { + panic("PGO_TEST_TIMEOUT_SCALE must be a fractional number greater than zero") + } + + Scale = func(d time.Duration) time.Duration { + return time.Duration(factor * float64(d)) + } + } +} + +// setupKubernetes starts or connects to a Kubernetes API and returns a client +// that uses it. See [require.Kubernetes] for more details. +func setupKubernetes(t testing.TB) client.Client { + t.Helper() + + // Start and/or connect to a Kubernetes API, or Skip when that's not configured. + cc := require.Kubernetes(t) + + // Log the status of any test namespaces after this test fails. + t.Cleanup(func() { + if t.Failed() { + var namespaces corev1.NamespaceList + _ = cc.List(context.Background(), &namespaces, client.HasLabels{"postgres-operator-test"}) + + type shaped map[string]corev1.NamespaceStatus + result := make([]shaped, len(namespaces.Items)) + + for i, ns := range namespaces.Items { + result[i] = shaped{ns.Labels["postgres-operator-test"]: ns.Status} + } + + formatted, _ := yaml.Marshal(result) + t.Logf("Test Namespaces:\n%s", formatted) + } + }) + + return cc +} + +// setupNamespace creates a random namespace that will be deleted by t.Cleanup. +// +// Deprecated: Use [require.Namespace] instead. +func setupNamespace(t testing.TB, cc client.Client) *corev1.Namespace { + t.Helper() + return require.Namespace(t, cc) +} + +// testCluster defines a base cluster spec that can be used by tests to +// generate a CrunchyBridgeCluster CR +func testCluster() *v1beta1.CrunchyBridgeCluster { + cluster := v1beta1.CrunchyBridgeCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "hippo-cr", + }, + Spec: v1beta1.CrunchyBridgeClusterSpec{ + ClusterName: "hippo-cluster", + IsHA: false, + PostgresVersion: 15, + Plan: "standard-8", + Provider: "aws", + Region: "us-east-2", + Secret: "crunchy-bridge-api-key", + Storage: resource.MustParse("10Gi"), + }, + } + return cluster.DeepCopy() +} + +func testClusterApiResource() *bridge.ClusterApiResource { + cluster := bridge.ClusterApiResource{ + ID: "1234", + Host: "example.com", + IsHA: initialize.Bool(false), + IsProtected: initialize.Bool(false), + MajorVersion: 15, + ClusterName: "hippo-cluster", + Plan: "standard-8", + Provider: "aws", + Region: "us-east-2", + Storage: 10, + Team: "5678", + } + return &cluster +} + +func testClusterStatusApiResource(clusterId string) *bridge.ClusterStatusApiResource { + teamId := "5678" + state := "ready" + + clusterStatus := bridge.ClusterStatusApiResource{ + DiskUsage: &bridge.ClusterDiskUsageApiResource{ + DiskAvailableMB: 16, + DiskTotalSizeMB: 16, + DiskUsedMB: 0, + }, + OldestBackup: "oldbackup", + OngoingUpgrade: &bridge.ClusterUpgradeApiResource{ + ClusterID: clusterId, + Operations: []*v1beta1.UpgradeOperation{}, + Team: teamId, + }, + State: state, + } + + return &clusterStatus +} + +func testClusterUpgradeApiResource(clusterId string) *bridge.ClusterUpgradeApiResource { + teamId := "5678" + + clusterUpgrade := bridge.ClusterUpgradeApiResource{ + ClusterID: clusterId, + Operations: []*v1beta1.UpgradeOperation{ + { + Flavor: "resize", + StartingFrom: "", + State: "in_progress", + }, + }, + Team: teamId, + } + + return &clusterUpgrade +} + +func 
testClusterRoleApiResource() *bridge.ClusterRoleApiResource { + clusterId := "1234" + teamId := "5678" + roleName := "application" + + clusterRole := bridge.ClusterRoleApiResource{ + AccountEmail: "test@email.com", + AccountId: "12345678", + ClusterId: clusterId, + Flavor: "chocolate", + Name: roleName, + Password: "application-password", + Team: teamId, + URI: "connection-string", + } + + return &clusterRole +} diff --git a/internal/bridge/crunchybridgecluster/mock_bridge_api.go b/internal/bridge/crunchybridgecluster/mock_bridge_api.go new file mode 100644 index 0000000000..5c6b243714 --- /dev/null +++ b/internal/bridge/crunchybridgecluster/mock_bridge_api.go @@ -0,0 +1,247 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + "errors" + "fmt" + + "k8s.io/apimachinery/pkg/util/intstr" + + "github.com/crunchydata/postgres-operator/internal/bridge" + "github.com/crunchydata/postgres-operator/internal/initialize" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +type TestBridgeClient struct { + ApiKey string `json:"apiKey,omitempty"` + TeamId string `json:"teamId,omitempty"` + Clusters []*bridge.ClusterApiResource `json:"clusters,omitempty"` + ClusterRoles []*bridge.ClusterRoleApiResource `json:"clusterRoles,omitempty"` + ClusterStatuses map[string]*bridge.ClusterStatusApiResource `json:"clusterStatuses,omitempty"` + ClusterUpgrades map[string]*bridge.ClusterUpgradeApiResource `json:"clusterUpgrades,omitempty"` +} + +func (tbc *TestBridgeClient) ListClusters(ctx context.Context, apiKey, teamId string) ([]*bridge.ClusterApiResource, error) { + + if apiKey == tbc.ApiKey && teamId == tbc.TeamId { + return tbc.Clusters, nil + } + + return nil, errors.New("boom") +} + +func (tbc *TestBridgeClient) UpgradeCluster(ctx context.Context, apiKey, id string, clusterRequestPayload *bridge.PostClustersUpgradeRequestPayload, +) (*bridge.ClusterUpgradeApiResource, error) { + // look for cluster + var desiredCluster *bridge.ClusterApiResource + clusterFound := false + for _, cluster := range tbc.Clusters { + if cluster.ID == id { + desiredCluster = cluster + clusterFound = true + } + } + if !clusterFound { + return nil, errors.New("cluster not found") + } + + // happy path + if apiKey == tbc.ApiKey { + result := &bridge.ClusterUpgradeApiResource{ + ClusterID: id, + Team: tbc.TeamId, + } + if clusterRequestPayload.Plan != desiredCluster.Plan { + result.Operations = []*v1beta1.UpgradeOperation{ + { + Flavor: "maintenance", + StartingFrom: "", + State: "in_progress", + }, + } + } else if clusterRequestPayload.PostgresVersion != intstr.FromInt(desiredCluster.MajorVersion) { + result.Operations = []*v1beta1.UpgradeOperation{ + { + Flavor: "major_version_upgrade", + StartingFrom: "", + State: "in_progress", + }, + } + } else if clusterRequestPayload.Storage != desiredCluster.Storage { + result.Operations = []*v1beta1.UpgradeOperation{ + { + Flavor: "resize", + StartingFrom: "", + State: "in_progress", + }, + } + } + return result, nil + } + // sad path + return nil, errors.New("boom") +} + +func (tbc *TestBridgeClient) UpgradeClusterHA(ctx context.Context, apiKey, id, action string, +) (*bridge.ClusterUpgradeApiResource, error) { + // look for cluster + var desiredCluster *bridge.ClusterApiResource + clusterFound := false + for _, cluster := range tbc.Clusters { + if cluster.ID == id { + desiredCluster = cluster + clusterFound = true + } + } + if 
!clusterFound { + return nil, errors.New("cluster not found") + } + + // happy path + if apiKey == tbc.ApiKey { + result := &bridge.ClusterUpgradeApiResource{ + ClusterID: id, + Team: tbc.TeamId, + } + if action == "enable-ha" && !*desiredCluster.IsHA { + result.Operations = []*v1beta1.UpgradeOperation{ + { + Flavor: "ha_change", + StartingFrom: "", + State: "enabling_ha", + }, + } + } else if action == "disable-ha" && *desiredCluster.IsHA { + result.Operations = []*v1beta1.UpgradeOperation{ + { + Flavor: "ha_change", + StartingFrom: "", + State: "disabling_ha", + }, + } + } else { + return nil, errors.New("no change detected") + } + return result, nil + } + // sad path + return nil, errors.New("boom") +} + +func (tbc *TestBridgeClient) UpdateCluster(ctx context.Context, apiKey, id string, clusterRequestPayload *bridge.PatchClustersRequestPayload, +) (*bridge.ClusterApiResource, error) { + // look for cluster + var desiredCluster *bridge.ClusterApiResource + clusterFound := false + for _, cluster := range tbc.Clusters { + if cluster.ID == id { + desiredCluster = cluster + clusterFound = true + } + } + if !clusterFound { + return nil, errors.New("cluster not found") + } + + // happy path + if apiKey == tbc.ApiKey { + desiredCluster.ClusterName = clusterRequestPayload.Name + desiredCluster.IsProtected = clusterRequestPayload.IsProtected + return desiredCluster, nil + } + // sad path + return nil, errors.New("boom") +} + +func (tbc *TestBridgeClient) CreateCluster(ctx context.Context, apiKey string, + clusterRequestPayload *bridge.PostClustersRequestPayload) (*bridge.ClusterApiResource, error) { + + if apiKey == tbc.ApiKey && clusterRequestPayload.Team == tbc.TeamId && clusterRequestPayload.Name != "" && + clusterRequestPayload.Plan != "" { + cluster := &bridge.ClusterApiResource{ + ID: fmt.Sprint(len(tbc.Clusters)), + Host: "example.com", + IsHA: initialize.Bool(clusterRequestPayload.IsHA), + MajorVersion: clusterRequestPayload.PostgresVersion.IntValue(), + ClusterName: clusterRequestPayload.Name, + Plan: clusterRequestPayload.Plan, + Provider: clusterRequestPayload.Provider, + Region: clusterRequestPayload.Region, + Storage: clusterRequestPayload.Storage, + } + tbc.Clusters = append(tbc.Clusters, cluster) + + return cluster, nil + } + + return nil, errors.New("boom") +} + +func (tbc *TestBridgeClient) GetCluster(ctx context.Context, apiKey, id string) (*bridge.ClusterApiResource, error) { + + if apiKey == tbc.ApiKey { + for _, cluster := range tbc.Clusters { + if cluster.ID == id { + return cluster, nil + } + } + } + + return nil, errors.New("boom") +} + +func (tbc *TestBridgeClient) GetClusterStatus(ctx context.Context, apiKey, id string) (*bridge.ClusterStatusApiResource, error) { + + if apiKey == tbc.ApiKey { + return tbc.ClusterStatuses[id], nil + } + + return nil, errors.New("boom") +} + +func (tbc *TestBridgeClient) GetClusterUpgrade(ctx context.Context, apiKey, id string) (*bridge.ClusterUpgradeApiResource, error) { + + if apiKey == tbc.ApiKey { + return tbc.ClusterUpgrades[id], nil + } + + return nil, errors.New("boom") +} + +func (tbc *TestBridgeClient) GetClusterRole(ctx context.Context, apiKey, clusterId, roleName string) (*bridge.ClusterRoleApiResource, error) { + + if apiKey == tbc.ApiKey { + for _, clusterRole := range tbc.ClusterRoles { + if clusterRole.ClusterId == clusterId && clusterRole.Name == roleName { + return clusterRole, nil + } + } + } + + return nil, errors.New("boom") +} + +func (tbc *TestBridgeClient) DeleteCluster(ctx context.Context, apiKey, clusterId string) 
(*bridge.ClusterApiResource, bool, error) { + alreadyDeleted := true + var cluster *bridge.ClusterApiResource + + if apiKey == tbc.ApiKey { + for i := len(tbc.Clusters) - 1; i >= 0; i-- { + if tbc.Clusters[i].ID == clusterId { + cluster = tbc.Clusters[i] + alreadyDeleted = false + tbc.Clusters = append(tbc.Clusters[:i], tbc.Clusters[i+1:]...) + return cluster, alreadyDeleted, nil + } + } + } else { + return nil, alreadyDeleted, errors.New("boom") + } + + return nil, alreadyDeleted, nil +} diff --git a/internal/bridge/crunchybridgecluster/postgres.go b/internal/bridge/crunchybridgecluster/postgres.go new file mode 100644 index 0000000000..024631de67 --- /dev/null +++ b/internal/bridge/crunchybridgecluster/postgres.go @@ -0,0 +1,164 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + "fmt" + + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/bridge" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// generatePostgresRoleSecret returns a Secret containing a password and +// connection details for the appropriate database. +func (r *CrunchyBridgeClusterReconciler) generatePostgresRoleSecret( + cluster *v1beta1.CrunchyBridgeCluster, roleSpec *v1beta1.CrunchyBridgeClusterRoleSpec, + clusterRole *bridge.ClusterRoleApiResource, +) (*corev1.Secret, error) { + roleName := roleSpec.Name + secretName := roleSpec.SecretName + intent := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: secretName, + }} + intent.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + intent.StringData = map[string]string{ + "name": clusterRole.Name, + "password": clusterRole.Password, + "uri": clusterRole.URI, + } + + intent.Annotations = cluster.Spec.Metadata.GetAnnotationsOrNil() + intent.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RoleCrunchyBridgeClusterPostgresRole, + naming.LabelCrunchyBridgeClusterPostgresRole: roleName, + }) + + err := errors.WithStack(r.setControllerReference(cluster, intent)) + + return intent, err +} + +// reconcilePostgresRoles writes the objects necessary to manage roles and their +// passwords in PostgreSQL. 
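+//
+// Currently this delegates to reconcilePostgresRoleSecrets, which keeps one
+// Secret per role listed in the spec in sync with the credentials reported by
+// the Bridge API and removes Secrets for roles no longer in the spec.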
+func (r *CrunchyBridgeClusterReconciler) reconcilePostgresRoles( + ctx context.Context, apiKey string, cluster *v1beta1.CrunchyBridgeCluster, +) error { + _, _, err := r.reconcilePostgresRoleSecrets(ctx, apiKey, cluster) + + // TODO: If we ever add a PgAdmin feature to CrunchyBridgeCluster, we will + // want to add the role credentials to PgAdmin here + + return err +} + +func (r *CrunchyBridgeClusterReconciler) reconcilePostgresRoleSecrets( + ctx context.Context, apiKey string, cluster *v1beta1.CrunchyBridgeCluster, +) ( + []*v1beta1.CrunchyBridgeClusterRoleSpec, map[string]*corev1.Secret, error, +) { + log := ctrl.LoggerFrom(ctx) + specRoles := cluster.Spec.Roles + + // Index role specifications by PostgreSQL role name and make sure that none of the + // secretNames are identical in the spec + secretNames := make(map[string]bool) + roleSpecs := make(map[string]*v1beta1.CrunchyBridgeClusterRoleSpec, len(specRoles)) + for i := range specRoles { + if secretNames[specRoles[i].SecretName] { + // Duplicate secretName found, return early with error + err := errors.New("Two or more of the Roles in the CrunchyBridgeCluster spec " + + "have the same SecretName. Role SecretNames must be unique.") + return nil, nil, err + } + secretNames[specRoles[i].SecretName] = true + + roleSpecs[specRoles[i].Name] = specRoles[i] + } + + // Make sure that this cluster's role secret names are not being used by any other + // secrets in the namespace + allSecretsInNamespace := &corev1.SecretList{} + err := errors.WithStack(r.Client.List(ctx, allSecretsInNamespace, client.InNamespace(cluster.Namespace))) + if err != nil { + return nil, nil, err + } + for _, secret := range allSecretsInNamespace.Items { + if secretNames[secret.Name] { + existingSecretLabels := secret.GetLabels() + if existingSecretLabels[naming.LabelCluster] != cluster.Name || + existingSecretLabels[naming.LabelRole] != naming.RoleCrunchyBridgeClusterPostgresRole { + err = errors.New( + fmt.Sprintf("There is already an existing Secret in this namespace with the name %v. "+ + "Please choose a different name for this role's Secret.", secret.Name), + ) + return nil, nil, err + } + } + } + + // Gather existing role secrets + secrets := &corev1.SecretList{} + selector, err := naming.AsSelector(naming.CrunchyBridgeClusterPostgresRoles(cluster.Name)) + if err == nil { + err = errors.WithStack( + r.Client.List(ctx, secrets, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector}, + )) + } + + // Index secrets by PostgreSQL role name and delete any that are not in the + // cluster spec. + roleSecrets := make(map[string]*corev1.Secret, len(secrets.Items)) + if err == nil { + for i := range secrets.Items { + secret := &secrets.Items[i] + secretRoleName := secret.Labels[naming.LabelCrunchyBridgeClusterPostgresRole] + + roleSpec, specified := roleSpecs[secretRoleName] + if specified && roleSpec.SecretName == secret.Name { + roleSecrets[secretRoleName] = secret + } else if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, secret)) + } + } + } + + // Reconcile each PostgreSQL role in the cluster spec. + for roleName, role := range roleSpecs { + // Get ClusterRole from Bridge API + clusterRole, err := r.NewClient().GetClusterRole(ctx, apiKey, cluster.Status.ID, roleName) + // If issue with getting ClusterRole, log error and move on to next role + if err != nil { + // TODO (dsessler7): Emit event here? 
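+			// A failure for one role is logged and skipped so the remaining
+			// roles can still be reconciled on this pass.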
+ log.Error(err, "issue retrieving cluster role from Bridge") + continue + } + if err == nil { + roleSecrets[roleName], err = r.generatePostgresRoleSecret(cluster, role, clusterRole) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, roleSecrets[roleName])) + } + if err != nil { + log.Error(err, "Issue creating role secret.") + } + } + + return specRoles, roleSecrets, err +} diff --git a/internal/bridge/crunchybridgecluster/postgres_test.go b/internal/bridge/crunchybridgecluster/postgres_test.go new file mode 100644 index 0000000000..66add7b789 --- /dev/null +++ b/internal/bridge/crunchybridgecluster/postgres_test.go @@ -0,0 +1,239 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + "testing" + + "sigs.k8s.io/controller-runtime/pkg/client" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/internal/bridge" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestGeneratePostgresRoleSecret(t *testing.T) { + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + + cluster := testCluster() + cluster.Namespace = setupNamespace(t, tClient).Name + + spec := &v1beta1.CrunchyBridgeClusterRoleSpec{ + Name: "application", + SecretName: "application-role-secret", + } + role := &bridge.ClusterRoleApiResource{ + Name: "application", + Password: "password", + URI: "postgres://application:password@example.com:5432/postgres", + } + t.Run("ObjectMeta", func(t *testing.T) { + secret, err := reconciler.generatePostgresRoleSecret(cluster, spec, role) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Equal(t, secret.Namespace, cluster.Namespace) + assert.Assert(t, metav1.IsControlledBy(secret, cluster)) + assert.DeepEqual(t, secret.Labels, map[string]string{ + "postgres-operator.crunchydata.com/cluster": "hippo-cr", + "postgres-operator.crunchydata.com/role": "cbc-pgrole", + "postgres-operator.crunchydata.com/cbc-pgrole": "application", + }) + } + }) + + t.Run("Data", func(t *testing.T) { + secret, err := reconciler.generatePostgresRoleSecret(cluster, spec, role) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Equal(t, secret.StringData["name"], "application") + assert.Equal(t, secret.StringData["password"], "password") + assert.Equal(t, secret.StringData["uri"], + "postgres://application:password@example.com:5432/postgres") + } + }) +} + +func TestReconcilePostgresRoleSecrets(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + apiKey := "9012" + ns := setupNamespace(t, tClient).Name + + reconciler := &CrunchyBridgeClusterReconciler{ + Client: tClient, + Owner: "crunchybridgecluster-controller", + } + + t.Run("DuplicateSecretNameInSpec", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns + + spec1 := &v1beta1.CrunchyBridgeClusterRoleSpec{ + Name: "application", + SecretName: "role-secret", + } + spec2 := &v1beta1.CrunchyBridgeClusterRoleSpec{ + Name: "postgres", + SecretName: "role-secret", + } + cluster.Spec.Roles = append(cluster.Spec.Roles, spec1, spec2) + + 
roleSpecSlice, secretMap, err := reconciler.reconcilePostgresRoleSecrets(ctx, apiKey, cluster) + assert.Check(t, roleSpecSlice == nil) + assert.Check(t, secretMap == nil) + assert.ErrorContains(t, err, "Two or more of the Roles in the CrunchyBridgeCluster spec have "+ + "the same SecretName. Role SecretNames must be unique.", "expected duplicate secret name error") + }) + + t.Run("DuplicateSecretNameInNamespace", func(t *testing.T) { + secret := &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "role-secret", + Namespace: ns, + }, + StringData: map[string]string{ + "path": "stuff", + }, + } + assert.NilError(t, tClient.Create(ctx, secret)) + + cluster := testCluster() + cluster.Namespace = ns + + spec1 := &v1beta1.CrunchyBridgeClusterRoleSpec{ + Name: "application", + SecretName: "role-secret", + } + + cluster.Spec.Roles = append(cluster.Spec.Roles, spec1) + + roleSpecSlice, secretMap, err := reconciler.reconcilePostgresRoleSecrets(ctx, apiKey, cluster) + assert.Check(t, roleSpecSlice == nil) + assert.Check(t, secretMap == nil) + assert.ErrorContains(t, err, "There is already an existing Secret in this namespace with the name role-secret. "+ + "Please choose a different name for this role's Secret.", "expected duplicate secret name error") + }) + + t.Run("UnusedSecretsGetRemoved", func(t *testing.T) { + applicationRoleInBridge := testClusterRoleApiResource() + postgresRoleInBridge := testClusterRoleApiResource() + postgresRoleInBridge.Name = "postgres" + postgresRoleInBridge.Password = "postgres-password" + reconciler.NewClient = func() bridge.ClientInterface { + return &TestBridgeClient{ + ApiKey: apiKey, + TeamId: "5678", + ClusterRoles: []*bridge.ClusterRoleApiResource{applicationRoleInBridge, postgresRoleInBridge}, + } + } + + applicationSpec := &v1beta1.CrunchyBridgeClusterRoleSpec{ + Name: "application", + SecretName: "application-role-secret", + } + postgresSpec := &v1beta1.CrunchyBridgeClusterRoleSpec{ + Name: "postgres", + SecretName: "postgres-role-secret", + } + + cluster := testCluster() + cluster.Namespace = ns + cluster.Status.ID = "1234" + // Add one role to cluster spec + cluster.Spec.Roles = append(cluster.Spec.Roles, applicationSpec) + assert.NilError(t, tClient.Create(ctx, cluster)) + + applicationRole := &bridge.ClusterRoleApiResource{ + Name: "application", + Password: "application-password", + URI: "connection-string", + } + postgresRole := &bridge.ClusterRoleApiResource{ + Name: "postgres", + Password: "postgres-password", + URI: "connection-string", + } + + // Generate secrets + applicationSecret, err := reconciler.generatePostgresRoleSecret(cluster, applicationSpec, applicationRole) + assert.NilError(t, err) + postgresSecret, err := reconciler.generatePostgresRoleSecret(cluster, postgresSpec, postgresRole) + assert.NilError(t, err) + + // Create secrets in k8s + assert.NilError(t, tClient.Create(ctx, applicationSecret)) + assert.NilError(t, tClient.Create(ctx, postgresSecret)) + + roleSpecSlice, secretMap, err := reconciler.reconcilePostgresRoleSecrets(ctx, apiKey, cluster) + assert.Check(t, roleSpecSlice != nil) + assert.Check(t, secretMap != nil) + assert.NilError(t, err) + + // Assert that postgresSecret was deleted since its associated role is not in the spec + err = tClient.Get(ctx, client.ObjectKeyFromObject(postgresSecret), postgresSecret) + assert.Assert(t, apierrors.IsNotFound(err), "expected NotFound, got %#v", err) + + // Assert that applicationSecret is still there + err = tClient.Get(ctx, client.ObjectKeyFromObject(applicationSecret), 
applicationSecret) + assert.NilError(t, err) + }) + + t.Run("SecretsGetUpdated", func(t *testing.T) { + clusterRoleInBridge := testClusterRoleApiResource() + clusterRoleInBridge.Password = "different-password" + reconciler.NewClient = func() bridge.ClientInterface { + return &TestBridgeClient{ + ApiKey: apiKey, + TeamId: "5678", + ClusterRoles: []*bridge.ClusterRoleApiResource{clusterRoleInBridge}, + } + } + + cluster := testCluster() + cluster.Namespace = ns + err := tClient.Get(ctx, client.ObjectKeyFromObject(cluster), cluster) + assert.NilError(t, err) + cluster.Status.ID = "1234" + + spec1 := &v1beta1.CrunchyBridgeClusterRoleSpec{ + Name: "application", + SecretName: "application-role-secret", + } + role1 := &bridge.ClusterRoleApiResource{ + Name: "application", + Password: "test", + URI: "connection-string", + } + // Generate secret + secret1, err := reconciler.generatePostgresRoleSecret(cluster, spec1, role1) + assert.NilError(t, err) + + roleSpecSlice, secretMap, err := reconciler.reconcilePostgresRoleSecrets(ctx, apiKey, cluster) + assert.Check(t, roleSpecSlice != nil) + assert.Check(t, secretMap != nil) + assert.NilError(t, err) + + // Assert that secret1 was updated + err = tClient.Get(ctx, client.ObjectKeyFromObject(secret1), secret1) + assert.NilError(t, err) + assert.Equal(t, string(secret1.Data["password"]), "different-password") + }) +} diff --git a/internal/bridge/crunchybridgecluster/watches.go b/internal/bridge/crunchybridgecluster/watches.go new file mode 100644 index 0000000000..79687b3476 --- /dev/null +++ b/internal/bridge/crunchybridgecluster/watches.go @@ -0,0 +1,103 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + + "k8s.io/client-go/util/workqueue" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// watchForRelatedSecret handles create/update/delete events for secrets, +// passing the Secret ObjectKey to findCrunchyBridgeClustersForSecret +func (r *CrunchyBridgeClusterReconciler) watchForRelatedSecret() handler.EventHandler { + handle := func(ctx context.Context, secret client.Object, q workqueue.RateLimitingInterface) { + key := client.ObjectKeyFromObject(secret) + + for _, cluster := range r.findCrunchyBridgeClustersForSecret(ctx, key) { + q.Add(ctrl.Request{ + NamespacedName: client.ObjectKeyFromObject(cluster), + }) + } + } + + return handler.Funcs{ + CreateFunc: func(ctx context.Context, e event.CreateEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.Object, q) + }, + UpdateFunc: func(ctx context.Context, e event.UpdateEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.ObjectNew, q) + }, + // If the secret is deleted, we want to reconcile + // in order to emit an event/status about this problem. + // We will also emit a matching event/status about this problem + // when we reconcile the cluster and can't find the secret. + // That way, users will get two alerts: one when the secret is deleted + // and another when the cluster is being reconciled. 
+ DeleteFunc: func(ctx context.Context, e event.DeleteEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.Object, q) + }, + } +} + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="crunchybridgeclusters",verbs={list} + +// findCrunchyBridgeClustersForSecret returns CrunchyBridgeClusters +// that are connected to the Secret +func (r *CrunchyBridgeClusterReconciler) findCrunchyBridgeClustersForSecret( + ctx context.Context, secret client.ObjectKey, +) []*v1beta1.CrunchyBridgeCluster { + var matching []*v1beta1.CrunchyBridgeCluster + var clusters v1beta1.CrunchyBridgeClusterList + + // NOTE: If this becomes slow due to a large number of CrunchyBridgeClusters in a single + // namespace, we can configure the [ctrl.Manager] field indexer and pass a + // [fields.Selector] here. + // - https://book.kubebuilder.io/reference/watching-resources/externally-managed.html + if err := r.List(ctx, &clusters, &client.ListOptions{ + Namespace: secret.Namespace, + }); err == nil { + for i := range clusters.Items { + if clusters.Items[i].Spec.Secret == secret.Name { + matching = append(matching, &clusters.Items[i]) + } + } + } + return matching +} + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="crunchybridgeclusters",verbs={list} + +// Watch enqueues all existing CrunchyBridgeClusters for reconciles. +func (r *CrunchyBridgeClusterReconciler) Watch() handler.EventHandler { + return handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, _ client.Object) []reconcile.Request { + log := ctrl.LoggerFrom(ctx) + + crunchyBridgeClusterList := &v1beta1.CrunchyBridgeClusterList{} + if err := r.List(ctx, crunchyBridgeClusterList); err != nil { + log.Error(err, "Error listing CrunchyBridgeClusters.") + } + + reconcileRequests := []reconcile.Request{} + for index := range crunchyBridgeClusterList.Items { + reconcileRequests = append(reconcileRequests, + reconcile.Request{ + NamespacedName: client.ObjectKeyFromObject( + &crunchyBridgeClusterList.Items[index], + ), + }, + ) + } + + return reconcileRequests + }) +} diff --git a/internal/bridge/crunchybridgecluster/watches_test.go b/internal/bridge/crunchybridgecluster/watches_test.go new file mode 100644 index 0000000000..48dba2ba14 --- /dev/null +++ b/internal/bridge/crunchybridgecluster/watches_test.go @@ -0,0 +1,84 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package crunchybridgecluster + +import ( + "context" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/testing/require" +) + +func TestFindCrunchyBridgeClustersForSecret(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient) + reconciler := &CrunchyBridgeClusterReconciler{Client: tClient} + + secret := &corev1.Secret{} + secret.Namespace = ns.Name + secret.Name = "crunchy-bridge-api-key" + + assert.NilError(t, tClient.Create(ctx, secret)) + secretObjectKey := client.ObjectKeyFromObject(secret) + + t.Run("NoClusters", func(t *testing.T) { + clusters := reconciler.findCrunchyBridgeClustersForSecret(ctx, secretObjectKey) + + assert.Equal(t, len(clusters), 0) + }) + + t.Run("OneCluster", func(t *testing.T) { + cluster1 := testCluster() + cluster1.Namespace = ns.Name + cluster1.Name = "first-cluster" + assert.NilError(t, tClient.Create(ctx, cluster1)) + + clusters := reconciler.findCrunchyBridgeClustersForSecret(ctx, secretObjectKey) + + assert.Equal(t, len(clusters), 1) + assert.Equal(t, clusters[0].Name, "first-cluster") + }) + + t.Run("TwoClusters", func(t *testing.T) { + cluster2 := testCluster() + cluster2.Namespace = ns.Name + cluster2.Name = "second-cluster" + assert.NilError(t, tClient.Create(ctx, cluster2)) + clusters := reconciler.findCrunchyBridgeClustersForSecret(ctx, secretObjectKey) + + assert.Equal(t, len(clusters), 2) + clusterCount := map[string]int{} + for _, cluster := range clusters { + clusterCount[cluster.Name] += 1 + } + assert.Equal(t, clusterCount["first-cluster"], 1) + assert.Equal(t, clusterCount["second-cluster"], 1) + }) + + t.Run("ClusterWithDifferentSecretNameNotIncluded", func(t *testing.T) { + cluster3 := testCluster() + cluster3.Namespace = ns.Name + cluster3.Name = "third-cluster" + cluster3.Spec.Secret = "different-secret-name" + assert.NilError(t, tClient.Create(ctx, cluster3)) + clusters := reconciler.findCrunchyBridgeClustersForSecret(ctx, secretObjectKey) + + assert.Equal(t, len(clusters), 2) + clusterCount := map[string]int{} + for _, cluster := range clusters { + clusterCount[cluster.Name] += 1 + } + assert.Equal(t, clusterCount["first-cluster"], 1) + assert.Equal(t, clusterCount["second-cluster"], 1) + assert.Equal(t, clusterCount["third-cluster"], 0) + }) +} diff --git a/internal/bridge/installation.go b/internal/bridge/installation.go new file mode 100644 index 0000000000..c76a073348 --- /dev/null +++ b/internal/bridge/installation.go @@ -0,0 +1,280 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package bridge + +import ( + "context" + "encoding/json" + "errors" + "sync" + "time" + + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/wait" + corev1apply "k8s.io/client-go/applyconfigurations/core/v1" + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/predicate" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" +) + +// self is a singleton Installation. See [InstallationReconciler]. +var self = new(struct { + Installation + sync.RWMutex +}) + +type AuthObject struct { + ID string `json:"id"` + ExpiresAt time.Time `json:"expires_at"` + Secret string `json:"secret"` +} + +type Installation struct { + ID string `json:"id"` + AuthObject AuthObject `json:"auth_object"` +} + +type InstallationReconciler struct { + Owner client.FieldOwner + Reader interface { + Get(context.Context, client.ObjectKey, client.Object, ...client.GetOption) error + } + Writer interface { + Patch(context.Context, client.Object, client.Patch, ...client.PatchOption) error + } + + // Refresh is the frequency at which AuthObjects should be renewed. + Refresh time.Duration + + // SecretRef is the name of the corev1.Secret in which to store Bridge tokens. + SecretRef client.ObjectKey + + // NewClient is called each time a new Client is needed. + NewClient func() *Client +} + +// ManagedInstallationReconciler creates an [InstallationReconciler] and adds it to m. +func ManagedInstallationReconciler(m manager.Manager, newClient func() *Client) error { + kubernetes := m.GetClient() + reconciler := &InstallationReconciler{ + Owner: naming.ControllerBridge, + Reader: kubernetes, + Writer: kubernetes, + Refresh: 2 * time.Hour, + SecretRef: naming.AsObjectKey(naming.OperatorConfigurationSecret()), + NewClient: newClient, + } + + // NOTE: This name was selected to show something interesting in the logs. + // The default is "secret". + // TODO: Pick this name considering metrics and other controllers. + return builder.ControllerManagedBy(m).Named("installation"). + // + // Reconcile the one Secret that holds Bridge tokens. + For(&corev1.Secret{}, builder.WithPredicates( + predicate.NewPredicateFuncs(func(secret client.Object) bool { + return client.ObjectKeyFromObject(secret) == reconciler.SecretRef + }), + )). + // + // Wake periodically even when that Secret does not exist. + WatchesRawSource( + runtime.NewTickerImmediate(time.Hour, event.GenericEvent{}, + handler.EnqueueRequestsFromMapFunc( + func(context.Context, client.Object) []reconcile.Request { + return []reconcile.Request{{NamespacedName: reconciler.SecretRef}} + }, + ), + ), + ). + // + Complete(reconciler) +} + +func (r *InstallationReconciler) Reconcile( + ctx context.Context, request reconcile.Request) (reconcile.Result, error, +) { + result := reconcile.Result{} + secret := &corev1.Secret{} + err := client.IgnoreNotFound(r.Reader.Get(ctx, request.NamespacedName, secret)) + + if err == nil { + // It is easier later to treat a missing Secret the same as one that exists + // and is empty. 
Fill in the metadata with information from the request to + // make it so. + secret.Namespace, secret.Name = request.Namespace, request.Name + + result.RequeueAfter, err = r.reconcile(ctx, secret) + } + + // Nothing can be written to a deleted namespace. + if err != nil && apierrors.HasStatusCause(err, corev1.NamespaceTerminatingCause) { + return runtime.ErrorWithoutBackoff(err) + } + + // Write conflicts are returned as errors; log and retry with backoff. + if err != nil && apierrors.IsConflict(err) { + logging.FromContext(ctx).Info("Requeue", "reason", err) + return runtime.RequeueWithBackoff(), nil + } + + return result, err +} + +// reconcile looks for an Installation in read and stores it or another in +// the [self] singleton after a successful response from the Bridge API. +func (r *InstallationReconciler) reconcile( + ctx context.Context, read *corev1.Secret) (next time.Duration, err error, +) { + write, err := corev1apply.ExtractSecret(read, string(r.Owner)) + if err != nil { + return 0, err + } + + // We GET-extract-PATCH the Secret and do not build it up from scratch. + // Send the ResourceVersion from the GET in the body of every PATCH. + if len(read.ResourceVersion) != 0 { + write.WithResourceVersion(read.ResourceVersion) + } + + // Read the Installation from the Secret, if any. + var installation Installation + if yaml.Unmarshal(read.Data[KeyBridgeToken], &installation) != nil { + installation = Installation{} + } + + // When the Secret lacks an Installation, write the one we have in memory + // or register with the API for a new one. In both cases, we write to the + // Secret which triggers another reconcile. + if len(installation.ID) == 0 { + if len(self.ID) == 0 { + return 0, r.register(ctx, write) + } + + data := map[string][]byte{} + data[KeyBridgeToken], _ = json.Marshal(self.Installation) //nolint:errchkjson + + return 0, r.persist(ctx, write.WithData(data)) + } + + // Read the timestamp from the Secret, if any. + var touched time.Time + if yaml.Unmarshal(read.Data[KeyBridgeLocalTime], &touched) != nil { + touched = time.Time{} + } + + // Refresh the AuthObject when there is no Installation in memory, + // there is no timestamp, or the timestamp is far away. This writes to + // the Secret which triggers another reconcile. + if len(self.ID) == 0 || time.Since(touched) > r.Refresh || time.Until(touched) > r.Refresh { + return 0, r.refresh(ctx, installation, write) + } + + // Trigger another reconcile one interval after the stored timestamp. + return wait.Jitter(time.Until(touched.Add(r.Refresh)), 0.1), nil +} + +// persist uses Server-Side Apply to write config to Kubernetes. The Name and +// Namespace fields cannot be nil. +func (r *InstallationReconciler) persist( + ctx context.Context, config *corev1apply.SecretApplyConfiguration, +) error { + data, err := json.Marshal(config) + apply := client.RawPatch(client.Apply.Type(), data) + + // [client.Client] decides where to write by looking at the underlying type, + // namespace, and name of its [client.Object] argument. That is also where + // it stores the API response. + target := corev1.Secret{} + target.Namespace, target.Name = *config.Namespace, *config.Name + + if err == nil { + err = r.Writer.Patch(ctx, &target, apply, r.Owner, client.ForceOwnership) + } + + return err +} + +// refresh calls the Bridge API to refresh the AuthObject of installation. It +// combines the result with installation and stores that in the [self] singleton +// and the write object in Kubernetes. 
The Name and Namespace fields of the +// latter cannot be nil. +func (r *InstallationReconciler) refresh( + ctx context.Context, installation Installation, + write *corev1apply.SecretApplyConfiguration, +) error { + result, err := r.NewClient().CreateAuthObject(ctx, installation.AuthObject) + + // An authentication error means the installation is irrecoverably expired. + // Remove it from the singleton and move it to a dated entry in the Secret. + if err != nil && errors.Is(err, errAuthentication) { + self.Lock() + self.Installation = Installation{} + self.Unlock() + + keyExpiration := KeyBridgeToken + + installation.AuthObject.ExpiresAt.UTC().Format("--2006-01-02") + + data := make(map[string][]byte, 2) + data[KeyBridgeToken] = nil + data[keyExpiration], _ = json.Marshal(installation) //nolint:errchkjson + + return r.persist(ctx, write.WithData(data)) + } + + if err == nil { + installation.AuthObject = result + + // Store the new value in the singleton. + self.Lock() + self.Installation = installation + self.Unlock() + + // Store the new value in the Secret along with the current time. + data := make(map[string][]byte, 2) + data[KeyBridgeLocalTime], _ = metav1.Now().MarshalJSON() + data[KeyBridgeToken], _ = json.Marshal(installation) //nolint:errchkjson + + err = r.persist(ctx, write.WithData(data)) + } + + return err +} + +// register calls the Bridge API to register a new Installation. It stores the +// result in the [self] singleton and the write object in Kubernetes. The Name +// and Namespace fields of the latter cannot be nil. +func (r *InstallationReconciler) register( + ctx context.Context, write *corev1apply.SecretApplyConfiguration, +) error { + installation, err := r.NewClient().CreateInstallation(ctx) + + if err == nil { + // Store the new value in the singleton. + self.Lock() + self.Installation = installation + self.Unlock() + + // Store the new value in the Secret along with the current time. + data := make(map[string][]byte, 2) + data[KeyBridgeLocalTime], _ = metav1.Now().MarshalJSON() + data[KeyBridgeToken], _ = json.Marshal(installation) //nolint:errchkjson + + err = r.persist(ctx, write.WithData(data)) + } + + return err +} diff --git a/internal/bridge/installation_test.go b/internal/bridge/installation_test.go new file mode 100644 index 0000000000..96223a2233 --- /dev/null +++ b/internal/bridge/installation_test.go @@ -0,0 +1,491 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package bridge + +import ( + "context" + "encoding/json" + "errors" + "net/http" + "net/http/httptest" + "testing" + "time" + + "gotest.tools/v3/assert" + cmpopt "gotest.tools/v3/assert/opt" + corev1 "k8s.io/api/core/v1" + corev1apply "k8s.io/client-go/applyconfigurations/core/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" +) + +func TestExtractSecretContract(t *testing.T) { + // We expect ExtractSecret to populate GVK, Namespace, and Name. 
+ + t.Run("GVK", func(t *testing.T) { + empty := &corev1.Secret{} + + extracted, err := corev1apply.ExtractSecret(empty, "") + assert.NilError(t, err) + + if assert.Check(t, extracted.APIVersion != nil) { + assert.Equal(t, *extracted.APIVersion, "v1") + } + if assert.Check(t, extracted.Kind != nil) { + assert.Equal(t, *extracted.Kind, "Secret") + } + }) + + t.Run("Name", func(t *testing.T) { + named := &corev1.Secret{} + named.Namespace, named.Name = "ns1", "s2" + + extracted, err := corev1apply.ExtractSecret(named, "") + assert.NilError(t, err) + + if assert.Check(t, extracted.Namespace != nil) { + assert.Equal(t, *extracted.Namespace, "ns1") + } + if assert.Check(t, extracted.Name != nil) { + assert.Equal(t, *extracted.Name, "s2") + } + }) + + t.Run("ResourceVersion", func(t *testing.T) { + versioned := &corev1.Secret{} + versioned.ResourceVersion = "asdf" + + extracted, err := corev1apply.ExtractSecret(versioned, "") + assert.NilError(t, err) + + // ResourceVersion is not copied from the original. + assert.Assert(t, extracted.ResourceVersion == nil) + }) +} + +func TestInstallationReconcile(t *testing.T) { + // Scenario: + // When there is no Secret and no Installation in memory, + // Then Reconcile should register with the API. + // + t.Run("FreshStart", func(t *testing.T) { + var reconciler *InstallationReconciler + var secret *corev1.Secret + + beforeEach := func() { + reconciler = new(InstallationReconciler) + secret = new(corev1.Secret) + self.Installation = Installation{} + } + + t.Run("ItRegisters", func(t *testing.T) { + beforeEach() + + // API double; spy on requests. + var requests []http.Request + { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + requests = append(requests, *r) + _ = json.NewEncoder(w).Encode(map[string]any{ + "id": "abc", "auth_object": map[string]any{"secret": "xyz"}, + }) + })) + t.Cleanup(server.Close) + + reconciler.NewClient = func() *Client { + c := NewClient(server.URL, "") + c.Backoff.Steps = 1 + assert.Equal(t, c.BaseURL.String(), server.URL) + return c + } + } + + // Kubernetes double; spy on SSA patches. + var applies []string + { + reconciler.Writer = runtime.ClientPatch(func(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error { + assert.Equal(t, string(patch.Type()), "application/apply-patch+yaml") + + data, err := patch.Data(obj) + applies = append(applies, string(data)) + return err + }) + } + + ctx := context.Background() + next, err := reconciler.reconcile(ctx, secret) + assert.NilError(t, err) + assert.Assert(t, next == 0) + + // It calls the API. + assert.Equal(t, len(requests), 1) + assert.Equal(t, requests[0].Method, "POST") + assert.Equal(t, requests[0].URL.Path, "/vendor/operator/installations") + + // It stores the result in memory. + assert.Equal(t, self.ID, "abc") + assert.Equal(t, self.AuthObject.Secret, "xyz") + + // It stores the result in Kubernetes. + assert.Equal(t, len(applies), 1) + assert.Assert(t, cmp.Contains(applies[0], `"kind":"Secret"`)) + + var decoded corev1.Secret + assert.NilError(t, yaml.Unmarshal([]byte(applies[0]), &decoded)) + assert.Assert(t, cmp.Contains(string(decoded.Data["bridge-token"]), `"id":"abc"`)) + assert.Assert(t, cmp.Contains(string(decoded.Data["bridge-token"]), `"secret":"xyz"`)) + }) + + t.Run("KubernetesError", func(t *testing.T) { + beforeEach() + + // API double; successful. 
+ { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + _ = json.NewEncoder(w).Encode(map[string]any{ + "id": "123", "auth_object": map[string]any{"secret": "456"}, + }) + })) + t.Cleanup(server.Close) + + reconciler.NewClient = func() *Client { + c := NewClient(server.URL, "") + c.Backoff.Steps = 1 + assert.Equal(t, c.BaseURL.String(), server.URL) + return c + } + } + + // Kubernetes double; failure. + expected := errors.New("boom") + { + reconciler.Writer = runtime.ClientPatch(func(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error { + return expected + }) + } + + ctx := context.Background() + _, err := reconciler.reconcile(ctx, secret) + assert.Equal(t, err, expected, "expected a Kubernetes error") + + // It stores the API result in memory. + assert.Equal(t, self.ID, "123") + assert.Equal(t, self.AuthObject.Secret, "456") + }) + }) + + // Scenario: + // When there is no Secret but an Installation exists in memory, + // Then Reconcile should store it in Kubernetes. + // + t.Run("LostSecret", func(t *testing.T) { + var reconciler *InstallationReconciler + var secret *corev1.Secret + + beforeEach := func(token []byte) { + reconciler = new(InstallationReconciler) + secret = new(corev1.Secret) + secret.Data = map[string][]byte{ + KeyBridgeToken: token, + } + self.Installation = Installation{ID: "asdf"} + } + + for _, tt := range []struct { + Name string + Token []byte + }{ + {Name: "NoToken", Token: nil}, + {Name: "BadToken", Token: []byte(`asdf`)}, + } { + t.Run(tt.Name, func(t *testing.T) { + beforeEach(tt.Token) + + // Kubernetes double; spy on SSA patches. + var applies []string + { + reconciler.Writer = runtime.ClientPatch(func(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error { + assert.Equal(t, string(patch.Type()), "application/apply-patch+yaml") + + data, err := patch.Data(obj) + applies = append(applies, string(data)) + return err + }) + } + + ctx := context.Background() + next, err := reconciler.reconcile(ctx, secret) + assert.NilError(t, err) + assert.Assert(t, next == 0) + + assert.Equal(t, self.ID, "asdf", "expected no change to memory") + + // It stores the memory in Kubernetes. + assert.Equal(t, len(applies), 1) + assert.Assert(t, cmp.Contains(applies[0], `"kind":"Secret"`)) + + var decoded corev1.Secret + assert.NilError(t, yaml.Unmarshal([]byte(applies[0]), &decoded)) + assert.Assert(t, cmp.Contains(string(decoded.Data["bridge-token"]), `"id":"asdf"`)) + }) + } + + t.Run("KubernetesError", func(t *testing.T) { + beforeEach(nil) + + // Kubernetes double; failure. + expected := errors.New("boom") + { + reconciler.Writer = runtime.ClientPatch(func(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error { + return expected + }) + } + + ctx := context.Background() + _, err := reconciler.reconcile(ctx, secret) + assert.Equal(t, err, expected, "expected a Kubernetes error") + assert.Equal(t, self.ID, "asdf", "expected no change to memory") + }) + }) + + // Scenario: + // When there is a Secret but no Installation in memory, + // Then Reconcile should verify it in the API and store it in memory. 
+ // + t.Run("Restart", func(t *testing.T) { + var reconciler *InstallationReconciler + var secret *corev1.Secret + + beforeEach := func() { + reconciler = new(InstallationReconciler) + secret = new(corev1.Secret) + secret.Data = map[string][]byte{ + KeyBridgeToken: []byte(`{ + "id":"xyz", "auth_object":{ + "secret":"abc", + "expires_at":"2020-10-28T05:06:07Z" + } + }`), + } + self.Installation = Installation{} + } + + t.Run("ItVerifies", func(t *testing.T) { + beforeEach() + + // API double; spy on requests. + var requests []http.Request + { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + requests = append(requests, *r) + _ = json.NewEncoder(w).Encode(map[string]any{"secret": "def"}) + })) + t.Cleanup(server.Close) + + reconciler.NewClient = func() *Client { + c := NewClient(server.URL, "") + c.Backoff.Steps = 1 + assert.Equal(t, c.BaseURL.String(), server.URL) + return c + } + } + + // Kubernetes double; spy on SSA patches. + var applies []string + { + reconciler.Writer = runtime.ClientPatch(func(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error { + assert.Equal(t, string(patch.Type()), "application/apply-patch+yaml") + + data, err := patch.Data(obj) + applies = append(applies, string(data)) + return err + }) + } + + ctx := context.Background() + next, err := reconciler.reconcile(ctx, secret) + assert.NilError(t, err) + assert.Assert(t, next == 0) + + assert.Equal(t, len(requests), 1) + assert.Equal(t, requests[0].Header.Get("Authorization"), "Bearer abc") + assert.Equal(t, requests[0].Method, "POST") + assert.Equal(t, requests[0].URL.Path, "/vendor/operator/auth-objects") + + // It stores the result in memory. + assert.Equal(t, self.ID, "xyz") + assert.Equal(t, self.AuthObject.Secret, "def") + + // It stores the memory in Kubernetes. + assert.Equal(t, len(applies), 1) + assert.Assert(t, cmp.Contains(applies[0], `"kind":"Secret"`)) + + var decoded corev1.Secret + assert.NilError(t, yaml.Unmarshal([]byte(applies[0]), &decoded)) + assert.Assert(t, cmp.Contains(string(decoded.Data["bridge-token"]), `"id":"xyz"`)) + assert.Assert(t, cmp.Contains(string(decoded.Data["bridge-token"]), `"secret":"def"`)) + }) + + t.Run("Expired", func(t *testing.T) { + beforeEach() + + // API double; authentication error. + { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusUnauthorized) + })) + t.Cleanup(server.Close) + + reconciler.NewClient = func() *Client { + c := NewClient(server.URL, "") + c.Backoff.Steps = 1 + assert.Equal(t, c.BaseURL.String(), server.URL) + return c + } + } + + // Kubernetes double; spy on SSA patches. + var applies []string + { + reconciler.Writer = runtime.ClientPatch(func(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error { + assert.Equal(t, string(patch.Type()), "application/apply-patch+yaml") + + data, err := patch.Data(obj) + applies = append(applies, string(data)) + return err + }) + } + + ctx := context.Background() + next, err := reconciler.reconcile(ctx, secret) + assert.NilError(t, err) + assert.Assert(t, next == 0) + + assert.DeepEqual(t, self.Installation, Installation{}) + + // It archives the expired one. 
+ assert.Equal(t, len(applies), 1) + assert.Assert(t, cmp.Contains(applies[0], `"kind":"Secret"`)) + + var decoded corev1.Secret + assert.NilError(t, yaml.Unmarshal([]byte(applies[0]), &decoded)) + assert.Equal(t, len(decoded.Data["bridge-token"]), 0) + + archived := string(decoded.Data["bridge-token--2020-10-28"]) + assert.Assert(t, cmp.Contains(archived, `"id":"xyz"`)) + assert.Assert(t, cmp.Contains(archived, `"secret":"abc"`)) + }) + }) + + // Scenario: + // When there is an Installation in the Secret and in memory, + // Then Reconcile should refresh it periodically. + // + t.Run("Refresh", func(t *testing.T) { + var reconciler *InstallationReconciler + var secret *corev1.Secret + + beforeEach := func(timestamp []byte) { + reconciler = new(InstallationReconciler) + reconciler.Refresh = time.Minute + + secret = new(corev1.Secret) + secret.Data = map[string][]byte{ + KeyBridgeToken: []byte(`{"id":"ddd", "auth_object":{"secret":"eee"}}`), + KeyBridgeLocalTime: timestamp, + } + + self.Installation = Installation{ID: "ddd"} + } + + for _, tt := range []struct { + Name string + Timestamp []byte + }{ + {Name: "NoTimestamp", Timestamp: nil}, + {Name: "BadTimestamp", Timestamp: []byte(`asdf`)}, + {Name: "OldTimestamp", Timestamp: []byte(`"2020-10-10T20:20:20Z"`)}, + {Name: "FutureTimestamp", Timestamp: []byte(`"2030-10-10T20:20:20Z"`)}, + } { + t.Run(tt.Name, func(t *testing.T) { + beforeEach(tt.Timestamp) + + // API double; spy on requests. + var requests []http.Request + { + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + requests = append(requests, *r) + _ = json.NewEncoder(w).Encode(map[string]any{"secret": "fresh"}) + })) + t.Cleanup(server.Close) + + reconciler.NewClient = func() *Client { + c := NewClient(server.URL, "") + c.Backoff.Steps = 1 + assert.Equal(t, c.BaseURL.String(), server.URL) + return c + } + } + + // Kubernetes double; spy on SSA patches. + var applies []string + { + reconciler.Writer = runtime.ClientPatch(func(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error { + assert.Equal(t, string(patch.Type()), "application/apply-patch+yaml") + + data, err := patch.Data(obj) + applies = append(applies, string(data)) + return err + }) + } + + ctx := context.Background() + next, err := reconciler.reconcile(ctx, secret) + assert.NilError(t, err) + assert.Assert(t, next == 0) + + assert.Equal(t, len(requests), 1) + assert.Equal(t, requests[0].Header.Get("Authorization"), "Bearer eee") + assert.Equal(t, requests[0].Method, "POST") + assert.Equal(t, requests[0].URL.Path, "/vendor/operator/auth-objects") + + // It stores the result in memory. + assert.Equal(t, self.ID, "ddd") + assert.Equal(t, self.AuthObject.Secret, "fresh") + + // It stores the memory in Kubernetes. + assert.Equal(t, len(applies), 1) + assert.Assert(t, cmp.Contains(applies[0], `"kind":"Secret"`)) + + var decoded corev1.Secret + assert.NilError(t, yaml.Unmarshal([]byte(applies[0]), &decoded)) + assert.Assert(t, cmp.Contains(string(decoded.Data["bridge-token"]), `"id":"ddd"`)) + assert.Assert(t, cmp.Contains(string(decoded.Data["bridge-token"]), `"secret":"fresh"`)) + }) + } + + t.Run("CurrentTimestamp", func(t *testing.T) { + current := time.Now().Add(-15 * time.Minute) + currentJSON, _ := current.UTC().MarshalJSON() + + beforeEach(currentJSON) + reconciler.Refresh = time.Hour + + // Any API calls would panic because no spies are configured here. 
+ + ctx := context.Background() + next, err := reconciler.reconcile(ctx, secret) + assert.NilError(t, err) + + // The next reconcile is scheduled around (60 - 15 =) 45 minutes + // from now, plus or minus (60 * 10% =) 6 minutes of jitter. + assert.DeepEqual(t, next, 45*time.Minute, + cmpopt.DurationWithThreshold(6*time.Minute)) + }) + }) +} diff --git a/internal/bridge/naming.go b/internal/bridge/naming.go new file mode 100644 index 0000000000..cabe8e9cf6 --- /dev/null +++ b/internal/bridge/naming.go @@ -0,0 +1,10 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package bridge + +const ( + KeyBridgeLocalTime = "bridge-local-time" + KeyBridgeToken = "bridge-token" +) diff --git a/internal/bridge/quantity.go b/internal/bridge/quantity.go new file mode 100644 index 0000000000..a948c6b4cf --- /dev/null +++ b/internal/bridge/quantity.go @@ -0,0 +1,44 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package bridge + +import ( + "fmt" + + "k8s.io/apimachinery/pkg/api/resource" +) + +func FromCPU(n int64) *resource.Quantity { + // Assume the Bridge API returns numbers that can be parsed by the + // [resource] package. + if q, err := resource.ParseQuantity(fmt.Sprint(n)); err == nil { + return &q + } + + return resource.NewQuantity(0, resource.DecimalSI) +} + +// FromGibibytes returns n gibibytes as a [resource.Quantity]. +func FromGibibytes(n int64) *resource.Quantity { + // Assume the Bridge API returns numbers that can be parsed by the + // [resource] package. + if q, err := resource.ParseQuantity(fmt.Sprint(n) + "Gi"); err == nil { + return &q + } + + return resource.NewQuantity(0, resource.BinarySI) +} + +// ToGibibytes returns q rounded up to a non-negative gibibyte. +func ToGibibytes(q resource.Quantity) int64 { + v := q.Value() + + if v <= 0 { + return 0 + } + + // https://stackoverflow.com/a/2745086 + return 1 + ((v - 1) >> 30) +} diff --git a/internal/bridge/quantity_test.go b/internal/bridge/quantity_test.go new file mode 100644 index 0000000000..7cfebb4a86 --- /dev/null +++ b/internal/bridge/quantity_test.go @@ -0,0 +1,59 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package bridge + +import ( + "testing" + + "gotest.tools/v3/assert" + "k8s.io/apimachinery/pkg/api/resource" +) + +func TestFromCPU(t *testing.T) { + zero := FromCPU(0) + assert.Assert(t, zero.IsZero()) + assert.Equal(t, zero.String(), "0") + + one := FromCPU(1) + assert.Equal(t, one.String(), "1") + + negative := FromCPU(-2) + assert.Equal(t, negative.String(), "-2") +} + +func TestFromGibibytes(t *testing.T) { + zero := FromGibibytes(0) + assert.Assert(t, zero.IsZero()) + assert.Equal(t, zero.String(), "0") + + one := FromGibibytes(1) + assert.Equal(t, one.String(), "1Gi") + + negative := FromGibibytes(-2) + assert.Equal(t, negative.String(), "-2Gi") +} + +func TestToGibibytes(t *testing.T) { + zero := resource.MustParse("0") + assert.Equal(t, ToGibibytes(zero), int64(0)) + + // Negative quantities become zero. + negative := resource.MustParse("-4G") + assert.Equal(t, ToGibibytes(negative), int64(0)) + + // Decimal quantities round up. + decimal := resource.MustParse("9000M") + assert.Equal(t, ToGibibytes(decimal), int64(9)) + + // Binary quantities round up. 
+ binary := resource.MustParse("8000Mi") + assert.Equal(t, ToGibibytes(binary), int64(8)) + + fourGi := resource.MustParse("4096Mi") + assert.Equal(t, ToGibibytes(fourGi), int64(4)) + + moreThanFourGi := resource.MustParse("4097Mi") + assert.Equal(t, ToGibibytes(moreThanFourGi), int64(5)) +} diff --git a/internal/config/annotations.go b/internal/config/annotations.go deleted file mode 100644 index eb474738eb..0000000000 --- a/internal/config/annotations.go +++ /dev/null @@ -1,61 +0,0 @@ -package config - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -// annotations used by the operator -const ( - // ANNOTATION_BACKREST_RESTORE is used to annotate pgclusters that are restoring - ANNOTATION_BACKREST_RESTORE = "pgo-backrest-restore" - ANNOTATION_PGHA_BOOTSTRAP_REPLICA = "pgo-pgha-bootstrap-replica" - ANNOTATION_CLONE_BACKREST_PVC_SIZE = "clone-backrest-pvc-size" - ANNOTATION_CLONE_ENABLE_METRICS = "clone-enable-metrics" - ANNOTATION_CLONE_PVC_SIZE = "clone-pvc-size" - ANNOTATION_CLONE_SOURCE_CLUSTER_NAME = "clone-source-cluster-name" - ANNOTATION_CLONE_TARGET_CLUSTER_NAME = "clone-target-cluster-name" - ANNOTATION_PRIMARY_DEPLOYMENT = "primary-deployment" - // annotation to track the cluster's current primary - ANNOTATION_CURRENT_PRIMARY = "current-primary" - // annotation to indicate whether a cluster has been upgraded - ANNOTATION_IS_UPGRADED = "is-upgraded" - // annotation to store the Operator versions upgraded from and to - ANNOTATION_UPGRADE_INFO = "upgrade-info" - // annotation to store the string boolean, used when checking upgrade status - ANNOTATIONS_FALSE = "false" - // ANNOTATION_REPO_PATH is for storing the repository path for the pgBackRest repo in a cluster - ANNOTATION_REPO_PATH = "repo-path" - // ANNOTATION_PG_PORT is for storing the PostgreSQL port for a cluster - ANNOTATION_PG_PORT = "pg-port" - // ANNOTATION_S3_BUCKET is for storing the name of the S3 bucket used by pgBackRest in - // a cluster - ANNOTATION_S3_BUCKET = "s3-bucket" - // ANNOTATION_S3_ENDPOINT is for storing the name of the S3 endpoint used by pgBackRest in - // a cluster - ANNOTATION_S3_ENDPOINT = "s3-endpoint" - // ANNOTATION_S3_REGION is for storing the name of the S3 region used by pgBackRest in - // a cluster - ANNOTATION_S3_REGION = "s3-region" - // ANNOTATION_S3_URI_STYLE is for storing the the URI style that should be used to access a - // pgBackRest repository - ANNOTATION_S3_URI_STYLE = "s3-uri-style" - // ANNOTATION_S3_VERIFY_TLS is for storing the setting that determines whether or not TLS should - // be used to access a pgBackRest repository - ANNOTATION_S3_VERIFY_TLS = "s3-verify-tls" - // ANNOTATION_S3_BUCKET is for storing the SSHD port used by the pgBackRest repository - // service in a cluster - ANNOTATION_SSHD_PORT = "sshd-port" - // ANNOTATION_SUPPLEMENTAL_GROUPS is for storing the supplemental groups used with a cluster - ANNOTATION_SUPPLEMENTAL_GROUPS = "supplemental-groups" -) diff --git a/internal/config/config.go 
b/internal/config/config.go new file mode 100644 index 0000000000..e3f9ced215 --- /dev/null +++ b/internal/config/config.go @@ -0,0 +1,159 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package config + +import ( + "fmt" + "os" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// defaultFromEnv reads the environment variable key when value is empty. +func defaultFromEnv(value, key string) string { + if value == "" { + return os.Getenv(key) + } + return value +} + +// FetchKeyCommand returns the fetch_key_cmd value stored in the encryption_key_command +// variable used to enable TDE. +func FetchKeyCommand(spec *v1beta1.PostgresClusterSpec) string { + if spec.Patroni != nil { + if spec.Patroni.DynamicConfiguration != nil { + configuration := spec.Patroni.DynamicConfiguration + if configuration != nil { + if postgresql, ok := configuration["postgresql"].(map[string]any); ok { + if parameters, ok := postgresql["parameters"].(map[string]any); ok { + if parameters["encryption_key_command"] != nil { + return fmt.Sprintf("%s", parameters["encryption_key_command"]) + } + } + } + } + } + } + return "" +} + +// Red Hat Marketplace requires operators to use environment variables be used +// for any image other than the operator itself. Those variables must start with +// "RELATED_IMAGE_" so that OSBS can transform their tag values into digests +// for a "disconnected" OLM CSV. + +// - https://redhat-connect.gitbook.io/certified-operator-guide/troubleshooting-and-resources/offline-enabled-operators +// - https://osbs.readthedocs.io/en/latest/users.html#pullspec-locations + +// PGBackRestContainerImage returns the container image to use for pgBackRest. +func PGBackRestContainerImage(cluster *v1beta1.PostgresCluster) string { + image := cluster.Spec.Backups.PGBackRest.Image + + return defaultFromEnv(image, "RELATED_IMAGE_PGBACKREST") +} + +// PGAdminContainerImage returns the container image to use for pgAdmin. +func PGAdminContainerImage(cluster *v1beta1.PostgresCluster) string { + var image string + if cluster.Spec.UserInterface != nil && + cluster.Spec.UserInterface.PGAdmin != nil { + image = cluster.Spec.UserInterface.PGAdmin.Image + } + + return defaultFromEnv(image, "RELATED_IMAGE_PGADMIN") +} + +// StandalonePGAdminContainerImage returns the container image to use for pgAdmin. +func StandalonePGAdminContainerImage(pgadmin *v1beta1.PGAdmin) string { + var image string + if pgadmin.Spec.Image != nil { + image = *pgadmin.Spec.Image + } + + return defaultFromEnv(image, "RELATED_IMAGE_STANDALONE_PGADMIN") +} + +// PGBouncerContainerImage returns the container image to use for pgBouncer. +func PGBouncerContainerImage(cluster *v1beta1.PostgresCluster) string { + var image string + if cluster.Spec.Proxy != nil && + cluster.Spec.Proxy.PGBouncer != nil { + image = cluster.Spec.Proxy.PGBouncer.Image + } + + return defaultFromEnv(image, "RELATED_IMAGE_PGBOUNCER") +} + +// PGExporterContainerImage returns the container image to use for the +// PostgreSQL Exporter. +func PGExporterContainerImage(cluster *v1beta1.PostgresCluster) string { + var image string + if cluster.Spec.Monitoring != nil && + cluster.Spec.Monitoring.PGMonitor != nil && + cluster.Spec.Monitoring.PGMonitor.Exporter != nil { + image = cluster.Spec.Monitoring.PGMonitor.Exporter.Image + } + + return defaultFromEnv(image, "RELATED_IMAGE_PGEXPORTER") +} + +// PostgresContainerImage returns the container image to use for PostgreSQL. 
+func PostgresContainerImage(cluster *v1beta1.PostgresCluster) string { + image := cluster.Spec.Image + key := "RELATED_IMAGE_POSTGRES_" + fmt.Sprint(cluster.Spec.PostgresVersion) + + if version := cluster.Spec.PostGISVersion; version != "" { + key += "_GIS_" + version + } + + return defaultFromEnv(image, key) +} + +// PGONamespace returns the namespace where the PGO is running, +// based on the env var from the DownwardAPI +// If no env var is found, returns "" +func PGONamespace() string { + return os.Getenv("PGO_NAMESPACE") +} + +// VerifyImageValues checks that all container images required by the +// spec are defined. If any are undefined, a list is returned in an error. +func VerifyImageValues(cluster *v1beta1.PostgresCluster) error { + + var images []string + + if PGBackRestContainerImage(cluster) == "" { + images = append(images, "crunchy-pgbackrest") + } + if PGAdminContainerImage(cluster) == "" && + cluster.Spec.UserInterface != nil && + cluster.Spec.UserInterface.PGAdmin != nil { + images = append(images, "crunchy-pgadmin4") + } + if PGBouncerContainerImage(cluster) == "" && + cluster.Spec.Proxy != nil && + cluster.Spec.Proxy.PGBouncer != nil { + images = append(images, "crunchy-pgbouncer") + } + if PGExporterContainerImage(cluster) == "" && + cluster.Spec.Monitoring != nil && + cluster.Spec.Monitoring.PGMonitor != nil && + cluster.Spec.Monitoring.PGMonitor.Exporter != nil { + images = append(images, "crunchy-postgres-exporter") + } + if PostgresContainerImage(cluster) == "" { + if cluster.Spec.PostGISVersion != "" { + images = append(images, "crunchy-postgres-gis") + } else { + images = append(images, "crunchy-postgres") + } + } + + if len(images) > 0 { + return fmt.Errorf("Missing image(s): %s", images) + } + + return nil +} diff --git a/internal/config/config_test.go b/internal/config/config_test.go new file mode 100644 index 0000000000..7b8ca2f863 --- /dev/null +++ b/internal/config/config_test.go @@ -0,0 +1,256 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package config + +import ( + "os" + "testing" + + "gotest.tools/v3/assert" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestFetchKeyCommand(t *testing.T) { + + spec1 := v1beta1.PostgresClusterSpec{} + assert.Assert(t, FetchKeyCommand(&spec1) == "") + + spec2 := v1beta1.PostgresClusterSpec{ + Patroni: &v1beta1.PatroniSpec{}, + } + assert.Assert(t, FetchKeyCommand(&spec2) == "") + + spec3 := v1beta1.PostgresClusterSpec{ + Patroni: &v1beta1.PatroniSpec{ + DynamicConfiguration: map[string]any{}, + }, + } + assert.Assert(t, FetchKeyCommand(&spec3) == "") + + spec4 := v1beta1.PostgresClusterSpec{ + Patroni: &v1beta1.PatroniSpec{ + DynamicConfiguration: map[string]any{ + "postgresql": map[string]any{}, + }, + }, + } + assert.Assert(t, FetchKeyCommand(&spec4) == "") + + spec5 := v1beta1.PostgresClusterSpec{ + Patroni: &v1beta1.PatroniSpec{ + DynamicConfiguration: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{}, + }, + }, + }, + } + assert.Assert(t, FetchKeyCommand(&spec5) == "") + + spec6 := v1beta1.PostgresClusterSpec{ + Patroni: &v1beta1.PatroniSpec{ + DynamicConfiguration: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "encryption_key_command": "", + }, + }, + }, + }, + } + assert.Assert(t, FetchKeyCommand(&spec6) == "") + + spec7 := v1beta1.PostgresClusterSpec{ + Patroni: &v1beta1.PatroniSpec{ + DynamicConfiguration: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "encryption_key_command": "echo mykey", + }, + }, + }, + }, + } + assert.Assert(t, FetchKeyCommand(&spec7) == "echo mykey") + +} + +func TestPGAdminContainerImage(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + + t.Setenv("RELATED_IMAGE_PGADMIN", "") + os.Unsetenv("RELATED_IMAGE_PGADMIN") + assert.Equal(t, PGAdminContainerImage(cluster), "") + + t.Setenv("RELATED_IMAGE_PGADMIN", "") + assert.Equal(t, PGAdminContainerImage(cluster), "") + + t.Setenv("RELATED_IMAGE_PGADMIN", "env-var-pgadmin") + assert.Equal(t, PGAdminContainerImage(cluster), "env-var-pgadmin") + + assert.NilError(t, yaml.Unmarshal([]byte(`{ + userInterface: { pgAdmin: { image: spec-image } }, + }`), &cluster.Spec)) + assert.Equal(t, PGAdminContainerImage(cluster), "spec-image") +} + +func TestPGBackRestContainerImage(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + + t.Setenv("RELATED_IMAGE_PGBACKREST", "") + os.Unsetenv("RELATED_IMAGE_PGBACKREST") + assert.Equal(t, PGBackRestContainerImage(cluster), "") + + t.Setenv("RELATED_IMAGE_PGBACKREST", "") + assert.Equal(t, PGBackRestContainerImage(cluster), "") + + t.Setenv("RELATED_IMAGE_PGBACKREST", "env-var-pgbackrest") + assert.Equal(t, PGBackRestContainerImage(cluster), "env-var-pgbackrest") + + assert.NilError(t, yaml.Unmarshal([]byte(`{ + backups: { pgBackRest: { image: spec-image } }, + }`), &cluster.Spec)) + assert.Equal(t, PGBackRestContainerImage(cluster), "spec-image") +} + +func TestPGBouncerContainerImage(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + + t.Setenv("RELATED_IMAGE_PGBOUNCER", "") + os.Unsetenv("RELATED_IMAGE_PGBOUNCER") + assert.Equal(t, PGBouncerContainerImage(cluster), "") + + t.Setenv("RELATED_IMAGE_PGBOUNCER", "") + assert.Equal(t, PGBouncerContainerImage(cluster), "") + + t.Setenv("RELATED_IMAGE_PGBOUNCER", "env-var-pgbouncer") + assert.Equal(t, PGBouncerContainerImage(cluster), "env-var-pgbouncer") + + assert.NilError(t, 
yaml.Unmarshal([]byte(`{ + proxy: { pgBouncer: { image: spec-image } }, + }`), &cluster.Spec)) + assert.Equal(t, PGBouncerContainerImage(cluster), "spec-image") +} + +func TestPGExporterContainerImage(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + + t.Setenv("RELATED_IMAGE_PGEXPORTER", "") + os.Unsetenv("RELATED_IMAGE_PGEXPORTER") + assert.Equal(t, PGExporterContainerImage(cluster), "") + + t.Setenv("RELATED_IMAGE_PGEXPORTER", "") + assert.Equal(t, PGExporterContainerImage(cluster), "") + + t.Setenv("RELATED_IMAGE_PGEXPORTER", "env-var-pgexporter") + assert.Equal(t, PGExporterContainerImage(cluster), "env-var-pgexporter") + + assert.NilError(t, yaml.Unmarshal([]byte(`{ + monitoring: { pgMonitor: { exporter: { image: spec-image } } }, + }`), &cluster.Spec)) + assert.Equal(t, PGExporterContainerImage(cluster), "spec-image") +} + +func TestStandalonePGAdminContainerImage(t *testing.T) { + pgadmin := &v1beta1.PGAdmin{} + + t.Setenv("RELATED_IMAGE_STANDALONE_PGADMIN", "") + os.Unsetenv("RELATED_IMAGE_STANDALONE_PGADMIN") + assert.Equal(t, StandalonePGAdminContainerImage(pgadmin), "") + + t.Setenv("RELATED_IMAGE_STANDALONE_PGADMIN", "") + assert.Equal(t, StandalonePGAdminContainerImage(pgadmin), "") + + t.Setenv("RELATED_IMAGE_STANDALONE_PGADMIN", "env-var-pgadmin") + assert.Equal(t, StandalonePGAdminContainerImage(pgadmin), "env-var-pgadmin") + + assert.NilError(t, yaml.Unmarshal([]byte(`{ + image: spec-image + }`), &pgadmin.Spec)) + assert.Equal(t, StandalonePGAdminContainerImage(pgadmin), "spec-image") +} + +func TestPostgresContainerImage(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + cluster.Spec.PostgresVersion = 12 + + t.Setenv("RELATED_IMAGE_POSTGRES_12", "") + os.Unsetenv("RELATED_IMAGE_POSTGRES_12") + assert.Equal(t, PostgresContainerImage(cluster), "") + + t.Setenv("RELATED_IMAGE_POSTGRES_12", "") + assert.Equal(t, PostgresContainerImage(cluster), "") + + t.Setenv("RELATED_IMAGE_POSTGRES_12", "env-var-postgres") + assert.Equal(t, PostgresContainerImage(cluster), "env-var-postgres") + + cluster.Spec.Image = "spec-image" + assert.Equal(t, PostgresContainerImage(cluster), "spec-image") + + cluster.Spec.Image = "" + cluster.Spec.PostGISVersion = "3.0" + t.Setenv("RELATED_IMAGE_POSTGRES_12_GIS_3.0", "env-var-postgis") + assert.Equal(t, PostgresContainerImage(cluster), "env-var-postgis") + + cluster.Spec.Image = "spec-image" + assert.Equal(t, PostgresContainerImage(cluster), "spec-image") +} + +func TestVerifyImageValues(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + + verifyImageCheck := func(t *testing.T, envVar, errString string, cluster *v1beta1.PostgresCluster) { + + t.Setenv(envVar, "") + os.Unsetenv(envVar) + err := VerifyImageValues(cluster) + assert.ErrorContains(t, err, errString) + } + + t.Run("crunchy-postgres", func(t *testing.T) { + cluster.Spec.PostgresVersion = 14 + verifyImageCheck(t, "RELATED_IMAGE_POSTGRES_14", "crunchy-postgres", cluster) + }) + + t.Run("crunchy-postgres-gis", func(t *testing.T) { + cluster.Spec.PostGISVersion = "3.3" + verifyImageCheck(t, "RELATED_IMAGE_POSTGRES_14_GIS_3.3", "crunchy-postgres-gis", cluster) + }) + + t.Run("crunchy-pgbackrest", func(t *testing.T) { + verifyImageCheck(t, "RELATED_IMAGE_PGBACKREST", "crunchy-pgbackrest", cluster) + }) + + t.Run("crunchy-pgbouncer", func(t *testing.T) { + cluster.Spec.Proxy = new(v1beta1.PostgresProxySpec) + cluster.Spec.Proxy.PGBouncer = new(v1beta1.PGBouncerPodSpec) + verifyImageCheck(t, "RELATED_IMAGE_PGBOUNCER", "crunchy-pgbouncer", cluster) + }) + + 
t.Run("crunchy-pgadmin4", func(t *testing.T) { + cluster.Spec.UserInterface = new(v1beta1.UserInterfaceSpec) + cluster.Spec.UserInterface.PGAdmin = new(v1beta1.PGAdminPodSpec) + verifyImageCheck(t, "RELATED_IMAGE_PGADMIN", "crunchy-pgadmin4", cluster) + }) + + t.Run("crunchy-postgres-exporter", func(t *testing.T) { + cluster.Spec.Monitoring = new(v1beta1.MonitoringSpec) + cluster.Spec.Monitoring.PGMonitor = new(v1beta1.PGMonitorSpec) + cluster.Spec.Monitoring.PGMonitor.Exporter = new(v1beta1.ExporterSpec) + verifyImageCheck(t, "RELATED_IMAGE_PGEXPORTER", "crunchy-postgres-exporter", cluster) + }) + + t.Run("multiple images", func(t *testing.T) { + err := VerifyImageValues(cluster) + assert.ErrorContains(t, err, "crunchy-postgres-gis") + assert.ErrorContains(t, err, "crunchy-pgbackrest") + assert.ErrorContains(t, err, "crunchy-pgbouncer") + assert.ErrorContains(t, err, "crunchy-pgadmin4") + assert.ErrorContains(t, err, "crunchy-postgres-exporter") + }) + +} diff --git a/internal/config/defaults.go b/internal/config/defaults.go deleted file mode 100644 index d86e404eb7..0000000000 --- a/internal/config/defaults.go +++ /dev/null @@ -1,76 +0,0 @@ -package config - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "k8s.io/apimachinery/pkg/api/resource" -) - -// DefaultPgBouncerReplicas is the total number of Pods to place in a pgBouncer -// Deployment -const DefaultPgBouncerReplicas = 1 - -// Default resource values for deploying a PostgreSQL cluster. These values are -// utilized if the user has not provided these values either through -// configuration or from one-off API/CLI calls. -// -// These values were determined by either program defaults (e.g. 
the PostgreSQL -// one) and/or loose to vigorous experimentation and profiling -var ( - // DefaultBackrestRepoResourceMemory is the default value of the resource - // request for memory for a pgBackRest repository - DefaultBackrestResourceMemory = resource.MustParse("48Mi") - // DefaultInstanceResourceMemory is the default value of the resource request - // for memory for a PostgreSQL instance in a cluster - DefaultInstanceResourceMemory = resource.MustParse("512Mi") - // DefaultPgBouncerResourceMemory is the default value of the resource request - // for memory of a pgBouncer instance - DefaultPgBouncerResourceMemory = resource.MustParse("24Mi") - // DefaultExporterResourceMemory is the default value of the resource request - // for memory of a Crunchy Postgres Exporter instance - DefaultExporterResourceMemory = resource.MustParse("24Mi") -) - -// The following constants define the default refresh intervals for any informers created -// by that require a refresh interval -const ( - // ControllerGroupRefreshInterval is the default informer refresh interval in seconds - // for the controllers created by the Controller Manager that require a refresh interval - DefaultControllerGroupRefreshInterval = 60 - // NamespaceRefreshInterval is the default informer refresh interval in seconds - // for the Operator's namespace controller - DefaultNamespaceRefreshInterval = 60 -) - -// The following constants define the default number of workers created for the worker queues -// created within the various controller created by the Operator -const ( - // DefaultConfigMapWorkerCount defines the default number or workers for the worker queue - // in the ConfigMap controller - DefaultConfigMapWorkerCount = 2 - // DefaultNamespaceWorkerCount defines the default number or workers for the worker queue - // in the Namespace controller - DefaultNamespaceWorkerCount = 3 - // DefaultPGClusterWorkerCount defines the default number or workers for the worker queue - // in the PGCluster controller - DefaultPGClusterWorkerCount = 1 - // DefaultPGReplicaWorkerCount defines the default number or workers for the worker queue - // in the PGReplica controller - DefaultPGReplicaWorkerCount = 1 - // DefaultPGTaskWorkerCount defines the default number or workers for the worker queue - // in the PGTask controller - DefaultPGTaskWorkerCount = 1 -) diff --git a/internal/config/images.go b/internal/config/images.go deleted file mode 100644 index 905e46579c..0000000000 --- a/internal/config/images.go +++ /dev/null @@ -1,64 +0,0 @@ -package config - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -// a list of container images that are available -const ( - CONTAINER_IMAGE_PGO_BACKREST = "pgo-backrest" - CONTAINER_IMAGE_PGO_BACKREST_REPO = "pgo-backrest-repo" - CONTAINER_IMAGE_PGO_BACKREST_REPO_SYNC = "pgo-backrest-repo-sync" - CONTAINER_IMAGE_PGO_BACKREST_RESTORE = "pgo-backrest-restore" - CONTAINER_IMAGE_PGO_CLIENT = "pgo-client" - CONTAINER_IMAGE_PGO_RMDATA = "pgo-rmdata" - CONTAINER_IMAGE_PGO_SQL_RUNNER = "pgo-sqlrunner" - CONTAINER_IMAGE_CRUNCHY_ADMIN = "crunchy-admin" - CONTAINER_IMAGE_CRUNCHY_BACKREST_RESTORE = "crunchy-backrest-restore" - CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER = "crunchy-postgres-exporter" - CONTAINER_IMAGE_CRUNCHY_GRAFANA = "crunchy-grafana" - CONTAINER_IMAGE_CRUNCHY_PGADMIN = "crunchy-pgadmin4" - CONTAINER_IMAGE_CRUNCHY_PGBADGER = "crunchy-pgbadger" - CONTAINER_IMAGE_CRUNCHY_PGBOUNCER = "crunchy-pgbouncer" - CONTAINER_IMAGE_CRUNCHY_PGDUMP = "crunchy-pgdump" - CONTAINER_IMAGE_CRUNCHY_PGRESTORE = "crunchy-pgrestore" - CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA = "crunchy-postgres-ha" - CONTAINER_IMAGE_CRUNCHY_POSTGRES_GIS_HA = "crunchy-postgres-gis-ha" - CONTAINER_IMAGE_CRUNCHY_PROMETHEUS = "crunchy-prometheus" -) - -// a map of the "RELATED_IMAGE_*" environmental variables to their defined -// container image names, which allows certain packagers to inject the full -// definition for where to pull a container image from -// -// See: https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/contributors/design-proposals/related-images.md -var RelatedImageMap = map[string]string{ - "RELATED_IMAGE_PGO_BACKREST": CONTAINER_IMAGE_PGO_BACKREST, - "RELATED_IMAGE_PGO_BACKREST_REPO": CONTAINER_IMAGE_PGO_BACKREST_REPO, - "RELATED_IMAGE_PGO_BACKREST_REPO_SYNC": CONTAINER_IMAGE_PGO_BACKREST_REPO_SYNC, - "RELATED_IMAGE_PGO_BACKREST_RESTORE": CONTAINER_IMAGE_PGO_BACKREST_RESTORE, - "RELATED_IMAGE_PGO_CLIENT": CONTAINER_IMAGE_PGO_CLIENT, - "RELATED_IMAGE_PGO_RMDATA": CONTAINER_IMAGE_PGO_RMDATA, - "RELATED_IMAGE_PGO_SQL_RUNNER": CONTAINER_IMAGE_PGO_SQL_RUNNER, - "RELATED_IMAGE_CRUNCHY_ADMIN": CONTAINER_IMAGE_CRUNCHY_ADMIN, - "RELATED_IMAGE_CRUNCHY_BACKREST_RESTORE": CONTAINER_IMAGE_CRUNCHY_BACKREST_RESTORE, - "RELATED_IMAGE_CRUNCHY_POSTGRES_EXPORTER": CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER, - "RELATED_IMAGE_CRUNCHY_PGADMIN": CONTAINER_IMAGE_CRUNCHY_PGADMIN, - "RELATED_IMAGE_CRUNCHY_PGBADGER": CONTAINER_IMAGE_CRUNCHY_PGBADGER, - "RELATED_IMAGE_CRUNCHY_PGBOUNCER": CONTAINER_IMAGE_CRUNCHY_PGBOUNCER, - "RELATED_IMAGE_CRUNCHY_PGDUMP": CONTAINER_IMAGE_CRUNCHY_PGDUMP, - "RELATED_IMAGE_CRUNCHY_PGRESTORE": CONTAINER_IMAGE_CRUNCHY_PGRESTORE, - "RELATED_IMAGE_CRUNCHY_POSTGRES_HA": CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA, - "RELATED_IMAGE_CRUNCHY_POSTGRES_GIS_HA": CONTAINER_IMAGE_CRUNCHY_POSTGRES_GIS_HA, -} diff --git a/internal/config/labels.go b/internal/config/labels.go deleted file mode 100644 index 6d20c72742..0000000000 --- a/internal/config/labels.go +++ /dev/null @@ -1,172 +0,0 @@ -package config - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- See the License for the specific language governing permissions and - limitations under the License. -*/ - -// resource labels used by the operator -const LABEL_NAME = "name" -const LABEL_SELECTOR = "selector" -const LABEL_OPERATOR = "postgres-operator" -const LABEL_PG_CLUSTER = "pg-cluster" -const LABEL_PG_CLUSTER_IDENTIFIER = "pg-cluster-id" -const LABEL_PG_DATABASE = "pgo-pg-database" - -const LABEL_PGTASK = "pg-task" - -const LABEL_AUTOFAIL = "autofail" -const LABEL_FAILOVER = "failover" - -const LABEL_TARGET = "target" -const LABEL_RMDATA = "pgrmdata" - -const LABEL_PGPOLICY = "pgpolicy" -const LABEL_INGEST = "ingest" -const LABEL_PGREMOVE = "pgremove" -const LABEL_PVCNAME = "pvcname" -const LABEL_EXPORTER = "crunchy-postgres-exporter" -const LABEL_EXPORTER_PG_USER = "ccp_monitoring" -const LABEL_ARCHIVE = "archive" -const LABEL_ARCHIVE_TIMEOUT = "archive-timeout" -const LABEL_CUSTOM_CONFIG = "custom-config" -const LABEL_NODE_LABEL_KEY = "NodeLabelKey" -const LABEL_NODE_LABEL_VALUE = "NodeLabelValue" -const LABEL_REPLICA_NAME = "replica-name" -const LABEL_CCP_IMAGE_TAG_KEY = "ccp-image-tag" -const LABEL_CCP_IMAGE_KEY = "ccp-image" -const LABEL_IMAGE_PREFIX = "image-prefix" -const LABEL_SERVICE_TYPE = "service-type" -const LABEL_POD_ANTI_AFFINITY = "pg-pod-anti-affinity" -const LABEL_SYNC_REPLICATION = "sync-replication" - -const LABEL_REPLICA_COUNT = "replica-count" -const LABEL_STORAGE_CONFIG = "storage-config" -const LABEL_NODE_LABEL = "node-label" -const LABEL_VERSION = "version" -const LABEL_PGO_VERSION = "pgo-version" -const LABEL_DELETE_DATA = "delete-data" -const LABEL_DELETE_DATA_STARTED = "delete-data-started" -const LABEL_DELETE_BACKUPS = "delete-backups" -const LABEL_IS_REPLICA = "is-replica" -const LABEL_IS_BACKUP = "is-backup" -const LABEL_STARTUP = "startup" -const LABEL_SHUTDOWN = "shutdown" - -// label for the pgcluster upgrade -const LABEL_UPGRADE = "upgrade" - -const LABEL_BACKREST = "pgo-backrest" -const LABEL_BACKREST_JOB = "pgo-backrest-job" -const LABEL_BACKREST_RESTORE = "pgo-backrest-restore" -const LABEL_CONTAINER_NAME = "containername" -const LABEL_POD_NAME = "podname" -const LABEL_BACKREST_REPO_SECRET = "backrest-repo-config" -const LABEL_BACKREST_COMMAND = "backrest-command" -const LABEL_BACKREST_RESTORE_FROM_CLUSTER = "backrest-restore-from-cluster" -const LABEL_BACKREST_RESTORE_OPTS = "backrest-restore-opts" -const LABEL_BACKREST_BACKUP_OPTS = "backrest-backup-opts" -const LABEL_BACKREST_OPTS = "backrest-opts" -const LABEL_BACKREST_PITR_TARGET = "backrest-pitr-target" -const LABEL_BACKREST_STORAGE_TYPE = "backrest-storage-type" -const LABEL_BACKREST_S3_VERIFY_TLS = "backrest-s3-verify-tls" -const LABEL_BADGER = "crunchy-pgbadger" -const LABEL_BADGER_CCPIMAGE = "crunchy-pgbadger" -const LABEL_BACKUP_TYPE_BACKREST = "pgbackrest" -const LABEL_BACKUP_TYPE_PGDUMP = "pgdump" - -const LABEL_PGDUMP_COMMAND = "pgdump" -const LABEL_PGDUMP_RESTORE = "pgdump-restore" -const LABEL_PGDUMP_OPTS = "pgdump-opts" -const LABEL_PGDUMP_HOST = "pgdump-host" -const LABEL_PGDUMP_DB = "pgdump-db" -const LABEL_PGDUMP_USER = "pgdump-user" -const LABEL_PGDUMP_PORT = "pgdump-port" -const LABEL_PGDUMP_ALL = "pgdump-all" -const LABEL_PGDUMP_PVC = "pgdump-pvc" - -const LABEL_RESTORE_TYPE_PGRESTORE = "pgrestore" -const LABEL_PGRESTORE_COMMAND = "pgrestore" -const LABEL_PGRESTORE_HOST = "pgrestore-host" -const LABEL_PGRESTORE_DB = "pgrestore-db" -const LABEL_PGRESTORE_USER = "pgrestore-user" -const LABEL_PGRESTORE_PORT = "pgrestore-port" -const LABEL_PGRESTORE_FROM_CLUSTER = 
"pgrestore-from-cluster" -const LABEL_PGRESTORE_FROM_PVC = "pgrestore-from-pvc" -const LABEL_PGRESTORE_OPTS = "pgrestore-opts" -const LABEL_PGRESTORE_PITR_TARGET = "pgrestore-pitr-target" - -const LABEL_DATA_ROOT = "data-root" -const LABEL_PVC_NAME = "pvc-name" -const LABEL_VOLUME_NAME = "volume-name" - -const LABEL_SESSION_ID = "sessionid" -const LABEL_USERNAME = "username" -const LABEL_ROLENAME = "rolename" -const LABEL_PASSWORD = "password" - -const LABEL_PGADMIN = "crunchy-pgadmin" -const LABEL_PGADMIN_TASK_ADD = "pgadmin-add" -const LABEL_PGADMIN_TASK_CLUSTER = "pgadmin-cluster" -const LABEL_PGADMIN_TASK_DELETE = "pgadmin-delete" - -const LABEL_PGBOUNCER = "crunchy-pgbouncer" - -const LABEL_JOB_NAME = "job-name" -const LABEL_PGBACKREST_STANZA = "pgbackrest-stanza" -const LABEL_PGBACKREST_DB_PATH = "pgbackrest-db-path" -const LABEL_PGBACKREST_REPO_PATH = "pgbackrest-repo-path" -const LABEL_PGBACKREST_REPO_HOST = "pgbackrest-repo-host" - -const LABEL_PGO_BACKREST_REPO = "pgo-backrest-repo" - -// a general label for grouping all the tasks...helps with cleanups -const LABEL_PGO_CLONE = "pgo-clone" - -// the individualized step labels -const LABEL_PGO_CLONE_STEP_1 = "pgo-clone-step-1" -const LABEL_PGO_CLONE_STEP_2 = "pgo-clone-step-2" -const LABEL_PGO_CLONE_STEP_3 = "pgo-clone-step-3" - -const LABEL_DEPLOYMENT_NAME = "deployment-name" -const LABEL_SERVICE_NAME = "service-name" -const LABEL_CURRENT_PRIMARY = "current-primary" - -const LABEL_CLAIM_NAME = "claimName" - -const LABEL_PGO_PGOUSER = "pgo-pgouser" -const LABEL_PGO_PGOROLE = "pgo-pgorole" -const LABEL_PGOUSER = "pgouser" -const LABEL_WORKFLOW_ID = "workflowid" // NOTE: this now matches crv1.PgtaskWorkflowID - -const LABEL_TRUE = "true" -const LABEL_FALSE = "false" - -const LABEL_NAMESPACE = "namespace" -const LABEL_PGO_INSTALLATION_NAME = "pgo-installation-name" -const LABEL_VENDOR = "vendor" -const LABEL_CRUNCHY = "crunchydata" -const LABEL_PGO_CREATED_BY = "pgo-created-by" -const LABEL_PGO_UPDATED_BY = "pgo-updated-by" - -const LABEL_FAILOVER_STARTED = "failover-started" - -const GLOBAL_CUSTOM_CONFIGMAP = "pgo-custom-pg-config" - -const LABEL_PGHA_SCOPE = "crunchy-pgha-scope" -const LABEL_PGHA_CONFIGMAP = "pgha-config" -const LABEL_PGHA_BACKUP_TYPE = "pgha-backup-type" -const LABEL_PGHA_ROLE = "role" -const LABEL_PGHA_ROLE_PRIMARY = "master" -const LABEL_PGHA_ROLE_REPLICA = "replica" -const LABEL_PGHA_BOOTSTRAP = "pgha-bootstrap" diff --git a/internal/config/pgoconfig.go b/internal/config/pgoconfig.go deleted file mode 100644 index f951f77dee..0000000000 --- a/internal/config/pgoconfig.go +++ /dev/null @@ -1,829 +0,0 @@ -package config - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "errors" - "fmt" - "io/ioutil" - "os" - "strconv" - "strings" - "text/template" - - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/api/resource" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/util/validation" - "k8s.io/client-go/kubernetes" - "sigs.k8s.io/yaml" -) - -const CustomConfigMapName = "pgo-config" -const DefaultConfigsPath = "/default-pgo-config/" -const CustomConfigsPath = "/pgo-config/" - -var PgoDefaultServiceAccountTemplate *template.Template - -const PGODefaultServiceAccountPath = "pgo-default-sa.json" - -var PgoTargetRoleBindingTemplate *template.Template - -const PGOTargetRoleBindingPath = "pgo-target-role-binding.json" - -var PgoBackrestServiceAccountTemplate *template.Template - -const PGOBackrestServiceAccountPath = "pgo-backrest-sa.json" - -var PgoTargetServiceAccountTemplate *template.Template - -const PGOTargetServiceAccountPath = "pgo-target-sa.json" - -var PgoBackrestRoleTemplate *template.Template - -const PGOBackrestRolePath = "pgo-backrest-role.json" - -var PgoBackrestRoleBindingTemplate *template.Template - -const PGOBackrestRoleBindingPath = "pgo-backrest-role-binding.json" - -var PgoTargetRoleTemplate *template.Template - -const PGOTargetRolePath = "pgo-target-role.json" - -var PgoPgServiceAccountTemplate *template.Template - -const PGOPgServiceAccountPath = "pgo-pg-sa.json" - -var PgoPgRoleTemplate *template.Template - -const PGOPgRolePath = "pgo-pg-role.json" - -var PgoPgRoleBindingTemplate *template.Template - -const PGOPgRoleBindingPath = "pgo-pg-role-binding.json" - -var PolicyJobTemplate *template.Template - -const policyJobTemplatePath = "pgo.sqlrunner-template.json" - -var PVCTemplate *template.Template - -const pvcPath = "pvc.json" - -var ContainerResourcesTemplate *template.Template - -const containerResourcesTemplatePath = "container-resources.json" - -var AffinityTemplate *template.Template - -const affinityTemplatePath = "affinity.json" - -var PodAntiAffinityTemplate *template.Template - -const podAntiAffinityTemplatePath = "pod-anti-affinity.json" - -var PgoBackrestRepoServiceTemplate *template.Template - -const pgoBackrestRepoServiceTemplatePath = "pgo-backrest-repo-service-template.json" - -var PgoBackrestRepoTemplate *template.Template - -const pgoBackrestRepoTemplatePath = "pgo-backrest-repo-template.json" - -var PgmonitorEnvVarsTemplate *template.Template - -const pgmonitorEnvVarsPath = "pgmonitor-env-vars.json" - -var PgbackrestEnvVarsTemplate *template.Template - -const pgbackrestEnvVarsPath = "pgbackrest-env-vars.json" - -var PgbackrestS3EnvVarsTemplate *template.Template - -const pgbackrestS3EnvVarsPath = "pgbackrest-s3-env-vars.json" - -var PgAdminTemplate *template.Template - -const pgAdminTemplatePath = "pgadmin-template.json" - -var PgAdminServiceTemplate *template.Template - -const pgAdminServiceTemplatePath = "pgadmin-service-template.json" - -var PgbouncerTemplate *template.Template - -const pgbouncerTemplatePath = "pgbouncer-template.json" - -var PgbouncerConfTemplate *template.Template - -const pgbouncerConfTemplatePath = "pgbouncer.ini" - -var PgbouncerUsersTemplate *template.Template - -const pgbouncerUsersTemplatePath = "users.txt" - -var PgbouncerHBATemplate *template.Template - -const pgbouncerHBATemplatePath = "pgbouncer_hba.conf" - -var ServiceTemplate *template.Template - -const serviceTemplatePath = "cluster-service.json" - -var RmdatajobTemplate 
*template.Template - -const rmdatajobPath = "rmdata-job.json" - -var BackrestjobTemplate *template.Template - -const backrestjobPath = "backrest-job.json" - -var BackrestRestorejobTemplate *template.Template - -const backrestRestorejobPath = "backrest-restore-job.json" - -var PgDumpBackupJobTemplate *template.Template - -const pgDumpBackupJobPath = "pgdump-job.json" - -var PgRestoreJobTemplate *template.Template - -const pgRestoreJobPath = "pgrestore-job.json" - -var PVCMatchLabelsTemplate *template.Template - -const pvcMatchLabelsPath = "pvc-matchlabels.json" - -var PVCStorageClassTemplate *template.Template - -const pvcSCPath = "pvc-storageclass.json" - -var ExporterTemplate *template.Template - -const exporterTemplatePath = "exporter.json" - -var BadgerTemplate *template.Template - -const badgerTemplatePath = "pgbadger.json" - -var DeploymentTemplate *template.Template - -const deploymentTemplatePath = "cluster-deployment.json" - -var BootstrapTemplate *template.Template - -const bootstrapTemplatePath = "cluster-bootstrap-job.json" - -type ClusterStruct struct { - CCPImagePrefix string - CCPImageTag string - Policies string - Metrics bool - Badger bool - Port string - PGBadgerPort string - ExporterPort string - User string - Database string - PasswordAgeDays string - PasswordLength string - Replicas string - ServiceType string - BackrestPort int - BackrestS3Bucket string - BackrestS3Endpoint string - BackrestS3Region string - BackrestS3URIStyle string - BackrestS3VerifyTLS string - DisableAutofail bool - PgmonitorPassword string - EnableCrunchyadm bool - DisableReplicaStartFailReinit bool - PodAntiAffinity string - PodAntiAffinityPgBackRest string - PodAntiAffinityPgBouncer string - SyncReplication bool - DefaultInstanceResourceMemory resource.Quantity `json:"DefaultInstanceMemory"` - DefaultBackrestResourceMemory resource.Quantity `json:"DefaultBackrestMemory"` - DefaultPgBouncerResourceMemory resource.Quantity `json:"DefaultPgBouncerMemory"` - DefaultExporterResourceMemory resource.Quantity `json:"DefaultExporterMemory"` - DisableFSGroup bool -} - -type StorageStruct struct { - AccessMode string - Size string - StorageType string - StorageClass string - SupplementalGroups string - MatchLabels string -} - -// PgoStruct defines various configuration settings for the PostgreSQL Operator -type PgoStruct struct { - Audit bool - ConfigMapWorkerCount *int - ControllerGroupRefreshInterval *int - DisableReconcileRBAC bool - NamespaceRefreshInterval *int - NamespaceWorkerCount *int - PGClusterWorkerCount *int - PGOImagePrefix string - PGOImageTag string - PGReplicaWorkerCount *int - PGTaskWorkerCount *int -} - -type PgoConfig struct { - BasicAuth string - Cluster ClusterStruct - Pgo PgoStruct - PrimaryStorage string - WALStorage string - BackupStorage string - ReplicaStorage string - BackrestStorage string - Storage map[string]StorageStruct -} - -const DEFAULT_SERVICE_TYPE = "ClusterIP" -const LOAD_BALANCER_SERVICE_TYPE = "LoadBalancer" -const NODEPORT_SERVICE_TYPE = "NodePort" -const CONFIG_PATH = "pgo.yaml" - -var log_statement_values = []string{"ddl", "none", "mod", "all"} - -const DEFAULT_BACKREST_PORT = 2022 -const DEFAULT_PGADMIN_PORT = "5050" -const DEFAULT_PGBADGER_PORT = "10000" -const DEFAULT_EXPORTER_PORT = "9187" -const DEFAULT_POSTGRES_PORT = "5432" -const DEFAULT_PATRONI_PORT = "8009" - -func (c *PgoConfig) Validate() error { - var err error - errPrefix := "Error in pgoconfig: check pgo.yaml: " - - if c.Cluster.BackrestPort == 0 { - c.Cluster.BackrestPort = DEFAULT_BACKREST_PORT - 
log.Infof("setting BackrestPort to default %d", c.Cluster.BackrestPort) - } - if c.Cluster.PGBadgerPort == "" { - c.Cluster.PGBadgerPort = DEFAULT_PGBADGER_PORT - log.Infof("setting PGBadgerPort to default %s", c.Cluster.PGBadgerPort) - } else { - if _, err := strconv.Atoi(c.Cluster.PGBadgerPort); err != nil { - return errors.New(errPrefix + "Invalid PGBadgerPort: " + err.Error()) - } - } - if c.Cluster.ExporterPort == "" { - c.Cluster.ExporterPort = DEFAULT_EXPORTER_PORT - log.Infof("setting ExporterPort to default %s", c.Cluster.ExporterPort) - } else { - if _, err := strconv.Atoi(c.Cluster.ExporterPort); err != nil { - return errors.New(errPrefix + "Invalid ExporterPort: " + err.Error()) - } - } - if c.Cluster.Port == "" { - c.Cluster.Port = DEFAULT_POSTGRES_PORT - log.Infof("setting Postgres Port to default %s", c.Cluster.Port) - } else { - if _, err := strconv.Atoi(c.Cluster.Port); err != nil { - return errors.New(errPrefix + "Invalid Port: " + err.Error()) - } - } - - { - storageNotDefined := func(setting, value string) error { - return fmt.Errorf("%s%s setting is invalid: %q is not defined", errPrefix, setting, value) - } - if _, ok := c.Storage[c.PrimaryStorage]; !ok { - return storageNotDefined("PrimaryStorage", c.PrimaryStorage) - } - if _, ok := c.Storage[c.BackrestStorage]; !ok { - log.Warning("BackrestStorage setting not set, will use PrimaryStorage setting") - c.Storage[c.BackrestStorage] = c.Storage[c.PrimaryStorage] - } - if _, ok := c.Storage[c.BackupStorage]; !ok { - return storageNotDefined("BackupStorage", c.BackupStorage) - } - if _, ok := c.Storage[c.ReplicaStorage]; !ok { - return storageNotDefined("ReplicaStorage", c.ReplicaStorage) - } - if _, ok := c.Storage[c.WALStorage]; c.WALStorage != "" && !ok { - return storageNotDefined("WALStorage", c.WALStorage) - } - for k := range c.Storage { - _, err = c.GetStorageSpec(k) - if err != nil { - return err - } - } - } - - if c.Pgo.PGOImagePrefix == "" { - return errors.New(errPrefix + "Pgo.PGOImagePrefix is required") - } - if c.Pgo.PGOImageTag == "" { - return errors.New(errPrefix + "Pgo.PGOImageTag is required") - } - - if c.Cluster.ServiceType == "" { - log.Warn("Cluster.ServiceType not set, using default, ClusterIP ") - c.Cluster.ServiceType = DEFAULT_SERVICE_TYPE - } else { - if c.Cluster.ServiceType != DEFAULT_SERVICE_TYPE && - c.Cluster.ServiceType != LOAD_BALANCER_SERVICE_TYPE && - c.Cluster.ServiceType != NODEPORT_SERVICE_TYPE { - return errors.New(errPrefix + "Cluster.ServiceType is required to be either ClusterIP, NodePort, or LoadBalancer") - } - } - - if c.Cluster.CCPImagePrefix == "" { - return errors.New(errPrefix + "Cluster.CCPImagePrefix is required") - } - - if c.Cluster.CCPImageTag == "" { - return errors.New(errPrefix + "Cluster.CCPImageTag is required") - } - - if c.Cluster.User == "" { - return errors.New(errPrefix + "Cluster.User is required") - } else { - // validates that username can be used as the kubernetes secret name - // Must consist of lower case alphanumeric characters, - // '-' or '.', and must start and end with an alphanumeric character - errs := validation.IsDNS1123Subdomain(c.Cluster.User) - if len(errs) > 0 { - var msg string - for i := range errs { - msg = msg + errs[i] - } - return errors.New(errPrefix + msg) - } - - // validate any of the resources and if they are unavailable, set defaults - if c.Cluster.DefaultInstanceResourceMemory.IsZero() { - c.Cluster.DefaultInstanceResourceMemory = DefaultInstanceResourceMemory - } - - log.Infof("default instance memory set to [%s]", 
c.Cluster.DefaultInstanceResourceMemory.String()) - - if c.Cluster.DefaultBackrestResourceMemory.IsZero() { - c.Cluster.DefaultBackrestResourceMemory = DefaultBackrestResourceMemory - } - - log.Infof("default pgbackrest repository memory set to [%s]", c.Cluster.DefaultBackrestResourceMemory.String()) - - if c.Cluster.DefaultPgBouncerResourceMemory.IsZero() { - c.Cluster.DefaultPgBouncerResourceMemory = DefaultPgBouncerResourceMemory - } - - log.Infof("default pgbouncer memory set to [%s]", c.Cluster.DefaultPgBouncerResourceMemory.String()) - } - - // if provided, ensure that the type of pod anti-affinity values are valid - podAntiAffinityType := crv1.PodAntiAffinityType(c.Cluster.PodAntiAffinity) - if err := podAntiAffinityType.Validate(); err != nil { - return errors.New(errPrefix + "Invalid value provided for Cluster.PodAntiAffinityType") - } - - podAntiAffinityType = crv1.PodAntiAffinityType(c.Cluster.PodAntiAffinityPgBackRest) - if err := podAntiAffinityType.Validate(); err != nil { - return errors.New(errPrefix + "Invalid value provided for Cluster.PodAntiAffinityPgBackRest") - } - - podAntiAffinityType = crv1.PodAntiAffinityType(c.Cluster.PodAntiAffinityPgBouncer) - if err := podAntiAffinityType.Validate(); err != nil { - return errors.New(errPrefix + "Invalid value provided for Cluster.PodAntiAffinityPgBouncer") - } - - return err -} - -// GetPodAntiAffinitySpec accepts possible user-defined values for what the -// pod anti-affinity spec should be, which include rules for: -// - PostgreSQL instances -// - pgBackRest -// - pgBouncer -func (c *PgoConfig) GetPodAntiAffinitySpec(cluster, pgBackRest, pgBouncer crv1.PodAntiAffinityType) (crv1.PodAntiAffinitySpec, error) { - spec := crv1.PodAntiAffinitySpec{} - - // first, set the values for the PostgreSQL cluster, which is the "default" - // value. Otherwise, set the default to that in the configuration - if cluster != "" { - spec.Default = cluster - } else { - spec.Default = crv1.PodAntiAffinityType(c.Cluster.PodAntiAffinity) - } - - // perform a validation check against the default type - if err := spec.Default.Validate(); err != nil { - log.Error(err) - return spec, err - } - - // now that the default is set, determine if the user or the configuration - // overrode the settings for pgBackRest and pgBouncer. The heuristic is as - // such: - // - // 1. If the user provides a value, use that value - // 2. If there is a value provided in the configuration, use that value - // 3. If there is a value in the cluster default, use that value, which also - // encompasses using the default value in the config at this point in the - // execution. 
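As a rough, self-contained sketch of the precedence rule spelled out above (an explicit user value wins, then the configuration value, then the cluster-wide default), using hypothetical parameter names rather than the operator's own types:

package main

import "fmt"

// resolveAffinity applies the override order described above: an explicit
// user value wins, then the configuration value, then the cluster default.
func resolveAffinity(userValue, configValue, clusterDefault string) string {
	switch {
	case userValue != "":
		return userValue
	case configValue != "":
		return configValue
	default:
		return clusterDefault
	}
}

func main() {
	// No explicit override is supplied, so the cluster default applies.
	fmt.Println(resolveAffinity("", "", "preferred")) // prints: preferred
}

The deleted code that follows applies this same order twice, once for pgBackRest and once for pgBouncer.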
- // - // First, do pgBackRest: - switch { - case pgBackRest != "": - spec.PgBackRest = pgBackRest - case c.Cluster.PodAntiAffinityPgBackRest != "": - spec.PgBackRest = crv1.PodAntiAffinityType(c.Cluster.PodAntiAffinityPgBackRest) - case spec.Default != "": - spec.PgBackRest = spec.Default - } - - // perform a validation check against the pgBackRest type - if err := spec.PgBackRest.Validate(); err != nil { - log.Error(err) - return spec, err - } - - // Now, pgBouncer: - switch { - case pgBouncer != "": - spec.PgBouncer = pgBouncer - case c.Cluster.PodAntiAffinityPgBackRest != "": - spec.PgBouncer = crv1.PodAntiAffinityType(c.Cluster.PodAntiAffinityPgBouncer) - case spec.Default != "": - spec.PgBouncer = spec.Default - } - - // perform a validation check against the pgBackRest type - if err := spec.PgBouncer.Validate(); err != nil { - log.Error(err) - return spec, err - } - - return spec, nil -} - -func (c *PgoConfig) GetStorageSpec(name string) (crv1.PgStorageSpec, error) { - var err error - storage := crv1.PgStorageSpec{} - - s, ok := c.Storage[name] - if !ok { - err = errors.New("invalid Storage name " + name) - log.Error(err) - return storage, err - } - - storage.StorageClass = s.StorageClass - storage.AccessMode = s.AccessMode - storage.Size = s.Size - storage.StorageType = s.StorageType - storage.MatchLabels = s.MatchLabels - storage.SupplementalGroups = s.SupplementalGroups - - if storage.MatchLabels != "" { - test := strings.Split(storage.MatchLabels, "=") - if len(test) != 2 { - err = errors.New("invalid Storage config " + name + " MatchLabels needs to be in key=value format.") - log.Error(err) - return storage, err - } - } - - return storage, err - -} - -func (c *PgoConfig) GetConfig(clientset kubernetes.Interface, namespace string) error { - - cMap, rootPath := getRootPath(clientset, namespace) - - var yamlFile []byte - var err error - - //get the pgo.yaml config file - if cMap != nil { - str := cMap.Data[CONFIG_PATH] - if str == "" { - errMsg := fmt.Sprintf("could not get %s from ConfigMap", CONFIG_PATH) - return errors.New(errMsg) - } - yamlFile = []byte(str) - } else { - yamlFile, err = ioutil.ReadFile(rootPath + CONFIG_PATH) - if err != nil { - log.Errorf("yamlFile.Get err #%v ", err) - return err - } - } - - err = yaml.Unmarshal(yamlFile, c) - if err != nil { - log.Errorf("Unmarshal: %v", err) - return err - } - - // validate the pgo.yaml config file - if err := c.Validate(); err != nil { - log.Error(err) - return err - } - - c.CheckEnv() - - //load up all the templates - PgoDefaultServiceAccountTemplate, err = c.LoadTemplate(cMap, rootPath, PGODefaultServiceAccountPath) - if err != nil { - return err - } - PgoBackrestServiceAccountTemplate, err = c.LoadTemplate(cMap, rootPath, PGOBackrestServiceAccountPath) - if err != nil { - return err - } - PgoTargetServiceAccountTemplate, err = c.LoadTemplate(cMap, rootPath, PGOTargetServiceAccountPath) - if err != nil { - return err - } - PgoTargetRoleBindingTemplate, err = c.LoadTemplate(cMap, rootPath, PGOTargetRoleBindingPath) - if err != nil { - return err - } - PgoBackrestRoleTemplate, err = c.LoadTemplate(cMap, rootPath, PGOBackrestRolePath) - if err != nil { - return err - } - PgoBackrestRoleBindingTemplate, err = c.LoadTemplate(cMap, rootPath, PGOBackrestRoleBindingPath) - if err != nil { - return err - } - PgoTargetRoleTemplate, err = c.LoadTemplate(cMap, rootPath, PGOTargetRolePath) - if err != nil { - return err - } - PgoPgServiceAccountTemplate, err = c.LoadTemplate(cMap, rootPath, PGOPgServiceAccountPath) - if err != nil { 
- return err - } - PgoPgRoleTemplate, err = c.LoadTemplate(cMap, rootPath, PGOPgRolePath) - if err != nil { - return err - } - PgoPgRoleBindingTemplate, err = c.LoadTemplate(cMap, rootPath, PGOPgRoleBindingPath) - if err != nil { - return err - } - - PVCTemplate, err = c.LoadTemplate(cMap, rootPath, pvcPath) - if err != nil { - return err - } - - PolicyJobTemplate, err = c.LoadTemplate(cMap, rootPath, policyJobTemplatePath) - if err != nil { - return err - } - - ContainerResourcesTemplate, err = c.LoadTemplate(cMap, rootPath, containerResourcesTemplatePath) - if err != nil { - return err - } - - PgoBackrestRepoServiceTemplate, err = c.LoadTemplate(cMap, rootPath, pgoBackrestRepoServiceTemplatePath) - if err != nil { - return err - } - - PgoBackrestRepoTemplate, err = c.LoadTemplate(cMap, rootPath, pgoBackrestRepoTemplatePath) - if err != nil { - return err - } - - PgmonitorEnvVarsTemplate, err = c.LoadTemplate(cMap, rootPath, pgmonitorEnvVarsPath) - if err != nil { - return err - } - - PgbackrestEnvVarsTemplate, err = c.LoadTemplate(cMap, rootPath, pgbackrestEnvVarsPath) - if err != nil { - return err - } - - PgbackrestS3EnvVarsTemplate, err = c.LoadTemplate(cMap, rootPath, pgbackrestS3EnvVarsPath) - if err != nil { - return err - } - - PgAdminTemplate, err = c.LoadTemplate(cMap, rootPath, pgAdminTemplatePath) - if err != nil { - return err - } - - PgAdminServiceTemplate, err = c.LoadTemplate(cMap, rootPath, pgAdminServiceTemplatePath) - if err != nil { - return err - } - - PgbouncerTemplate, err = c.LoadTemplate(cMap, rootPath, pgbouncerTemplatePath) - if err != nil { - return err - } - - PgbouncerConfTemplate, err = c.LoadTemplate(cMap, rootPath, pgbouncerConfTemplatePath) - if err != nil { - return err - } - - PgbouncerUsersTemplate, err = c.LoadTemplate(cMap, rootPath, pgbouncerUsersTemplatePath) - if err != nil { - return err - } - - PgbouncerHBATemplate, err = c.LoadTemplate(cMap, rootPath, pgbouncerHBATemplatePath) - if err != nil { - return err - } - - ServiceTemplate, err = c.LoadTemplate(cMap, rootPath, serviceTemplatePath) - if err != nil { - return err - } - - RmdatajobTemplate, err = c.LoadTemplate(cMap, rootPath, rmdatajobPath) - if err != nil { - return err - } - - BackrestjobTemplate, err = c.LoadTemplate(cMap, rootPath, backrestjobPath) - if err != nil { - return err - } - - BackrestRestorejobTemplate, err = c.LoadTemplate(cMap, rootPath, backrestRestorejobPath) - if err != nil { - return err - } - - PgDumpBackupJobTemplate, err = c.LoadTemplate(cMap, rootPath, pgDumpBackupJobPath) - if err != nil { - return err - } - - PgRestoreJobTemplate, err = c.LoadTemplate(cMap, rootPath, pgRestoreJobPath) - if err != nil { - return err - } - - PVCMatchLabelsTemplate, err = c.LoadTemplate(cMap, rootPath, pvcMatchLabelsPath) - if err != nil { - return err - } - - PVCStorageClassTemplate, err = c.LoadTemplate(cMap, rootPath, pvcSCPath) - if err != nil { - return err - } - - AffinityTemplate, err = c.LoadTemplate(cMap, rootPath, affinityTemplatePath) - if err != nil { - return err - } - - PodAntiAffinityTemplate, err = c.LoadTemplate(cMap, rootPath, podAntiAffinityTemplatePath) - if err != nil { - return err - } - - ExporterTemplate, err = c.LoadTemplate(cMap, rootPath, exporterTemplatePath) - if err != nil { - return err - } - - BadgerTemplate, err = c.LoadTemplate(cMap, rootPath, badgerTemplatePath) - if err != nil { - return err - } - - DeploymentTemplate, err = c.LoadTemplate(cMap, rootPath, deploymentTemplatePath) - if err != nil { - return err - } - - BootstrapTemplate, err = 
c.LoadTemplate(cMap, rootPath, bootstrapTemplatePath) - if err != nil { - return err - } - - return nil -} - -func getRootPath(clientset kubernetes.Interface, namespace string) (*v1.ConfigMap, string) { - - cMap, err := clientset.CoreV1().ConfigMaps(namespace).Get(CustomConfigMapName, metav1.GetOptions{}) - if err == nil { - log.Infof("Config: %s ConfigMap found, using config files from the configmap", CustomConfigMapName) - return cMap, "" - } - log.Infof("Config: %s ConfigMap NOT found, using default baked-in config files from %s", CustomConfigMapName, DefaultConfigsPath) - - return nil, DefaultConfigsPath -} - -// LoadTemplate will load a JSON template from a path -func (c *PgoConfig) LoadTemplate(cMap *v1.ConfigMap, rootPath, path string) (*template.Template, error) { - var value string - var err error - - // Determine if there exists a configmap entry for the template file. - if cMap != nil { - // Get the data that is stored in the configmap - value = cMap.Data[path] - } - - // if the configmap does not exist, or there is no data in the configmap for - // this particular configuration template, attempt to load the template from - // the default configuration - if cMap == nil || value == "" { - value, err = c.DefaultTemplate(path) - - if err != nil { - return nil, err - } - } - - // if we have a value for the templated file, return - return template.Must(template.New(path).Parse(value)), nil - -} - -// DefaultTemplate attempts to load a default configuration template file -func (c *PgoConfig) DefaultTemplate(path string) (string, error) { - // set the lookup value for the file path based on the default configuration - // path and the template file requested to be loaded - fullPath := DefaultConfigsPath + path - - log.Debugf("No entry in cmap loading default path [%s]", fullPath) - - // read in the file from the default path - buf, err := ioutil.ReadFile(fullPath) - - if err != nil { - log.Errorf("error: could not read %s", fullPath) - log.Error(err) - return "", err - } - - // extract the value of the default configuration file and return - value := string(buf) - - return value, nil -} - -// CheckEnv is mostly used for the OLM deployment use case -// when someone wants to deploy with OLM, use the baked-in -// configuration, but use a different set of images, by -// setting these env vars in the OLM CSV, users can override -// the baked in images -func (c *PgoConfig) CheckEnv() { - pgoImageTag := os.Getenv("PGO_IMAGE_TAG") - if pgoImageTag != "" { - c.Pgo.PGOImageTag = pgoImageTag - log.Infof("CheckEnv: using PGO_IMAGE_TAG env var: %s", pgoImageTag) - } - pgoImagePrefix := os.Getenv("PGO_IMAGE_PREFIX") - if pgoImagePrefix != "" { - c.Pgo.PGOImagePrefix = pgoImagePrefix - log.Infof("CheckEnv: using PGO_IMAGE_PREFIX env var: %s", pgoImagePrefix) - } - ccpImageTag := os.Getenv("CCP_IMAGE_TAG") - if ccpImageTag != "" { - c.Cluster.CCPImageTag = ccpImageTag - log.Infof("CheckEnv: using CCP_IMAGE_TAG env var: %s", ccpImageTag) - } - ccpImagePrefix := os.Getenv("CCP_IMAGE_PREFIX") - if ccpImagePrefix != "" { - c.Cluster.CCPImagePrefix = ccpImagePrefix - log.Infof("CheckEnv: using CCP_IMAGE_PREFIX env var: %s", ccpImagePrefix) - } -} diff --git a/internal/config/volumes.go b/internal/config/volumes.go deleted file mode 100644 index d21c2d6a4e..0000000000 --- a/internal/config/volumes.go +++ /dev/null @@ -1,58 +0,0 @@ -package config - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. 
- Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - - core_v1 "k8s.io/api/core/v1" -) - -// volume configuration settings used by the PostgreSQL data directory and mount -const VOLUME_POSTGRESQL_DATA = "pgdata" -const VOLUME_POSTGRESQL_DATA_MOUNT_PATH = "/pgdata" - -// PostgreSQLWALVolumeMount returns the VolumeMount for the PostgreSQL WAL directory. -func PostgreSQLWALVolumeMount() core_v1.VolumeMount { - return core_v1.VolumeMount{Name: "pgwal", MountPath: "/pgwal"} -} - -// PostgreSQLWALPath returns the absolute path to a mounted WAL directory. -func PostgreSQLWALPath(cluster string) string { - return fmt.Sprintf("%s/%s-wal", PostgreSQLWALVolumeMount().MountPath, cluster) -} - -// volume configuration settings used by the pgBackRest repo mount -const VOLUME_PGBACKREST_REPO_NAME = "backrestrepo" -const VOLUME_PGBACKREST_REPO_MOUNT_PATH = "/backrestrepo" - -// volume configuration settings used by the SSHD secret -const VOLUME_SSHD_NAME = "sshd" -const VOLUME_SSHD_MOUNT_PATH = "/sshd" - -// volume configuration settings used by tablespaces - -// the pattern for the volume name used on a tablespace, which follows -// "tablespace-" -const VOLUME_TABLESPACE_NAME_PREFIX = "tablespace-" - -// the pattern for the path used to mount the volume of a tablespace, which -// follows "/tablespace/" -const VOLUME_TABLESPACE_PATH_PREFIX = "/tablespaces/" - -// the pattern for the name of a tablespace PVC, which is off the form: -// "-tablespace-" -const VOLUME_TABLESPACE_PVC_NAME_FORMAT = "%s-tablespace-%s" diff --git a/internal/controller/configmap/configmapcontroller.go b/internal/controller/configmap/configmapcontroller.go deleted file mode 100644 index a7145ea6bf..0000000000 --- a/internal/controller/configmap/configmapcontroller.go +++ /dev/null @@ -1,169 +0,0 @@ -package configmap - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "github.com/crunchydata/postgres-operator/internal/config" - pgoinformers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1" - pgolisters "github.com/crunchydata/postgres-operator/pkg/generated/listers/crunchydata.com/v1" - - log "github.com/sirupsen/logrus" - - apiv1 "k8s.io/api/core/v1" - utilruntime "k8s.io/apimachinery/pkg/util/runtime" - coreinformers "k8s.io/client-go/informers/core/v1" - "k8s.io/client-go/kubernetes" - corelisters "k8s.io/client-go/listers/core/v1" - "k8s.io/client-go/rest" - "k8s.io/client-go/tools/cache" - "k8s.io/client-go/util/workqueue" -) - -// Controller holds connections and other resources for the ConfigMap controller -type Controller struct { - cmRESTConfig *rest.Config - kubeclientset kubernetes.Interface - cmLister corelisters.ConfigMapLister - cmSynced cache.InformerSynced - pgclusterLister pgolisters.PgclusterLister - pgclusterSynced cache.InformerSynced - workqueue workqueue.RateLimitingInterface - workerCount int -} - -// NewConfigMapController is responsible for creating a new ConfigMap controller -func NewConfigMapController(restConfig *rest.Config, - clientset kubernetes.Interface, coreInformer coreinformers.ConfigMapInformer, - pgoInformer pgoinformers.PgclusterInformer, workerCount int) (*Controller, error) { - - controller := &Controller{ - cmRESTConfig: restConfig, - kubeclientset: clientset, - cmLister: coreInformer.Lister(), - cmSynced: coreInformer.Informer().HasSynced, - pgclusterLister: pgoInformer.Lister(), - pgclusterSynced: pgoInformer.Informer().HasSynced, - workqueue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), - "ConfigMaps"), - workerCount: workerCount, - } - - coreInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: func(obj interface{}) { - controller.enqueueConfigMap(obj) - }, - UpdateFunc: func(old, new interface{}) { - controller.enqueueConfigMap(new) - }, - }) - - return controller, nil -} - -// RunWorker is a long-running function that will continually call the processNextWorkItem -// function in order to read and process a message on the worker queue. Once the worker queue -// is instructed to shutdown, a message is written to the done channel. -func (c *Controller) RunWorker(stopCh <-chan struct{}, doneCh chan<- struct{}) { - - go c.waitForShutdown(stopCh) - - for c.processNextWorkItem() { - } - - log.Debug("ConfigMap Contoller: worker queue has been shutdown, writing to the done channel") - - doneCh <- struct{}{} -} - -// waitForShutdown waits for a message on the stop channel and then shuts down the work queue -func (c *Controller) waitForShutdown(stopCh <-chan struct{}) { - <-stopCh - c.workqueue.ShutDown() - log.Debug("ConfigMap Contoller: received stop signal, worker queue told to shutdown") -} - -// ShutdownWorker shuts down the work queue -func (c *Controller) ShutdownWorker() { - c.workqueue.ShutDown() - log.Debug("ConfigMap Contoller: worker queue told to shutdown") -} - -// enqueueConfigMap inspects a configMap to determine if it should be added to the work queue. If -// so, the configMap resource is converted into a namespace/name string and is then added to the -// work queue -func (c *Controller) enqueueConfigMap(obj interface{}) { - - configMap := obj.(*apiv1.ConfigMap) - labels := configMap.GetObjectMeta().GetLabels() - - // Right now we only care about updates to the PGHA configMap, which is the configMap created - // for each cluster with label 'pgha-config'. 
Therefore, simply return if the configMap - // does not have this label, and don't add the resource to the queue. - if _, ok := labels[config.LABEL_PGHA_CONFIGMAP]; !ok { - return - } - - var key string - var err error - if key, err = cache.MetaNamespaceKeyFunc(obj); err != nil { - utilruntime.HandleError(err) - return - } - c.workqueue.Add(key) -} - -// processNextWorkItem will read a single work item off the work queue and processes it via -// the ConfigMap sync handler -func (c *Controller) processNextWorkItem() bool { - - obj, shutdown := c.workqueue.Get() - - if shutdown { - return false - } - - // We call Done here so the workqueue knows we have finished processing this item - defer c.workqueue.Done(obj) - - var key string - var ok bool - // We expect strings to come off the workqueue in the form namespace/name - if key, ok = obj.(string); !ok { - c.workqueue.Forget(obj) - log.Errorf("ConfigMap Controller: expected string in workqueue but got %#v", obj) - return true - } - - // Run handleConfigMapSync, passing it the namespace/name key of the configMap that - // needs to be synced - if err := c.handleConfigMapSync(key); err != nil { - // Put the item back on the workqueue to handle any transient errors - c.workqueue.AddRateLimited(key) - log.Errorf("ConfigMap Controller: error syncing ConfigMap '%s', will now requeue: %v", - key, err) - return true - } - - // Finally if no error has occurred forget this item - c.workqueue.Forget(obj) - - return true -} - -// WorkerCount returns the worker count for the controller -func (c *Controller) WorkerCount() int { - return c.workerCount -} diff --git a/internal/controller/configmap/synchandler.go b/internal/controller/configmap/synchandler.go deleted file mode 100644 index 9309c0555c..0000000000 --- a/internal/controller/configmap/synchandler.go +++ /dev/null @@ -1,119 +0,0 @@ -package configmap - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "sync" - - log "github.com/sirupsen/logrus" - corev1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/client-go/tools/cache" - - "github.com/crunchydata/postgres-operator/internal/config" - cfg "github.com/crunchydata/postgres-operator/internal/operator/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" -) - -// handleConfigMapSync is responsible for syncing a configMap resource that has obtained from -// the ConfigMap controller's worker queue -func (c *Controller) handleConfigMapSync(key string) error { - - log.Debugf("ConfigMap Controller: handling a configmap sync for key %s", key) - - namespace, configMapName, err := cache.SplitMetaNamespaceKey(key) - if err != nil { - log.Error(err) - return nil - } - - configMap, err := c.cmLister.ConfigMaps(namespace).Get(configMapName) - if err != nil { - return err - } - clusterName := configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER] - - cluster, err := c.pgclusterLister.Pgclusters(namespace).Get(clusterName) - if err != nil { - // If the pgcluster is not found, then simply log an error and return. This should not - // typically happen, but in the event of an orphaned configMap with no pgcluster we do - // not want to keep re-queueing the same item. If any other error is encountered then - // return that error. - if kerrors.IsNotFound(err) { - log.Errorf("ConfigMap Controller: cannot find pgcluster for configMap %s (namespace %s),"+ - "ignoring", configMapName, namespace) - return nil - } - return err - } - - // if an upgrade is pending for the cluster, then don't attempt to sync and just return - if cluster.GetAnnotations()[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - log.Debugf("ConfigMap Controller: syncing of configMap %s (namespace %s) disabled pending the "+ - "upgrade of cluster %s", configMapName, namespace, clusterName) - return nil - } - - // disable syncing when the cluster isn't currently initialized - if cluster.Status.State != crv1.PgclusterStateInitialized { - return nil - } - - c.syncPGHAConfig(c.createPGHAConfigs(configMap, clusterName, - cluster.GetObjectMeta().GetLabels()[config.LABEL_PGHA_SCOPE])) - - return nil -} - -// createConfigurerMap creates the configs needed to sync the PGHA configMap -func (c *Controller) createPGHAConfigs(configMap *corev1.ConfigMap, - clusterName, clusterScope string) []cfg.Syncer { - - var configSyncers []cfg.Syncer - - configSyncers = append(configSyncers, cfg.NewDCS(configMap, c.kubeclientset, clusterScope)) - - localDBConfig, err := cfg.NewLocalDB(configMap, c.cmRESTConfig, c.kubeclientset) - // Just log the error and don't add to the map so a sync can still be attempted with - // any other configurers - if err != nil { - log.Error(err) - } else { - configSyncers = append(configSyncers, localDBConfig) - } - - return configSyncers -} - -// syncAllConfigs takes a map of configurers and runs their sync functions concurrently -func (c *Controller) syncPGHAConfig(configSyncers []cfg.Syncer) { - - var wg sync.WaitGroup - - for _, configSyncer := range configSyncers { - - wg.Add(1) - - go func(syncer cfg.Syncer) { - if err := syncer.Sync(); err != nil { - log.Error(err) - } - wg.Done() - }(configSyncer) - } - - wg.Wait() -} diff --git a/internal/controller/controllerutil.go b/internal/controller/controllerutil.go deleted file mode 100644 index bf54f98fce..0000000000 --- a/internal/controller/controllerutil.go +++ /dev/null @@ -1,101 +0,0 @@ -package controller - -/* -Copyright 2020 Crunchy 
Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "errors" - - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" -) - -// ErrControllerGroupExists is the error that is thrown when a controller group for a specific -// namespace already exists -var ErrControllerGroupExists = errors.New("A controller group for the namespace specified already" + - "exists") - -// WorkerRunner is an interface for controllers the have worker queues that need to be run -type WorkerRunner interface { - RunWorker(stopCh <-chan struct{}, doneCh chan<- struct{}) - WorkerCount() int -} - -// Manager defines the interface for a controller manager -type Manager interface { - AddGroup(namespace string) error - AddAndRunGroup(namespace string) error - RemoveAll() - RemoveGroup(namespace string) - RunAll() error - RunGroup(namespace string) error -} - -// InitializeReplicaCreation initializes the creation of replicas for a cluster. For a regular -// (i.e. non-standby) cluster this is called following the creation of the initial cluster backup, -// which is needed to bootstrap replicas. However, for a standby cluster this is called as -// soon as the primary PG pod reports ready and the cluster is marked as initialized. -func InitializeReplicaCreation(clientset pgo.Interface, clusterName, - namespace string) error { - - selector := config.LABEL_PG_CLUSTER + "=" + clusterName - pgreplicaList, err := clientset.CrunchydataV1().Pgreplicas(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - return err - } - for _, pgreplica := range pgreplicaList.Items { - - if pgreplica.Annotations == nil { - pgreplica.Annotations = make(map[string]string) - } - - pgreplica.Annotations[config.ANNOTATION_PGHA_BOOTSTRAP_REPLICA] = "true" - - if _, err = clientset.CrunchydataV1().Pgreplicas(namespace).Update(&pgreplica); err != nil { - log.Error(err) - return err - } - } - return nil -} - -// SetClusterInitializedStatus sets the status of a pgcluster CR to indicate that it has been -// initialized. This is specifically done by patching the status of the pgcluster CR with the -// proper initialization status. 
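A minimal, dependency-free sketch of the merge-patch payload such a status update sends; the field names here are simplified for illustration and are not the CRD's exact schema:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A JSON merge patch only needs the fields being changed; everything
	// else on the resource is left untouched by the API server.
	patch, err := json.Marshal(map[string]interface{}{
		"status": map[string]string{
			"state":   "Initialized",
			"message": "Cluster has been initialized",
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch))
}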
-func SetClusterInitializedStatus(clientset pgo.Interface, clusterName, - namespace string) error { - - patch, err := json.Marshal(map[string]interface{}{ - "status": crv1.PgclusterStatus{ - State: crv1.PgclusterStateInitialized, - Message: "Cluster has been initialized", - }, - }) - if err == nil { - _, err = clientset.CrunchydataV1().Pgclusters(namespace).Patch(clusterName, types.MergePatchType, patch) - } - if err != nil { - log.Error(err) - return err - } - - return nil -} diff --git a/internal/controller/job/backresthandler.go b/internal/controller/job/backresthandler.go deleted file mode 100644 index f399cf1e3d..0000000000 --- a/internal/controller/job/backresthandler.go +++ /dev/null @@ -1,217 +0,0 @@ -package job - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - "time" - - log "github.com/sirupsen/logrus" - apiv1 "k8s.io/api/batch/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/controller" - "github.com/crunchydata/postgres-operator/internal/operator/backrest" - backrestoperator "github.com/crunchydata/postgres-operator/internal/operator/backrest" - clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" -) - -// backrestUpdateHandler is responsible for handling updates to backrest jobs -func (c *Controller) handleBackrestUpdate(job *apiv1.Job) error { - - // return if job wasn't successful - if !isJobSuccessful(job) { - log.Debugf("jobController onUpdate job %s was unsuccessful and will be ignored", - job.Name) - return nil - } - - // return if job is being deleted - if isJobInForegroundDeletion(job) { - log.Debugf("jobController onUpdate job %s is being deleted and will be ignored", - job.Name) - return nil - } - - labels := job.GetObjectMeta().GetLabels() - - // Route the backrest job update to the appropriate function depending on the type of - // job. Please note that thee LABE_PGO_CLONE_STEP_2 label represents a special form of - // pgBackRest restore that is utilized as part of the clone process. Since jobs with - // the LABEL_PGO_CLONE_STEP_2 also inlcude the LABEL_BACKREST_RESTORE label, it is - // necessary to first check for the presence of the LABEL_PGO_CLONE_STEP_2 prior to the - // LABEL_BACKREST_RESTORE label to determine if the restore is part of and ongoing clone. 
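The same ordering concern, restated as a tiny standalone example: when one job type's labels are a superset of another's, the more specific label has to be tested first. The label keys match the constants above; the handler names are placeholders.

package main

import "fmt"

// routeJob tests the clone-step-2 label before the generic restore label,
// because a clone restore job carries both labels at once.
func routeJob(labels map[string]string) string {
	switch {
	case labels["pgo-clone-step-2"] == "true":
		return "clone restore handler"
	case labels["pgo-backrest-restore"] != "":
		return "restore handler"
	default:
		return "ignored"
	}
}

func main() {
	cloneJob := map[string]string{
		"pgo-clone-step-2":     "true",
		"pgo-backrest-restore": "true",
	}
	fmt.Println(routeJob(cloneJob)) // prints: clone restore handler
}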
- switch { - case labels[config.LABEL_BACKREST_COMMAND] == "backup": - c.handleBackrestBackupUpdate(job) - case labels[config.LABEL_PGO_CLONE_STEP_2] == "true": - c.handleCloneBackrestRestoreUpdate(job) - case labels[config.LABEL_BACKREST_COMMAND] == crv1.PgtaskBackrestStanzaCreate: - c.handleBackrestStanzaCreateUpdate(job) - } - - return nil -} - -// handleBackrestRestoreUpdate is responsible for handling updates to backrest restore jobs that -// have been submitted in order to clone a cluster -func (c *Controller) handleCloneBackrestRestoreUpdate(job *apiv1.Job) error { - - log.Debugf("jobController onUpdate clone step 2 job case") - log.Debugf("clone step 2 job status=%d", job.Status.Succeeded) - - if job.Status.Succeeded == 1 { - namespace := job.ObjectMeta.Namespace - sourceClusterName := job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_SOURCE_CLUSTER_NAME] - targetClusterName := job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_TARGET_CLUSTER_NAME] - workflowID := job.ObjectMeta.Labels[config.LABEL_WORKFLOW_ID] - - log.Debugf("workflow to update is %s", workflowID) - - // first, make sure the Pgtask resource knows that the job is complete, - // which is using this legacy bit of code - if err := util.Patch(c.Client.CrunchydataV1().RESTClient(), patchURL, crv1.JobCompletedStatus, patchResource, job.Name, namespace); err != nil { - log.Warn(err) - // we can continue on, even if this fails... - } - - // next, update the workflow to indicate that step 2 is complete - clusteroperator.UpdateCloneWorkflow(c.Client, namespace, workflowID, crv1.PgtaskWorkflowCloneClusterCreate) - - // alright, we can move on the step 3 which is the final step, where we - // create the cluster - cloneTask := util.CloneTask{ - BackrestPVCSize: job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_BACKREST_PVC_SIZE], - EnableMetrics: job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_ENABLE_METRICS] == "true", - PGOUser: job.ObjectMeta.Labels[config.LABEL_PGOUSER], - PVCSize: job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_PVC_SIZE], - SourceClusterName: sourceClusterName, - TargetClusterName: targetClusterName, - TaskStepLabel: config.LABEL_PGO_CLONE_STEP_3, - TaskType: crv1.PgtaskCloneStep3, - Timestamp: time.Now(), - WorkflowID: workflowID, - } - - task := cloneTask.Create() - - // create the pgtask! 
- if _, err := c.Client.CrunchydataV1().Pgtasks(namespace).Create(task); err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not create pgtask for step 3: %s", err.Error()) - clusteroperator.PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - } - } - - return nil -} - -// handleBackrestRestoreUpdate is responsible for handling updates to backrest backup jobs -func (c *Controller) handleBackrestBackupUpdate(job *apiv1.Job) error { - - labels := job.GetObjectMeta().GetLabels() - - log.Debugf("jobController onUpdate backrest job case") - log.Debugf("got a backrest job status=%d", job.Status.Succeeded) - log.Debugf("update the status to completed here for backrest %s job %s", labels[config.LABEL_PG_CLUSTER], job.Name) - - if err := util.Patch(c.Client.CrunchydataV1().RESTClient(), patchURL, crv1.JobCompletedStatus, patchResource, job.Name, - job.ObjectMeta.Namespace); err != nil { - log.Errorf("error in patching pgtask %s: %s", job.ObjectMeta.SelfLink, err.Error()) - } - publishBackupComplete(labels[config.LABEL_PG_CLUSTER], job.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER], job.ObjectMeta.Labels[config.LABEL_PGOUSER], "pgbackrest", job.ObjectMeta.Namespace, "") - - // If the completed backup was a cluster bootstrap backup, then mark the cluster as initialized - // and initiate the creation of any replicas. Otherwise if the completed backup was taken as - // the result of a failover, then proceed with tremove the "primary_on_role_change" tag. - if labels[config.LABEL_PGHA_BACKUP_TYPE] == crv1.BackupTypeBootstrap { - log.Debugf("jobController onUpdate initial backup complete") - - controller.SetClusterInitializedStatus(c.Client, labels[config.LABEL_PG_CLUSTER], - job.ObjectMeta.Namespace) - - // now initialize the creation of any replica - controller.InitializeReplicaCreation(c.Client, labels[config.LABEL_PG_CLUSTER], - job.ObjectMeta.Namespace) - - } else if labels[config.LABEL_PGHA_BACKUP_TYPE] == crv1.BackupTypeFailover { - err := clusteroperator.RemovePrimaryOnRoleChangeTag(c.Client, c.Client.Config, - labels[config.LABEL_PG_CLUSTER], job.ObjectMeta.Namespace) - if err != nil { - log.Error(err) - return err - } - } - return nil -} - -// handleBackrestRestoreUpdate is responsible for handling updates to backrest stanza create jobs -func (c *Controller) handleBackrestStanzaCreateUpdate(job *apiv1.Job) error { - - labels := job.GetObjectMeta().GetLabels() - log.Debugf("jobController onUpdate backrest stanza-create job case") - - // grab the cluster name and namespace for use in various places below - clusterName := labels[config.LABEL_PG_CLUSTER] - namespace := job.Namespace - - if job.Status.Succeeded == 1 { - log.Debugf("backrest stanza successfully created for cluster %s", clusterName) - log.Debugf("proceeding with the initial full backup for cluster %s as needed for replica creation", - clusterName) - - var backrestRepoPodName string - for _, cont := range job.Spec.Template.Spec.Containers { - for _, envVar := range cont.Env { - if envVar.Name == "PODNAME" { - backrestRepoPodName = envVar.Value - log.Debugf("the backrest repo pod for the initial backup of cluster %s is %s", - clusterName, backrestRepoPodName) - } - } - } - - cluster, err := c.Client.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - return err - } - // If the cluster is a standby cluster, then no need to proceed with backup creation. - // Instead the cluster can be set to initialized following creation of the stanza. 
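The PODNAME loop a few lines above is a small, reusable pattern: scan a pod spec's containers for a named environment variable. A hedged sketch against the core/v1 types, assuming the k8s.io/api module is available and using an invented pod name purely as example data:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// envValue returns the literal value of the named environment variable from
// the first container that defines it, or false if no container does.
func envValue(podSpec corev1.PodSpec, name string) (string, bool) {
	for _, c := range podSpec.Containers {
		for _, e := range c.Env {
			if e.Name == name {
				return e.Value, true
			}
		}
	}
	return "", false
}

func main() {
	spec := corev1.PodSpec{Containers: []corev1.Container{{
		Name: "backrest",
		Env:  []corev1.EnvVar{{Name: "PODNAME", Value: "example-backrest-shared-repo-0"}},
	}}}
	if v, ok := envValue(spec, "PODNAME"); ok {
		fmt.Println(v)
	}
}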
- if cluster.Spec.Standby { - log.Debugf("job Controller: standby cluster %s will now be set to an initialized "+ - "status", clusterName) - controller.SetClusterInitializedStatus(c.Client, clusterName, namespace) - return nil - } - - // clean any backup resources that might already be present, e.g. when restoring and these - // resources might already exist from initial creation of the cluster - if err := backrest.CleanBackupResources(c.Client, job.ObjectMeta.Namespace, - clusterName); err != nil { - log.Error(err) - return err - } - - backrestoperator.CreateInitialBackup(c.Client, job.ObjectMeta.Namespace, - clusterName, backrestRepoPodName) - - } - return nil -} diff --git a/internal/controller/job/bootstraphandler.go b/internal/controller/job/bootstraphandler.go deleted file mode 100644 index 1b75d14803..0000000000 --- a/internal/controller/job/bootstraphandler.go +++ /dev/null @@ -1,163 +0,0 @@ -package job - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "errors" - "fmt" - - "github.com/crunchydata/postgres-operator/internal/config" - backrestoperator "github.com/crunchydata/postgres-operator/internal/operator/backrest" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - - log "github.com/sirupsen/logrus" - apiv1 "k8s.io/api/batch/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" -) - -// handleBootstrapUpdate is responsible for handling updates to bootstrap jobs that are responsible -// for bootstrapping a cluster from an existing data source -func (c *Controller) handleBootstrapUpdate(job *apiv1.Job) error { - - clusterName := job.GetLabels()[config.LABEL_PG_CLUSTER] - namespace := job.GetNamespace() - labels := job.GetLabels() - - // return if job is being deleted - if isJobInForegroundDeletion(job) { - log.Debugf("jobController onUpdate job %s is being deleted and will be ignored", - job.Name) - return nil - } - - cluster, err := c.Client.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - return err - } - - // determine if cluster is labeled for restore - _, restore := cluster.GetAnnotations()[config.ANNOTATION_BACKREST_RESTORE] - - // if the job has exceeded its backoff limit then simply cleanup and bootstrap resources - if isBackoffLimitExceeded(job) { - log.Debugf("Backoff limit exceeded for bootstrap Job %s, will now cleanup bootstrap "+ - "resources", job.Name) - if err := c.cleanupBootstrapResources(job, cluster, restore); err != nil { - return err - } - return nil - } - - // return if job wasn't successful - if !isJobSuccessful(job) { - log.Debugf("jobController onUpdate job %s was unsuccessful and will be ignored", - job.Name) - return nil - } - - if err := util.ToggleAutoFailover(c.Client, true, clusterName, namespace); err != nil && - !errors.Is(err, util.ErrMissingConfigAnnotation) { - 
log.Warnf("jobController unable to toggle autofail during bootstrap, cluster could "+ - "initialize in a paused state: %s", err.Error()) - } - - // If the job was successful we updated the state of the pgcluster to a "bootstrapped" status. - // This will then trigger full initialization of the cluster. We also cleanup any resources - // from the bootstrap job. - if cluster.Status.State == crv1.PgclusterStateBootstrapping { - - if err := c.cleanupBootstrapResources(job, cluster, restore); err != nil { - return err - } - - patch, err := json.Marshal(map[string]interface{}{ - "status": crv1.PgclusterStatus{ - State: crv1.PgclusterStateBootstrapped, - Message: "Pgcluster successfully bootstrapped from an existing data source", - }, - }) - if err == nil { - _, err = c.Client.CrunchydataV1().Pgclusters(namespace).Patch(cluster.Name, types.MergePatchType, patch) - } - if err != nil { - log.Error(err) - return err - } - } - - if restore { - if err := backrestoperator.UpdateWorkflow(c.Client, labels[crv1.PgtaskWorkflowID], - namespace, crv1.PgtaskWorkflowBackrestRestorePrimaryCreatedStatus); err != nil { - log.Warn(err) - } - publishRestoreComplete(labels[config.LABEL_PG_CLUSTER], labels[config.LABEL_PG_CLUSTER_IDENTIFIER], - labels[config.LABEL_PGOUSER], job.ObjectMeta.Namespace) - } - - return nil -} - -// cleanupBootstrapResources is responsible for cleaning up the resources from a bootstrap Job. -// This includes deleting any pgBackRest repository and service created specifically the restore -// (i.e. a repository and service not associated with a current cluster but rather the cluster -// being restored from to bootstrap the cluster). -func (c *Controller) cleanupBootstrapResources(job *apiv1.Job, cluster *crv1.Pgcluster, - restore bool) error { - - namespace := job.GetNamespace() - var restoreClusterName string - var repoName string - - // clean the repo if a restore, or if a "bootstrap" repo - var cleanRepo bool - if restore { - restoreClusterName = job.GetLabels()[config.LABEL_PG_CLUSTER] - repoName = fmt.Sprintf(util.BackrestRepoDeploymentName, restoreClusterName) - cleanRepo = true - } else { - restoreClusterName = cluster.Spec.PGDataSource.RestoreFrom - repoName = fmt.Sprintf(util.BackrestRepoDeploymentName, restoreClusterName) - repoDeployment, err := c.Client.AppsV1().Deployments(namespace).Get(repoName, - metav1.GetOptions{}) - if err != nil { - return err - } - if _, ok := repoDeployment.GetLabels()[config.LABEL_PGHA_BOOTSTRAP]; ok { - cleanRepo = true - } - } - - if cleanRepo { - // now delete the service for the bootstrap repo - if err := c.Client.CoreV1().Services(namespace).Delete( - fmt.Sprintf(util.BackrestRepoServiceName, restoreClusterName), - &metav1.DeleteOptions{}); err != nil && !kerrors.IsNotFound(err) { - return err - } - - // and finally delete the bootstrap repo deployment - if err := c.Client.AppsV1().Deployments(namespace).Delete(repoName, - &metav1.DeleteOptions{}); err != nil && !kerrors.IsNotFound(err) { - return err - } - } - - return nil -} diff --git a/internal/controller/job/jobcontroller.go b/internal/controller/job/jobcontroller.go deleted file mode 100644 index 13d919ed5e..0000000000 --- a/internal/controller/job/jobcontroller.go +++ /dev/null @@ -1,118 +0,0 @@ -package job - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - log "github.com/sirupsen/logrus" - apiv1 "k8s.io/api/batch/v1" - batchinformers "k8s.io/client-go/informers/batch/v1" - "k8s.io/client-go/tools/cache" -) - -// Controller holds the connections for the controller -type Controller struct { - Client *kubeapi.Client - Informer batchinformers.JobInformer -} - -const ( - patchResource = "pgtasks" - patchURL = "/spec/status" -) - -// onAdd is called when a postgresql operator job is created and an associated add event is -// generated -func (c *Controller) onAdd(obj interface{}) { - - job := obj.(*apiv1.Job) - labels := job.GetObjectMeta().GetLabels() - - //only process jobs with the vendor=crunchydata label - if labels[config.LABEL_VENDOR] != "crunchydata" { - return - } - - log.Debugf("Job Controller: onAdd ns=%s jobName=%s", job.ObjectMeta.Namespace, job.ObjectMeta.SelfLink) -} - -// onUpdate is called when a postgresql operator job is updated and an associated update event is -// generated -func (c *Controller) onUpdate(oldObj, newObj interface{}) { - - var err error - job := newObj.(*apiv1.Job) - labels := job.GetObjectMeta().GetLabels() - - //only process jobs with the vendor=crunchydata label - if labels[config.LABEL_VENDOR] != "crunchydata" { - return - } - - log.Debugf("[Job Controller] onUpdate ns=%s %s active=%d succeeded=%d conditions=[%v]", - job.ObjectMeta.Namespace, job.ObjectMeta.SelfLink, job.Status.Active, job.Status.Succeeded, - job.Status.Conditions) - - labelExists := func(k string) bool { _, ok := labels[k]; return ok } - // determine which handler to route the update event to - switch { - case labels[config.LABEL_RMDATA] == "true": - err = c.handleRMDataUpdate(job) - case labels[config.LABEL_BACKREST] == "true" || - labels[config.LABEL_BACKREST_RESTORE] == "true": - err = c.handleBackrestUpdate(job) - case labels[config.LABEL_BACKUP_TYPE_PGDUMP] == "true": - err = c.handlePGDumpUpdate(job) - case labels[config.LABEL_RESTORE_TYPE_PGRESTORE] == "true": - err = c.handlePGRestoreUpdate(job) - case labels[config.LABEL_PGO_CLONE_STEP_1] == "true": - err = c.handleRepoSyncUpdate(job) - case labelExists(config.LABEL_PGHA_BOOTSTRAP): - err = c.handleBootstrapUpdate(job) - } - - if err != nil { - log.Error(err) - } - return -} - -// onDelete is called when a postgresql operator job is deleted -func (c *Controller) onDelete(obj interface{}) { - - job := obj.(*apiv1.Job) - labels := job.GetObjectMeta().GetLabels() - - //only process jobs with the vendor=crunchydata label - if labels[config.LABEL_VENDOR] != "crunchydata" { - return - } - - log.Debugf("[Job Controller] onDelete ns=%s %s", job.ObjectMeta.Namespace, job.ObjectMeta.SelfLink) -} - -// AddJobEventHandler adds the job event handler to the job informer -func (c *Controller) AddJobEventHandler() { - - c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: c.onAdd, - UpdateFunc: c.onUpdate, - DeleteFunc: c.onDelete, - }) - - log.Debugf("Job Controller: added event handler to informer") -} diff --git
a/internal/controller/job/jobevents.go b/internal/controller/job/jobevents.go deleted file mode 100644 index ef4f1a1760..0000000000 --- a/internal/controller/job/jobevents.go +++ /dev/null @@ -1,91 +0,0 @@ -package job - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "time" - - "github.com/crunchydata/postgres-operator/pkg/events" - log "github.com/sirupsen/logrus" -) - -func publishBackupComplete(clusterName, clusterIdentifier, username, backuptype, namespace, path string) { - topics := make([]string, 2) - topics[0] = events.EventTopicCluster - topics[1] = events.EventTopicBackup - - f := events.EventCreateBackupCompletedFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: username, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventCreateBackupCompleted, - }, - Clustername: clusterName, - BackupType: backuptype, - Path: path, - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } - -} - -func publishRestoreComplete(clusterName, identifier, username, namespace string) { - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventRestoreClusterCompletedFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: username, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventRestoreClusterCompleted, - }, - Clustername: clusterName, - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } - -} - -func publishDeleteClusterComplete(clusterName, identifier, username, namespace string) { - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventDeleteClusterCompletedFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: username, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventDeleteClusterCompleted, - }, - Clustername: clusterName, - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } -} diff --git a/internal/controller/job/jobutil.go b/internal/controller/job/jobutil.go deleted file mode 100644 index 78d6bb6e34..0000000000 --- a/internal/controller/job/jobutil.go +++ /dev/null @@ -1,49 +0,0 @@ -package job - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - apiv1 "k8s.io/api/batch/v1" - meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// isBackoffLimitExceeded returns true if the jobs backoff limit has been exceeded -func isBackoffLimitExceeded(job *apiv1.Job) bool { - if job.Spec.BackoffLimit != nil { - return job.Status.Failed >= *job.Spec.BackoffLimit - } - return false -} - -// isJobSuccessful returns true if the job provided completed successfully. Otherwise -// it returns false. Per the Kubernetes documentation, "the completion time is only set -// when the job finishes successfully". Therefore, the presence of a completion time can -// be utilized to determine whether or not the job was successful. -func isJobSuccessful(job *apiv1.Job) bool { - return job.Status.CompletionTime != nil -} - -// isJobInForegroundDeletion determines if a job is currently being deleted using foreground -// cascading deletion, as indicated by the presence of value “foregroundDeletion” in the jobs -// metadata.finalizers. -func isJobInForegroundDeletion(job *apiv1.Job) bool { - for _, finalizer := range job.Finalizers { - if finalizer == meta_v1.FinalizerDeleteDependents { - return true - } - } - return false -} diff --git a/internal/controller/job/pgdumphandler.go b/internal/controller/job/pgdumphandler.go deleted file mode 100644 index 0fc8444f20..0000000000 --- a/internal/controller/job/pgdumphandler.go +++ /dev/null @@ -1,82 +0,0 @@ -package job - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - apiv1 "k8s.io/api/batch/v1" -) - -// handlePGDumpUpdate is responsible for handling updates to pg_dump jobs -func (c *Controller) handlePGDumpUpdate(job *apiv1.Job) error { - - labels := job.GetObjectMeta().GetLabels() - - log.Debugf("jobController onUpdate pgdump job case") - log.Debugf("pgdump job status=%d", job.Status.Succeeded) - log.Debugf("update the status to completed here for pgdump %s", labels[config.LABEL_PG_CLUSTER]) - - status := crv1.JobCompletedStatus + " [" + job.ObjectMeta.Name + "]" - if job.Status.Succeeded == 0 { - status = crv1.JobSubmittedStatus + " [" + job.ObjectMeta.Name + "]" - } - if job.Status.Failed > 0 { - status = crv1.JobErrorStatus + " [" + job.ObjectMeta.Name + "]" - } - - //update the pgdump task status to submitted - updates task, not the job. 
- dumpTask := labels[config.LABEL_PGTASK] - if err := util.Patch(c.Client.CrunchydataV1().RESTClient(), patchURL, status, patchResource, dumpTask, - job.ObjectMeta.Namespace); err != nil { - log.Error("error in patching pgtask " + job.ObjectMeta.SelfLink + err.Error()) - return err - } - - return nil -} - -// handlePGRestoreUpdate is responsible for handling updates to pg_restore jobs -func (c *Controller) handlePGRestoreUpdate(job *apiv1.Job) error { - - labels := job.GetObjectMeta().GetLabels() - - log.Debugf("jobController onUpdate pgrestore job case") - log.Debugf("pgrestore job status=%d", job.Status.Succeeded) - log.Debugf("update the status to completed here for pgrestore %s", labels[config.LABEL_PG_CLUSTER]) - - status := crv1.JobCompletedStatus + " [" + job.ObjectMeta.Name + "]" - - if job.Status.Succeeded == 0 { - status = crv1.JobSubmittedStatus + " [" + job.ObjectMeta.Name + "]" - } - - if job.Status.Failed > 0 { - status = crv1.JobErrorStatus + " [" + job.ObjectMeta.Name + "]" - } - - //update the pgrestore task status to submitted - updates task, not the job. - restoreTask := labels[config.LABEL_PGTASK] - if err := util.Patch(c.Client.CrunchydataV1().RESTClient(), patchURL, status, patchResource, restoreTask, - job.ObjectMeta.Namespace); err != nil { - log.Error("error in patching pgtask " + job.ObjectMeta.SelfLink + err.Error()) - return err - } - - return nil -} diff --git a/internal/controller/job/reposynchandler.go b/internal/controller/job/reposynchandler.go deleted file mode 100644 index 136f63365a..0000000000 --- a/internal/controller/job/reposynchandler.go +++ /dev/null @@ -1,105 +0,0 @@ -package job - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License.
-*/ - -import ( - "fmt" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - log "github.com/sirupsen/logrus" - apiv1 "k8s.io/api/batch/v1" -) - -// handleRepoSyncUpdate is responsible for handling updates to repo sync jobs -func (c *Controller) handleRepoSyncUpdate(job *apiv1.Job) error { - - // return if job wasn't successful - if !isJobSuccessful(job) { - log.Debugf("jobController onUpdate job %s was unsuccessful and will be ignored", - job.Name) - return nil - } - - // return if job is being deleted - if isJobInForegroundDeletion(job) { - log.Debugf("jobController onUpdate job %s is being deleted and will be ignored", - job.Name) - return nil - } - - log.Debugf("jobController onUpdate clone step 1 job case") - log.Debugf("clone step 1 job status=%d", job.Status.Succeeded) - - namespace := job.ObjectMeta.Namespace - sourceClusterName := job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_SOURCE_CLUSTER_NAME] - targetClusterName := job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_TARGET_CLUSTER_NAME] - workflowID := job.ObjectMeta.Labels[config.LABEL_WORKFLOW_ID] - - log.Debugf("workflow to update is %s", workflowID) - - // first, make sure the Pgtask resource knows that the job is complete, - // which is using this legacy bit of code - if err := util.Patch(c.Client.CrunchydataV1().RESTClient(), patchURL, crv1.JobCompletedStatus, patchResource, job.Name, namespace); err != nil { - log.Error(err) - // we can continue on, even if this fails... - } - - // next, update the workflow to indicate that step 1 is complete - clusteroperator.UpdateCloneWorkflow(c.Client, namespace, workflowID, crv1.PgtaskWorkflowCloneRestoreBackup) - - // determine the storage source (e.g. local or s3) to use for the restore based on the storage - // source utilized for the backrest repo sync job - var storageSource string - for _, envVar := range job.Spec.Template.Spec.Containers[0].Env { - if envVar.Name == "BACKREST_STORAGE_SOURCE" { - storageSource = envVar.Value - } - } - - // now, set up a new pgtask that will allow us to perform the restore - cloneTask := util.CloneTask{ - BackrestPVCSize: job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_BACKREST_PVC_SIZE], - BackrestStorageSource: storageSource, - EnableMetrics: job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_ENABLE_METRICS] == "true", - PGOUser: job.ObjectMeta.Labels[config.LABEL_PGOUSER], - PVCSize: job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_PVC_SIZE], - SourceClusterName: sourceClusterName, - TargetClusterName: targetClusterName, - TaskStepLabel: config.LABEL_PGO_CLONE_STEP_2, - TaskType: crv1.PgtaskCloneStep2, - Timestamp: time.Now(), - WorkflowID: workflowID, - } - - task := cloneTask.Create() - - // finally, create the pgtask! 
- if _, err := c.Client.CrunchydataV1().Pgtasks(namespace).Create(task); err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not create pgtask for step 2: %s", err.Error()) - clusteroperator.PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return err - } - - // ...we really shouldn't need a return here the way this function is - // constructed...but just in case - return nil -} diff --git a/internal/controller/job/rmdatahandler.go b/internal/controller/job/rmdatahandler.go deleted file mode 100644 index 4a6861c5c9..0000000000 --- a/internal/controller/job/rmdatahandler.go +++ /dev/null @@ -1,110 +0,0 @@ -package job - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/operator/pvc" - log "github.com/sirupsen/logrus" - apiv1 "k8s.io/api/batch/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -const ( - deleteRMDataJobMaxTries = 10 - deleteRMDataJobDuration = 5 -) - -// handleRMDataUpdate is responsible for handling updates to rmdata jobs -func (c *Controller) handleRMDataUpdate(job *apiv1.Job) error { - - labels := job.GetObjectMeta().GetLabels() - - // return if job wasn't successful - if !isJobSuccessful(job) { - log.Debugf("jobController onUpdate rmdata job %s was unsuccessful and will be ignored", - job.Name) - return nil - } - - log.Debugf("jobController onUpdate rmdata job succeeded") - - publishDeleteClusterComplete(labels[config.LABEL_PG_CLUSTER], - job.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER], - job.ObjectMeta.Labels[config.LABEL_PGOUSER], - job.ObjectMeta.Namespace) - - clusterName := labels[config.LABEL_PG_CLUSTER] - - deletePropagation := metav1.DeletePropagationForeground - err := c.Client. - BatchV1().Jobs(job.Namespace). - Delete(job.Name, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err != nil { - log.Error(err) - } - - removed := false - for i := 0; i < deleteRMDataJobMaxTries; i++ { - log.Debugf("sleeping while job %s is removed cleanly", job.Name) - time.Sleep(time.Second * time.Duration(deleteRMDataJobDuration)) - _, err := c.Client.BatchV1().Jobs(job.Namespace).Get(job.Name, metav1.GetOptions{}) - if err != nil { - removed = true - break - } - } - - if !removed { - return fmt.Errorf("could not remove Job %s for some reason after max tries", job.Name) - } - - //if a user has specified --archive for a cluster then - // an xlog PVC will be present and can be removed - pvcName := clusterName + "-xlog" - if err := pvc.DeleteIfExists(c.Client.Clientset, pvcName, job.Namespace); err != nil { - log.Error(err) - return err - } - - //delete any completed jobs for this cluster as a cleanup - jobList, err := c.Client. - BatchV1().Jobs(job.Namespace). 
- List(metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + clusterName}) - if err != nil { - log.Error(err) - return err - } - - for _, j := range jobList.Items { - if j.Status.Succeeded > 0 { - log.Debugf("removing Job %s since it was completed", job.Name) - err := c.Client. - BatchV1().Jobs(job.Namespace). - Delete(j.Name, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err != nil { - log.Error(err) - return err - } - - } - } - - return nil -} diff --git a/internal/controller/manager/controllermanager.go b/internal/controller/manager/controllermanager.go deleted file mode 100644 index bb2c2e1039..0000000000 --- a/internal/controller/manager/controllermanager.go +++ /dev/null @@ -1,492 +0,0 @@ -package manager - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "errors" - "fmt" - "sync" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/controller" - "github.com/crunchydata/postgres-operator/internal/controller/configmap" - "github.com/crunchydata/postgres-operator/internal/controller/job" - "github.com/crunchydata/postgres-operator/internal/controller/pgcluster" - "github.com/crunchydata/postgres-operator/internal/controller/pgpolicy" - "github.com/crunchydata/postgres-operator/internal/controller/pgreplica" - "github.com/crunchydata/postgres-operator/internal/controller/pgtask" - "github.com/crunchydata/postgres-operator/internal/controller/pod" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/ns" - "github.com/crunchydata/postgres-operator/internal/operator/operatorupgrade" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - informers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions" - log "github.com/sirupsen/logrus" - "golang.org/x/sync/semaphore" - - kubeinformers "k8s.io/client-go/informers" - "k8s.io/client-go/tools/cache" - "k8s.io/client-go/util/workqueue" -) - -// the following variables represent the resources the operator must has "list" access to in order -// to start an informer -var ( - listerResourcesCrunchy = []string{"pgtasks", "pgclusters", "pgreplicas", "pgpolicies"} - listerResourcesCore = []string{"pods", "configmaps"} -) - -// ControllerManager manages a map of controller groups, each of which is comprised of the various -// controllers needed to handle events within a specific namespace. Only one controllerGroup is -// allowed per namespace. 
-type ControllerManager struct { - mgrMutex sync.Mutex - controllers map[string]*controllerGroup - installationName string - namespaceOperatingMode ns.NamespaceOperatingMode - pgoConfig config.PgoConfig - pgoNamespace string - sem *semaphore.Weighted -} - -// controllerGroup is a struct for managing the various controllers created to handle events -// in a specific namespace -type controllerGroup struct { - stopCh chan struct{} - doneCh chan struct{} - started bool - pgoInformerFactory informers.SharedInformerFactory - kubeInformerFactory kubeinformers.SharedInformerFactory - kubeInformerFactoryWithRefresh kubeinformers.SharedInformerFactory - controllersWithWorkers []controller.WorkerRunner - informerSyncedFuncs []cache.InformerSynced - clientset kubeapi.Interface -} - -// NewControllerManager returns a new ControllerManager comprised of controllerGroups for each -// namespace included in the 'namespaces' parameter. -func NewControllerManager(namespaces []string, - pgoConfig config.PgoConfig, pgoNamespace, installationName string, - namespaceOperatingMode ns.NamespaceOperatingMode) (*ControllerManager, error) { - - controllerManager := ControllerManager{ - controllers: make(map[string]*controllerGroup), - installationName: installationName, - namespaceOperatingMode: namespaceOperatingMode, - pgoConfig: pgoConfig, - pgoNamespace: pgoNamespace, - sem: semaphore.NewWeighted(1), - } - - // create controller groups for each namespace provided - for _, ns := range namespaces { - if err := controllerManager.AddGroup(ns); err != nil { - log.Error(err) - return nil, err - } - } - - log.Debugf("Controller Manager: new controller manager created for namespaces %v", - namespaces) - - return &controllerManager, nil -} - -// AddGroup adds a new controller group for the namespace specified. Each controller -// group is comprised of controllers for the following resources: -// - pods -// - jobs -// - pgclusters -// - pgpolicys -// - pgtasks -// Two SharedInformerFactory's are utilized (one for Kube resources and one for PosgreSQL Operator -// resources) to create and track the informers for each type of resource, while any controllers -// utilizing worker queues are also tracked (this allows all informers and worker queues to be -// easily started as needed). Each controller group also receives its own clients, which can then -// be utilized by the various controllers within that controller group. -func (c *ControllerManager) AddGroup(namespace string) error { - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - // only return an error if not a group already exists error - if err := c.addControllerGroup(namespace); err != nil && - !errors.Is(err, controller.ErrControllerGroupExists) { - return err - } - - return nil -} - -// AddAndRunGroup is a convenience function that adds a controller group for the -// namespace specified, and then immediately runs the controllers in that group. 
-func (c *ControllerManager) AddAndRunGroup(namespace string) error { - - if c.controllers[namespace] != nil && !c.pgoConfig.Pgo.DisableReconcileRBAC { - // first reconcile RBAC in the target namespace if RBAC reconciliation is enabled - c.reconcileRBAC(namespace) - } - - // now add and run the controller group - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - // only return an error if not a group already exists error - if err := c.addControllerGroup(namespace); err != nil && - !errors.Is(err, controller.ErrControllerGroupExists) { - return err - } - - if err := c.runControllerGroup(namespace); err != nil { - return err - } - - return nil -} - -// RemoveAll removes all controller groups managed by the controller manager, first stopping all -// controllers within each controller group managed by the controller manager. -func (c *ControllerManager) RemoveAll() { - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - for ns := range c.controllers { - c.removeControllerGroup(ns) - } - - log.Debug("Controller Manager: all controller groups have been removed") -} - -// RemoveGroup removes the controller group for the namespace specified, first stopping all -// controllers within that group -func (c *ControllerManager) RemoveGroup(namespace string) { - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - c.removeControllerGroup(namespace) -} - -// RunAll runs all controllers across all controller groups managed by the controller manager. -func (c *ControllerManager) RunAll() error { - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - for ns := range c.controllers { - if err := c.runControllerGroup(ns); err != nil { - return err - } - } - - log.Debug("Controller Manager: all controller groups are now running") - - return nil -} - -// RunGroup runs the controllers within the controller group for the namespace specified.
-func (c *ControllerManager) RunGroup(namespace string) error { - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - if _, ok := c.controllers[namespace]; !ok { - log.Debugf("Controller Manager: unable to run controller group for namespace %s because "+ - "a controller group for this namespace does not exist", namespace) - return nil - } - - if err := c.runControllerGroup(namespace); err != nil { - return err - } - - log.Debugf("Controller Manager: the controller group for ns %s is now running", namespace) - - return nil -} - -// addControllerGroup adds a new controller group for the namespace specified -func (c *ControllerManager) addControllerGroup(namespace string) error { - - if _, ok := c.controllers[namespace]; ok { - log.Debugf("Controller Manager: a controller for namespace %s already exists", namespace) - return controller.ErrControllerGroupExists - } - - // create a client for kube resources - client, err := kubeapi.NewClient() - if err != nil { - log.Error(err) - return err - } - - pgoInformerFactory := informers.NewSharedInformerFactoryWithOptions(client, 0, - informers.WithNamespace(namespace)) - - kubeInformerFactory := kubeinformers.NewSharedInformerFactoryWithOptions(client, 0, - kubeinformers.WithNamespace(namespace)) - - kubeInformerFactoryWithRefresh := kubeinformers.NewSharedInformerFactoryWithOptions(client, - time.Duration(*c.pgoConfig.Pgo.ControllerGroupRefreshInterval)*time.Second, - kubeinformers.WithNamespace(namespace)) - - pgTaskcontroller := &pgtask.Controller{ - Client: client, - Queue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()), - Informer: pgoInformerFactory.Crunchydata().V1().Pgtasks(), - PgtaskWorkerCount: *c.pgoConfig.Pgo.PGTaskWorkerCount, - } - - pgClustercontroller := &pgcluster.Controller{ - Client: client, - Queue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()), - Informer: pgoInformerFactory.Crunchydata().V1().Pgclusters(), - PgclusterWorkerCount: *c.pgoConfig.Pgo.PGClusterWorkerCount, - } - - pgReplicacontroller := &pgreplica.Controller{ - Clientset: client, - Queue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()), - Informer: pgoInformerFactory.Crunchydata().V1().Pgreplicas(), - PgreplicaWorkerCount: *c.pgoConfig.Pgo.PGReplicaWorkerCount, - } - - pgPolicycontroller := &pgpolicy.Controller{ - Clientset: client, - Informer: pgoInformerFactory.Crunchydata().V1().Pgpolicies(), - } - - podcontroller := &pod.Controller{ - Client: client, - Informer: kubeInformerFactory.Core().V1().Pods(), - } - - jobcontroller := &job.Controller{ - Client: client, - Informer: kubeInformerFactory.Batch().V1().Jobs(), - } - - configMapController, err := configmap.NewConfigMapController(client.Config, - client, kubeInformerFactoryWithRefresh.Core().V1().ConfigMaps(), - pgoInformerFactory.Crunchydata().V1().Pgclusters(), - *c.pgoConfig.Pgo.ConfigMapWorkerCount) - if err != nil { - log.Errorf("Unable to create ConfigMap controller: %v", err) - return err - } - - // add the proper event handler to the informer in each controller - pgTaskcontroller.AddPGTaskEventHandler() - pgClustercontroller.AddPGClusterEventHandler() - pgReplicacontroller.AddPGReplicaEventHandler() - pgPolicycontroller.AddPGPolicyEventHandler() - podcontroller.AddPodEventHandler() - jobcontroller.AddJobEventHandler() - - group := &controllerGroup{ - clientset: client, - stopCh: make(chan struct{}), - doneCh: make(chan struct{}), - pgoInformerFactory: pgoInformerFactory, - kubeInformerFactory: kubeInformerFactory, - 
kubeInformerFactoryWithRefresh: kubeInformerFactoryWithRefresh, - informerSyncedFuncs: []cache.InformerSynced{ - pgoInformerFactory.Crunchydata().V1().Pgtasks().Informer().HasSynced, - pgoInformerFactory.Crunchydata().V1().Pgclusters().Informer().HasSynced, - pgoInformerFactory.Crunchydata().V1().Pgreplicas().Informer().HasSynced, - pgoInformerFactory.Crunchydata().V1().Pgpolicies().Informer().HasSynced, - kubeInformerFactory.Core().V1().Pods().Informer().HasSynced, - kubeInformerFactory.Batch().V1().Jobs().Informer().HasSynced, - kubeInformerFactoryWithRefresh.Core().V1().ConfigMaps().Informer().HasSynced, - }, - } - - // store the controllers containing worker queues so that the queues can also be started - // when any informers in the controller are started - group.controllersWithWorkers = append(group.controllersWithWorkers, - pgTaskcontroller, pgClustercontroller, pgReplicacontroller, configMapController) - - c.controllers[namespace] = group - - log.Debugf("Controller Manager: added controller group for namespace %s", namespace) - - // now reconcile RBAC in the namespace if RBAC reconciliation is enabled - if !c.pgoConfig.Pgo.DisableReconcileRBAC { - c.reconcileRBAC(namespace) - } - - return nil -} - -// hasListerPrivs verifies the Operator has the privileges required to start the controllers -// for the namespace specified. -func (c *ControllerManager) hasListerPrivs(namespace string) bool { - - controllerGroup := c.controllers[namespace] - - var err error - var hasCrunchyPrivs, hasCorePrivs, hasBatchPrivs bool - - for _, listerResource := range listerResourcesCrunchy { - hasCrunchyPrivs, err = ns.CheckAccessPrivs(controllerGroup.clientset, - map[string][]string{listerResource: {"list"}}, - crv1.GroupName, namespace) - if err != nil { - log.Errorf(err.Error()) - } else if !hasCrunchyPrivs { - log.Errorf("Controller Manager: Controller Group for namespace %s does not have the "+ - "required list privileges for resource %s in the %s API", - namespace, listerResource, crv1.GroupName) - } - } - - for _, listerResource := range listerResourcesCore { - hasCorePrivs, err = ns.CheckAccessPrivs(controllerGroup.clientset, - map[string][]string{listerResource: {"list"}}, - "", namespace) - if err != nil { - log.Errorf(err.Error()) - } else if !hasCorePrivs { - log.Errorf("Controller Manager: Controller Group for namespace %s does not have the "+ - "required list privileges for resource %s in the Core API", - namespace, listerResource) - } - } - - hasBatchPrivs, err = ns.CheckAccessPrivs(controllerGroup.clientset, - map[string][]string{"jobs": {"list"}}, - "batch", namespace) - if err != nil { - log.Errorf(err.Error()) - } else if !hasBatchPrivs { - log.Errorf("Controller Manager: Controller Group for namespace %s does not have the "+ - "required list privileges for resource %s in the Batch API", - namespace, "jobs") - } - - return (hasCrunchyPrivs && hasCorePrivs && hasBatchPrivs) -} - -// runControllerGroup is responsible running the controllers for the controller group corresponding -// to the namespace provided -func (c *ControllerManager) runControllerGroup(namespace string) error { - - controllerGroup := c.controllers[namespace] - hasListerPrivs := c.hasListerPrivs(namespace) - - switch { - case controllerGroup.started && hasListerPrivs: - log.Debugf("Controller Manager: controller group for namespace %s is already running", - namespace) - return nil - case controllerGroup.started && !hasListerPrivs: - c.removeControllerGroup(namespace) - return fmt.Errorf("Controller Manager: removing the 
running controller group for "+ - "namespace %s because it no longer has the required privs, will attempt to "+ - "restart on the next ns refresh interval", namespace) - case !hasListerPrivs: - return fmt.Errorf("Controller Manager: cannot start controller group for namespace %s "+ - "because it does not have the required privs, will attempt to start on the next ns "+ - "refresh interval", namespace) - } - - // before starting, first successfully check the versions of all pgcluster's in the namespace - if err := operatorupgrade.CheckVersion(controllerGroup.clientset, namespace); err != nil { - log.Errorf("Controller Manager: Unsuccessful pgcluster version check for namespace %s, "+ - "the controller group will not be started", namespace) - return err - } - - controllerGroup.kubeInformerFactory.Start(controllerGroup.stopCh) - controllerGroup.pgoInformerFactory.Start(controllerGroup.stopCh) - controllerGroup.kubeInformerFactoryWithRefresh.Start(controllerGroup.stopCh) - - if ok := cache.WaitForNamedCacheSync(namespace, controllerGroup.stopCh, - controllerGroup.informerSyncedFuncs...); !ok { - return fmt.Errorf("Controller Manager: failed waiting for caches to sync") - } - - for _, worker := range controllerGroup.controllersWithWorkers { - for i := 0; i < worker.WorkerCount(); i++ { - go worker.RunWorker(controllerGroup.stopCh, controllerGroup.doneCh) - } - } - - controllerGroup.started = true - - log.Debugf("Controller Manager: controller group for namespace %s is now running", namespace) - - return nil -} - -// removeControllerGroup removes the controller group for the namespace specified. Any worker -// queues associated with the controllers inside of the controller group are first shutdown -// prior to removing the controller group. -func (c *ControllerManager) removeControllerGroup(namespace string) { - - if _, ok := c.controllers[namespace]; !ok { - log.Debugf("Controller Manager: no controller group to remove for ns %s", namespace) - return - } - - c.stopControllerGroup(namespace) - delete(c.controllers, namespace) - - log.Debugf("Controller Manager: the controller group for ns %s has been removed", namespace) -} - -// stopControllerGroup stops the controller group associated with the namespace specified. This is -// done by calling the ShutdownWorker function associated with the controller. If the controller -// does not have a ShutdownWorker function then no action is taken. 
-func (c *ControllerManager) stopControllerGroup(namespace string) { - - if _, ok := c.controllers[namespace]; !ok { - log.Debugf("Controller Manager: unable to stop controller group for namespace %s because "+ - "a controller group for this namespace does not exist", namespace) - return - } - - if !c.controllers[namespace].started { - log.Debugf("Controller Manager: controller group for namespace %s was never started, "+ - "skipping worker shutdown", namespace) - return - } - - controllerGroup := c.controllers[namespace] - - // close the stop channel to stop all informers and instruct the workers queues to shutdown - close(controllerGroup.stopCh) - - // wait for all worker queues to shutdown - log.Debugf("Waiting for %d workers in the controller group for namespace %s to shutdown", - len(controllerGroup.controllersWithWorkers), namespace) - var numWorkers int - for _, worker := range controllerGroup.controllersWithWorkers { - for i := 0; i < worker.WorkerCount(); i++ { - numWorkers++ - } - } - for i := 0; i < numWorkers; i++ { - <-controllerGroup.doneCh - } - close(controllerGroup.doneCh) - - controllerGroup.started = false - - log.Debugf("Controller Manager: the controller group for ns %s has been stopped", namespace) -} diff --git a/internal/controller/manager/rbac.go b/internal/controller/manager/rbac.go deleted file mode 100644 index 21f7db9cee..0000000000 --- a/internal/controller/manager/rbac.go +++ /dev/null @@ -1,142 +0,0 @@ -package manager - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "text/template" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/ns" - - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -const ( - // ErrReconcileRBAC defines the error string that is displayed when RBAC reconciliation is - // enabled for the current PostgreSQL Operator installation but the Operator is unable to - // to properly/fully reconcile its own RBAC in a target namespace. - ErrReconcileRBAC = "operator is unable to reconcile RBAC resource" -) - -// reconcileRBAC is responsible for reconciling the RBAC resources (ServiceAccounts, Roles and -// RoleBindings) required by the PostgreSQL Operator in a target namespace -func (c *ControllerManager) reconcileRBAC(targetNamespace string) { - - log.Debugf("Controller Manager: Now reconciling RBAC in namespace %s", targetNamespace) - - // Use the image pull secrets of the operator service account in the new namespace. - operator, err := c.controllers[targetNamespace].clientset.CoreV1(). - ServiceAccounts(c.pgoNamespace).Get(ns.OPERATOR_SERVICE_ACCOUNT, metav1.GetOptions{}) - if err != nil { - // just log an error and continue so that we can attempt to reconcile other RBAC resources - // that are not dependent on the Operator ServiceAccount, e.g. 
Roles and RoleBindings - log.Errorf("%s: %v", ErrReconcileRBAC, err) - } - - saCreatedOrUpdated := c.reconcileServiceAccounts(targetNamespace, - operator.ImagePullSecrets) - c.reconcileRoles(targetNamespace) - c.reconcileRoleBindings(targetNamespace) - - // If a SA was created or updated, or if it doesnt exist, ensure the image pull secrets - // are up to date - for _, reference := range operator.ImagePullSecrets { - - var doesNotExist bool - - if _, err := c.controllers[targetNamespace].clientset.CoreV1(). - Secrets(targetNamespace).Get( - reference.Name, metav1.GetOptions{}); err != nil { - if kerrors.IsNotFound(err) { - doesNotExist = true - } else { - log.Errorf("%s: %v", ErrReconcileRBAC, err) - continue - } - } - - if doesNotExist || saCreatedOrUpdated { - if err := ns.CopySecret(c.controllers[targetNamespace].clientset, reference.Name, - c.pgoNamespace, targetNamespace); err != nil { - log.Errorf("%s: %v", ErrReconcileRBAC, err) - } - } - } -} - -// reconcileRoles reconciles the Roles required by the operator in a target namespace -func (c *ControllerManager) reconcileRoles(targetNamespace string) { - - reconcileRoles := map[string]*template.Template{ - ns.PGO_TARGET_ROLE: config.PgoTargetRoleTemplate, - ns.PGO_BACKREST_ROLE: config.PgoBackrestRoleTemplate, - ns.PGO_PG_ROLE: config.PgoPgRoleTemplate, - } - - for role, template := range reconcileRoles { - if err := ns.ReconcileRole(c.controllers[targetNamespace].clientset, role, - targetNamespace, template); err != nil { - log.Errorf("%s: %v", ErrReconcileRBAC, err) - } - } -} - -// reconcileRoleBindings reconciles the RoleBindings required by the operator in a -// target namespace -func (c *ControllerManager) reconcileRoleBindings(targetNamespace string) { - - reconcileRoleBindings := map[string]*template.Template{ - ns.PGO_TARGET_ROLE_BINDING: config.PgoTargetRoleBindingTemplate, - ns.PGO_BACKREST_ROLE_BINDING: config.PgoBackrestRoleBindingTemplate, - ns.PGO_PG_ROLE_BINDING: config.PgoPgRoleBindingTemplate, - } - - for roleBinding, template := range reconcileRoleBindings { - if err := ns.ReconcileRoleBinding(c.controllers[targetNamespace].clientset, - c.pgoNamespace, roleBinding, targetNamespace, template); err != nil { - log.Errorf("%s: %v", ErrReconcileRBAC, err) - } - } -} - -// reconcileServiceAccounts reconciles the ServiceAccounts required by the operator in a -// target namespace -func (c *ControllerManager) reconcileServiceAccounts(targetNamespace string, - imagePullSecrets []v1.LocalObjectReference) (saCreatedOrUpdated bool) { - - reconcileServiceAccounts := map[string]*template.Template{ - ns.PGO_DEFAULT_SERVICE_ACCOUNT: config.PgoDefaultServiceAccountTemplate, - ns.PGO_TARGET_SERVICE_ACCOUNT: config.PgoTargetServiceAccountTemplate, - ns.PGO_BACKREST_SERVICE_ACCOUNT: config.PgoBackrestServiceAccountTemplate, - ns.PGO_PG_SERVICE_ACCOUNT: config.PgoPgServiceAccountTemplate, - } - - for serviceAccount, template := range reconcileServiceAccounts { - createdOrUpdated, err := ns.ReconcileServiceAccount(c.controllers[targetNamespace].clientset, - serviceAccount, targetNamespace, template, imagePullSecrets) - if err != nil { - log.Errorf("%s: %v", ErrReconcileRBAC, err) - continue - } - if !saCreatedOrUpdated && createdOrUpdated { - saCreatedOrUpdated = true - } - } - return -} diff --git a/internal/controller/namespace/namespacecontroller.go b/internal/controller/namespace/namespacecontroller.go deleted file mode 100644 index 6fc85f644f..0000000000 --- a/internal/controller/namespace/namespacecontroller.go +++ /dev/null @@ 
-1,168 +0,0 @@ -package namespace - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "github.com/crunchydata/postgres-operator/internal/controller" - - log "github.com/sirupsen/logrus" - corev1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - utilruntime "k8s.io/apimachinery/pkg/util/runtime" - - coreinformers "k8s.io/client-go/informers/core/v1" - corelisters "k8s.io/client-go/listers/core/v1" - "k8s.io/client-go/tools/cache" - "k8s.io/client-go/util/workqueue" -) - -// Controller holds the connections for the controller -type Controller struct { - ControllerManager controller.Manager - Informer coreinformers.NamespaceInformer - namespaceLister corelisters.NamespaceLister - workqueue workqueue.RateLimitingInterface - workerCount int -} - -// NewNamespaceController creates a new namespace controller that will watch for namespace events -// and respond accordingly, adding and removing controller groups as namespaces watched by the -// PostgreSQL Operator are added and deleted. -func NewNamespaceController(controllerManager controller.Manager, - informer coreinformers.NamespaceInformer, workerCount int) (*Controller, error) { - - controller := &Controller{ - ControllerManager: controllerManager, - Informer: informer, - namespaceLister: informer.Lister(), - workerCount: workerCount, - workqueue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), - "Namespaces"), - } - - informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: func(obj interface{}) { - controller.enqueueNamespace(obj) - }, - UpdateFunc: func(old, new interface{}) { - controller.enqueueNamespace(new) - }, - DeleteFunc: func(obj interface{}) { - controller.enqueueNamespace(obj) - }, - }) - - return controller, nil -} - -// RunWorker is a long-running function that will continually call the processNextWorkItem -// function in order to read and process a message on the worker queue. Once the worker queue -// is instructed to shutdown, a message is written to the done channel. -func (c *Controller) RunWorker(stopCh <-chan struct{}) { - - go c.waitForShutdown(stopCh) - - for c.processNextWorkItem() { - } -} - -// waitForShutdown waits for a message on the stop channel and then shuts down the work queue -func (c *Controller) waitForShutdown(stopCh <-chan struct{}) { - <-stopCh - c.workqueue.ShutDown() - log.Debug("Namespace Controller: received stop signal, worker queue told to shutdown") -} - -// ShutdownWorker shuts down the work queue -func (c *Controller) ShutdownWorker() { - c.workqueue.ShutDown() - log.Debug("Namespace Controller: worker queue told to shutdown") -} - -// enqueueNamespace inspects a namespace to determine if it should be added to the work queue.
If -// so, the namespace resource is converted into a namespace/name string and is then added to the -// work queue -func (c *Controller) enqueueNamespace(obj interface{}) { - - var key string - var err error - if key, err = cache.MetaNamespaceKeyFunc(obj); err != nil { - utilruntime.HandleError(err) - return - } - c.workqueue.Add(key) -} - -// processNextWorkItem will read a single work item off the work queue and processes it via -// the Namespace sync handler -func (c *Controller) processNextWorkItem() bool { - - obj, shutdown := c.workqueue.Get() - - if shutdown { - return false - } - - // We call Done here so the workqueue knows we have finished processing this item - defer c.workqueue.Done(obj) - - var key string - var ok bool - // We expect strings to come off the workqueue in the form namespace/name - if key, ok = obj.(string); !ok { - c.workqueue.Forget(obj) - log.Errorf("Namespace Controller: expected string in workqueue but got %#v", obj) - return true - } - - _, namespace, err := cache.SplitMetaNamespaceKey(key) - if err != nil { - c.workqueue.Forget(obj) - log.Error(err) - return true - } - - // remove the controller group for the namespace if the namespace no longer exists or is - // termininating - ns, err := c.namespaceLister.Get(namespace) - if (err == nil && ns.Status.Phase == corev1.NamespaceTerminating) || - (err != nil && kerrors.IsNotFound(err)) { - c.ControllerManager.RemoveGroup(namespace) - c.workqueue.Forget(obj) - return true - } else if err != nil { - log.Errorf("Namespace Controller: error getting namespace %s from namespaceLister, will "+ - "now requeue: %v", key, err) - c.workqueue.AddRateLimited(key) - return true - } - - // Run AddAndRunGroup, passing it the namespace that needs to be synced - if err := c.ControllerManager.AddAndRunGroup(namespace); err != nil { - log.Errorf("Namespace Controller: error syncing Namespace '%s': %s", - key, err.Error()) - } - - // Finally if no error has occurred forget this item - c.workqueue.Forget(obj) - - return true -} - -// WorkerCount returns the worker count for the controller -func (c *Controller) WorkerCount() int { - return c.workerCount -} diff --git a/internal/controller/pgcluster/pgclustercontroller.go b/internal/controller/pgcluster/pgclustercontroller.go deleted file mode 100644 index a3d0973906..0000000000 --- a/internal/controller/pgcluster/pgclustercontroller.go +++ /dev/null @@ -1,471 +0,0 @@ -package pgcluster - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "encoding/json" - "fmt" - "io/ioutil" - "reflect" - "strconv" - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - backrestoperator "github.com/crunchydata/postgres-operator/internal/operator/backrest" - clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - informers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1" - - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/tools/cache" - "k8s.io/client-go/util/workqueue" -) - -// Controller holds the connections for the controller -type Controller struct { - Client *kubeapi.Client - Queue workqueue.RateLimitingInterface - Informer informers.PgclusterInformer - PgclusterWorkerCount int -} - -// onAdd is called when a pgcluster is added -func (c *Controller) onAdd(obj interface{}) { - cluster := obj.(*crv1.Pgcluster) - log.Debugf("[pgcluster Controller] ns %s onAdd %s", cluster.ObjectMeta.Namespace, cluster.ObjectMeta.SelfLink) - - //handle the case when the operator restarts and don't - //process already processed pgclusters - if cluster.Status.State == crv1.PgclusterStateProcessed { - log.Debug("pgcluster " + cluster.ObjectMeta.Name + " already processed") - return - } - - key, err := cache.MetaNamespaceKeyFunc(obj) - if err == nil { - log.Debugf("cluster putting key in queue %s", key) - c.Queue.Add(key) - } - -} - -// RunWorker is a long-running function that will continually call the -// processNextWorkItem function in order to read and process a message on the -// workqueue. -func (c *Controller) RunWorker(stopCh <-chan struct{}, doneCh chan<- struct{}) { - - go c.waitForShutdown(stopCh) - - for c.processNextItem() { - } - - log.Debug("pgcluster Contoller: worker queue has been shutdown, writing to the done channel") - doneCh <- struct{}{} -} - -// waitForShutdown waits for a message on the stop channel and then shuts down the work queue -func (c *Controller) waitForShutdown(stopCh <-chan struct{}) { - <-stopCh - c.Queue.ShutDown() - log.Debug("pgcluster Contoller: received stop signal, worker queue told to shutdown") -} - -func (c *Controller) processNextItem() bool { - // Wait until there is a new item in the working queue - key, quit := c.Queue.Get() - if quit { - return false - } - - log.Debugf("working on %s", key.(string)) - keyParts := strings.Split(key.(string), "/") - keyNamespace := keyParts[0] - keyResourceName := keyParts[1] - - log.Debugf("cluster add queue got key ns=[%s] resource=[%s]", keyNamespace, keyResourceName) - - // Tell the queue that we are done with processing this key. This unblocks the key for other workers - // This allows safe parallel processing because two pods with the same key are never processed in - // parallel. 
- defer c.Queue.Done(key) - - // Invoke the method containing the business logic - // in this case, the de-dupe logic is to test whether a cluster - // deployment exists , if so, then we don't create another - _, err := c.Client.AppsV1().Deployments(keyNamespace).Get(keyResourceName, metav1.GetOptions{}) - - if err == nil { - log.Debugf("cluster add - dep already found, not creating again") - c.Queue.Forget(key) - return true - } - - //get the pgcluster - cluster, err := c.Client.CrunchydataV1().Pgclusters(keyNamespace).Get(keyResourceName, metav1.GetOptions{}) - if err != nil { - log.Debugf("cluster add - pgcluster not found, this is invalid") - c.Queue.Forget(key) // NB(cbandy): This should probably be a retry. - return true - } - - if cluster.Spec.Status == crv1.CompletedStatus || - cluster.Status.State == crv1.PgclusterStateBootstrapping { - errorMsg := fmt.Sprintf("pgcluster Contoller: onAdd event received for cluster %s but "+ - "will not process because it either has a 'completed' status or is currently in a "+ - "'bootstrapping' state", cluster.GetName()) - log.Warn(errorMsg) - return true - } - - addIdentifier(cluster) - - // If bootstrapping from an existing data source then attempt to create the pgBackRest repository. - // If a repo already exists (e.g. because it is associated with a currently running cluster) then - // proceed with bootstrapping. - if cluster.Spec.PGDataSource.RestoreFrom != "" { - repoCreated, err := clusteroperator.AddBootstrapRepo(c.Client, cluster) - if err != nil { - log.Error(err) - c.Queue.AddRateLimited(key) - return true - } - // if no errors and no repo was created, then we know that the repo is for a currently running - // cluster and we can therefore proceed with bootstrapping. - if !repoCreated { - if err := clusteroperator.AddClusterBootstrap(c.Client, cluster); err != nil { - log.Error(err) - c.Queue.AddRateLimited(key) - return true - } - } - c.Queue.Forget(key) - return true - } - - patch, err := json.Marshal(map[string]interface{}{ - "status": crv1.PgclusterStatus{ - State: crv1.PgclusterStateProcessed, - Message: "Successfully processed Pgcluster by controller", - }, - }) - if err == nil { - _, err = c.Client.CrunchydataV1().Pgclusters(keyNamespace).Patch(cluster.Name, types.MergePatchType, patch) - } - if err != nil { - log.Errorf("ERROR updating pgcluster status on add: %s", err.Error()) - c.Queue.Forget(key) // NB(cbandy): This should probably be a retry. - return true - } - - log.Debugf("pgcluster added: %s", cluster.ObjectMeta.Name) - - // AddClusterBase creates all deployments for the cluster (in addition to various other supporting - // resources such as services, configMaps, secrets, etc.), but leaves them scaled to 0. This - // ensures all deployments exist as needed to properly orchestrate initialization of the - // cluster, e.g. we need to ensure the primary DB deployment resource has been created before - // bringing the repo deployment online, since that in turn will bring the primary DB online. 
- clusteroperator.AddClusterBase(c.Client, cluster, cluster.ObjectMeta.Namespace) - - c.Queue.Forget(key) - return true -} - -// onUpdate is called when a pgcluster is updated -func (c *Controller) onUpdate(oldObj, newObj interface{}) { - oldcluster := oldObj.(*crv1.Pgcluster) - newcluster := newObj.(*crv1.Pgcluster) - - log.Debugf("pgcluster onUpdate for cluster %s (namespace %s)", newcluster.ObjectMeta.Namespace, - newcluster.ObjectMeta.Name) - - // if the status of the pgcluster shows that it has been bootstrapped, then proceed with - // creating the cluster (i.e. the cluster deployment, services, etc.) - if newcluster.Spec.Status != crv1.CompletedStatus && - newcluster.Status.State == crv1.PgclusterStateBootstrapped { - clusteroperator.AddClusterBase(c.Client, newcluster, newcluster.GetNamespace()) - return - } - - // if the 'shutdown' parameter in the pgcluster update shows that the cluster should be either - // shutdown or started but its current status does not properly reflect that it is, then - // proceed with the logic needed to either shutdown or start the cluster - if newcluster.Spec.Shutdown && newcluster.Status.State != crv1.PgclusterStateShutdown { - clusteroperator.ShutdownCluster(c.Client, *newcluster) - } else if !newcluster.Spec.Shutdown && - newcluster.Status.State == crv1.PgclusterStateShutdown { - clusteroperator.StartupCluster(c.Client, *newcluster) - } - - // check to see if the "autofail" label on the pgcluster CR has been changed from either true to false, or from - // false to true. If it has been changed to false, autofail will then be disabled in the pg cluster. If has - // been changed to true, autofail will then be enabled in the pg cluster - if newcluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] != "" { - autofailEnabledOld, err := strconv.ParseBool(oldcluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL]) - if err != nil { - log.Error(err) - return - } - autofailEnabledNew, err := strconv.ParseBool(newcluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL]) - if err != nil { - log.Error(err) - return - } - if autofailEnabledNew != autofailEnabledOld { - util.ToggleAutoFailover(c.Client, autofailEnabledNew, - newcluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE], - newcluster.ObjectMeta.Namespace) - } - - } - - // handle standby being enabled and disabled for the cluster - if oldcluster.Spec.Standby && !newcluster.Spec.Standby { - if err := clusteroperator.DisableStandby(c.Client, *newcluster); err != nil { - log.Error(err) - return - } - } else if !oldcluster.Spec.Standby && newcluster.Spec.Standby { - if err := clusteroperator.EnableStandby(c.Client, *newcluster); err != nil { - log.Error(err) - return - } - } - - // see if any of the resource values have changed for the database or exporter container, - // if so, update them - if !reflect.DeepEqual(oldcluster.Spec.Resources, newcluster.Spec.Resources) || - !reflect.DeepEqual(oldcluster.Spec.Limits, newcluster.Spec.Limits) || - !reflect.DeepEqual(oldcluster.Spec.ExporterResources, newcluster.Spec.ExporterResources) || - !reflect.DeepEqual(oldcluster.Spec.ExporterLimits, newcluster.Spec.ExporterLimits) { - if err := clusteroperator.UpdateResources(c.Client, c.Client.Config, newcluster); err != nil { - log.Error(err) - return - } - } - - // see if any of the pgBackRest repository resource values have changed, and - // if so, update them - if !reflect.DeepEqual(oldcluster.Spec.BackrestResources, newcluster.Spec.BackrestResources) || - !reflect.DeepEqual(oldcluster.Spec.BackrestLimits, newcluster.Spec.BackrestLimits) { - 
if err := backrestoperator.UpdateResources(c.Client, newcluster); err != nil { - log.Error(err) - return - } - } - - // see if any of the pgBouncer values have changed, and if so, update the - // pgBouncer deployment - if !reflect.DeepEqual(oldcluster.Spec.PgBouncer, newcluster.Spec.PgBouncer) { - if err := updatePgBouncer(c, oldcluster, newcluster); err != nil { - log.Error(err) - return - } - } - - // if we are not in a standby state, check to see if the tablespaces have - // differed, and if so, add the additional volumes to the primary and replicas - if !reflect.DeepEqual(oldcluster.Spec.TablespaceMounts, newcluster.Spec.TablespaceMounts) { - if err := updateTablespaces(c, oldcluster, newcluster); err != nil { - log.Error(err) - return - } - } - - // check to see if any of the annotations have been modified, in particular, - // the non-system annotations - if !reflect.DeepEqual(oldcluster.Spec.Annotations, newcluster.Spec.Annotations) { - if err := updateAnnotations(c, oldcluster, newcluster); err != nil { - log.Error(err) - return - } - } -} - -// onDelete is called when a pgcluster is deleted -func (c *Controller) onDelete(obj interface{}) { - //cluster := obj.(*crv1.Pgcluster) - // log.Debugf("[Controller] ns=%s onDelete %s", cluster.ObjectMeta.Namespace, cluster.ObjectMeta.SelfLink) - - //handle pgcluster cleanup - // clusteroperator.DeleteClusterBase(c.PgclusterClientset, c.PgclusterClient, cluster, cluster.ObjectMeta.Namespace) -} - -// AddPGClusterEventHandler adds the pgcluster event handler to the pgcluster informer -func (c *Controller) AddPGClusterEventHandler() { - - c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: c.onAdd, - UpdateFunc: c.onUpdate, - DeleteFunc: c.onDelete, - }) - - log.Debugf("pgcluster Controller: added event handler to informer") -} - -func addIdentifier(clusterCopy *crv1.Pgcluster) { - u, err := ioutil.ReadFile("/proc/sys/kernel/random/uuid") - if err != nil { - log.Error(err) - } - - clusterCopy.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = string(u[:len(u)-1]) -} - -// updateAnnotations updates any custom annitations that may be on the managed -// deployments, which includes: -// -// - globally applied annotations -// - postgres instance specific annotations -// - pgBackRest instance specific annotations -// - pgBouncer instance specific annotations -func updateAnnotations(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1.Pgcluster) error { - // so we have a two-tier problem we need to solve: - // 1. Which of the deployment types are being modified (or in the case of - // global, all of them)? - // 2. Which annotations are being added/modified/removed? Kubernetes actually - // has a convenient function for updating the annotations, so we do no - // need to do too much works - annotationsPostgres := map[string]string{} - annotationsBackrest := map[string]string{} - annotationsPgBouncer := map[string]string{} - - // check the individual deployment groups. 
If the annotations differ in either the specific group or - // in the global group, set them in their respective map - if !reflect.DeepEqual(oldCluster.Spec.Annotations.Postgres, newCluster.Spec.Annotations.Postgres) || - !reflect.DeepEqual(oldCluster.Spec.Annotations.Global, newCluster.Spec.Annotations.Global) { - // store the global annotations first - for k, v := range newCluster.Spec.Annotations.Global { - annotationsPostgres[k] = v - } - - // then store the postgres specific annotations - for k, v := range newCluster.Spec.Annotations.Postgres { - annotationsPostgres[k] = v - } - } - - if !reflect.DeepEqual(oldCluster.Spec.Annotations.Backrest, newCluster.Spec.Annotations.Backrest) || - !reflect.DeepEqual(oldCluster.Spec.Annotations.Global, newCluster.Spec.Annotations.Global) { - // store the global annotations first - for k, v := range newCluster.Spec.Annotations.Global { - annotationsBackrest[k] = v - } - - // then store the pgbackrest specific annotations - for k, v := range newCluster.Spec.Annotations.Backrest { - annotationsBackrest[k] = v - } - } - - if !reflect.DeepEqual(oldCluster.Spec.Annotations.PgBouncer, newCluster.Spec.Annotations.PgBouncer) || - !reflect.DeepEqual(oldCluster.Spec.Annotations.Global, newCluster.Spec.Annotations.Global) { - // store the global annotations first - for k, v := range newCluster.Spec.Annotations.Global { - annotationsPgBouncer[k] = v - } - - // then store the pgbouncer specific annotations - for k, v := range newCluster.Spec.Annotations.PgBouncer { - annotationsPgBouncer[k] = v - } - } - - // so if there are changes, we can apply them to the various deployments, - // but only do so if we have to - if len(annotationsPostgres) != 0 { - if err := clusteroperator.UpdateAnnotations(c.Client, c.Client.Config, newCluster, annotationsPostgres); err != nil { - return err - } - } - - if len(annotationsBackrest) != 0 { - if err := backrestoperator.UpdateAnnotations(c.Client, newCluster, annotationsBackrest); err != nil { - return err - } - } - - if len(annotationsPgBouncer) != 0 { - if err := clusteroperator.UpdatePgBouncerAnnotations(c.Client, newCluster, annotationsPgBouncer); err != nil { - return err - } - } - - return nil -} - -// updatePgBouncer updates the pgBouncer Deployment to reflect any changes that -// may be made, which include: -// - enabling a pgBouncer Deployment :) -// - disabling a pgBouncer Deployment :( -// - any changes to the resizing, etc. -func updatePgBouncer(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1.Pgcluster) error { - log.Debugf("update pgbouncer for cluster %s", newCluster.Name) - - // first, handle the easy ones, i.e. 
determine if we are enabling or disabling - if oldCluster.Spec.PgBouncer.Enabled() != newCluster.Spec.PgBouncer.Enabled() { - log.Debugf("pgbouncer enabled: %t", newCluster.Spec.PgBouncer.Enabled()) - - // if this is being enabled, it's a simple step where we can return here - if newCluster.Spec.PgBouncer.Enabled() { - return clusteroperator.AddPgbouncer(c.Client, c.Client.Config, newCluster) - } - - // if we're not enabled, we're disabled - return clusteroperator.DeletePgbouncer(c.Client, c.Client.Config, newCluster) - } - - // otherwise, this is an update - return clusteroperator.UpdatePgbouncer(c.Client, oldCluster, newCluster) -} - -// updateTablespaces updates the PostgreSQL instance Deployments to reflect the -// new PostgreSQL tablespaces that should be added -func updateTablespaces(c *Controller, oldCluster *crv1.Pgcluster, newCluster *crv1.Pgcluster) error { - // to help the Operator function do less work, we will get a list of new - // tablespaces. Though these are already present in the CRD, this will isolate - // exactly which PVCs need to be created - // - // To do this, iterate through the the tablespace mount map that is present in - // the new cluster. - newTablespaces := map[string]crv1.PgStorageSpec{} - - for tablespaceName, storageSpec := range newCluster.Spec.TablespaceMounts { - // if the tablespace does not exist in the old version of the cluster, - // then add it in! - if _, ok := oldCluster.Spec.TablespaceMounts[tablespaceName]; !ok { - log.Debugf("new tablespace found: [%s]", tablespaceName) - - newTablespaces[tablespaceName] = storageSpec - } - } - - // alright, update the tablespace entries for this cluster! - // if it returns an error, pass the error back up to the caller - if err := clusteroperator.UpdateTablespaces(c.Client, c.Client.Config, newCluster, newTablespaces); err != nil { - return err - } - - return nil -} - -// WorkerCount returns the worker count for the controller -func (c *Controller) WorkerCount() int { - return c.PgclusterWorkerCount -} diff --git a/internal/controller/pgpolicy/pgpolicycontroller.go b/internal/controller/pgpolicy/pgpolicycontroller.go deleted file mode 100644 index 0f3ab76ff3..0000000000 --- a/internal/controller/pgpolicy/pgpolicycontroller.go +++ /dev/null @@ -1,129 +0,0 @@ -package pgpolicy - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "encoding/json" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - informers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/tools/cache" - - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" -) - -// Controller holds connections for the controller -type Controller struct { - Clientset kubeapi.Interface - Informer informers.PgpolicyInformer -} - -// onAdd is called when a pgpolicy is added -func (c *Controller) onAdd(obj interface{}) { - policy := obj.(*crv1.Pgpolicy) - log.Debugf("[pgpolicy Controller] onAdd ns=%s %s", policy.ObjectMeta.Namespace, policy.ObjectMeta.SelfLink) - - //handle the case of when a pgpolicy is already processed, which - //is the case when the operator restarts - if policy.Status.State == crv1.PgpolicyStateProcessed { - log.Debug("pgpolicy " + policy.ObjectMeta.Name + " already processed") - return - } - - patch, err := json.Marshal(map[string]interface{}{ - "status": crv1.PgpolicyStatus{ - State: crv1.PgpolicyStateProcessed, - Message: "Successfully processed Pgpolicy by controller", - }, - }) - if err == nil { - _, err = c.Clientset.CrunchydataV1().Pgpolicies(policy.Namespace).Patch(policy.Name, types.MergePatchType, patch) - } - if err != nil { - log.Errorf("ERROR updating pgpolicy status: %s", err.Error()) - } - - //publish event - topics := make([]string, 1) - topics[0] = events.EventTopicPolicy - - f := events.EventCreatePolicyFormat{ - EventHeader: events.EventHeader{ - Namespace: policy.ObjectMeta.Namespace, - Username: policy.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventCreatePolicy, - }, - Policyname: policy.ObjectMeta.Name, - } - - err = events.Publish(f) - if err != nil { - log.Error(err.Error()) - } - -} - -// onUpdate is called when a pgpolicy is updated -func (c *Controller) onUpdate(oldObj, newObj interface{}) { -} - -// onDelete is called when a pgpolicy is deleted -func (c *Controller) onDelete(obj interface{}) { - policy := obj.(*crv1.Pgpolicy) - log.Debugf("[pgpolicy Controller] onDelete ns=%s %s", policy.ObjectMeta.Namespace, policy.ObjectMeta.SelfLink) - - log.Debugf("DELETED pgpolicy %s", policy.ObjectMeta.Name) - - //publish event - topics := make([]string, 1) - topics[0] = events.EventTopicPolicy - - f := events.EventDeletePolicyFormat{ - EventHeader: events.EventHeader{ - Namespace: policy.ObjectMeta.Namespace, - Username: policy.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventDeletePolicy, - }, - Policyname: policy.ObjectMeta.Name, - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } - -} - -// AddPGPolicyEventHandler adds the pgpolicy event handler to the pgpolicy informer -func (c *Controller) AddPGPolicyEventHandler() { - - c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: c.onAdd, - UpdateFunc: c.onUpdate, - DeleteFunc: c.onDelete, - }) - - log.Debugf("pgpolicy Controller: added event handler to informer") -} diff --git a/internal/controller/pgreplica/pgreplicacontroller.go b/internal/controller/pgreplica/pgreplicacontroller.go deleted file mode 100644 index e41babcb6f..0000000000 --- 
a/internal/controller/pgreplica/pgreplicacontroller.go +++ /dev/null @@ -1,243 +0,0 @@ -package pgreplica - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "encoding/json" - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - informers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/tools/cache" - "k8s.io/client-go/util/workqueue" -) - -// Controller holds the connections for the controller -type Controller struct { - Clientset kubeapi.Interface - Queue workqueue.RateLimitingInterface - Informer informers.PgreplicaInformer - PgreplicaWorkerCount int -} - -// RunWorker is a long-running function that will continually call the -// processNextWorkItem function in order to read and process a message on the -// workqueue. -func (c *Controller) RunWorker(stopCh <-chan struct{}, doneCh chan<- struct{}) { - - go c.waitForShutdown(stopCh) - - for c.processNextItem() { - } - - log.Debug("pgreplica Contoller: worker queue has been shutdown, writing to the done channel") - doneCh <- struct{}{} -} - -// waitForShutdown waits for a message on the stop channel and then shuts down the work queue -func (c *Controller) waitForShutdown(stopCh <-chan struct{}) { - <-stopCh - c.Queue.ShutDown() - log.Debug("pgreplica Contoller: received stop signal, worker queue told to shutdown") -} - -func (c *Controller) processNextItem() bool { - // Wait until there is a new item in the working queue - key, quit := c.Queue.Get() - if quit { - return false - } - - log.Debugf("working on %s", key.(string)) - keyParts := strings.Split(key.(string), "/") - keyNamespace := keyParts[0] - keyResourceName := keyParts[1] - - log.Debugf("pgreplica queue got key ns=[%s] resource=[%s]", keyNamespace, keyResourceName) - - // Tell the queue that we are done with processing this key. This unblocks the key for other workers - // This allows safe parallel processing because two pods with the same key are never processed in - // parallel. - defer c.Queue.Done(key) - // Invoke the method containing the business logic - // in this case, the de-dupe logic is to test whether a replica - // deployment exists already , if so, then we don't create another - // backup job - _, err := c.Clientset. - AppsV1().Deployments(keyNamespace). 
- Get(keyResourceName, metav1.GetOptions{}) - - depRunning := err == nil - - if depRunning { - log.Debugf("working...found replica already, would do nothing") - } else { - log.Debugf("working...no replica found, means we process") - - //handle the case of when a pgreplica is added which is - //scaling up a cluster - replica, err := c.Clientset.CrunchydataV1().Pgreplicas(keyNamespace).Get(keyResourceName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - c.Queue.Forget(key) // NB(cbandy): This should probably be a retry. - return true - } - - // get the pgcluster resource for the cluster the replica is a part of - cluster, err := c.Clientset.CrunchydataV1().Pgclusters(keyNamespace).Get(replica.Spec.ClusterName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - c.Queue.Forget(key) // NB(cbandy): This should probably be a retry. - return true - } - - // only process pgreplica if cluster has been initialized - if cluster.Status.State == crv1.PgclusterStateInitialized { - clusteroperator.ScaleBase(c.Clientset, replica, replica.ObjectMeta.Namespace) - - patch, err := json.Marshal(map[string]interface{}{ - "status": crv1.PgreplicaStatus{ - State: crv1.PgreplicaStateProcessed, - Message: "Successfully processed Pgreplica by controller", - }, - }) - if err == nil { - _, err = c.Clientset.CrunchydataV1().Pgreplicas(replica.Namespace).Patch(replica.Name, types.MergePatchType, patch) - } - if err != nil { - log.Errorf("ERROR updating pgreplica status: %s", err.Error()) - } - } else { - patch, err := json.Marshal(map[string]interface{}{ - "status": crv1.PgreplicaStatus{ - State: crv1.PgreplicaStatePendingInit, - Message: "Pgreplica processing pending the creation of the initial backup", - }, - }) - if err == nil { - _, err = c.Clientset.CrunchydataV1().Pgreplicas(replica.Namespace).Patch(replica.Name, types.MergePatchType, patch) - } - if err != nil { - log.Errorf("ERROR updating pgreplica status: %s", err.Error()) - } - } - } - - c.Queue.Forget(key) - return true -} - -// onAdd is called when a pgreplica is added -func (c *Controller) onAdd(obj interface{}) { - replica := obj.(*crv1.Pgreplica) - - //handle the case of pgreplicas being processed already and - //when the operator restarts - if replica.Status.State == crv1.PgreplicaStateProcessed { - log.Debug("pgreplica " + replica.ObjectMeta.Name + " already processed") - return - } - - key, err := cache.MetaNamespaceKeyFunc(obj) - if err == nil { - log.Debugf("onAdd putting key in queue %s", key) - c.Queue.Add(key) - } - -} - -// onUpdate is called when a pgreplica is updated -func (c *Controller) onUpdate(oldObj, newObj interface{}) { - - newPgreplica := newObj.(*crv1.Pgreplica) - - log.Debugf("[pgreplica Controller] onUpdate ns=%s %s", newPgreplica.ObjectMeta.Namespace, - newPgreplica.ObjectMeta.SelfLink) - - // get the pgcluster resource for the cluster the replica is a part of - cluster, err := c.Clientset. - CrunchydataV1().Pgclusters(newPgreplica.Namespace). 
- Get(newPgreplica.Spec.ClusterName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - return - } - - // only process pgreplica if cluster has been initialized - if cluster.Status.State == crv1.PgclusterStateInitialized && newPgreplica.Spec.Status != "complete" { - clusteroperator.ScaleBase(c.Clientset, newPgreplica, - newPgreplica.ObjectMeta.Namespace) - - patch, err := json.Marshal(map[string]interface{}{ - "status": crv1.PgreplicaStatus{ - State: crv1.PgreplicaStateProcessed, - Message: "Successfully processed Pgreplica by controller", - }, - }) - if err == nil { - _, err = c.Clientset.CrunchydataV1().Pgreplicas(newPgreplica.Namespace).Patch(newPgreplica.Name, types.MergePatchType, patch) - } - if err != nil { - log.Errorf("ERROR updating pgreplica status: %s", err.Error()) - } - } -} - -// onDelete is called when a pgreplica is deleted -func (c *Controller) onDelete(obj interface{}) { - replica := obj.(*crv1.Pgreplica) - log.Debugf("[pgreplica Controller] OnDelete ns=%s %s", replica.ObjectMeta.Namespace, replica.ObjectMeta.SelfLink) - - //make sure we are not removing a replica deployment - //that is now the primary after a failover - dep, err := c.Clientset. - AppsV1().Deployments(replica.ObjectMeta.Namespace). - Get(replica.Spec.Name, metav1.GetOptions{}) - if err == nil { - if dep.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] == dep.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] { - //the replica was made a primary at some point - //we will not scale down the deployment - log.Debugf("[pgreplica Controller] OnDelete not scaling down the replica since it is acting as a primary") - } else { - clusteroperator.ScaleDownBase(c.Clientset, replica, replica.ObjectMeta.Namespace) - } - } - -} - -// AddPGReplicaEventHandler adds the pgreplica event handler to the pgreplica informer -func (c *Controller) AddPGReplicaEventHandler() { - - // Your custom resource event handlers. - c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: c.onAdd, - UpdateFunc: c.onUpdate, - DeleteFunc: c.onDelete, - }) - - log.Debugf("pgreplica Controller: added event handler to informer") -} - -// WorkerCount returns the worker count for the controller -func (c *Controller) WorkerCount() int { - return c.PgreplicaWorkerCount -} diff --git a/internal/controller/pgtask/backresthandler.go b/internal/controller/pgtask/backresthandler.go deleted file mode 100644 index 8e8582364a..0000000000 --- a/internal/controller/pgtask/backresthandler.go +++ /dev/null @@ -1,65 +0,0 @@ -package pgtask - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "github.com/crunchydata/postgres-operator/internal/config" - backrestoperator "github.com/crunchydata/postgres-operator/internal/operator/backrest" - clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// handleBackrestRestore handles pgBackRest restores request via a pgtask -func (c *Controller) handleBackrestRestore(task *crv1.Pgtask) { - - namespace := task.GetNamespace() - clusterName := task.Spec.Parameters[config.LABEL_BACKREST_RESTORE_FROM_CLUSTER] - - cluster, err := c.Client.CrunchydataV1().Pgclusters(namespace).Get(clusterName, - metav1.GetOptions{}) - if err != nil { - log.Errorf("pgtask Controller: %s", err.Error()) - return - } - - cluster, err = backrestoperator.PrepareClusterForRestore(c.Client, cluster, task) - if err != nil { - log.Errorf("pgtask Controller: %s", err.Error()) - return - } - log.Debugf("pgtask Controller: finished preparing cluster %s for restore", clusterName) - - backrestoperator.UpdatePGClusterSpecForRestore(c.Client, cluster, task) - log.Debugf("pgtask Controller: finished updating %s spec for restore", clusterName) - - if err := clusteroperator.AddClusterBootstrap(c.Client, cluster); err != nil { - log.Errorf("pgtask Controller: %s", err.Error()) - return - } - log.Debugf("pgtask Controller: added restore job for cluster %s", clusterName) - - backrestoperator.PublishRestore(cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER], - clusterName, task.ObjectMeta.Labels[config.LABEL_PGOUSER], namespace) - - err = backrestoperator.UpdateWorkflow(c.Client, task.Spec.Parameters[crv1.PgtaskWorkflowID], - namespace, crv1.PgtaskWorkflowBackrestRestoreJobCreatedStatus) - if err != nil { - log.Errorf("pgtask Controller: %s", err.Error()) - } -} diff --git a/internal/controller/pgtask/pgtaskcontroller.go b/internal/controller/pgtask/pgtaskcontroller.go deleted file mode 100644 index e1dfbcd18b..0000000000 --- a/internal/controller/pgtask/pgtaskcontroller.go +++ /dev/null @@ -1,248 +0,0 @@ -package pgtask - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "encoding/json" - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - backrestoperator "github.com/crunchydata/postgres-operator/internal/operator/backrest" - clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster" - pgdumpoperator "github.com/crunchydata/postgres-operator/internal/operator/pgdump" - taskoperator "github.com/crunchydata/postgres-operator/internal/operator/task" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - informers "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/tools/cache" - "k8s.io/client-go/util/workqueue" -) - -// Controller holds connections for the controller -type Controller struct { - Client *kubeapi.Client - Queue workqueue.RateLimitingInterface - Informer informers.PgtaskInformer - PgtaskWorkerCount int -} - -// RunWorker is a long-running function that will continually call the -// processNextWorkItem function in order to read and process a message on the -// workqueue. -func (c *Controller) RunWorker(stopCh <-chan struct{}, doneCh chan<- struct{}) { - - go c.waitForShutdown(stopCh) - - for c.processNextItem() { - } - - log.Debug("pgtask Contoller: worker queue has been shutdown, writing to the done channel") - doneCh <- struct{}{} -} - -// waitForShutdown waits for a message on the stop channel and then shuts down the work queue -func (c *Controller) waitForShutdown(stopCh <-chan struct{}) { - <-stopCh - c.Queue.ShutDown() - log.Debug("pgtask Contoller: received stop signal, worker queue told to shutdown") -} - -func (c *Controller) processNextItem() bool { - // Wait until there is a new item in the working queue - key, quit := c.Queue.Get() - if quit { - return false - } - - log.Debugf("working on %s", key.(string)) - keyParts := strings.Split(key.(string), "/") - keyNamespace := keyParts[0] - keyResourceName := keyParts[1] - - log.Debugf("queue got key ns=[%s] resource=[%s]", keyNamespace, keyResourceName) - - // Tell the queue that we are done with processing this key. This unblocks the key for other workers - // This allows safe parallel processing because two pods with the same key are never processed in - // parallel. - defer c.Queue.Done(key) - - tmpTask, err := c.Client.CrunchydataV1().Pgtasks(keyNamespace).Get(keyResourceName, metav1.GetOptions{}) - if err != nil { - log.Errorf("ERROR onAdd getting pgtask : %s", err.Error()) - c.Queue.Forget(key) // NB(cbandy): This should probably be a retry. - return true - } - - //update pgtask - patch, err := json.Marshal(map[string]interface{}{ - "status": crv1.PgtaskStatus{ - State: crv1.PgtaskStateProcessed, - Message: "Successfully processed Pgtask by controller", - }, - }) - if err == nil { - _, err = c.Client.CrunchydataV1().Pgtasks(keyNamespace).Patch(tmpTask.Name, types.MergePatchType, patch) - } - if err != nil { - log.Errorf("ERROR onAdd updating pgtask status: %s", err.Error()) - c.Queue.Forget(key) // NB(cbandy): This should probably be a retry. 
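// Note on the NB(cbandy) comment above: "a retry" here would mean re-queueing the
// key with backoff rather than dropping it, e.g. (illustrative only, not part of
// this diff):
//
//    c.Queue.AddRateLimited(key)
//    return true
//
// Forget(key) only clears the item's rate-limit history and does not requeue it,
// so a transient API error at this point is never retried.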
- return true - } - - //process the incoming task - switch tmpTask.Spec.TaskType { - case crv1.PgtaskPgAdminAdd: - log.Debug("add pgadmin task added") - clusteroperator.AddPgAdminFromPgTask(c.Client, c.Client.Config, tmpTask) - case crv1.PgtaskPgAdminDelete: - log.Debug("delete pgadmin task added") - clusteroperator.DeletePgAdminFromPgTask(c.Client, c.Client.Config, tmpTask) - case crv1.PgtaskUpgrade: - log.Debug("upgrade task added") - clusteroperator.AddUpgrade(c.Client, tmpTask, keyNamespace) - case crv1.PgtaskFailover: - log.Debug("failover task added") - if !dupeFailover(c.Client, tmpTask, keyNamespace) { - clusteroperator.FailoverBase(keyNamespace, c.Client, tmpTask, c.Client.Config) - } else { - log.Debugf("skipping duplicate onAdd failover task %s/%s", keyNamespace, keyResourceName) - } - - case crv1.PgtaskDeleteData: - log.Debug("delete data task added") - if !dupeDeleteData(c.Client, tmpTask, keyNamespace) { - taskoperator.RemoveData(keyNamespace, c.Client, tmpTask) - } else { - log.Debugf("skipping duplicate onAdd delete data task %s/%s", keyNamespace, keyResourceName) - } - case crv1.PgtaskDeleteBackups: - log.Debug("delete backups task added") - taskoperator.RemoveBackups(keyNamespace, c.Client, tmpTask) - case crv1.PgtaskBackrest: - log.Debug("backrest task added") - backrestoperator.Backrest(keyNamespace, c.Client, tmpTask) - case crv1.PgtaskBackrestRestore: - log.Debug("backrest restore task added") - c.handleBackrestRestore(tmpTask) - - case crv1.PgtaskpgDump: - log.Debug("pgDump task added") - pgdumpoperator.Dump(keyNamespace, c.Client, tmpTask) - case crv1.PgtaskpgRestore: - log.Debug("pgDump restore task added") - pgdumpoperator.Restore(keyNamespace, c.Client, tmpTask) - - case crv1.PgtaskAutoFailover: - log.Debugf("autofailover task added %s", keyResourceName) - case crv1.PgtaskWorkflow: - log.Debugf("workflow task added [%s] ID [%s]", keyResourceName, tmpTask.Spec.Parameters[crv1.PgtaskWorkflowID]) - - case crv1.PgtaskCloneStep1, crv1.PgtaskCloneStep2, crv1.PgtaskCloneStep3: - log.Debugf("clone task added [%s]", keyResourceName) - clusteroperator.Clone(c.Client, c.Client.Config, keyNamespace, tmpTask) - - default: - log.Debugf("unknown task type on pgtask added [%s]", tmpTask.Spec.TaskType) - } - - c.Queue.Forget(key) - return true - -} - -// onAdd is called when a pgtask is added -func (c *Controller) onAdd(obj interface{}) { - task := obj.(*crv1.Pgtask) - - //handle the case of when the operator restarts, we do not want - //to process pgtasks already processed - if task.Status.State == crv1.PgtaskStateProcessed { - log.Debug("pgtask " + task.ObjectMeta.Name + " already processed") - return - } - - key, err := cache.MetaNamespaceKeyFunc(obj) - if err == nil { - log.Debugf("task putting key in queue %s", key) - c.Queue.Add(key) - } - -} - -// onUpdate is called when a pgtask is updated -func (c *Controller) onUpdate(oldObj, newObj interface{}) { - //task := newObj.(*crv1.Pgtask) - // log.Debugf("[Controller] onUpdate ns=%s %s", task.ObjectMeta.Namespace, task.ObjectMeta.SelfLink) -} - -// onDelete is called when a pgtask is deleted -func (c *Controller) onDelete(obj interface{}) { -} - -// AddPGTaskEventHandler adds the pgtask event handler to the pgtask informer -func (c *Controller) AddPGTaskEventHandler() { - - c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: c.onAdd, - UpdateFunc: c.onUpdate, - DeleteFunc: c.onDelete, - }) - - log.Debugf("pgtask Controller: added event handler to informer") -} - -//de-dupe logic for a failover, if 
the failover started -//parameter is set, it means a failover has already been -//started on this -func dupeFailover(clientset pgo.Interface, task *crv1.Pgtask, ns string) bool { - tmp, err := clientset.CrunchydataV1().Pgtasks(ns).Get(task.Spec.Name, metav1.GetOptions{}) - if err != nil { - //a big time error if this occurs - return false - } - - if tmp.Spec.Parameters[config.LABEL_FAILOVER_STARTED] == "" { - return false - } - - return true -} - -//de-dupe logic for a delete data, if the delete data job started -//parameter is set, it means a delete data job has already been -//started on this -func dupeDeleteData(clientset pgo.Interface, task *crv1.Pgtask, ns string) bool { - tmp, err := clientset.CrunchydataV1().Pgtasks(ns).Get(task.Spec.Name, metav1.GetOptions{}) - if err != nil { - //a big time error if this occurs - return false - } - - if tmp.Spec.Parameters[config.LABEL_DELETE_DATA_STARTED] == "" { - return false - } - - return true -} - -// WorkerCount returns the worker count for the controller -func (c *Controller) WorkerCount() int { - return c.PgtaskWorkerCount -} diff --git a/internal/controller/pgupgrade/apply.go b/internal/controller/pgupgrade/apply.go new file mode 100644 index 0000000000..71cf65cd4f --- /dev/null +++ b/internal/controller/pgupgrade/apply.go @@ -0,0 +1,43 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgupgrade + +import ( + "context" + "reflect" + + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// patch sends patch to object's endpoint in the Kubernetes API and updates +// object with any returned content. The fieldManager is set to r.Owner, but +// can be overridden in options. +// - https://docs.k8s.io/reference/using-api/server-side-apply/#managers +func (r *PGUpgradeReconciler) patch( + ctx context.Context, object client.Object, + patch client.Patch, options ...client.PatchOption, +) error { + options = append([]client.PatchOption{r.Owner}, options...) + return r.Client.Patch(ctx, object, patch, options...) +} + +// apply sends an apply patch to object's endpoint in the Kubernetes API and +// updates object with any returned content. The fieldManager is set to +// r.Owner and the force parameter is true. +// - https://docs.k8s.io/reference/using-api/server-side-apply/#managers +// - https://docs.k8s.io/reference/using-api/server-side-apply/#conflicts +func (r *PGUpgradeReconciler) apply(ctx context.Context, object client.Object) error { + // Generate an apply-patch by comparing the object to its zero value. + zero := reflect.New(reflect.TypeOf(object).Elem()).Interface() + data, err := client.MergeFrom(zero.(client.Object)).Data(object) + apply := client.RawPatch(client.Apply.Type(), data) + + // Send the apply-patch with force=true. + if err == nil { + err = r.patch(ctx, object, apply, client.ForceOwnership) + } + + return err +} diff --git a/internal/controller/pgupgrade/jobs.go b/internal/controller/pgupgrade/jobs.go new file mode 100644 index 0000000000..a1722dfc12 --- /dev/null +++ b/internal/controller/pgupgrade/jobs.go @@ -0,0 +1,344 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
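// Illustrative usage of the apply helper defined in apply.go above; a minimal
// sketch, not part of this diff. Because the patch data is computed against the
// type's zero value, every field the caller sets is claimed by the r.Owner field
// manager, and client.ForceOwnership resolves any ownership conflicts in its favor.
//
//    job := &batchv1.Job{}
//    job.SetGroupVersionKind(batchv1.SchemeGroupVersion.WithKind("Job"))
//    job.Namespace, job.Name = upgrade.Namespace, upgrade.Name+"-pgdata"
//    // ...populate job.Spec as in generateUpgradeJob below...
//    if err := r.apply(ctx, job); err != nil {
//        return err
//    }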
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgupgrade + +import ( + "context" + "fmt" + "strings" + + appsv1 "k8s.io/api/apps/v1" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// Upgrade job + +// pgUpgradeJob returns the ObjectMeta for the pg_upgrade Job utilized to +// upgrade from one major PostgreSQL version to another +func pgUpgradeJob(upgrade *v1beta1.PGUpgrade) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: upgrade.Namespace, + Name: upgrade.Name + "-pgdata", + } +} + +// upgradeCommand returns an entrypoint that prepares the filesystem for +// and performs a PostgreSQL major version upgrade using pg_upgrade. +func upgradeCommand(upgrade *v1beta1.PGUpgrade, fetchKeyCommand string) []string { + oldVersion := fmt.Sprint(upgrade.Spec.FromPostgresVersion) + newVersion := fmt.Sprint(upgrade.Spec.ToPostgresVersion) + + // if the fetch key command is set for TDE, provide the value during initialization + initdb := `/usr/pgsql-"${new_version}"/bin/initdb -k -D /pgdata/pg"${new_version}"` + if fetchKeyCommand != "" { + initdb += ` --encryption-key-command "` + fetchKeyCommand + `"` + } + + args := []string{oldVersion, newVersion} + script := strings.Join([]string{ + `declare -r data_volume='/pgdata' old_version="$1" new_version="$2"`, + `printf 'Performing PostgreSQL upgrade from version "%s" to "%s" ...\n\n' "$@"`, + + // Note: Rather than import the nss_wrapper init container, as we do in + // the main postgres-operator, this job does the required nss_wrapper + // settings here. + + // Create a copy of the system group definitions, but remove the "postgres" + // group or any group with the current GID. Replace them with our own that + // has the current GID. + `gid=$(id -G); NSS_WRAPPER_GROUP=$(mktemp)`, + `(sed "/^postgres:x:/ d; /^[^:]*:x:${gid%% *}:/ d" /etc/group`, + `echo "postgres:x:${gid%% *}:") > "${NSS_WRAPPER_GROUP}"`, + + // Create a copy of the system user definitions, but remove the "postgres" + // user or any user with the current UID. Replace them with our own that + // has the current UID and GID. + `uid=$(id -u); NSS_WRAPPER_PASSWD=$(mktemp)`, + `(sed "/^postgres:x:/ d; /^[^:]*:x:${uid}:/ d" /etc/passwd`, + `echo "postgres:x:${uid}:${gid%% *}::${data_volume}:") > "${NSS_WRAPPER_PASSWD}"`, + + // Enable nss_wrapper so the current UID and GID resolve to "postgres". + // - https://cwrap.org/nss_wrapper.html + `export LD_PRELOAD='libnss_wrapper.so' NSS_WRAPPER_GROUP NSS_WRAPPER_PASSWD`, + + // Below is the pg_upgrade script used to upgrade a PostgresCluster from + // one major version to another. Additional information concerning the + // steps used and command flag specifics can be found in the documentation: + // - https://www.postgresql.org/docs/current/pgupgrade.html + + // To begin, we first move to the mounted /pgdata directory and create a + // new version directory which is then initialized with the initdb command. 
+ `cd /pgdata || exit`, + `echo -e "Step 1: Making new pgdata directory...\n"`, + `mkdir /pgdata/pg"${new_version}"`, + `echo -e "Step 2: Initializing new pgdata directory...\n"`, + initdb, + + // Before running the upgrade check, which ensures the clusters are compatible, + // proper permissions have to be set on the old pgdata directory and the + // preload library settings must be copied over. + `echo -e "\nStep 3: Setting the expected permissions on the old pgdata directory...\n"`, + `chmod 700 /pgdata/pg"${old_version}"`, + `echo -e "Step 4: Copying shared_preload_libraries setting to new postgresql.conf file...\n"`, + `echo "shared_preload_libraries = '$(/usr/pgsql-"""${old_version}"""/bin/postgres -D \`, + `/pgdata/pg"""${old_version}""" -C shared_preload_libraries)'" >> /pgdata/pg"${new_version}"/postgresql.conf`, + + // Before the actual upgrade is run, we will run the upgrade --check to + // verify everything before actually changing any data. + `echo -e "Step 5: Running pg_upgrade check...\n"`, + `time /usr/pgsql-"${new_version}"/bin/pg_upgrade --old-bindir /usr/pgsql-"${old_version}"/bin \`, + `--new-bindir /usr/pgsql-"${new_version}"/bin --old-datadir /pgdata/pg"${old_version}"\`, + ` --new-datadir /pgdata/pg"${new_version}" --link --check`, + + // Assuming the check completes successfully, the pg_upgrade command will + // be run that actually prepares the upgraded pgdata directory. + `echo -e "\nStep 6: Running pg_upgrade...\n"`, + `time /usr/pgsql-"${new_version}"/bin/pg_upgrade --old-bindir /usr/pgsql-"${old_version}"/bin \`, + `--new-bindir /usr/pgsql-"${new_version}"/bin --old-datadir /pgdata/pg"${old_version}" \`, + `--new-datadir /pgdata/pg"${new_version}" --link`, + + // Since we have cleared the Patroni cluster step by removing the EndPoints, we copy patroni.dynamic.json + // from the old data dir to help retain PostgreSQL parameters you had set before. + // - https://patroni.readthedocs.io/en/latest/existing_data.html#major-upgrade-of-postgresql-version + `echo -e "\nStep 7: Copying patroni.dynamic.json...\n"`, + `cp /pgdata/pg"${old_version}"/patroni.dynamic.json /pgdata/pg"${new_version}"`, + + `echo -e "\npg_upgrade Job Complete!"`, + }, "\n") + + return append([]string{"bash", "-ceu", "--", script, "upgrade"}, args...) +} + +// generateUpgradeJob returns a Job that can upgrade the PostgreSQL data +// directory of the startup instance. +func (r *PGUpgradeReconciler) generateUpgradeJob( + _ context.Context, upgrade *v1beta1.PGUpgrade, + startup *appsv1.StatefulSet, fetchKeyCommand string, +) *batchv1.Job { + job := &batchv1.Job{} + job.SetGroupVersionKind(batchv1.SchemeGroupVersion.WithKind("Job")) + + job.Namespace = upgrade.Namespace + job.Name = pgUpgradeJob(upgrade).Name + + job.Annotations = upgrade.Spec.Metadata.GetAnnotationsOrNil() + job.Labels = Merge(upgrade.Spec.Metadata.GetLabelsOrNil(), + commonLabels(pgUpgrade, upgrade), //FIXME role pgupgrade + map[string]string{ + LabelVersion: fmt.Sprint(upgrade.Spec.ToPostgresVersion), + }) + + // Find the database container. + var database *corev1.Container + for i := range startup.Spec.Template.Spec.Containers { + container := startup.Spec.Template.Spec.Containers[i] + if container.Name == ContainerDatabase { + database = &container + } + } + + // Copy the pod template from the startup instance StatefulSet. This includes + // the service account, volumes, DNS policies, and scheduling constraints. + startup.Spec.Template.DeepCopyInto(&job.Spec.Template) + + // Use the same labels and annotations as the job. 
+ job.Spec.Template.ObjectMeta = metav1.ObjectMeta{ + Annotations: job.Annotations, + Labels: job.Labels, + } + + // Use the image pull secrets specified for the upgrade image. + job.Spec.Template.Spec.ImagePullSecrets = upgrade.Spec.ImagePullSecrets + + // Attempt the upgrade exactly once. + job.Spec.BackoffLimit = initialize.Int32(0) + job.Spec.Template.Spec.RestartPolicy = corev1.RestartPolicyNever + + // Replace all containers with one that does the upgrade. + job.Spec.Template.Spec.EphemeralContainers = nil + job.Spec.Template.Spec.InitContainers = nil + job.Spec.Template.Spec.Containers = []corev1.Container{{ + // Copy volume mounts and the security context needed to access them + // from the database container. There is a downward API volume that + // refers back to the container by name, so use that same name here. + Name: database.Name, + SecurityContext: database.SecurityContext, + VolumeMounts: database.VolumeMounts, + + // Use our upgrade command and the specified image and resources. + Command: upgradeCommand(upgrade, fetchKeyCommand), + Image: pgUpgradeContainerImage(upgrade), + ImagePullPolicy: upgrade.Spec.ImagePullPolicy, + Resources: upgrade.Spec.Resources, + }} + + // The following will set these fields to null if not set in the spec + job.Spec.Template.Spec.Affinity = upgrade.Spec.Affinity + job.Spec.Template.Spec.PriorityClassName = + initialize.FromPointer(upgrade.Spec.PriorityClassName) + job.Spec.Template.Spec.Tolerations = upgrade.Spec.Tolerations + + r.setControllerReference(upgrade, job) + return job +} + +// Remove data job + +// removeDataCommand returns an entrypoint that removes certain directories. +// We currently target the `pgdata/pg{old_version}` and `pgdata/pg{old_version}_wal` +// directories for removal. +func removeDataCommand(upgrade *v1beta1.PGUpgrade) []string { + oldVersion := fmt.Sprint(upgrade.Spec.FromPostgresVersion) + + // Before removing the directories (both data and wal), we check that + // the directory is not in use by running `pg_controldata` and making sure + // the server state is "shut down in recovery" + // TODO(benjaminjb): pg_controldata seems pretty stable, but might want to + // experiment with a few more versions. + args := []string{oldVersion} + script := strings.Join([]string{ + `declare -r old_version="$1"`, + `printf 'Removing PostgreSQL data dir for pg%s...\n\n' "$@"`, + `echo -e "Checking the directory exists and isn't being used...\n"`, + `cd /pgdata || exit`, + // The string `shut down in recovery` is the dbstate that postgres sets from + // at least version 10 to 14 when a replica has been shut down. + // - https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/pg_controldata/pg_controldata.c;h=f911f98d946d83f1191abf35239d9b4455c5f52a;hb=HEAD#l59 + // Note: `pg_controldata` is actually used by `pg_upgrade` before upgrading + // to make sure that the server in question is shut down as a primary; + // that aligns with our use here, where we're making sure that the server in question + // was shut down as a replica. 
+ // - https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/pg_upgrade/controldata.c;h=41b8f69b8cbe4f40e6098ad84c2e8e987e24edaf;hb=HEAD#l122 + `if [ "$(/usr/pgsql-"${old_version}"/bin/pg_controldata /pgdata/pg"${old_version}" | grep -c "shut down in recovery")" -ne 1 ]; then echo -e "Directory in use, cannot remove..."; exit 1; fi`, + `echo -e "Removing old pgdata directory...\n"`, + // When deleting the wal directory, use `realpath` to resolve the symlink from + // the pgdata directory. This is necessary because the wal directory can be + // mounted at different places depending on if an external wal PVC is used, + // i.e. `/pgdata/pg14_wal` vs `/pgwal/pg14_wal` + `rm -rf /pgdata/pg"${old_version}" "$(realpath /pgdata/pg${old_version}/pg_wal)"`, + `echo -e "Remove Data Job Complete!"`, + }, "\n") + + return append([]string{"bash", "-ceu", "--", script, "remove"}, args...) +} + +// generateRemoveDataJob returns a Job that can remove the data +// on the given replica StatefulSet +func (r *PGUpgradeReconciler) generateRemoveDataJob( + _ context.Context, upgrade *v1beta1.PGUpgrade, sts *appsv1.StatefulSet, +) *batchv1.Job { + job := &batchv1.Job{} + job.SetGroupVersionKind(batchv1.SchemeGroupVersion.WithKind("Job")) + + job.Namespace = upgrade.Namespace + job.Name = upgrade.Name + "-" + sts.Name + + job.Annotations = upgrade.Spec.Metadata.GetAnnotationsOrNil() + job.Labels = labels.Merge(upgrade.Spec.Metadata.GetLabelsOrNil(), + commonLabels(removeData, upgrade)) //FIXME role removedata + + // Find the database container. + var database *corev1.Container + for i := range sts.Spec.Template.Spec.Containers { + container := sts.Spec.Template.Spec.Containers[i] + if container.Name == ContainerDatabase { + database = &container + } + } + + // Copy the pod template from the sts instance StatefulSet. This includes + // the service account, volumes, DNS policies, and scheduling constraints. + sts.Spec.Template.DeepCopyInto(&job.Spec.Template) + + // Use the same labels and annotations as the job. + job.Spec.Template.ObjectMeta = metav1.ObjectMeta{ + Annotations: job.Annotations, + Labels: job.Labels, + } + + // Use the image pull secrets specified for the upgrade image. + job.Spec.Template.Spec.ImagePullSecrets = upgrade.Spec.ImagePullSecrets + + // Attempt the removal exactly once. + job.Spec.BackoffLimit = initialize.Int32(0) + job.Spec.Template.Spec.RestartPolicy = corev1.RestartPolicyNever + + // Replace all containers with one that removes the data. + job.Spec.Template.Spec.EphemeralContainers = nil + job.Spec.Template.Spec.InitContainers = nil + job.Spec.Template.Spec.Containers = []corev1.Container{{ + // Copy volume mounts and the security context needed to access them + // from the database container. There is a downward API volume that + // refers back to the container by name, so use that same name here. + // We are using a PG image in order to check that the PG server is down. + Name: database.Name, + SecurityContext: database.SecurityContext, + VolumeMounts: database.VolumeMounts, + + // Use our remove command and the specified resources. 
+ Command: removeDataCommand(upgrade), + Image: pgUpgradeContainerImage(upgrade), + ImagePullPolicy: upgrade.Spec.ImagePullPolicy, + Resources: upgrade.Spec.Resources, + }} + + // The following will set these fields to null if not set in the spec + job.Spec.Template.Spec.Affinity = upgrade.Spec.Affinity + job.Spec.Template.Spec.PriorityClassName = + initialize.FromPointer(upgrade.Spec.PriorityClassName) + job.Spec.Template.Spec.Tolerations = upgrade.Spec.Tolerations + + r.setControllerReference(upgrade, job) + return job +} + +// Util functions + +// pgUpgradeContainerImage returns the container image to use for pg_upgrade. +func pgUpgradeContainerImage(upgrade *v1beta1.PGUpgrade) string { + var image string + if upgrade.Spec.Image != nil { + image = *upgrade.Spec.Image + } + return defaultFromEnv(image, "RELATED_IMAGE_PGUPGRADE") +} + +// verifyUpgradeImageValue checks that the upgrade container image required by the +// spec is defined. If it is undefined, an error is returned. +func verifyUpgradeImageValue(upgrade *v1beta1.PGUpgrade) error { + if pgUpgradeContainerImage(upgrade) == "" { + return fmt.Errorf("Missing crunchy-upgrade image") + } + return nil +} + +// jobFailed returns "true" if the Job provided has failed. Otherwise it returns "false". +func jobFailed(job *batchv1.Job) bool { + conditions := job.Status.Conditions + for i := range conditions { + if conditions[i].Type == batchv1.JobFailed { + return (conditions[i].Status == corev1.ConditionTrue) + } + } + return false +} + +// jobCompleted returns "true" if the Job provided completed successfully. Otherwise it returns +// "false". +func jobCompleted(job *batchv1.Job) bool { + conditions := job.Status.Conditions + for i := range conditions { + if conditions[i].Type == batchv1.JobComplete { + return (conditions[i].Status == corev1.ConditionTrue) + } + } + return false +} diff --git a/internal/controller/pgupgrade/jobs_test.go b/internal/controller/pgupgrade/jobs_test.go new file mode 100644 index 0000000000..8dfc4731a2 --- /dev/null +++ b/internal/controller/pgupgrade/jobs_test.go @@ -0,0 +1,283 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
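// Illustrative sketch, not part of this diff, of how a reconciler could turn the
// jobFailed/jobCompleted helpers above into a status condition. The Succeeded
// condition type comes from labels.go below; upgrade.Status.Conditions and the use
// of meta.SetStatusCondition are assumptions about the surrounding controller.
//
//    cond := metav1.Condition{
//        Type:    ConditionPGUpgradeSucceeded,
//        Status:  metav1.ConditionUnknown,
//        Reason:  "PGUpgradeRunning",
//        Message: "pg_upgrade job is still running",
//    }
//    switch {
//    case jobCompleted(job):
//        cond.Status, cond.Reason = metav1.ConditionTrue, "PGUpgradeSucceeded"
//    case jobFailed(job):
//        cond.Status, cond.Reason = metav1.ConditionFalse, "PGUpgradeFailed"
//    }
//    meta.SetStatusCondition(&upgrade.Status.Conditions, cond)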
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgupgrade + +import ( + "context" + "os" + "strings" + "testing" + + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestGenerateUpgradeJob(t *testing.T) { + ctx := context.Background() + reconciler := &PGUpgradeReconciler{} + + upgrade := &v1beta1.PGUpgrade{} + upgrade.Namespace = "ns1" + upgrade.Name = "pgu2" + upgrade.UID = "uid3" + upgrade.Spec.Image = initialize.Pointer("img4") + upgrade.Spec.PostgresClusterName = "pg5" + upgrade.Spec.FromPostgresVersion = 19 + upgrade.Spec.ToPostgresVersion = 25 + upgrade.Spec.Resources.Requests = corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("3.14"), + } + + startup := &appsv1.StatefulSet{} + startup.Spec.Template.Spec = corev1.PodSpec{ + Containers: []corev1.Container{{ + Name: ContainerDatabase, + + SecurityContext: &corev1.SecurityContext{Privileged: new(bool)}, + VolumeMounts: []corev1.VolumeMount{ + {Name: "vm1", MountPath: "/mnt/some/such"}, + }, + }}, + Volumes: []corev1.Volume{ + { + Name: "vol2", + VolumeSource: corev1.VolumeSource{ + HostPath: new(corev1.HostPathVolumeSource), + }, + }, + }, + } + + job := reconciler.generateUpgradeJob(ctx, upgrade, startup, "") + assert.Assert(t, cmp.MarshalMatches(job, ` +apiVersion: batch/v1 +kind: Job +metadata: + creationTimestamp: null + labels: + postgres-operator.crunchydata.com/cluster: pg5 + postgres-operator.crunchydata.com/pgupgrade: pgu2 + postgres-operator.crunchydata.com/role: pgupgrade + postgres-operator.crunchydata.com/version: "25" + name: pgu2-pgdata + namespace: ns1 + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PGUpgrade + name: pgu2 + uid: uid3 +spec: + backoffLimit: 0 + template: + metadata: + creationTimestamp: null + labels: + postgres-operator.crunchydata.com/cluster: pg5 + postgres-operator.crunchydata.com/pgupgrade: pgu2 + postgres-operator.crunchydata.com/role: pgupgrade + postgres-operator.crunchydata.com/version: "25" + spec: + containers: + - command: + - bash + - -ceu + - -- + - |- + declare -r data_volume='/pgdata' old_version="$1" new_version="$2" + printf 'Performing PostgreSQL upgrade from version "%s" to "%s" ...\n\n' "$@" + gid=$(id -G); NSS_WRAPPER_GROUP=$(mktemp) + (sed "/^postgres:x:/ d; /^[^:]*:x:${gid%% *}:/ d" /etc/group + echo "postgres:x:${gid%% *}:") > "${NSS_WRAPPER_GROUP}" + uid=$(id -u); NSS_WRAPPER_PASSWD=$(mktemp) + (sed "/^postgres:x:/ d; /^[^:]*:x:${uid}:/ d" /etc/passwd + echo "postgres:x:${uid}:${gid%% *}::${data_volume}:") > "${NSS_WRAPPER_PASSWD}" + export LD_PRELOAD='libnss_wrapper.so' NSS_WRAPPER_GROUP NSS_WRAPPER_PASSWD + cd /pgdata || exit + echo -e "Step 1: Making new pgdata directory...\n" + mkdir /pgdata/pg"${new_version}" + echo -e "Step 2: Initializing new pgdata directory...\n" + /usr/pgsql-"${new_version}"/bin/initdb -k -D /pgdata/pg"${new_version}" + echo -e "\nStep 3: Setting the expected permissions on the old pgdata directory...\n" + chmod 700 /pgdata/pg"${old_version}" + echo -e "Step 4: Copying shared_preload_libraries setting to new postgresql.conf file...\n" + echo "shared_preload_libraries = '$(/usr/pgsql-"""${old_version}"""/bin/postgres -D \ 
+ /pgdata/pg"""${old_version}""" -C shared_preload_libraries)'" >> /pgdata/pg"${new_version}"/postgresql.conf + echo -e "Step 5: Running pg_upgrade check...\n" + time /usr/pgsql-"${new_version}"/bin/pg_upgrade --old-bindir /usr/pgsql-"${old_version}"/bin \ + --new-bindir /usr/pgsql-"${new_version}"/bin --old-datadir /pgdata/pg"${old_version}"\ + --new-datadir /pgdata/pg"${new_version}" --link --check + echo -e "\nStep 6: Running pg_upgrade...\n" + time /usr/pgsql-"${new_version}"/bin/pg_upgrade --old-bindir /usr/pgsql-"${old_version}"/bin \ + --new-bindir /usr/pgsql-"${new_version}"/bin --old-datadir /pgdata/pg"${old_version}" \ + --new-datadir /pgdata/pg"${new_version}" --link + echo -e "\nStep 7: Copying patroni.dynamic.json...\n" + cp /pgdata/pg"${old_version}"/patroni.dynamic.json /pgdata/pg"${new_version}" + echo -e "\npg_upgrade Job Complete!" + - upgrade + - "19" + - "25" + image: img4 + name: database + resources: + requests: + cpu: 3140m + securityContext: + privileged: false + volumeMounts: + - mountPath: /mnt/some/such + name: vm1 + restartPolicy: Never + volumes: + - hostPath: + path: "" + name: vol2 +status: {} + `)) + + tdeJob := reconciler.generateUpgradeJob(ctx, upgrade, startup, "echo testKey") + b, _ := yaml.Marshal(tdeJob) + assert.Assert(t, strings.Contains(string(b), + `/usr/pgsql-"${new_version}"/bin/initdb -k -D /pgdata/pg"${new_version}" --encryption-key-command "echo testKey"`)) +} + +func TestGenerateRemoveDataJob(t *testing.T) { + ctx := context.Background() + reconciler := &PGUpgradeReconciler{} + + upgrade := &v1beta1.PGUpgrade{} + upgrade.Namespace = "ns1" + upgrade.Name = "pgu2" + upgrade.UID = "uid3" + upgrade.Spec.Image = initialize.Pointer("img4") + upgrade.Spec.PostgresClusterName = "pg5" + upgrade.Spec.FromPostgresVersion = 19 + upgrade.Spec.ToPostgresVersion = 25 + upgrade.Spec.Resources.Requests = corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("3.14"), + } + + sts := &appsv1.StatefulSet{} + sts.Name = "sts" + sts.Spec.Template.Spec = corev1.PodSpec{ + Containers: []corev1.Container{{ + Name: ContainerDatabase, + Image: "img3", + SecurityContext: &corev1.SecurityContext{Privileged: new(bool)}, + VolumeMounts: []corev1.VolumeMount{ + {Name: "vm1", MountPath: "/mnt/some/such"}, + }, + }}, + Volumes: []corev1.Volume{ + { + Name: "vol2", + VolumeSource: corev1.VolumeSource{ + HostPath: new(corev1.HostPathVolumeSource), + }, + }, + }, + } + + job := reconciler.generateRemoveDataJob(ctx, upgrade, sts) + assert.Assert(t, cmp.MarshalMatches(job, ` +apiVersion: batch/v1 +kind: Job +metadata: + creationTimestamp: null + labels: + postgres-operator.crunchydata.com/cluster: pg5 + postgres-operator.crunchydata.com/pgupgrade: pgu2 + postgres-operator.crunchydata.com/role: removedata + name: pgu2-sts + namespace: ns1 + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PGUpgrade + name: pgu2 + uid: uid3 +spec: + backoffLimit: 0 + template: + metadata: + creationTimestamp: null + labels: + postgres-operator.crunchydata.com/cluster: pg5 + postgres-operator.crunchydata.com/pgupgrade: pgu2 + postgres-operator.crunchydata.com/role: removedata + spec: + containers: + - command: + - bash + - -ceu + - -- + - |- + declare -r old_version="$1" + printf 'Removing PostgreSQL data dir for pg%s...\n\n' "$@" + echo -e "Checking the directory exists and isn't being used...\n" + cd /pgdata || exit + if [ "$(/usr/pgsql-"${old_version}"/bin/pg_controldata /pgdata/pg"${old_version}" | grep -c 
"shut down in recovery")" -ne 1 ]; then echo -e "Directory in use, cannot remove..."; exit 1; fi + echo -e "Removing old pgdata directory...\n" + rm -rf /pgdata/pg"${old_version}" "$(realpath /pgdata/pg${old_version}/pg_wal)" + echo -e "Remove Data Job Complete!" + - remove + - "19" + image: img4 + name: database + resources: + requests: + cpu: 3140m + securityContext: + privileged: false + volumeMounts: + - mountPath: /mnt/some/such + name: vm1 + restartPolicy: Never + volumes: + - hostPath: + path: "" + name: vol2 +status: {} + `)) +} + +func TestPGUpgradeContainerImage(t *testing.T) { + upgrade := &v1beta1.PGUpgrade{} + + t.Setenv("RELATED_IMAGE_PGUPGRADE", "") + os.Unsetenv("RELATED_IMAGE_PGUPGRADE") + assert.Equal(t, pgUpgradeContainerImage(upgrade), "") + + t.Setenv("RELATED_IMAGE_PGUPGRADE", "") + assert.Equal(t, pgUpgradeContainerImage(upgrade), "") + + t.Setenv("RELATED_IMAGE_PGUPGRADE", "env-var-pgbackrest") + assert.Equal(t, pgUpgradeContainerImage(upgrade), "env-var-pgbackrest") + + assert.NilError(t, yaml.Unmarshal( + []byte(`{ image: spec-image }`), &upgrade.Spec)) + assert.Equal(t, pgUpgradeContainerImage(upgrade), "spec-image") +} + +func TestVerifyUpgradeImageValue(t *testing.T) { + upgrade := &v1beta1.PGUpgrade{} + + t.Run("crunchy-postgres", func(t *testing.T) { + t.Setenv("RELATED_IMAGE_PGUPGRADE", "") + os.Unsetenv("RELATED_IMAGE_PGUPGRADE") + err := verifyUpgradeImageValue(upgrade) + assert.ErrorContains(t, err, "crunchy-upgrade") + }) + +} diff --git a/internal/controller/pgupgrade/labels.go b/internal/controller/pgupgrade/labels.go new file mode 100644 index 0000000000..187fe6bf6f --- /dev/null +++ b/internal/controller/pgupgrade/labels.go @@ -0,0 +1,42 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgupgrade + +import ( + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // ConditionPGUpgradeProgressing is the type used in a condition to indicate that + // an Postgres major upgrade is in progress. + ConditionPGUpgradeProgressing = "Progressing" + + // ConditionPGUpgradeSucceeded is the type used in a condition to indicate the + // status of a Postgres major upgrade. + ConditionPGUpgradeSucceeded = "Succeeded" + + labelPrefix = "postgres-operator.crunchydata.com/" + LabelPGUpgrade = labelPrefix + "pgupgrade" + LabelCluster = labelPrefix + "cluster" + LabelRole = labelPrefix + "role" + LabelVersion = labelPrefix + "version" + LabelPatroni = labelPrefix + "patroni" + LabelPGBackRestBackup = labelPrefix + "pgbackrest-backup" + LabelInstance = labelPrefix + "instance" + + ReplicaCreate = "replica-create" + ContainerDatabase = "database" + + pgUpgrade = "pgupgrade" + removeData = "removedata" +) + +func commonLabels(role string, upgrade *v1beta1.PGUpgrade) map[string]string { + return map[string]string{ + LabelPGUpgrade: upgrade.Name, + LabelCluster: upgrade.Spec.PostgresClusterName, + LabelRole: role, + } +} diff --git a/internal/controller/pgupgrade/pgupgrade_controller.go b/internal/controller/pgupgrade/pgupgrade_controller.go new file mode 100644 index 0000000000..d6d145b793 --- /dev/null +++ b/internal/controller/pgupgrade/pgupgrade_controller.go @@ -0,0 +1,513 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgupgrade + +import ( + "context" + "fmt" + + "github.com/pkg/errors" + batchv1 "k8s.io/api/batch/v1" + "k8s.io/apimachinery/pkg/api/equality" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/tools/record" + "k8s.io/client-go/util/workqueue" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/handler" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/registration" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + AnnotationAllowUpgrade = "postgres-operator.crunchydata.com/allow-upgrade" +) + +// PGUpgradeReconciler reconciles a PGUpgrade object +type PGUpgradeReconciler struct { + Client client.Client + Owner client.FieldOwner + + Recorder record.EventRecorder + Registration registration.Registration +} + +//+kubebuilder:rbac:groups="batch",resources="jobs",verbs={list,watch} +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgupgrades",verbs={list,watch} +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="postgresclusters",verbs={list,watch} + +// SetupWithManager sets up the controller with the Manager. +func (r *PGUpgradeReconciler) SetupWithManager(mgr ctrl.Manager) error { + return ctrl.NewControllerManagedBy(mgr). + For(&v1beta1.PGUpgrade{}). + Owns(&batchv1.Job{}). + Watches( + v1beta1.NewPostgresCluster(), + r.watchPostgresClusters(), + ). + Complete(r) +} + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgupgrades",verbs={list} + +// findUpgradesForPostgresCluster returns PGUpgrades that target cluster. +func (r *PGUpgradeReconciler) findUpgradesForPostgresCluster( + ctx context.Context, cluster client.ObjectKey, +) []*v1beta1.PGUpgrade { + var matching []*v1beta1.PGUpgrade + var upgrades v1beta1.PGUpgradeList + + // NOTE: If this becomes slow due to a large number of upgrades in a single + // namespace, we can configure the [ctrl.Manager] field indexer and pass a + // [fields.Selector] here. + // - https://book.kubebuilder.io/reference/watching-resources/externally-managed.html + if r.Client.List(ctx, &upgrades, &client.ListOptions{ + Namespace: cluster.Namespace, + }) == nil { + for i := range upgrades.Items { + if upgrades.Items[i].Spec.PostgresClusterName == cluster.Name { + matching = append(matching, &upgrades.Items[i]) + } + } + } + return matching +} + +// watchPostgresClusters returns a [handler.EventHandler] for PostgresClusters. 
+func (r *PGUpgradeReconciler) watchPostgresClusters() handler.Funcs { + handle := func(ctx context.Context, cluster client.Object, q workqueue.RateLimitingInterface) { + key := client.ObjectKeyFromObject(cluster) + + for _, upgrade := range r.findUpgradesForPostgresCluster(ctx, key) { + q.Add(ctrl.Request{ + NamespacedName: client.ObjectKeyFromObject(upgrade), + }) + } + } + + return handler.Funcs{ + CreateFunc: func(ctx context.Context, e event.CreateEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.Object, q) + }, + UpdateFunc: func(ctx context.Context, e event.UpdateEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.ObjectNew, q) + }, + DeleteFunc: func(ctx context.Context, e event.DeleteEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.Object, q) + }, + } +} + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgupgrades",verbs={get} +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgupgrades/status",verbs={patch} +//+kubebuilder:rbac:groups="batch",resources="jobs",verbs={delete} +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="postgresclusters",verbs={get} +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="postgresclusters/status",verbs={patch} +//+kubebuilder:rbac:groups="batch",resources="jobs",verbs={create,patch} +//+kubebuilder:rbac:groups="batch",resources="jobs",verbs={list} +//+kubebuilder:rbac:groups="",resources="endpoints",verbs={get} +//+kubebuilder:rbac:groups="",resources="endpoints",verbs={delete} + +// Reconcile does the work to move the current state of the world toward the +// desired state described in a [v1beta1.PGUpgrade] identified by req. +func (r *PGUpgradeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (result ctrl.Result, err error) { + log := ctrl.LoggerFrom(ctx) + + // Retrieve the upgrade from the client cache, if it exists. A deferred + // function below will send any changes to its Status field. + // + // NOTE: No DeepCopy is necessary here because controller-runtime makes a + // copy before returning from its cache. + // - https://github.com/kubernetes-sigs/controller-runtime/issues/1235 + upgrade := &v1beta1.PGUpgrade{} + err = r.Client.Get(ctx, req.NamespacedName, upgrade) + + if err == nil { + // Write any changes to the upgrade status on the way out. + before := upgrade.DeepCopy() + defer func() { + if !equality.Semantic.DeepEqual(before.Status, upgrade.Status) { + status := r.Client.Status().Patch(ctx, upgrade, client.MergeFrom(before), r.Owner) + + if err == nil && status != nil { + err = status + } else if status != nil { + log.Error(status, "Patching PGUpgrade status") + } + } + }() + } else { + // NotFound cannot be fixed by requeuing so ignore it. During background + // deletion, we receive delete events from upgrade's dependents after + // upgrade is deleted. + return ctrl.Result{}, client.IgnoreNotFound(err) + } + + // Validate the remainder of the upgrade specification. These can likely + // move to CEL rules or a webhook when supported. + + // Exit if upgrade success condition has already been reached. + // If a cluster needs multiple upgrades, it is currently only possible to delete and + // create a new pgupgrade rather than edit an existing succeeded upgrade. + // This controller may be changed in the future to allow multiple uses of + // a single pgupgrade; if that is the case, it will probably need to reset + // the succeeded condition and remove upgrade and removedata jobs. 
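For orientation, here is a minimal sketch (not part of this change) of how a caller outside the controller could interpret the two condition types defined in labels.go; the function name and return strings are illustrative only.

```go
// Sketch: reading the conditions this controller maintains on a PGUpgrade.
// Uses only identifiers defined in this package; describeUpgrade is hypothetical.
package pgupgrade

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1"
)

func describeUpgrade(upgrade *v1beta1.PGUpgrade) string {
	succeeded := meta.FindStatusCondition(upgrade.Status.Conditions, ConditionPGUpgradeSucceeded)
	progressing := meta.FindStatusCondition(upgrade.Status.Conditions, ConditionPGUpgradeProgressing)

	switch {
	case succeeded != nil && succeeded.Status == metav1.ConditionTrue:
		// Terminal state: a further major upgrade needs a new PGUpgrade object.
		return "upgrade succeeded"
	case progressing != nil && progressing.Status == metav1.ConditionFalse:
		// Reason and Message identify the unmet precondition,
		// e.g. "PGClusterNotShutdown" or "PGClusterMissingRequiredAnnotation".
		return fmt.Sprintf("upgrade blocked: %s: %s", progressing.Reason, progressing.Message)
	default:
		return "upgrade in progress"
	}
}
```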
+ succeeded := meta.FindStatusCondition(upgrade.Status.Conditions, + ConditionPGUpgradeSucceeded) + if succeeded != nil && succeeded.Reason == "PGUpgradeSucceeded" { + return + } + + if !r.UpgradeAuthorized(upgrade) { + return ctrl.Result{}, nil + } + + // Set progressing condition to true if it doesn't exist already + setStatusToProgressingIfReasonWas("", upgrade) + + // The "from" version must be smaller than the "to" version. + // An invalid PGUpgrade should not be requeued. + if upgrade.Spec.FromPostgresVersion >= upgrade.Spec.ToPostgresVersion { + + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.GetGeneration(), + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionFalse, + Reason: "PGUpgradeInvalid", + Message: fmt.Sprintf( + "Cannot upgrade from postgres version %d to %d", + upgrade.Spec.FromPostgresVersion, upgrade.Spec.ToPostgresVersion), + }) + + return ctrl.Result{}, nil + } + + if err = verifyUpgradeImageValue(upgrade); err != nil { + + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.GetGeneration(), + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionFalse, + Reason: "PGUpgradeInvalid", + Message: fmt.Sprintf("Error: %s", err), + }) + + return ctrl.Result{}, nil + } + + setStatusToProgressingIfReasonWas("PGUpgradeInvalid", upgrade) + + // Observations and cluster validation + // + // First, read everything we need from the API. Compare the state of the + // world to the upgrade specification, perform any remaining validation. + world, err := r.observeWorld(ctx, upgrade) + // If `observeWorld` returns an error, then exit early. + // If we do not exit here, err is assumed to be nil + if err != nil { + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.Generation, + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionFalse, + Reason: "PGClusterErrorWhenObservingWorld", + Message: err.Error(), + }) + + return // FIXME + } + + setStatusToProgressingIfReasonWas("PGClusterErrorWhenObservingWorld", upgrade) + + // ClusterNotFound cannot be fixed by requeuing. We will reconcile again when + // a matching PostgresCluster is created. Set a condition about our + // inability to proceed.
+ if world.ClusterNotFound != nil { + + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.Generation, + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionFalse, + Reason: "PGClusterNotFound", + Message: world.ClusterNotFound.Error(), + }) + + return ctrl.Result{}, nil + } + + setStatusToProgressingIfReasonWas("PGClusterNotFound", upgrade) + + // Get the spec version to check if this cluster is at the requested version + version := int64(world.Cluster.Spec.PostgresVersion) + + // Get the status version and check the jobs to see if this upgrade has completed + statusVersion := int64(world.Cluster.Status.PostgresVersion) + upgradeJob := world.Jobs[pgUpgradeJob(upgrade).Name] + upgradeJobComplete := upgradeJob != nil && + jobCompleted(upgradeJob) + upgradeJobFailed := upgradeJob != nil && + jobFailed(upgradeJob) + + var removeDataJobsFailed bool + var removeDataJobsCompleted []*batchv1.Job + for _, job := range world.Jobs { + if job.GetLabels()[LabelRole] == removeData { + if jobCompleted(job) { + removeDataJobsCompleted = append(removeDataJobsCompleted, job) + } else if jobFailed(job) { + removeDataJobsFailed = true + break + } + } + } + removeDataJobsComplete := len(removeDataJobsCompleted) == world.ReplicasExpected + + // If the PostgresCluster is already set to the desired version, but the upgradejob has + // not completed successfully, the operator assumes that the cluster is already + // running the desired version. We consider this a no-op rather than a successful upgrade. + // Documentation should make it clear that the PostgresCluster postgresVersion + // should be updated _after_ the upgrade is considered successful. + if version == int64(upgrade.Spec.ToPostgresVersion) && !upgradeJobComplete { + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.Generation, + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionFalse, + Reason: "PGUpgradeResolved", + Message: fmt.Sprintf( + "PostgresCluster %s is already running version %d", + upgrade.Spec.PostgresClusterName, upgrade.Spec.ToPostgresVersion), + }) + + return ctrl.Result{}, nil + } + + // This condition is unlikely to ever need to be changed, but is added just in case. + setStatusToProgressingIfReasonWas("PGUpgradeResolved", upgrade) + + if statusVersion == int64(upgrade.Spec.ToPostgresVersion) { + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.Generation, + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionFalse, + Reason: "PGUpgradeCompleted", + Message: fmt.Sprintf( + "PostgresCluster %s is running version %d", + upgrade.Spec.PostgresClusterName, upgrade.Spec.ToPostgresVersion), + }) + + if upgradeJobComplete && removeDataJobsComplete { + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.Generation, + Type: ConditionPGUpgradeSucceeded, + Status: metav1.ConditionTrue, + Reason: "PGUpgradeSucceeded", + Message: fmt.Sprintf( + "PostgresCluster %s is ready to complete upgrade to version %d", + upgrade.Spec.PostgresClusterName, upgrade.Spec.ToPostgresVersion), + }) + } + + return ctrl.Result{}, nil + } + + // The upgrade needs to manipulate the data directory of the primary while + // Postgres is stopped. Wait until all instances are gone and the primary + // is identified. 
+ // + // Requiring the cluster be shutdown also provides some assurance that the + // user understands downtime requirement of upgrading + if !world.ClusterShutdown { + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.Generation, + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionFalse, + Reason: "PGClusterNotShutdown", + Message: "PostgresCluster instances still running", + }) + + return ctrl.Result{}, nil + } + + setStatusToProgressingIfReasonWas("PGClusterNotShutdown", upgrade) + + // A separate check for primary identification allows for cases where the + // PostgresCluster may not have been initialized properly. + if world.ClusterPrimary == nil { + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.Generation, + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionFalse, + Reason: "PGClusterPrimaryNotIdentified", + Message: "PostgresCluster primary instance not identified", + }) + + return ctrl.Result{}, nil + } + + setStatusToProgressingIfReasonWas("PGClusterPrimaryNotIdentified", upgrade) + + if version != int64(upgrade.Spec.FromPostgresVersion) && + statusVersion != int64(upgrade.Spec.ToPostgresVersion) { + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.Generation, + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionFalse, + Reason: "PGUpgradeInvalidForCluster", + Message: fmt.Sprintf( + "Current postgres version is %d, but upgrade expected %d", + version, upgrade.Spec.FromPostgresVersion), + }) + + return ctrl.Result{}, nil + } + + setStatusToProgressingIfReasonWas("PGUpgradeInvalidForCluster", upgrade) + + // Each upgrade can specify one cluster, but we also want to ensure that + // each cluster is managed by at most one upgrade. Check that the specified + // cluster is annotated with the name of *this* upgrade. + // + // Having an annotation on the cluster also provides some assurance that + // the user that created the upgrade also has authority to create or edit + // the cluster. + + if allowed := world.Cluster.GetAnnotations()[AnnotationAllowUpgrade] == upgrade.Name; !allowed { + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.Generation, + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionFalse, + Reason: "PGClusterMissingRequiredAnnotation", + Message: fmt.Sprintf( + "PostgresCluster %s lacks annotation for upgrade %s", + upgrade.Spec.PostgresClusterName, upgrade.GetName()), + }) + + return ctrl.Result{}, nil + } + + setStatusToProgressingIfReasonWas("PGClusterMissingRequiredAnnotation", upgrade) + + // Currently our jobs are set to only run once, so if any job has failed, the + // upgrade has failed. + if upgradeJobFailed || removeDataJobsFailed { + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.Generation, + Type: ConditionPGUpgradeSucceeded, + Status: metav1.ConditionFalse, + Reason: "PGUpgradeFailed", + Message: "Upgrade jobs failed, please check individual pod logs", + }) + + return ctrl.Result{}, nil + } + + // If we have reached this point, all preconditions for upgrade are satisfied. 
+ // If the jobs have already run to completion + // - delete the replica-create jobs to kick off a backup + // - delete the PostgresCluster.Status.Repos to kick off a reconcile + if upgradeJobComplete && removeDataJobsComplete && + statusVersion != int64(upgrade.Spec.ToPostgresVersion) { + + // Patroni will try to recreate replicas using pgBackRest. Convince PGO to + // take a recent backup by deleting its "replica-create" jobs. + for _, object := range world.Jobs { + if backup := object.Labels[LabelPGBackRestBackup]; err == nil && + backup == ReplicaCreate { + + uid := object.GetUID() + version := object.GetResourceVersion() + exactly := client.Preconditions{UID: &uid, ResourceVersion: &version} + // Jobs default to an `orphanDependents` policy, orphaning pods after deletion. + // We don't want that, so we set the delete policy explicitly. + // - https://kubernetes.io/docs/concepts/workloads/controllers/job/ + // - https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/batch/job/strategy.go#L58 + propagate := client.PropagationPolicy(metav1.DeletePropagationBackground) + err = client.IgnoreNotFound(r.Client.Delete(ctx, object, exactly, propagate)) + } + } + + if err == nil { + patch := world.Cluster.DeepCopy() + + // Set the cluster status when we know the upgrade has completed successfully. + // This will serve to help the user see that the upgrade has completed if they + // are only watching the PostgresCluster + patch.Status.PostgresVersion = upgrade.Spec.ToPostgresVersion + + // Set the pgBackRest status for bootstrapping + patch.Status.PGBackRest.Repos = []v1beta1.RepoStatus{} + + err = r.Client.Status().Patch(ctx, patch, client.MergeFrom(world.Cluster), r.Owner) + } + + return ctrl.Result{}, err + } + + // TODO: error from apply could mean that the job exists with a different spec. + if err == nil && !upgradeJobComplete { + err = errors.WithStack(r.apply(ctx, + r.generateUpgradeJob(ctx, upgrade, world.ClusterPrimary, config.FetchKeyCommand(&world.Cluster.Spec)))) + } + + // Create the jobs to remove the data from the replicas, as long as + // the upgrade job has completed. + // (When the cluster is not shutdown, the `world.ClusterReplicas` will be [], + // so there should be no danger of accidentally targeting the primary.) + if err == nil && upgradeJobComplete && !removeDataJobsComplete { + for _, sts := range world.ClusterReplicas { + if err == nil { + err = r.apply(ctx, r.generateRemoveDataJob(ctx, upgrade, sts)) + } + } + } + + // The upgrade job generates a new system identifier for this cluster. + // Clear the old identifier from Patroni by deleting its DCS Endpoints. + // This is safe to do this when all Patroni processes are stopped + // (ClusterShutdown) and PGO has identified a leader to start first + // (ClusterPrimary). + // - https://github.com/zalando/patroni/blob/v2.1.2/docs/existing_data.rst + // + // TODO(cbandy): This works only when using Kubernetes Endpoints for DCS. + if len(world.PatroniEndpoints) > 0 { + for _, object := range world.PatroniEndpoints { + uid := object.GetUID() + version := object.GetResourceVersion() + exactly := client.Preconditions{UID: &uid, ResourceVersion: &version} + err = client.IgnoreNotFound(r.Client.Delete(ctx, object, exactly)) + } + + // Requeue to verify that Patroni endpoints are deleted + return runtime.RequeueWithBackoff(), err // FIXME + } + + // TODO: write upgradeJob back to world? No, we will wake and see it when it + // has some progress. OTOH, whatever we just wrote has the latest metadata.generation. 
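As a usage note, the preconditions enforced above roughly correspond to the following client-side preparation. This is a hedged sketch, not part of the patch: the namespace, cluster name, upgrade name, and version numbers are illustrative, and prepareClusterForUpgrade is a hypothetical helper.

```go
// Sketch: the cluster state the checks above require before the upgrade Job is created.
package pgupgrade

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"

	"github.com/crunchydata/postgres-operator/internal/initialize"
	"github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1"
)

func prepareClusterForUpgrade(ctx context.Context, c client.Client) error {
	// 1. Shut the cluster down and allow exactly one named upgrade.
	cluster := v1beta1.NewPostgresCluster()
	if err := c.Get(ctx, client.ObjectKey{Namespace: "ns1", Name: "pg5"}, cluster); err != nil {
		return err
	}
	if cluster.Annotations == nil {
		cluster.Annotations = map[string]string{}
	}
	cluster.Annotations[AnnotationAllowUpgrade] = "pgu2" // must equal the PGUpgrade name
	cluster.Spec.Shutdown = initialize.Bool(true)        // Reconcile waits for ClusterShutdown
	if err := c.Update(ctx, cluster); err != nil {
		return err
	}

	// 2. Create the PGUpgrade naming the cluster and both versions.
	upgrade := &v1beta1.PGUpgrade{}
	upgrade.Namespace, upgrade.Name = "ns1", "pgu2"
	upgrade.Spec.PostgresClusterName = "pg5"
	upgrade.Spec.FromPostgresVersion = 15 // must match the cluster's current spec version
	upgrade.Spec.ToPostgresVersion = 16   // must be greater than the "from" version
	return c.Create(ctx, upgrade)
}
```

Once the upgrade and removedata Jobs complete, the user is expected to update the PostgresCluster's spec.postgresVersion to the new version and clear Shutdown to restart the cluster, as the comments in Reconcile describe.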
+ // TODO: consider what it means to "re-use" the same PGUpgrade for more than + // one postgres version. Should the job name include the version number? + + log.Info("Reconciled", "requeue", !result.IsZero() || err != nil) + return +} + +func setStatusToProgressingIfReasonWas(reason string, upgrade *v1beta1.PGUpgrade) { + progressing := meta.FindStatusCondition(upgrade.Status.Conditions, + ConditionPGUpgradeProgressing) + if progressing == nil || (progressing != nil && progressing.Reason == reason) { + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + ObservedGeneration: upgrade.GetGeneration(), + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionTrue, + Reason: "PGUpgradeProgressing", + Message: fmt.Sprintf( + "Upgrade progressing for cluster %s", + upgrade.Spec.PostgresClusterName), + }) + } +} diff --git a/internal/controller/pgupgrade/registration.go b/internal/controller/pgupgrade/registration.go new file mode 100644 index 0000000000..05d0d80cbd --- /dev/null +++ b/internal/controller/pgupgrade/registration.go @@ -0,0 +1,27 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgupgrade + +import ( + "k8s.io/apimachinery/pkg/api/meta" + + "github.com/crunchydata/postgres-operator/internal/registration" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func (r *PGUpgradeReconciler) UpgradeAuthorized(upgrade *v1beta1.PGUpgrade) bool { + // Allow an upgrade in progress to complete, when the registration requirement is introduced. + // But don't allow new upgrades to be started until a valid token is applied. + progressing := meta.FindStatusCondition(upgrade.Status.Conditions, ConditionPGUpgradeProgressing) != nil + required := r.Registration.Required(r.Recorder, upgrade, &upgrade.Status.Conditions) + + // If a valid token has not been applied, warn the user. + if required && !progressing { + registration.SetRequiredWarning(r.Recorder, upgrade, &upgrade.Status.Conditions) + return false + } + + return true +} diff --git a/internal/controller/pgupgrade/registration_test.go b/internal/controller/pgupgrade/registration_test.go new file mode 100644 index 0000000000..dc3a4144bc --- /dev/null +++ b/internal/controller/pgupgrade/registration_test.go @@ -0,0 +1,95 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgupgrade + +import ( + "testing" + + "gotest.tools/v3/assert" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/registration" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/events" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestUpgradeAuthorized(t *testing.T) { + t.Run("UpgradeAlreadyInProgress", func(t *testing.T) { + reconciler := new(PGUpgradeReconciler) + upgrade := new(v1beta1.PGUpgrade) + + for _, required := range []bool{false, true} { + reconciler.Registration = registration.RegistrationFunc( + func(record.EventRecorder, client.Object, *[]metav1.Condition) bool { + return required + }) + + meta.SetStatusCondition(&upgrade.Status.Conditions, metav1.Condition{ + Type: ConditionPGUpgradeProgressing, + Status: metav1.ConditionTrue, + }) + + result := reconciler.UpgradeAuthorized(upgrade) + assert.Assert(t, result, "expected signal to proceed") + + progressing := meta.FindStatusCondition(upgrade.Status.Conditions, ConditionPGUpgradeProgressing) + assert.Equal(t, progressing.Status, metav1.ConditionTrue) + } + }) + + t.Run("RegistrationRequired", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + upgrade := new(v1beta1.PGUpgrade) + upgrade.Name = "some-upgrade" + + reconciler := PGUpgradeReconciler{ + Recorder: recorder, + Registration: registration.RegistrationFunc( + func(record.EventRecorder, client.Object, *[]metav1.Condition) bool { + return true + }), + } + + meta.RemoveStatusCondition(&upgrade.Status.Conditions, ConditionPGUpgradeProgressing) + + result := reconciler.UpgradeAuthorized(upgrade) + assert.Assert(t, !result, "expected signal to not proceed") + + condition := meta.FindStatusCondition(upgrade.Status.Conditions, v1beta1.Registered) + if assert.Check(t, condition != nil) { + assert.Equal(t, condition.Status, metav1.ConditionFalse) + } + + if assert.Check(t, len(recorder.Events) > 0) { + assert.Equal(t, recorder.Events[0].Type, "Warning") + assert.Equal(t, recorder.Events[0].Regarding.Kind, "PGUpgrade") + assert.Equal(t, recorder.Events[0].Regarding.Name, "some-upgrade") + assert.Assert(t, cmp.Contains(recorder.Events[0].Note, "requires")) + } + }) + + t.Run("RegistrationCompleted", func(t *testing.T) { + reconciler := new(PGUpgradeReconciler) + upgrade := new(v1beta1.PGUpgrade) + + called := false + reconciler.Registration = registration.RegistrationFunc( + func(record.EventRecorder, client.Object, *[]metav1.Condition) bool { + called = true + return false + }) + + meta.RemoveStatusCondition(&upgrade.Status.Conditions, ConditionPGUpgradeProgressing) + + result := reconciler.UpgradeAuthorized(upgrade) + assert.Assert(t, result, "expected signal to proceed") + assert.Assert(t, called, "expected registration package to clear conditions") + }) +} diff --git a/internal/controller/pgupgrade/utils.go b/internal/controller/pgupgrade/utils.go new file mode 100644 index 0000000000..292107e440 --- /dev/null +++ b/internal/controller/pgupgrade/utils.go @@ -0,0 +1,64 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgupgrade + +import ( + "os" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// The owner reference created by controllerutil.SetControllerReference blocks +// deletion. The OwnerReferencesPermissionEnforcement plugin requires that the +// creator of such a reference have either "delete" permission on the owner or +// "update" permission on the owner's "finalizers" subresource. +// - https://docs.k8s.io/reference/access-authn-authz/admission-controllers/ +// +kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgupgrades/finalizers",verbs={update} + +// setControllerReference sets owner as a Controller OwnerReference on controlled. +// It panics if another controller is already set. +func (r *PGUpgradeReconciler) setControllerReference( + owner *v1beta1.PGUpgrade, controlled client.Object, +) { + if metav1.GetControllerOf(controlled) != nil { + panic(controllerutil.SetControllerReference(owner, controlled, r.Client.Scheme())) + } + + controlled.SetOwnerReferences(append( + controlled.GetOwnerReferences(), + metav1.OwnerReference{ + APIVersion: v1beta1.GroupVersion.String(), + Kind: "PGUpgrade", + Name: owner.GetName(), + UID: owner.GetUID(), + BlockOwnerDeletion: initialize.Pointer(true), + Controller: initialize.Pointer(true), + }, + )) +} + +// Merge takes sets of labels and merges them. The last set +// provided will win in case of conflicts. +func Merge(sets ...map[string]string) labels.Set { + merged := labels.Set{} + for _, set := range sets { + merged = labels.Merge(merged, set) + } + return merged +} + +// defaultFromEnv reads the environment variable key when value is empty. +func defaultFromEnv(value, key string) string { + if value == "" { + return os.Getenv(key) + } + return value +} diff --git a/internal/controller/pgupgrade/world.go b/internal/controller/pgupgrade/world.go new file mode 100644 index 0000000000..18d056fe25 --- /dev/null +++ b/internal/controller/pgupgrade/world.go @@ -0,0 +1,175 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgupgrade + +import ( + "context" + + "github.com/pkg/errors" + appsv1 "k8s.io/api/apps/v1" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/labels" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// The client used by the controller sets up a cache and an informer for any GVK +// that it GETs. That informer needs the "watch" permission. 
+// - https://github.com/kubernetes-sigs/controller-runtime/issues/1249 +// - https://github.com/kubernetes-sigs/controller-runtime/issues/1454 +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="postgresclusters",verbs={get,watch} +//+kubebuilder:rbac:groups="",resources="endpoints",verbs={list,watch} +//+kubebuilder:rbac:groups="batch",resources="jobs",verbs={list,watch} +//+kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={list,watch} + +func (r *PGUpgradeReconciler) observeWorld( + ctx context.Context, upgrade *v1beta1.PGUpgrade, +) (*World, error) { + selectCluster := labels.SelectorFromSet(labels.Set{ + LabelCluster: upgrade.Spec.PostgresClusterName, + }) + + world := NewWorld() + world.Upgrade = upgrade + + cluster := v1beta1.NewPostgresCluster() + err := errors.WithStack( + r.Client.Get(ctx, client.ObjectKey{ + Namespace: upgrade.Namespace, + Name: upgrade.Spec.PostgresClusterName, + }, cluster)) + err = world.populateCluster(cluster, err) + + if err == nil { + var endpoints corev1.EndpointsList + err = errors.WithStack( + r.Client.List(ctx, &endpoints, + client.InNamespace(upgrade.Namespace), + client.MatchingLabelsSelector{Selector: selectCluster}, + )) + world.populatePatroniEndpoints(endpoints.Items) + } + + if err == nil { + var jobs batchv1.JobList + err = errors.WithStack( + r.Client.List(ctx, &jobs, + client.InNamespace(upgrade.Namespace), + client.MatchingLabelsSelector{Selector: selectCluster}, + )) + for i := range jobs.Items { + world.Jobs[jobs.Items[i].Name] = &jobs.Items[i] + } + } + + if err == nil { + var statefulsets appsv1.StatefulSetList + err = errors.WithStack( + r.Client.List(ctx, &statefulsets, + client.InNamespace(upgrade.Namespace), + client.MatchingLabelsSelector{Selector: selectCluster}, + )) + world.populateStatefulSets(statefulsets.Items) + } + + if err == nil { + world.populateShutdown() + } + + return world, err +} + +func (w *World) populateCluster(cluster *v1beta1.PostgresCluster, err error) error { + if err == nil { + w.Cluster = cluster + w.ClusterNotFound = nil + + } else if apierrors.IsNotFound(err) { + w.Cluster = nil + w.ClusterNotFound = err + err = nil + } + return err +} + +func (w *World) populatePatroniEndpoints(endpoints []corev1.Endpoints) { + for index, endpoint := range endpoints { + if endpoint.Labels[LabelPatroni] != "" { + w.PatroniEndpoints = append(w.PatroniEndpoints, &endpoints[index]) + } + } +} + +// populateStatefulSets assigns +// a) the expected number of replicas -- the number of StatefulSets that have the expected +// LabelInstance label, minus 1 (for the primary) +// b) the primary StatefulSet and replica StatefulSets if the cluster is shutdown. +// When the cluster is not shutdown, we cannot verify which StatefulSet is the primary. +func (w *World) populateStatefulSets(statefulSets []appsv1.StatefulSet) { + w.ReplicasExpected = -1 + if w.Cluster != nil { + startup := w.Cluster.Status.StartupInstance + for index, sts := range statefulSets { + if sts.Labels[LabelInstance] != "" { + w.ReplicasExpected++ + if startup != "" { + switch sts.Name { + case startup: + w.ClusterPrimary = &statefulSets[index] + default: + w.ClusterReplicas = append(w.ClusterReplicas, &statefulSets[index]) + } + } + } + } + } +} + +func (w *World) populateShutdown() { + if w.Cluster != nil { + status := w.Cluster.Status + generation := status.ObservedGeneration + + // The cluster is "shutdown" only when it is specified *and* the status + // indicates all instances are stopped. 
+ shutdownValue := w.Cluster.Spec.Shutdown + if shutdownValue != nil { + w.ClusterShutdown = *shutdownValue + } else { + w.ClusterShutdown = false + } + w.ClusterShutdown = w.ClusterShutdown && generation == w.Cluster.GetGeneration() + + sets := status.InstanceSets + for _, set := range sets { + if n := set.Replicas; n != 0 { + w.ClusterShutdown = false + } + } + } +} + +type World struct { + Cluster *v1beta1.PostgresCluster + Upgrade *v1beta1.PGUpgrade + + ClusterNotFound error + ClusterPrimary *appsv1.StatefulSet + ClusterReplicas []*appsv1.StatefulSet + ClusterShutdown bool + ReplicasExpected int + + PatroniEndpoints []*corev1.Endpoints + Jobs map[string]*batchv1.Job +} + +func NewWorld() *World { + return &World{ + Jobs: make(map[string]*batchv1.Job), + } +} diff --git a/internal/controller/pgupgrade/world_test.go b/internal/controller/pgupgrade/world_test.go new file mode 100644 index 0000000000..4aa24f714d --- /dev/null +++ b/internal/controller/pgupgrade/world_test.go @@ -0,0 +1,230 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgupgrade + +import ( + "fmt" + "testing" + + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime/schema" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestPopulateCluster(t *testing.T) { + t.Run("Found", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + cluster.SetName("cluster") + + world := NewWorld() + err := world.populateCluster(cluster, nil) + + assert.NilError(t, err) + assert.Equal(t, world.Cluster, cluster) + assert.Assert(t, world.ClusterNotFound == nil) + }) + + t.Run("NotFound", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + expected := apierrors.NewNotFound(schema.GroupResource{}, "name") + + world := NewWorld() + err := world.populateCluster(cluster, expected) + + assert.NilError(t, err, "NotFound is handled") + assert.Assert(t, world.Cluster == nil) + assert.Equal(t, world.ClusterNotFound, expected) + }) + + t.Run("Error", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + expected := fmt.Errorf("danger") + + world := NewWorld() + err := world.populateCluster(cluster, expected) + + assert.Equal(t, err, expected) + assert.Assert(t, world.Cluster == nil) + assert.Assert(t, world.ClusterNotFound == nil) + }) +} + +func TestPopulatePatroniEndpoint(t *testing.T) { + endpoints := []corev1.Endpoints{ + { + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + LabelPatroni: "west", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + LabelPatroni: "east", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "different-label": "north", + }, + }, + }, + } + + world := NewWorld() + world.populatePatroniEndpoints(endpoints) + + // The first two have the correct labels. 
+ assert.DeepEqual(t, world.PatroniEndpoints, []*corev1.Endpoints{ + &endpoints[0], + &endpoints[1], + }) +} + +func TestPopulateShutdown(t *testing.T) { + t.Run("NoCluster", func(t *testing.T) { + world := NewWorld() + + world.populateShutdown() + assert.Assert(t, !world.ClusterShutdown) + }) + + t.Run("NotShutdown", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + cluster.Spec.Shutdown = initialize.Bool(false) + + world := NewWorld() + world.Cluster = cluster + + world.populateShutdown() + assert.Assert(t, !world.ClusterShutdown) + }) + + t.Run("OldStatus", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + cluster.SetGeneration(99) + cluster.Spec.Shutdown = initialize.Bool(true) + cluster.Status.ObservedGeneration = 21 + + world := NewWorld() + world.Cluster = cluster + + world.populateShutdown() + assert.Assert(t, !world.ClusterShutdown) + }) + + t.Run("InstancesRunning", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + cluster.SetGeneration(99) + cluster.Spec.Shutdown = initialize.Bool(true) + cluster.Status.ObservedGeneration = 99 + cluster.Status.InstanceSets = []v1beta1.PostgresInstanceSetStatus{{Replicas: 2}} + + world := NewWorld() + world.Cluster = cluster + + world.populateShutdown() + assert.Assert(t, !world.ClusterShutdown) + }) + + t.Run("InstancesStopped", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + cluster.SetGeneration(99) + cluster.Spec.Shutdown = initialize.Bool(true) + cluster.Status.ObservedGeneration = 99 + cluster.Status.InstanceSets = []v1beta1.PostgresInstanceSetStatus{{Replicas: 0}} + + world := NewWorld() + world.Cluster = cluster + + world.populateShutdown() + assert.Assert(t, world.ClusterShutdown) + }) +} + +func TestPopulateStatefulSets(t *testing.T) { + t.Run("NoPopulatesWithoutStartupGiven", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + world := NewWorld() + world.Cluster = cluster + + primary := appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "the-one", + Labels: map[string]string{ + LabelInstance: "whatever", + }, + }, + } + replica := appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "something-else", + Labels: map[string]string{ + LabelInstance: "whatever", + }, + }, + } + other := appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "repo-host", + Labels: map[string]string{ + "other-label": "other", + }, + }, + } + world.populateStatefulSets([]appsv1.StatefulSet{primary, replica, other}) + + assert.Assert(t, world.ClusterPrimary == nil) + assert.Assert(t, world.ClusterReplicas == nil) + assert.Assert(t, world.ReplicasExpected == 1) + }) + + t.Run("PopulatesWithStartupGiven", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + cluster.Status.StartupInstance = "the-one" + + world := NewWorld() + world.Cluster = cluster + + primary := appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "the-one", + Labels: map[string]string{ + LabelInstance: "whatever", + }, + }, + } + replica := appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "something-else", + Labels: map[string]string{ + LabelInstance: "whatever", + }, + }, + } + other := appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "repo-host", + Labels: map[string]string{ + "other-label": "other", + }, + }, + } + world.populateStatefulSets([]appsv1.StatefulSet{primary, replica, other}) + + assert.DeepEqual(t, world.ClusterPrimary, &primary) + assert.DeepEqual(t, world.ClusterReplicas, []*appsv1.StatefulSet{&replica}) + assert.Assert(t, 
world.ReplicasExpected == 1) + }) +} diff --git a/internal/controller/pod/inithandler.go b/internal/controller/pod/inithandler.go deleted file mode 100644 index ea19ec4e8d..0000000000 --- a/internal/controller/pod/inithandler.go +++ /dev/null @@ -1,284 +0,0 @@ -package pod - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "fmt" - "strconv" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/controller" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - backrestoperator "github.com/crunchydata/postgres-operator/internal/operator/backrest" - clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster" - taskoperator "github.com/crunchydata/postgres-operator/internal/operator/task" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - apiv1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - - log "github.com/sirupsen/logrus" -) - -// handleClusterInit is responsible for proceeding with initialization of the PG cluster once the -// primary PG pod for a new or restored PG cluster reaches a ready status -func (c *Controller) handleClusterInit(newPod *apiv1.Pod, cluster *crv1.Pgcluster) error { - - clusterName := cluster.GetName() - - // first check to see if the update is a repo pod. 
If so, then call repo init handler and - // return since the other handlers are only applicable to PG pods - if isBackRestRepoPod(newPod) { - log.Debugf("Pod Controller: calling pgBackRest repo init for cluster %s", clusterName) - if err := c.handleBackRestRepoInit(newPod, cluster); err != nil { - log.Error(err) - return err - } - return nil - } - - // handle common tasks for initializing a cluster, whether due to bootstap or reinitialization - // following a restore, or if a regular or standby cluster - if err := c.handleCommonInit(cluster); err != nil { - log.Error(err) - return err - } - - // call the standby init logic if a standby cluster - if cluster.Spec.Standby { - log.Debugf("Pod Controller: standby cluster detected during cluster %s init, calling "+ - "standby handler", clusterName) - return c.handleStandbyInit(cluster) - } - - // call bootstrap init for all other cluster initialization - log.Debugf("Pod Controller: calling bootstrap init for cluster %s", clusterName) - return c.handleBootstrapInit(newPod, cluster) -} - -// handleBackRestRepoInit handles cluster initialization tasks that must be executed once -// as a result of an update to a cluster's pgBackRest repository pod -func (c *Controller) handleBackRestRepoInit(newPod *apiv1.Pod, cluster *crv1.Pgcluster) error { - - // if the repo pod is for a cluster bootstrap, the kick of the bootstrap job and return - if _, ok := newPod.GetLabels()[config.LABEL_PGHA_BOOTSTRAP]; ok { - if err := clusteroperator.AddClusterBootstrap(c.Client, cluster); err != nil { - log.Error(err) - return err - } - return nil - } - - clusterInfo, err := clusteroperator.ScaleClusterDeployments(c.Client, *cluster, 1, - true, false, false, false) - if err != nil { - log.Error(err) - return err - } - - log.Debugf("Pod Controller: scaled primary deployment %s to 1 to proceed with initializing "+ - "cluster %s", clusterInfo.PrimaryDeployment, cluster.Name) - - return nil -} - -// handleCommonInit is resposible for handling common initilization tasks for a PG cluster -// regardless of the specific type of cluster (e.g. regualar or standby) or the reason the -// cluster is being initialized (initial bootstrap or restore) -func (c *Controller) handleCommonInit(cluster *crv1.Pgcluster) error { - - // Disable autofailover in the cluster that is now "Ready" if the autofail label is set - // to "false" on the pgcluster (i.e. label "autofail=true") - autofailEnabled, err := strconv.ParseBool(cluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL]) - if err != nil { - log.Error(err) - return err - } else if !autofailEnabled { - util.ToggleAutoFailover(c.Client, false, - cluster.ObjectMeta.Labels[config.LABEL_PGHA_SCOPE], cluster.Namespace) - } - - operator.UpdatePGHAConfigInitFlag(c.Client, false, cluster.Name, - cluster.Namespace) - - return nil -} - -// handleBootstrapInit is resposible for handling cluster initilization (e.g. 
initiating pgBackRest -// stanza creation) when a the database container within the primary PG Pod for a new PG cluster -// enters a ready status -func (c *Controller) handleBootstrapInit(newPod *apiv1.Pod, cluster *crv1.Pgcluster) error { - - clusterName := cluster.Name - namespace := cluster.Namespace - - // determine if restore, and if delete the restore label since it is no longer needed - restoreLabelPatch := fmt.Sprintf(`[{"op": "remove", "path": "/metadata/annotations/%s"}]`, - config.LABEL_BACKREST_RESTORE) - if _, restore := cluster.GetAnnotations()[config.ANNOTATION_BACKREST_RESTORE]; restore { - log.Debugf("Pod Controller: restore detected for pgcluster %s, restore annotation will "+ - "be removed", cluster.GetName()) - if _, err := c.Client.CrunchydataV1().Pgclusters(namespace).Patch(cluster.GetName(), - types.JSONPatchType, []byte(restoreLabelPatch)); err != nil { - log.Errorf("Pod Controller unable to remove backrest restore annotation from "+ - "pgcluster %s: %s", cluster.GetName(), err.Error()) - } - } else { - log.Debugf("%s went to Ready from Not Ready, apply policies...", clusterName) - taskoperator.ApplyPolicies(clusterName, c.Client, c.Client.Config, namespace) - } - - taskoperator.CompleteCreateClusterWorkflow(clusterName, c.Client, namespace) - - //publish event for cluster complete - publishClusterComplete(clusterName, namespace, cluster) - // - - // first clean any stanza create resources from a previous stanza-create, e.g. during a - // restore when these resources may already exist from initial creation of the cluster - if err := backrestoperator.CleanStanzaCreateResources(namespace, clusterName, - c.Client); err != nil { - log.Error(err) - return err - } - - // create the pgBackRest stanza - backrestoperator.StanzaCreate(newPod.ObjectMeta.Namespace, clusterName, c.Client) - - // if this is a pgbouncer enabled cluster, add a pgbouncer - // Note: we only warn if we cannot create the pgBouncer, so eecution can - // continue - if cluster.Spec.PgBouncer.Enabled() { - if err := clusteroperator.AddPgbouncer(c.Client, c.Client.Config, cluster); err != nil { - log.Warn(err) - } - } - - return nil -} - -// handleStandbyInit is resposible for handling standby cluster initilization when the database -// container within the primary PG Pod for a new standby cluster enters a ready status -func (c *Controller) handleStandbyInit(cluster *crv1.Pgcluster) error { - - clusterName := cluster.Name - namespace := cluster.Namespace - - taskoperator.CompleteCreateClusterWorkflow(clusterName, c.Client, namespace) - - //publish event for cluster complete - publishClusterComplete(clusterName, namespace, cluster) - // - - // now scale any replicas deployments to 1 - clusteroperator.ScaleClusterDeployments(c.Client, *cluster, 1, false, true, false, false) - - // Proceed with stanza-creation of this is not a standby cluster, or if its - // a standby cluster that does not have "s3" storage only enabled. - // If this is a standby cluster and the pgBackRest storage type is set - // to "s3" for S3 storage only, set the cluster to an initialized status. - if cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE] != "s3" { - // first try to delete any existing stanza create task and/or job - if err := c.Client.CrunchydataV1().Pgtasks(namespace).Delete(fmt.Sprintf("%s-%s", clusterName, - crv1.PgtaskBackrestStanzaCreate), &metav1.DeleteOptions{}); err != nil && !kerrors.IsNotFound(err) { - return err - } - deletePropagation := metav1.DeletePropagationForeground - if err := c.Client. 
- BatchV1().Jobs(namespace). - Delete(fmt.Sprintf("%s-%s", clusterName, crv1.PgtaskBackrestStanzaCreate), - &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}); err != nil && !kerrors.IsNotFound(err) { - return err - } - backrestoperator.StanzaCreate(namespace, clusterName, c.Client) - } else { - controller.SetClusterInitializedStatus(c.Client, clusterName, namespace) - } - - // If a standby cluster initialize the creation of any replicas. Replicas - // can be initialized right away, i.e. there is no dependency on - // stanza-creation and/or the creation of any backups, since the replicas - // will be generated from the pgBackRest repository of an external PostgreSQL - // database (which should already exist). - controller.InitializeReplicaCreation(c.Client, clusterName, namespace) - - // if this is a pgbouncer enabled cluster, add a pgbouncer - // Note: we only warn if we cannot create the pgBouncer, so eecution can - // continue - if cluster.Spec.PgBouncer.Enabled() { - if err := clusteroperator.AddPgbouncer(c.Client, c.Client.Config, cluster); err != nil { - log.Warn(err) - } - } - - return nil -} - -// labelPostgresPodAndDeployment -// see if this is a primary or replica being created -// update service-name label on the pod for each case -// to match the correct Service selector for the PG cluster -func (c *Controller) labelPostgresPodAndDeployment(newpod *apiv1.Pod) { - - depName := newpod.ObjectMeta.Labels[config.LABEL_DEPLOYMENT_NAME] - ns := newpod.Namespace - - _, err := c.Client.CrunchydataV1().Pgreplicas(ns).Get(depName, metav1.GetOptions{}) - replica := err == nil - log.Debugf("checkPostgresPods --- dep %s replica %t", depName, replica) - - dep, err := c.Client.AppsV1().Deployments(ns).Get(depName, metav1.GetOptions{}) - if err != nil { - log.Errorf("could not get Deployment on pod Add %s", newpod.Name) - return - } - - serviceName := "" - - if dep.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] != "" { - log.Debug("this means the deployment was already labeled") - log.Debug("which means its pod was restarted for some reason") - log.Debug("we will use the service name on the deployment") - serviceName = dep.ObjectMeta.Labels[config.LABEL_SERVICE_NAME] - } else if replica == false { - log.Debugf("primary pod ADDED %s service-name=%s", newpod.Name, newpod.ObjectMeta.Labels[config.LABEL_PG_CLUSTER]) - //add label onto pod "service-name=clustername" - serviceName = newpod.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] - } else if replica == true { - log.Debugf("replica pod ADDED %s service-name=%s", newpod.Name, newpod.ObjectMeta.Labels[config.LABEL_PG_CLUSTER]+"-replica") - //add label onto pod "service-name=clustername-replica" - serviceName = newpod.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] + "-replica" - } - - err = kubeapi.AddLabelToPod(c.Client, newpod, config.LABEL_SERVICE_NAME, serviceName, ns) - if err != nil { - log.Error(err) - log.Errorf(" could not add pod label for pod %s and label %s ...", newpod.Name, serviceName) - return - } - - //add the service name label to the Deployment - err = kubeapi.AddLabelToDeployment(c.Client, dep, config.LABEL_SERVICE_NAME, serviceName, ns) - - if err != nil { - log.Error("could not add label to deployment on pod add") - return - } - -} diff --git a/internal/controller/pod/podcontroller.go b/internal/controller/pod/podcontroller.go deleted file mode 100644 index cc05f84fb7..0000000000 --- a/internal/controller/pod/podcontroller.go +++ /dev/null @@ -1,280 +0,0 @@ -package pod - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, 
Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - - log "github.com/sirupsen/logrus" - apiv1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - coreinformers "k8s.io/client-go/informers/core/v1" - "k8s.io/client-go/tools/cache" -) - -// Controller holds the connections for the controller -type Controller struct { - Client *kubeapi.Client - Informer coreinformers.PodInformer -} - -// onAdd is called when a pod is added -func (c *Controller) onAdd(obj interface{}) { - - newPod := obj.(*apiv1.Pod) - - newPodLabels := newPod.GetObjectMeta().GetLabels() - //only process pods with with vendor=crunchydata label - if newPodLabels[config.LABEL_VENDOR] == "crunchydata" { - log.Debugf("Pod Controller: onAdd processing the addition of pod %s in namespace %s", - newPod.Name, newPod.Namespace) - } - - //handle the case when a pg database pod is added - if isPostgresPod(newPod) { - c.labelPostgresPodAndDeployment(newPod) - return - } -} - -// onUpdate is called when a pod is updated -func (c *Controller) onUpdate(oldObj, newObj interface{}) { - - oldPod := oldObj.(*apiv1.Pod) - newPod := newObj.(*apiv1.Pod) - - newPodLabels := newPod.GetObjectMeta().GetLabels() - - //only process pods with with vendor=crunchydata label - if newPodLabels[config.LABEL_VENDOR] != "crunchydata" { - return - } - - log.Debugf("Pod Controller: onUpdate processing update for pod %s in namespace %s", - newPod.Name, newPod.Namespace) - - // we only care about pods attached to a specific cluster, so if this one isn't (as identified - // by the presence of the 'pg-cluster' label) then return - if _, ok := newPodLabels[config.LABEL_PG_CLUSTER]; !ok { - log.Debugf("Pod Controller: onUpdate ignoring update for pod %s in namespace %s since it "+ - "is not associated with a PG cluster", newPod.Name, newPod.Namespace) - return - } - - var clusterName string - bootstrapCluster := newPodLabels[config.LABEL_PGHA_BOOTSTRAP] - // Lookup the pgcluster CR for PG cluster associated with this Pod. Typically we will use the - // 'pg-cluster' label, but if a bootstrap pod we use the 'pgha-bootstrap' label. - if bootstrapCluster != "" { - clusterName = bootstrapCluster - } else { - clusterName = newPodLabels[config.LABEL_PG_CLUSTER] - } - namespace := newPod.ObjectMeta.Namespace - cluster, err := c.Client.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - log.Error(err.Error()) - return - } - - // For the following upgrade and cluster initialization scenarios we only care about updates - // where the database container within the pod is becoming ready. We can therefore return - // at this point if this condition is false. 
- if cluster.Status.State != crv1.PgclusterStateInitialized && - (isDBContainerBecomingReady(oldPod, newPod) || - isBackRestRepoBecomingReady(oldPod, newPod)) { - if err := c.handleClusterInit(newPod, cluster); err != nil { - log.Error(err) - return - } - return - } - - // Handle the "role" label change from "replica" to "primary" following a failover. This - // logic is only triggered when the cluster has already been initialized, which implies - // a failover or switchover has occurred. - if isPromotedPostgresPod(oldPod, newPod) { - log.Debugf("Pod Controller: pod %s in namespace %s promoted, calling pod promotion "+ - "handler", newPod.Name, newPod.Namespace) - - // update the pgcluster's current primary information to match the promotion - setCurrentPrimary(c.Client, newPod, cluster) - - if err := c.handlePostgresPodPromotion(newPod, *cluster); err != nil { - log.Error(err) - return - } - } - - if isPromotedStandby(oldPod, newPod) { - log.Debugf("Pod Controller: standby pod %s in namespace %s promoted, calling standby pod "+ - "promotion handler", newPod.Name, newPod.Namespace) - - if err := c.handleStandbyPromotion(newPod, *cluster); err != nil { - log.Error(err) - return - } - } - - return -} - -// setCurrentPrimary checks whether the newly promoted primary value differs from the pgcluster's -// current primary value. If different, patch the CRD's annotation to match the new value -func setCurrentPrimary(clientset pgo.Interface, newPod *apiv1.Pod, cluster *crv1.Pgcluster) { - // if a failover has occurred and the current primary has changed, update the pgcluster CRD's annotation accordingly - if cluster.Annotations[config.ANNOTATION_CURRENT_PRIMARY] != newPod.ObjectMeta.Labels[config.LABEL_DEPLOYMENT_NAME] { - err := util.CurrentPrimaryUpdate(clientset, cluster, newPod.ObjectMeta.Labels[config.LABEL_DEPLOYMENT_NAME], newPod.Namespace) - if err != nil { - log.Errorf("PodController unable to patch pgcluster %s with currentprimary value %s Error: %s", cluster.Spec.ClusterName, - newPod.ObjectMeta.Labels[config.LABEL_DEPLOYMENT_NAME], err) - return - } - } -} - -// onDelete is called when a pgcluster is deleted -func (c *Controller) onDelete(obj interface{}) { - - pod := obj.(*apiv1.Pod) - - labels := pod.GetObjectMeta().GetLabels() - if labels[config.LABEL_VENDOR] != "crunchydata" { - log.Debugf("Pod Controller: onDelete skipping pod that is not crunchydata %s", pod.ObjectMeta.SelfLink) - return - } -} - -// AddPodEventHandler adds the pod event handler to the pod informer -func (c *Controller) AddPodEventHandler() { - - c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: c.onAdd, - UpdateFunc: c.onUpdate, - DeleteFunc: c.onDelete, - }) - - log.Debugf("Pod Controller: added event handler to informer") -} - -// isBackRestRepoBecomingReady checks to see if the Pod update shows that the BackRest -// repo Pod has transitioned from an 'unready' status to a 'ready' status. -func isBackRestRepoBecomingReady(oldPod, newPod *apiv1.Pod) bool { - if !isBackRestRepoPod(newPod) { - return false - } - return isContainerBecomingReady("database", oldPod, newPod) -} - -// isBackRestRepoPod determines whether or not a pod is a pgBackRest repository Pod. This is -// determined by checking to see if the 'pgo-backrest-repo' label is present on the Pod (also, -// this controller will only process pod with the 'vendor=crunchydata' label, so that label is -// assumed to be present), specifically because this label will only be included on pgBackRest -// repository Pods. 
-func isBackRestRepoPod(newpod *apiv1.Pod) bool { - - _, backrestRepoLabelExists := newpod.ObjectMeta.Labels[config.LABEL_PGO_BACKREST_REPO] - - return backrestRepoLabelExists -} - -// isContainerBecomingReady determines whether or not that container specified is moving -// from an unready status to a ready status. -func isContainerBecomingReady(containerName string, oldPod, newPod *apiv1.Pod) bool { - var oldContainerStatus bool - // first see if the old version of the container was not ready - for _, v := range oldPod.Status.ContainerStatuses { - if v.Name == containerName { - oldContainerStatus = v.Ready - break - } - } - // if the old version of the container was not ready, now check if the - // new version is ready - if !oldContainerStatus { - for _, v := range newPod.Status.ContainerStatuses { - if v.Name == containerName { - if v.Ready { - return true - } - } - } - } - return false -} - -// isDBContainerBecomingReady checks to see if the Pod update shows that the Pod has -// transitioned from an 'unready' status to a 'ready' status. -func isDBContainerBecomingReady(oldPod, newPod *apiv1.Pod) bool { - if !isPostgresPod(newPod) { - return false - } - return isContainerBecomingReady("database", oldPod, newPod) -} - -// isPostgresPod determines whether or not a pod is a PostreSQL Pod, specifically either the -// primary or a replica pod within a PG cluster. This is determined by checking to see if the -// 'pgo-pg-database' label is present on the Pod (also, this controller will only process pod with -// the 'vendor=crunchydata' label, so that label is assumed to be present), specifically because -// this label will only be included on primary and replica PostgreSQL database pods (and will be -// present as soon as the deployment and pod is created). -func isPostgresPod(newpod *apiv1.Pod) bool { - - _, pgDatabaseLabelExists := newpod.ObjectMeta.Labels[config.LABEL_PG_DATABASE] - - return pgDatabaseLabelExists -} - -// isPromotedPostgresPod determines if the Pod update is the result of the promotion of the pod -// from a replica to the primary within a PG cluster. This is determined by comparing the 'role' -// label from the old Pod to the 'role' label in the New pod, specifically to determine if the -// label has changed from "promoted" to "primary" (this is the label change that will be performed -// by Patroni when promoting a pod). -func isPromotedPostgresPod(oldPod, newPod *apiv1.Pod) bool { - if !isPostgresPod(newPod) { - return false - } - if oldPod.ObjectMeta.Labels[config.LABEL_PGHA_ROLE] == "promoted" && - newPod.ObjectMeta.Labels[config.LABEL_PGHA_ROLE] == config.LABEL_PGHA_ROLE_PRIMARY { - return true - } - return false -} - -// isPromotedStandby determines if the Pod update is the result of the promotion of the standby pod -// from a replica to the primary within a PG cluster. This is determined by comparing the 'role' -// label from the old Pod to the 'role' label in the New pod, specifically to determine if the -// label has changed from "standby_leader" to "primary" (this is the label change that will be -// performed by Patroni when promoting a pod). 
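[Editor's aside, not part of the patch] isPromotedStandby below detects a standby promotion with a raw substring match on the Patroni "status" annotation. An equivalent, stricter approach is to unmarshal the annotation and compare its role field. A minimal standalone sketch of that idea; the helper name and the sample annotation values are illustrative only:

package main

import (
	"encoding/json"
	"fmt"
)

// patroniRole extracts the "role" field from a Patroni status annotation.
// It returns an empty string when the annotation is not valid JSON.
func patroniRole(status string) string {
	var parsed struct {
		Role string `json:"role"`
	}
	if err := json.Unmarshal([]byte(status), &parsed); err != nil {
		return ""
	}
	return parsed.Role
}

func main() {
	oldStatus := `{"role":"standby_leader","state":"running"}`
	newStatus := `{"role":"master","state":"running"}`
	promoted := patroniRole(oldStatus) == "standby_leader" && patroniRole(newStatus) == "master"
	fmt.Println(promoted) // true
}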
-func isPromotedStandby(oldPod, newPod *apiv1.Pod) bool { - if !isPostgresPod(newPod) { - return false - } - - oldStatus := oldPod.Annotations["status"] - newStatus := newPod.Annotations["status"] - if strings.Contains(oldStatus, "\"role\":\"standby_leader\"") && - strings.Contains(newStatus, "\"role\":\"master\"") { - return true - } - return false -} diff --git a/internal/controller/pod/podevents.go b/internal/controller/pod/podevents.go deleted file mode 100644 index dc0b53766b..0000000000 --- a/internal/controller/pod/podevents.go +++ /dev/null @@ -1,72 +0,0 @@ -package pod - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - log "github.com/sirupsen/logrus" -) - -func publishClusterComplete(clusterName, namespace string, cluster *crv1.Pgcluster) error { - //capture the cluster creation event - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventCreateClusterCompletedFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: cluster.Spec.UserLabels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventCreateClusterCompleted, - }, - Clustername: clusterName, - WorkflowID: cluster.Spec.UserLabels[config.LABEL_WORKFLOW_ID], - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - return err - } - return err - -} - -func publishPrimaryNotReady(clusterName, identifier, username, namespace string) { - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventPrimaryNotReadyFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: username, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventPrimaryNotReady, - }, - Clustername: clusterName, - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } -} diff --git a/internal/controller/pod/promotionhandler.go b/internal/controller/pod/promotionhandler.go deleted file mode 100644 index c9476b6782..0000000000 --- a/internal/controller/pod/promotionhandler.go +++ /dev/null @@ -1,194 +0,0 @@ -package pod - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "encoding/json" - "fmt" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/controller" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator/backrest" - clusteroperator "github.com/crunchydata/postgres-operator/internal/operator/cluster" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - apiv1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -var ( - // isInRecoveryCommand is the command run to determine if postgres is in recovery - isInRecoveryCMD []string = []string{"psql", "-t", "-c", "'SELECT pg_is_in_recovery();'", "-p"} - - // leaderStatusCMD is the command run to get the Patroni status for the primary - leaderStatusCMD []string = []string{"curl", fmt.Sprintf("localhost:%s/master", - config.DEFAULT_PATRONI_PORT)} - - // isStandbyDisabledTick is the duration of the tick used when waiting for standby mode to - // be disabled - isStandbyDisabledTick time.Duration = time.Millisecond * 500 - - // isStandbyDisabledTimeout is the amount of time to wait before timing out when waitig for - // standby mode to be disabled - isStandbyDisabledTimeout time.Duration = time.Minute * 5 -) - -// handlePostgresPodPromotion is responsible for handling updates to PG pods the occur as a result -// of a failover. Specifically, this handler is triggered when a replica has been promoted, and -// it now has either the "promoted" or "primary" role label. -func (c *Controller) handlePostgresPodPromotion(newPod *apiv1.Pod, cluster crv1.Pgcluster) error { - - if cluster.Status.State == crv1.PgclusterStateShutdown { - if err := c.handleStartupInit(cluster); err != nil { - return err - } - } - - if cluster.Status.State == crv1.PgclusterStateInitialized { - if err := cleanAndCreatePostFailoverBackup(c.Client, - cluster.Name, newPod.Namespace); err != nil { - log.Error(err) - return err - } - } - - return nil -} - -// handleStartupInit is resposible for handling cluster initilization for a cluster that has been -// restarted (after it was previously shutdown) -func (c *Controller) handleStartupInit(cluster crv1.Pgcluster) error { - - // since the cluster is just being restarted, it can just be set to initialized once the - // primary is ready - if err := controller.SetClusterInitializedStatus(c.Client, cluster.Name, - cluster.Namespace); err != nil { - log.Error(err) - return err - } - - // now scale any replicas deployments to 1 - clusteroperator.ScaleClusterDeployments(c.Client, cluster, 1, false, true, false, false) - - return nil -} - -// handleStandbyPodPromotion is responsible for handling updates to PG pods the occur as a result -// of disabling standby mode. Specifically, this handler is triggered when a standby leader -// is turned into a regular leader. 
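[Editor's aside, not part of the patch] The standby promotion handler below ultimately calls waitForStandbyPromotion, which polls the database on a ticker until recovery is disabled or a timeout fires. A minimal, self-contained sketch of that ticker-plus-timeout pattern; the waitFor helper and its predicate are illustrative names, not from this code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check on every tick until it returns true or the timeout
// elapses, mirroring the time.After / time.NewTicker loop used in
// waitForStandbyPromotion.
func waitFor(check func() bool, tick, timeout time.Duration) error {
	deadline := time.After(timeout)
	ticker := time.NewTicker(tick)
	defer ticker.Stop()
	for {
		select {
		case <-deadline:
			return errors.New("timed out waiting for condition")
		case <-ticker.C:
			if check() {
				return nil
			}
		}
	}
}

func main() {
	start := time.Now()
	// Succeeds after roughly one second of polling.
	err := waitFor(func() bool { return time.Since(start) > time.Second },
		100*time.Millisecond, 5*time.Second)
	fmt.Println(err) // <nil>
}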
-func (c *Controller) handleStandbyPromotion(newPod *apiv1.Pod, cluster crv1.Pgcluster) error { - - clusterName := cluster.Name - namespace := cluster.Namespace - - if err := waitForStandbyPromotion(c.Client.Config, c.Client, *newPod, cluster); err != nil { - return err - } - - // rotate the pgBouncer passwords if pgbouncer is enabled within the cluster - if cluster.Spec.PgBouncer.Enabled() { - if err := clusteroperator.RotatePgBouncerPassword(c.Client, c.Client.Config, &cluster); err != nil { - log.Error(err) - return err - } - } - - if err := cleanAndCreatePostFailoverBackup(c.Client, clusterName, namespace); err != nil { - log.Error(err) - return err - } - - return nil -} - -// waitForStandbyPromotion waits for standby mode to be disabled for a specific cluster and has -// been promoted. This is done by verifying that recovery is no longer enabled in the database, -// while also ensuring there are not any pending restarts for the database. -// done by confirming -func waitForStandbyPromotion(restConfig *rest.Config, clientset kubernetes.Interface, newPod apiv1.Pod, - cluster crv1.Pgcluster) error { - - var recoveryDisabled bool - - // wait for the server to accept writes to ensure standby has truly been disabled before - // proceeding - duration := time.After(isStandbyDisabledTimeout) - tick := time.NewTicker(isStandbyDisabledTick) - defer tick.Stop() - for { - select { - case <-duration: - return fmt.Errorf("timed out waiting for cluster %s to accept writes after disabling "+ - "standby mode", cluster.Name) - case <-tick.C: - if !recoveryDisabled { - cmd := isInRecoveryCMD - cmd = append(cmd, cluster.Spec.Port) - - isInRecoveryStr, _, _ := kubeapi.ExecToPodThroughAPI(restConfig, clientset, - cmd, newPod.Spec.Containers[0].Name, newPod.Name, - newPod.Namespace, nil) - if strings.Contains(isInRecoveryStr, "f") { - recoveryDisabled = true - } - } - if recoveryDisabled { - primaryJSONStr, _, _ := kubeapi.ExecToPodThroughAPI(restConfig, clientset, - leaderStatusCMD, newPod.Spec.Containers[0].Name, newPod.Name, - newPod.Namespace, nil) - var primaryJSON map[string]interface{} - json.Unmarshal([]byte(primaryJSONStr), &primaryJSON) - if primaryJSON["state"] == "running" && (primaryJSON["pending_restart"] == nil || - !primaryJSON["pending_restart"].(bool)) { - return nil - } - } - } - } -} - -// cleanAndCreatePostFailoverBackup cleans up any existing backup resources and then creates -// a pgtask to trigger the creation of a post-failover backup -func cleanAndCreatePostFailoverBackup(clientset kubeapi.Interface, clusterName, namespace string) error { - - //look up the backrest-repo pod name - selector := fmt.Sprintf("%s=%s,%s=true", config.LABEL_PG_CLUSTER, - clusterName, config.LABEL_PGO_BACKREST_REPO) - pods, err := clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if len(pods.Items) != 1 { - return fmt.Errorf("pods len != 1 for cluster %s", clusterName) - } else if err != nil { - return err - } - - if err := backrest.CleanBackupResources(clientset, namespace, - clusterName); err != nil { - log.Error(err) - return err - } - if _, err := backrest.CreatePostFailoverBackup(clientset, namespace, - clusterName, pods.Items[0].Name); err != nil { - log.Error(err) - return err - } - - return nil -} diff --git a/internal/controller/postgrescluster/apply.go b/internal/controller/postgrescluster/apply.go new file mode 100644 index 0000000000..2dae1f7d80 --- /dev/null +++ b/internal/controller/postgrescluster/apply.go @@ -0,0 +1,62 @@ +// Copyright 2021 - 2024 Crunchy 
Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "reflect" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/equality" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/kubeapi" +) + +// apply sends an apply patch to object's endpoint in the Kubernetes API and +// updates object with any returned content. The fieldManager is set to +// r.Owner and the force parameter is true. +// - https://docs.k8s.io/reference/using-api/server-side-apply/#managers +// - https://docs.k8s.io/reference/using-api/server-side-apply/#conflicts +func (r *Reconciler) apply(ctx context.Context, object client.Object) error { + // Generate an apply-patch by comparing the object to its zero value. + zero := reflect.New(reflect.TypeOf(object).Elem()).Interface() + data, err := client.MergeFrom(zero.(client.Object)).Data(object) + apply := client.RawPatch(client.Apply.Type(), data) + + // Keep a copy of the object before any API calls. + intent := object.DeepCopyObject() + patch := kubeapi.NewJSONPatch() + + // Send the apply-patch with force=true. + if err == nil { + err = r.patch(ctx, object, apply, client.ForceOwnership) + } + + // Some fields cannot be server-side applied correctly. When their outcome + // does not match the intent, send a json-patch to get really specific. + switch actual := object.(type) { + case *corev1.Service: + applyServiceSpec(patch, actual.Spec, intent.(*corev1.Service).Spec, "spec") + } + + // Send the json-patch when necessary. + if err == nil && !patch.IsEmpty() { + err = r.patch(ctx, object, patch) + } + return err +} + +// applyServiceSpec is called by Reconciler.apply to work around issues +// with server-side apply. +func applyServiceSpec( + patch *kubeapi.JSON6902, actual, intent corev1.ServiceSpec, path ...string, +) { + // Service.Spec.Selector is not +mapType=atomic until Kubernetes 1.22. + // - https://issue.k8s.io/97970 + if !equality.Semantic.DeepEqual(actual.Selector, intent.Selector) { + patch.Replace(append(path, "selector")...)(intent.Selector) + } +} diff --git a/internal/controller/postgrescluster/apply_test.go b/internal/controller/postgrescluster/apply_test.go new file mode 100644 index 0000000000..c163e8a5ab --- /dev/null +++ b/internal/controller/postgrescluster/apply_test.go @@ -0,0 +1,302 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "errors" + "regexp" + "strings" + "testing" + "time" + + "github.com/google/go-cmp/cmp" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/equality" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/version" + "k8s.io/client-go/discovery" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + + "github.com/crunchydata/postgres-operator/internal/testing/require" +) + +func TestServerSideApply(t *testing.T) { + ctx := context.Background() + cfg, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, cc) + + dc, err := discovery.NewDiscoveryClientForConfig(cfg) + assert.NilError(t, err) + + server, err := dc.ServerVersion() + assert.NilError(t, err) + + serverVersion, err := version.ParseGeneric(server.GitVersion) + assert.NilError(t, err) + + t.Run("ObjectMeta", func(t *testing.T) { + reconciler := Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + constructor := func() *corev1.ConfigMap { + var cm corev1.ConfigMap + cm.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + cm.Namespace, cm.Name = ns.Name, "object-meta" + cm.Data = map[string]string{"key": "value"} + return &cm + } + + // Create the object. + before := constructor() + assert.NilError(t, cc.Patch(ctx, before, client.Apply, reconciler.Owner)) + assert.Assert(t, before.GetResourceVersion() != "") + + // Allow the Kubernetes API clock to advance. + time.Sleep(time.Second) + + // client.Apply changes the ResourceVersion inadvertently. + after := constructor() + assert.NilError(t, cc.Patch(ctx, after, client.Apply, reconciler.Owner)) + assert.Assert(t, after.GetResourceVersion() != "") + + switch { + case serverVersion.LessThan(version.MustParseGeneric("1.25.15")): + case serverVersion.AtLeast(version.MustParseGeneric("1.26")) && serverVersion.LessThan(version.MustParseGeneric("1.26.10")): + case serverVersion.AtLeast(version.MustParseGeneric("1.27")) && serverVersion.LessThan(version.MustParseGeneric("1.27.7")): + + assert.Assert(t, after.GetResourceVersion() != before.GetResourceVersion(), + "expected https://issue.k8s.io/116861") + + default: + assert.Assert(t, after.GetResourceVersion() == before.GetResourceVersion()) + } + + // Our apply method generates the correct apply-patch. + again := constructor() + assert.NilError(t, reconciler.apply(ctx, again)) + assert.Assert(t, again.GetResourceVersion() != "") + assert.Assert(t, again.GetResourceVersion() == after.GetResourceVersion(), + "expected to correctly no-op") + }) + + t.Run("ControllerReference", func(t *testing.T) { + reconciler := Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + + // Setup two possible controllers. + controller1 := new(corev1.ConfigMap) + controller1.Namespace, controller1.Name = ns.Name, "controller1" + assert.NilError(t, cc.Create(ctx, controller1)) + + controller2 := new(corev1.ConfigMap) + controller2.Namespace, controller2.Name = ns.Name, "controller2" + assert.NilError(t, cc.Create(ctx, controller2)) + + // Create an object that is controlled. 
+ controlled := new(corev1.ConfigMap) + controlled.Namespace, controlled.Name = ns.Name, "controlled" + assert.NilError(t, + controllerutil.SetControllerReference(controller1, controlled, cc.Scheme())) + assert.NilError(t, cc.Create(ctx, controlled)) + + original := metav1.GetControllerOfNoCopy(controlled) + assert.Assert(t, original != nil) + + // Try to change the controller using client.Apply. + applied := new(corev1.ConfigMap) + applied.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + applied.Namespace, applied.Name = controlled.Namespace, controlled.Name + assert.NilError(t, + controllerutil.SetControllerReference(controller2, applied, cc.Scheme())) + + err1 := cc.Patch(ctx, applied, client.Apply, client.ForceOwnership, reconciler.Owner) + + // Patch not accepted; the ownerReferences field is invalid. + assert.Assert(t, apierrors.IsInvalid(err1), "got %#v", err1) + assert.ErrorContains(t, err1, "one reference") + + var status *apierrors.StatusError + assert.Assert(t, errors.As(err1, &status)) + assert.Assert(t, status.ErrStatus.Details != nil) + assert.Assert(t, len(status.ErrStatus.Details.Causes) != 0) + assert.Equal(t, status.ErrStatus.Details.Causes[0].Field, "metadata.ownerReferences") + + // Try to change the controller using our apply method. + err2 := reconciler.apply(ctx, applied) + + // Same result; patch not accepted. + assert.DeepEqual(t, err1, err2, + // Message fields contain GoStrings of metav1.OwnerReference, 🤦 + // so ignore pointer addresses therein. + cmp.FilterPath(func(p cmp.Path) bool { + return p.Last().String() == ".Message" + }, cmp.Transformer("", func(s string) string { + return regexp.MustCompile(`\(0x[^)]+\)`).ReplaceAllString(s, "()") + })), + ) + }) + + t.Run("StatefulSetStatus", func(t *testing.T) { + constructor := func(name string) *appsv1.StatefulSet { + var sts appsv1.StatefulSet + sts.SetGroupVersionKind(appsv1.SchemeGroupVersion.WithKind("StatefulSet")) + sts.Namespace, sts.Name = ns.Name, name + sts.Spec.Selector = &metav1.LabelSelector{ + MatchLabels: map[string]string{"select": name}, + } + sts.Spec.Template.Labels = map[string]string{"select": name} + return &sts + } + + reconciler := Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + upstream := constructor("status-upstream") + + // The structs defined in "k8s.io/api/apps/v1" marshal empty status fields. + switch { + case serverVersion.LessThan(version.MustParseGeneric("1.22")): + assert.ErrorContains(t, + cc.Patch(ctx, upstream, client.Apply, client.ForceOwnership, reconciler.Owner), + "field not declared in schema", + "expected https://issue.k8s.io/109210") + + default: + assert.NilError(t, + cc.Patch(ctx, upstream, client.Apply, client.ForceOwnership, reconciler.Owner)) + } + + // Our apply method generates the correct apply-patch. + again := constructor("status-local") + assert.NilError(t, reconciler.apply(ctx, again)) + }) + + t.Run("ServiceSelector", func(t *testing.T) { + constructor := func(name string) *corev1.Service { + var service corev1.Service + service.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Service")) + service.Namespace, service.Name = ns.Name, name + service.Spec.Ports = []corev1.ServicePort{{ + Port: 9999, Protocol: corev1.ProtocolTCP, + }} + return &service + } + + t.Run("wrong-keys", func(t *testing.T) { + reconciler := Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + + intent := constructor("some-selector") + intent.Spec.Selector = map[string]string{"k1": "v1"} + + // Create the Service. 
+ before := intent.DeepCopy() + assert.NilError(t, + cc.Patch(ctx, before, client.Apply, client.ForceOwnership, reconciler.Owner)) + + // Something external mucks it up. + assert.NilError(t, + cc.Patch(ctx, before, + client.RawPatch(client.Merge.Type(), []byte(`{"spec":{"selector":{"bad":"v2"}}}`)), + client.FieldOwner("wrong"))) + + // client.Apply cannot correct it in old versions of Kubernetes. + after := intent.DeepCopy() + assert.NilError(t, + cc.Patch(ctx, after, client.Apply, client.ForceOwnership, reconciler.Owner)) + + switch { + case serverVersion.LessThan(version.MustParseGeneric("1.22")): + + assert.Assert(t, len(after.Spec.Selector) != len(intent.Spec.Selector), + "expected https://issue.k8s.io/97970, got %v", after.Spec.Selector) + + default: + assert.DeepEqual(t, after.Spec.Selector, intent.Spec.Selector) + } + + // Our apply method corrects it. + again := intent.DeepCopy() + assert.NilError(t, reconciler.apply(ctx, again)) + assert.DeepEqual(t, again.Spec.Selector, intent.Spec.Selector) + + var count int + var managed *metav1.ManagedFieldsEntry + for i := range again.ManagedFields { + if again.ManagedFields[i].Manager == t.Name() { + count++ + managed = &again.ManagedFields[i] + } + } + + assert.Equal(t, count, 1, "expected manager once in %v", again.ManagedFields) + assert.Equal(t, managed.Operation, metav1.ManagedFieldsOperationApply) + + assert.Assert(t, managed.FieldsV1 != nil) + assert.Assert(t, strings.Contains(string(managed.FieldsV1.Raw), `"f:selector":{`), + "expected f:selector in %s", managed.FieldsV1.Raw) + }) + + for _, tt := range []struct { + name string + selector map[string]string + }{ + {"zero", nil}, + {"empty", make(map[string]string)}, + } { + t.Run(tt.name, func(t *testing.T) { + reconciler := Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + + intent := constructor(tt.name + "-selector") + intent.Spec.Selector = tt.selector + + // Create the Service. + before := intent.DeepCopy() + assert.NilError(t, + cc.Patch(ctx, before, client.Apply, client.ForceOwnership, reconciler.Owner)) + + // Something external mucks it up. + assert.NilError(t, + cc.Patch(ctx, before, + client.RawPatch(client.Merge.Type(), []byte(`{"spec":{"selector":{"bad":"v2"}}}`)), + client.FieldOwner("wrong"))) + + // client.Apply cannot correct it. + after := intent.DeepCopy() + assert.NilError(t, + cc.Patch(ctx, after, client.Apply, client.ForceOwnership, reconciler.Owner)) + + assert.Assert(t, len(after.Spec.Selector) != len(intent.Spec.Selector), + "got %v", after.Spec.Selector) + + // Our apply method corrects it. + again := intent.DeepCopy() + assert.NilError(t, reconciler.apply(ctx, again)) + assert.Assert(t, + equality.Semantic.DeepEqual(again.Spec.Selector, intent.Spec.Selector), + "\n--- again.Spec.Selector\n+++ intent.Spec.Selector\n%v", + cmp.Diff(again.Spec.Selector, intent.Spec.Selector)) + + var count int + var managed *metav1.ManagedFieldsEntry + for i := range again.ManagedFields { + if again.ManagedFields[i].Manager == t.Name() { + count++ + managed = &again.ManagedFields[i] + } + } + + assert.Equal(t, count, 1, "expected manager once in %v", again.ManagedFields) + assert.Equal(t, managed.Operation, metav1.ManagedFieldsOperationApply) + + // The selector field is forgotten, however. 
+ assert.Assert(t, managed.FieldsV1 != nil) + assert.Assert(t, !strings.Contains(string(managed.FieldsV1.Raw), `"f:selector":{`), + "expected f:selector to be missing from %s", managed.FieldsV1.Raw) + }) + } + }) +} diff --git a/internal/controller/postgrescluster/cluster.go b/internal/controller/postgrescluster/cluster.go new file mode 100644 index 0000000000..3ba6eab0e8 --- /dev/null +++ b/internal/controller/postgrescluster/cluster.go @@ -0,0 +1,422 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "io" + + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/patroni" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={create,patch} + +// reconcileClusterConfigMap writes the ConfigMap that contains generated +// files (etc) that apply to the entire cluster. +func (r *Reconciler) reconcileClusterConfigMap( + ctx context.Context, cluster *v1beta1.PostgresCluster, + pgHBAs postgres.HBAs, pgParameters postgres.Parameters, +) (*corev1.ConfigMap, error) { + clusterConfigMap := &corev1.ConfigMap{ObjectMeta: naming.ClusterConfigMap(cluster)} + clusterConfigMap.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + + err := errors.WithStack(r.setControllerReference(cluster, clusterConfigMap)) + + clusterConfigMap.Annotations = naming.Merge(cluster.Spec.Metadata.GetAnnotationsOrNil()) + clusterConfigMap.Labels = naming.Merge(cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + }) + + if err == nil { + err = patroni.ClusterConfigMap(ctx, cluster, pgHBAs, pgParameters, + clusterConfigMap) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, clusterConfigMap)) + } + + return clusterConfigMap, err +} + +// +kubebuilder:rbac:groups="",resources="services",verbs={create,patch} + +// reconcileClusterPodService writes the Service that can provide stable DNS +// names to Pods related to cluster. +func (r *Reconciler) reconcileClusterPodService( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*corev1.Service, error) { + clusterPodService := &corev1.Service{ObjectMeta: naming.ClusterPodService(cluster)} + clusterPodService.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Service")) + + err := errors.WithStack(r.setControllerReference(cluster, clusterPodService)) + + clusterPodService.Annotations = naming.Merge(cluster.Spec.Metadata.GetAnnotationsOrNil()) + clusterPodService.Labels = naming.Merge(cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + }) + + // Allocate no IP address (headless) and match any Pod with the cluster + // label, regardless of its readiness. Not particularly useful by itself, but + // this allows a properly configured Pod to get a DNS record based on its name. 
+ // - https://docs.k8s.io/concepts/services-networking/service/#headless-services + // - https://docs.k8s.io/concepts/services-networking/dns-pod-service/#pods + clusterPodService.Spec.ClusterIP = corev1.ClusterIPNone + clusterPodService.Spec.PublishNotReadyAddresses = true + clusterPodService.Spec.Selector = map[string]string{ + naming.LabelCluster: cluster.Name, + } + + if err == nil { + err = errors.WithStack(r.apply(ctx, clusterPodService)) + } + + return clusterPodService, err +} + +// generateClusterPrimaryService returns a v1.Service and v1.Endpoints that +// resolve to the PostgreSQL primary instance. +func (r *Reconciler) generateClusterPrimaryService( + cluster *v1beta1.PostgresCluster, leader *corev1.Service, +) (*corev1.Service, *corev1.Endpoints, error) { + // We want to name and label our primary Service consistently. When Patroni is + // using Endpoints for its DCS, however, they and any Service that uses them + // must use the same name as the Patroni "scope" which has its own constraints. + // + // To stay free from those constraints, our primary Service resolves to the + // ClusterIP of the Service created in Reconciler.reconcilePatroniLeaderLease + // when Patroni is using Endpoints. + + service := &corev1.Service{ObjectMeta: naming.ClusterPrimaryService(cluster)} + service.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Service")) + + service.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil()) + service.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePrimary, + }) + + err := errors.WithStack(r.setControllerReference(cluster, service)) + + // Endpoints for a Service have the same name as the Service. Copy labels, + // annotations, and ownership, too. + endpoints := &corev1.Endpoints{} + service.ObjectMeta.DeepCopyInto(&endpoints.ObjectMeta) + endpoints.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Endpoints")) + + if leader == nil { + // TODO(cbandy): We need to build a different kind of Service here. + return nil, nil, errors.New("Patroni DCS other than Kubernetes Endpoints is not implemented") + } + + // Allocate no IP address (headless) and manage the Endpoints ourselves. + // - https://docs.k8s.io/concepts/services-networking/service/#headless-services + // - https://docs.k8s.io/concepts/services-networking/service/#services-without-selectors + service.Spec.ClusterIP = corev1.ClusterIPNone + service.Spec.Selector = nil + + service.Spec.Ports = []corev1.ServicePort{{ + Name: naming.PortPostgreSQL, + Port: *cluster.Spec.Port, + Protocol: corev1.ProtocolTCP, + TargetPort: intstr.FromString(naming.PortPostgreSQL), + }} + + // Resolve to the ClusterIP for which Patroni has configured the Endpoints. + endpoints.Subsets = []corev1.EndpointSubset{{ + Addresses: []corev1.EndpointAddress{{IP: leader.Spec.ClusterIP}}, + }} + + // Copy the EndpointPorts from the ServicePorts. + for _, sp := range service.Spec.Ports { + endpoints.Subsets[0].Ports = append(endpoints.Subsets[0].Ports, + corev1.EndpointPort{ + Name: sp.Name, + Port: sp.Port, + Protocol: sp.Protocol, + }) + } + + return service, endpoints, err +} + +// +kubebuilder:rbac:groups="",resources="endpoints",verbs={create,patch} +// +kubebuilder:rbac:groups="",resources="services",verbs={create,patch} + +// The OpenShift RestrictedEndpointsAdmission plugin requires special +// authorization to create Endpoints that contain ClusterIPs. 
+// - https://github.com/openshift/origin/pull/9383 +// +kubebuilder:rbac:groups="",resources="endpoints/restricted",verbs={create} + +// reconcileClusterPrimaryService writes the Service and Endpoints that resolve +// to the PostgreSQL primary instance. +func (r *Reconciler) reconcileClusterPrimaryService( + ctx context.Context, cluster *v1beta1.PostgresCluster, leader *corev1.Service, +) (*corev1.Service, error) { + service, endpoints, err := r.generateClusterPrimaryService(cluster, leader) + + if err == nil { + err = errors.WithStack(r.apply(ctx, service)) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, endpoints)) + } + return service, err +} + +// generateClusterReplicaService returns a v1.Service that exposes PostgreSQL +// replica instances. +func (r *Reconciler) generateClusterReplicaService( + cluster *v1beta1.PostgresCluster) (*corev1.Service, error, +) { + service := &corev1.Service{ObjectMeta: naming.ClusterReplicaService(cluster)} + service.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Service")) + + service.Annotations = cluster.Spec.Metadata.GetAnnotationsOrNil() + service.Labels = cluster.Spec.Metadata.GetLabelsOrNil() + + if spec := cluster.Spec.ReplicaService; spec != nil { + service.Annotations = naming.Merge(service.Annotations, + spec.Metadata.GetAnnotationsOrNil()) + service.Labels = naming.Merge(service.Labels, + spec.Metadata.GetLabelsOrNil()) + } + + // add our labels last so they aren't overwritten + service.Labels = naming.Merge( + service.Labels, + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RoleReplica, + }) + + // The TargetPort must be the name (not the number) of the PostgreSQL + // ContainerPort. This name allows the port number to differ between Pods, + // which can happen during a rolling update. + servicePort := corev1.ServicePort{ + Name: naming.PortPostgreSQL, + Port: *cluster.Spec.Port, + Protocol: corev1.ProtocolTCP, + TargetPort: intstr.FromString(naming.PortPostgreSQL), + } + + // Default to a service type of ClusterIP + service.Spec.Type = corev1.ServiceTypeClusterIP + + // Check user provided spec for a specified type + if spec := cluster.Spec.ReplicaService; spec != nil { + service.Spec.Type = corev1.ServiceType(spec.Type) + if spec.NodePort != nil { + if service.Spec.Type == corev1.ServiceTypeClusterIP { + // The NodePort can only be set when the Service type is NodePort or + // LoadBalancer. However, due to a known issue prior to Kubernetes + // 1.20, we clear these errors during our apply. To preserve the + // appropriate behavior, we log an Event and return an error. + // TODO(tjmoore4): Once Validation Rules are available, this check + // and event could potentially be removed in favor of that validation + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "MisconfiguredClusterIP", + "NodePort cannot be set with type ClusterIP on Service %q", service.Name) + return nil, fmt.Errorf("NodePort cannot be set with type ClusterIP on Service %q", service.Name) + } + servicePort.NodePort = *spec.NodePort + } + service.Spec.ExternalTrafficPolicy = initialize.FromPointer(spec.ExternalTrafficPolicy) + service.Spec.InternalTrafficPolicy = spec.InternalTrafficPolicy + } + service.Spec.Ports = []corev1.ServicePort{servicePort} + + // Allocate an IP address and let Kubernetes manage the Endpoints by + // selecting Pods with the Patroni replica role. 
+ // - https://docs.k8s.io/concepts/services-networking/service/#defining-a-service + service.Spec.Selector = map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePatroniReplica, + } + + err := errors.WithStack(r.setControllerReference(cluster, service)) + + return service, err +} + +// +kubebuilder:rbac:groups="",resources="services",verbs={create,patch} + +// reconcileClusterReplicaService writes the Service that exposes PostgreSQL +// replica instances. +func (r *Reconciler) reconcileClusterReplicaService( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*corev1.Service, error) { + service, err := r.generateClusterReplicaService(cluster) + + if err == nil { + err = errors.WithStack(r.apply(ctx, service)) + } + return service, err +} + +// reconcileDataSource is responsible for reconciling the data source for a PostgreSQL cluster. +// This involves ensuring the PostgreSQL data directory for the cluster is properly populated +// prior to bootstrapping the cluster, specifically according to any data source configured in the +// PostgresCluster spec. +// TODO(benjaminjb): Right now the spec will accept a dataSource with both a PostgresCluster and +// a PGBackRest section, but the code will only honor the PostgresCluster in that case; this would +// be better handled with a webhook to reject a spec with both `dataSource.postgresCluster` and +// `dataSource.pgbackrest` fields +func (r *Reconciler) reconcileDataSource(ctx context.Context, + cluster *v1beta1.PostgresCluster, observed *observedInstances, + clusterVolumes []corev1.PersistentVolumeClaim, + rootCA *pki.RootCertificateAuthority, + backupsSpecFound bool, +) (bool, error) { + + // a hash func to hash the pgBackRest restore options + hashFunc := func(jobConfigs []string) (string, error) { + return safeHash32(func(w io.Writer) (err error) { + for _, o := range jobConfigs { + _, err = w.Write([]byte(o)) + } + return + }) + } + + // observe all resources currently relevant to reconciling data sources, and update status + // accordingly + endpoints, restoreJob, err := r.observeRestoreEnv(ctx, cluster) + if err != nil { + return false, errors.WithStack(err) + } + + // determine if the user wants to initialize the PG data directory + postgresDataInitRequested := cluster.Spec.DataSource != nil && + (cluster.Spec.DataSource.PostgresCluster != nil || + cluster.Spec.DataSource.PGBackRest != nil) + + // determine if the user has requested an in-place restore + restoreID := cluster.GetAnnotations()[naming.PGBackRestRestore] + restoreInPlaceRequested := restoreID != "" && + cluster.Spec.Backups.PGBackRest.Restore != nil && + *cluster.Spec.Backups.PGBackRest.Restore.Enabled + + // Set the proper data source for the restore based on whether we're initializing the PG + // data directory (e.g. for a new PostgreSQL cluster), or restoring an existing cluster + // in place (and therefore recreating the data directory). If the user hasn't requested + // PG data initialization or an in-place restore, then simply return. 
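[Editor's aside, not part of the patch] reconcileDataSource (above and continuing below) derives a config hash from the restore options so it can tell when the user changes them between restore attempts. The operator's safeHash32 helper is internal; a minimal standalone sketch of the same idea using the standard library's FNV-1a hash, with illustrative names and option values:

package main

import (
	"fmt"
	"hash/fnv"
)

// optionsHash returns a short, stable fingerprint of the restore options.
// If any option changes, the fingerprint changes, signaling that a new
// restore configuration is being requested.
func optionsHash(configs []string) string {
	h := fnv.New32a()
	for _, c := range configs {
		_, _ = h.Write([]byte(c))
	}
	return fmt.Sprintf("%08x", h.Sum32())
}

func main() {
	a := optionsHash([]string{"hippo", "repo1", "--type=time"})
	b := optionsHash([]string{"hippo", "repo1", "--type=immediate"})
	fmt.Println(a != b) // true: changed options produce a different hash
}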
+ var dataSource *v1beta1.PostgresClusterDataSource + var cloudDataSource *v1beta1.PGBackRestDataSource + switch { + case restoreInPlaceRequested: + dataSource = cluster.Spec.Backups.PGBackRest.Restore.PostgresClusterDataSource + case postgresDataInitRequested: + // there is no restore annotation when initializing a new cluster, so we create a + // restore ID for bootstrap + restoreID = "~pgo-bootstrap-" + cluster.GetName() + dataSource = cluster.Spec.DataSource.PostgresCluster + if dataSource == nil { + cloudDataSource = cluster.Spec.DataSource.PGBackRest + } + default: + return false, nil + } + + // check the cluster's conditions to determine if the PG data for the cluster has been + // initialized + dataSourceCondition := meta.FindStatusCondition(cluster.Status.Conditions, + ConditionPostgresDataInitialized) + postgresDataInitialized := dataSourceCondition != nil && + (dataSourceCondition.Status == metav1.ConditionTrue) + + // check the cluster's conditions to determine if an in-place restore is in progress, + // and if the reason for that condition indicates that the cluster has been prepared for + // restore + restoreCondition := meta.FindStatusCondition(cluster.Status.Conditions, + ConditionPGBackRestRestoreProgressing) + restoringInPlace := restoreCondition != nil && + (restoreCondition.Status == metav1.ConditionTrue) + readyForRestore := restoreCondition != nil && + restoringInPlace && + (restoreCondition.Reason == ReasonReadyForRestore) + + // check the restore status to see if the ID for the restore currently being requested (as + // provided by the user via annotation) has changed + var restoreIDStatus string + if cluster.Status.PGBackRest != nil && cluster.Status.PGBackRest.Restore != nil { + restoreIDStatus = cluster.Status.PGBackRest.Restore.ID + } + restoreIDChanged := (restoreID != restoreIDStatus) + + // calculate the configHash for the options in the current data source, and if an existing + // restore Job exists, determine if the config has changed + var configs []string + switch { + case dataSource != nil: + configs = []string{dataSource.ClusterName, dataSource.RepoName} + configs = append(configs, dataSource.Options...) + case cloudDataSource != nil: + configs = []string{cloudDataSource.Stanza, cloudDataSource.Repo.Name} + configs = append(configs, cloudDataSource.Options...) + } + configHash, err := hashFunc(configs) + if err != nil { + return false, errors.WithStack(err) + } + var configChanged bool + if restoreJob != nil { + configChanged = + (configHash != restoreJob.GetAnnotations()[naming.PGBackRestConfigHash]) + } + + // Proceed with preparing the cluster for restore (e.g. tearing down runners, the DCS, + // etc.) if: + // - A restore is already in progress, but the cluster has not yet been prepared + // - A restore is already in progress, but the config hash changed + // - The restore ID has changed (i.e. the user provide a new value for the restore + // annotation, indicating they want a new in-place restore) + if (restoringInPlace && (!readyForRestore || configChanged)) || restoreIDChanged { + if err := r.prepareForRestore(ctx, cluster, observed, endpoints, + restoreJob, restoreID); err != nil { + return true, err + } + // return early and don't restore (i.e. 
populate the data dir) until the cluster is + // prepared for restore + return true, nil + } + + // simply return if data is already initialized + if postgresDataInitialized { + return false, nil + } + + // proceed with initializing the PG data directory if not already initialized + switch { + case dataSource != nil: + if err := r.reconcilePostgresClusterDataSource(ctx, cluster, dataSource, + configHash, clusterVolumes, rootCA, + backupsSpecFound); err != nil { + return true, err + } + case cloudDataSource != nil: + if err := r.reconcileCloudBasedDataSource(ctx, cluster, cloudDataSource, + configHash, clusterVolumes); err != nil { + return true, err + } + } + // return early until the PG data directory is initialized + return true, nil +} diff --git a/internal/controller/postgrescluster/cluster_test.go b/internal/controller/postgrescluster/cluster_test.go new file mode 100644 index 0000000000..be9e371a56 --- /dev/null +++ b/internal/controller/postgrescluster/cluster_test.go @@ -0,0 +1,818 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "testing" + + "github.com/pkg/errors" + "go.opentelemetry.io/otel" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + rbacv1 "k8s.io/api/rbac/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +var gvks = []schema.GroupVersionKind{{ + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "ConfigMapList", +}, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "SecretList", +}, { + Group: appsv1.SchemeGroupVersion.Group, + Version: appsv1.SchemeGroupVersion.Version, + Kind: "StatefulSetList", +}, { + Group: appsv1.SchemeGroupVersion.Group, + Version: appsv1.SchemeGroupVersion.Version, + Kind: "DeploymentList", +}, { + Group: batchv1.SchemeGroupVersion.Group, + Version: batchv1.SchemeGroupVersion.Version, + Kind: "CronJobList", +}, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "PersistentVolumeClaimList", +}, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "ServiceList", +}, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "EndpointsList", +}, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "ServiceAccountList", +}, { + Group: rbacv1.SchemeGroupVersion.Group, + Version: rbacv1.SchemeGroupVersion.Version, + Kind: "RoleBindingList", +}, { + Group: rbacv1.SchemeGroupVersion.Group, + Version: rbacv1.SchemeGroupVersion.Version, + Kind: "RoleList", +}} + +func TestCustomLabels(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 2) + + 
reconciler := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + Recorder: new(record.FakeRecorder), + Tracer: otel.Tracer(t.Name()), + } + + ns := setupNamespace(t, cc) + + reconcileTestCluster := func(cluster *v1beta1.PostgresCluster) { + assert.NilError(t, errors.WithStack(reconciler.Client.Create(ctx, cluster))) + t.Cleanup(func() { + // Remove finalizers, if any, so the namespace can terminate. + assert.Check(t, client.IgnoreNotFound( + reconciler.Client.Patch(ctx, cluster, client.RawPatch( + client.Merge.Type(), []byte(`{"metadata":{"finalizers":[]}}`))))) + }) + + // Reconcile the cluster + result, err := reconciler.Reconcile(ctx, reconcile.Request{ + NamespacedName: client.ObjectKeyFromObject(cluster), + }) + assert.NilError(t, err) + assert.Assert(t, result.Requeue == false) + } + + getUnstructuredLabels := func(cluster v1beta1.PostgresCluster, u unstructured.Unstructured) (map[string]map[string]string, error) { + var err error + labels := map[string]map[string]string{} + + if metav1.IsControlledBy(&u, &cluster) { + switch u.GetKind() { + case "StatefulSet": + var resource appsv1.StatefulSet + err = runtime.DefaultUnstructuredConverter. + FromUnstructured(u.UnstructuredContent(), &resource) + labels["resource"] = resource.GetLabels() + labels["podTemplate"] = resource.Spec.Template.GetLabels() + case "Deployment": + var resource appsv1.Deployment + err = runtime.DefaultUnstructuredConverter. + FromUnstructured(u.UnstructuredContent(), &resource) + labels["resource"] = resource.GetLabels() + labels["podTemplate"] = resource.Spec.Template.GetLabels() + case "CronJob": + var resource batchv1.CronJob + err = runtime.DefaultUnstructuredConverter. + FromUnstructured(u.UnstructuredContent(), &resource) + labels["resource"] = resource.GetLabels() + labels["jobTemplate"] = resource.Spec.JobTemplate.GetLabels() + labels["jobPodTemplate"] = resource.Spec.JobTemplate.Spec.Template.GetLabels() + default: + labels["resource"] = u.GetLabels() + } + } + return labels, err + } + + t.Run("Cluster", func(t *testing.T) { + cluster := testCluster() + cluster.ObjectMeta.Name = "global-cluster" + cluster.ObjectMeta.Namespace = ns.Name + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "daisy-instance1", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }, { + Name: "daisy-instance2", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + cluster.Spec.Metadata = &v1beta1.Metadata{ + Labels: map[string]string{"my.cluster.label": "daisy"}, + } + testCronSchedule := "@yearly" + cluster.Spec.Backups.PGBackRest.Repos[0].BackupSchedules = &v1beta1.PGBackRestBackupSchedules{ + Full: &testCronSchedule, + Differential: &testCronSchedule, + Incremental: &testCronSchedule, + } + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + }, + }) + assert.NilError(t, err) + reconcileTestCluster(cluster) + + for _, gvk := range gvks { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + assert.NilError(t, reconciler.Client.List(ctx, uList, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + + for i := range uList.Items { + u := uList.Items[i] + labels, err := getUnstructuredLabels(*cluster, u) + assert.NilError(t, err) + for resourceType, resourceLabels := range labels { + t.Run(u.GetKind()+"/"+u.GetName()+"/"+resourceType, func(t *testing.T) { + assert.Equal(t, 
resourceLabels["my.cluster.label"], "daisy") + }) + } + } + } + }) + + t.Run("Instance", func(t *testing.T) { + cluster := testCluster() + cluster.ObjectMeta.Name = "instance-cluster" + cluster.ObjectMeta.Namespace = ns.Name + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "max-instance", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + Metadata: &v1beta1.Metadata{ + Labels: map[string]string{"my.instance.label": "max"}, + }, + }, { + Name: "lucy-instance", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + Metadata: &v1beta1.Metadata{ + Labels: map[string]string{"my.instance.label": "lucy"}, + }, + }} + reconcileTestCluster(cluster) + for _, set := range cluster.Spec.InstanceSets { + t.Run(set.Name, func(t *testing.T) { + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: set.Name, + }, + }) + assert.NilError(t, err) + + for _, gvk := range gvks { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + assert.NilError(t, reconciler.Client.List(ctx, uList, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + + for i := range uList.Items { + u := uList.Items[i] + + labels, err := getUnstructuredLabels(*cluster, u) + assert.NilError(t, err) + for resourceType, resourceLabels := range labels { + t.Run(u.GetKind()+"/"+u.GetName()+"/"+resourceType, func(t *testing.T) { + assert.Equal(t, resourceLabels["my.instance.label"], set.Metadata.Labels["my.instance.label"]) + }) + } + } + } + }) + } + + }) + + t.Run("PGBackRest", func(t *testing.T) { + cluster := testCluster() + cluster.ObjectMeta.Name = "pgbackrest-cluster" + cluster.ObjectMeta.Namespace = ns.Name + cluster.Spec.Backups.PGBackRest.Metadata = &v1beta1.Metadata{ + Labels: map[string]string{"my.pgbackrest.label": "lucy"}, + } + testCronSchedule := "@yearly" + cluster.Spec.Backups.PGBackRest.Repos[0].BackupSchedules = &v1beta1.PGBackRestBackupSchedules{ + Full: &testCronSchedule, + Differential: &testCronSchedule, + Incremental: &testCronSchedule, + } + reconcileTestCluster(cluster) + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + }, + MatchExpressions: []metav1.LabelSelectorRequirement{{ + Key: naming.LabelPGBackRest, + Operator: metav1.LabelSelectorOpExists}, + }, + }) + assert.NilError(t, err) + + for _, gvk := range gvks { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + assert.NilError(t, reconciler.Client.List(ctx, uList, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + + for i := range uList.Items { + u := uList.Items[i] + + labels, err := getUnstructuredLabels(*cluster, u) + assert.NilError(t, err) + for resourceType, resourceLabels := range labels { + t.Run(u.GetKind()+"/"+u.GetName()+"/"+resourceType, func(t *testing.T) { + assert.Equal(t, resourceLabels["my.pgbackrest.label"], "lucy") + }) + } + } + } + }) + + t.Run("PGBouncer", func(t *testing.T) { + cluster := testCluster() + cluster.ObjectMeta.Name = "pgbouncer-cluster" + cluster.ObjectMeta.Namespace = ns.Name + cluster.Spec.Proxy.PGBouncer.Metadata = &v1beta1.Metadata{ + Labels: map[string]string{"my.pgbouncer.label": "lucy"}, + } + reconcileTestCluster(cluster) + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + 
naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGBouncer, + }, + }) + assert.NilError(t, err) + + for _, gvk := range gvks { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + assert.NilError(t, reconciler.Client.List(ctx, uList, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + + for i := range uList.Items { + u := uList.Items[i] + + labels, err := getUnstructuredLabels(*cluster, u) + assert.NilError(t, err) + for resourceType, resourceLabels := range labels { + t.Run(u.GetKind()+"/"+u.GetName()+"/"+resourceType, func(t *testing.T) { + assert.Equal(t, resourceLabels["my.pgbouncer.label"], "lucy") + }) + } + } + } + }) +} + +func TestCustomAnnotations(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 2) + + reconciler := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + Recorder: new(record.FakeRecorder), + Tracer: otel.Tracer(t.Name()), + } + + ns := setupNamespace(t, cc) + + reconcileTestCluster := func(cluster *v1beta1.PostgresCluster) { + assert.NilError(t, errors.WithStack(reconciler.Client.Create(ctx, cluster))) + t.Cleanup(func() { + // Remove finalizers, if any, so the namespace can terminate. + assert.Check(t, client.IgnoreNotFound( + reconciler.Client.Patch(ctx, cluster, client.RawPatch( + client.Merge.Type(), []byte(`{"metadata":{"finalizers":[]}}`))))) + }) + + // Reconcile the cluster + result, err := reconciler.Reconcile(ctx, reconcile.Request{ + NamespacedName: client.ObjectKeyFromObject(cluster), + }) + assert.NilError(t, err) + assert.Assert(t, result.Requeue == false) + } + + getUnstructuredAnnotations := func(cluster v1beta1.PostgresCluster, u unstructured.Unstructured) (map[string]map[string]string, error) { + var err error + annotations := map[string]map[string]string{} + + if metav1.IsControlledBy(&u, &cluster) { + switch u.GetKind() { + case "StatefulSet": + var resource appsv1.StatefulSet + err = runtime.DefaultUnstructuredConverter. + FromUnstructured(u.UnstructuredContent(), &resource) + annotations["resource"] = resource.GetAnnotations() + annotations["podTemplate"] = resource.Spec.Template.GetAnnotations() + case "Deployment": + var resource appsv1.Deployment + err = runtime.DefaultUnstructuredConverter. + FromUnstructured(u.UnstructuredContent(), &resource) + annotations["resource"] = resource.GetAnnotations() + annotations["podTemplate"] = resource.Spec.Template.GetAnnotations() + case "CronJob": + var resource batchv1.CronJob + err = runtime.DefaultUnstructuredConverter. 
+ FromUnstructured(u.UnstructuredContent(), &resource) + annotations["resource"] = resource.GetAnnotations() + annotations["jobTemplate"] = resource.Spec.JobTemplate.GetAnnotations() + annotations["jobPodTemplate"] = resource.Spec.JobTemplate.Spec.Template.GetAnnotations() + default: + annotations["resource"] = u.GetAnnotations() + } + } + return annotations, err + } + + t.Run("Cluster", func(t *testing.T) { + cluster := testCluster() + cluster.ObjectMeta.Name = "global-cluster" + cluster.ObjectMeta.Namespace = ns.Name + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "daisy-instance1", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }, { + Name: "daisy-instance2", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + cluster.Spec.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{"my.cluster.annotation": "daisy"}, + } + testCronSchedule := "@yearly" + cluster.Spec.Backups.PGBackRest.Repos[0].BackupSchedules = &v1beta1.PGBackRestBackupSchedules{ + Full: &testCronSchedule, + Differential: &testCronSchedule, + Incremental: &testCronSchedule, + } + reconcileTestCluster(cluster) + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + }, + }) + assert.NilError(t, err) + + for _, gvk := range gvks { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + assert.NilError(t, reconciler.Client.List(ctx, uList, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + + for i := range uList.Items { + u := uList.Items[i] + annotations, err := getUnstructuredAnnotations(*cluster, u) + assert.NilError(t, err) + for resourceType, resourceAnnotations := range annotations { + t.Run(u.GetKind()+"/"+u.GetName()+"/"+resourceType, func(t *testing.T) { + assert.Equal(t, resourceAnnotations["my.cluster.annotation"], "daisy") + }) + } + } + } + }) + + t.Run("Instance", func(t *testing.T) { + cluster := testCluster() + cluster.ObjectMeta.Name = "instance-cluster" + cluster.ObjectMeta.Namespace = ns.Name + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "max-instance", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + Metadata: &v1beta1.Metadata{ + Annotations: map[string]string{"my.instance.annotation": "max"}, + }, + }, { + Name: "lucy-instance", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + Metadata: &v1beta1.Metadata{ + Annotations: map[string]string{"my.instance.annotation": "lucy"}, + }, + }} + reconcileTestCluster(cluster) + for _, set := range cluster.Spec.InstanceSets { + t.Run(set.Name, func(t *testing.T) { + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: set.Name, + }, + }) + assert.NilError(t, err) + + for _, gvk := range gvks { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + assert.NilError(t, reconciler.Client.List(ctx, uList, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + + for i := range uList.Items { + u := uList.Items[i] + + annotations, err := getUnstructuredAnnotations(*cluster, u) + assert.NilError(t, err) + for resourceType, resourceAnnotations := range annotations { + t.Run(u.GetKind()+"/"+u.GetName()+"/"+resourceType, func(t *testing.T) { + assert.Equal(t, 
resourceAnnotations["my.instance.annotation"], set.Metadata.Annotations["my.instance.annotation"]) + }) + } + } + } + }) + } + + }) + + t.Run("PGBackRest", func(t *testing.T) { + cluster := testCluster() + cluster.ObjectMeta.Name = "pgbackrest-cluster" + cluster.ObjectMeta.Namespace = ns.Name + cluster.Spec.Backups.PGBackRest.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{"my.pgbackrest.annotation": "lucy"}, + } + testCronSchedule := "@yearly" + cluster.Spec.Backups.PGBackRest.Repos[0].BackupSchedules = &v1beta1.PGBackRestBackupSchedules{ + Full: &testCronSchedule, + Differential: &testCronSchedule, + Incremental: &testCronSchedule, + } + reconcileTestCluster(cluster) + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + }, + MatchExpressions: []metav1.LabelSelectorRequirement{{ + Key: naming.LabelPGBackRest, + Operator: metav1.LabelSelectorOpExists}, + }, + }) + assert.NilError(t, err) + + for _, gvk := range gvks { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + assert.NilError(t, reconciler.Client.List(ctx, uList, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + + for i := range uList.Items { + u := uList.Items[i] + + annotations, err := getUnstructuredAnnotations(*cluster, u) + assert.NilError(t, err) + for resourceType, resourceAnnotations := range annotations { + t.Run(u.GetKind()+"/"+u.GetName()+"/"+resourceType, func(t *testing.T) { + assert.Equal(t, resourceAnnotations["my.pgbackrest.annotation"], "lucy") + }) + } + } + } + }) + + t.Run("PGBouncer", func(t *testing.T) { + cluster := testCluster() + cluster.ObjectMeta.Name = "pgbouncer-cluster" + cluster.ObjectMeta.Namespace = ns.Name + cluster.Spec.Proxy.PGBouncer.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{"my.pgbouncer.annotation": "lucy"}, + } + reconcileTestCluster(cluster) + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGBouncer, + }, + }) + assert.NilError(t, err) + + for _, gvk := range gvks { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + assert.NilError(t, reconciler.Client.List(ctx, uList, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + + for i := range uList.Items { + u := uList.Items[i] + + annotations, err := getUnstructuredAnnotations(*cluster, u) + assert.NilError(t, err) + for resourceType, resourceAnnotations := range annotations { + t.Run(u.GetKind()+"/"+u.GetName()+"/"+resourceType, func(t *testing.T) { + assert.Equal(t, resourceAnnotations["my.pgbouncer.annotation"], "lucy") + }) + } + } + } + }) +} + +func TestGenerateClusterPrimaryService(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &Reconciler{Client: cc} + + cluster := &v1beta1.PostgresCluster{} + cluster.Namespace = "ns2" + cluster.Name = "pg5" + cluster.Spec.Port = initialize.Int32(2600) + + leader := &corev1.Service{} + leader.Spec.ClusterIP = "1.9.8.3" + + _, _, err := reconciler.generateClusterPrimaryService(cluster, nil) + assert.ErrorContains(t, err, "not implemented") + + alwaysExpect := func(t testing.TB, service *corev1.Service, endpoints *corev1.Endpoints) { + assert.Assert(t, cmp.MarshalMatches(service.TypeMeta, ` +apiVersion: v1 +kind: Service + `)) + assert.Assert(t, cmp.MarshalMatches(service.ObjectMeta, ` 
+creationTimestamp: null +labels: + postgres-operator.crunchydata.com/cluster: pg5 + postgres-operator.crunchydata.com/role: primary +name: pg5-primary +namespace: ns2 +ownerReferences: +- apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: pg5 + uid: "" + `)) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: postgres + port: 2600 + protocol: TCP + targetPort: postgres + `)) + + assert.Equal(t, service.Spec.ClusterIP, "None") + assert.Assert(t, service.Spec.Selector == nil, + "got %v", service.Spec.Selector) + + assert.Assert(t, cmp.MarshalMatches(endpoints, ` +apiVersion: v1 +kind: Endpoints +metadata: + creationTimestamp: null + labels: + postgres-operator.crunchydata.com/cluster: pg5 + postgres-operator.crunchydata.com/role: primary + name: pg5-primary + namespace: ns2 + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: pg5 + uid: "" +subsets: +- addresses: + - ip: 1.9.8.3 + ports: + - name: postgres + port: 2600 + protocol: TCP + `)) + } + + service, endpoints, err := reconciler.generateClusterPrimaryService(cluster, leader) + assert.NilError(t, err) + alwaysExpect(t, service, endpoints) + + t.Run("LeaderLoadBalancer", func(t *testing.T) { + leader := leader.DeepCopy() + leader.Spec.Type = corev1.ServiceTypeLoadBalancer + leader.Status.LoadBalancer.Ingress = []corev1.LoadBalancerIngress{ + {IP: "55.44.33.22"}, + {IP: "99.88.77.66", Hostname: "some.host"}, + {IP: "1.2.3.4", Hostname: "only.the.first"}, + } + + service, endpoints, err := reconciler.generateClusterPrimaryService(cluster, leader) + assert.NilError(t, err) + alwaysExpect(t, service, endpoints) + + // generateClusterPrimaryService no longer sets ExternalIPs or ExternalName from + // LoadBalancer-type leader service + // - https://cloud.google.com/anthos/clusters/docs/security-bulletins#gcp-2020-015 + assert.Equal(t, len(service.Spec.ExternalIPs), 0) + assert.Equal(t, service.Spec.ExternalName, "") + }) +} + +func TestReconcileClusterPrimaryService(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + reconciler := &Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + + cluster := testCluster() + cluster.Namespace = setupNamespace(t, cc).Name + assert.NilError(t, cc.Create(ctx, cluster)) + + _, err := reconciler.reconcileClusterPrimaryService(ctx, cluster, nil) + assert.ErrorContains(t, err, "not implemented") + + leader := &corev1.Service{} + leader.Spec.ClusterIP = "192.0.2.10" + + service, err := reconciler.reconcileClusterPrimaryService(ctx, cluster, leader) + assert.NilError(t, err) + assert.Assert(t, service != nil && service.UID != "", "expected created service") +} + +func TestGenerateClusterReplicaServiceIntent(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &Reconciler{Client: cc} + + cluster := &v1beta1.PostgresCluster{} + cluster.Namespace = "ns1" + cluster.Name = "pg2" + cluster.Spec.Port = initialize.Int32(9876) + + service, err := reconciler.generateClusterReplicaService(cluster) + assert.NilError(t, err) + + alwaysExpect := func(t testing.TB, service *corev1.Service) { + assert.Assert(t, cmp.MarshalMatches(service.TypeMeta, ` +apiVersion: v1 +kind: Service + `)) + assert.Assert(t, cmp.MarshalMatches(service.ObjectMeta, ` +creationTimestamp: null +labels: + postgres-operator.crunchydata.com/cluster: 
pg2 + postgres-operator.crunchydata.com/role: replica +name: pg2-replicas +namespace: ns1 +ownerReferences: +- apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: pg2 + uid: "" + `)) + } + + alwaysExpect(t, service) + assert.Assert(t, cmp.MarshalMatches(service.Spec, ` +ports: +- name: postgres + port: 9876 + protocol: TCP + targetPort: postgres +selector: + postgres-operator.crunchydata.com/cluster: pg2 + postgres-operator.crunchydata.com/role: replica +type: ClusterIP + `)) + + types := []struct { + Type string + Expect func(testing.TB, *corev1.Service) + }{ + {Type: "ClusterIP", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeClusterIP) + }}, + {Type: "NodePort", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeNodePort) + }}, + {Type: "LoadBalancer", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeLoadBalancer) + }}, + } + + for _, test := range types { + t.Run(test.Type, func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.ReplicaService = &v1beta1.ServiceSpec{Type: test.Type} + + service, err := reconciler.generateClusterReplicaService(cluster) + assert.NilError(t, err) + alwaysExpect(t, service) + test.Expect(t, service) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: postgres + port: 9876 + protocol: TCP + targetPort: postgres + `)) + }) + } + + t.Run("AnnotationsLabels", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{"some": "note"}, + Labels: map[string]string{"happy": "label"}, + } + + service, err := reconciler.generateClusterReplicaService(cluster) + assert.NilError(t, err) + + // Annotations present in the metadata. + assert.Assert(t, cmp.MarshalMatches(service.ObjectMeta.Annotations, ` +some: note + `)) + + // Labels present in the metadata. + assert.Assert(t, cmp.MarshalMatches(service.ObjectMeta.Labels, ` +happy: label +postgres-operator.crunchydata.com/cluster: pg2 +postgres-operator.crunchydata.com/role: replica + `)) + + // Labels not in the selector. + assert.Assert(t, cmp.MarshalMatches(service.Spec.Selector, ` +postgres-operator.crunchydata.com/cluster: pg2 +postgres-operator.crunchydata.com/role: replica + `)) + }) +} diff --git a/internal/controller/postgrescluster/controller.go b/internal/controller/postgrescluster/controller.go new file mode 100644 index 0000000000..d459d30a10 --- /dev/null +++ b/internal/controller/postgrescluster/controller.go @@ -0,0 +1,527 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "errors" + "fmt" + "io" + "time" + + "go.opentelemetry.io/otel/trace" + appsv1 "k8s.io/api/apps/v1" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + policyv1 "k8s.io/api/policy/v1" + rbacv1 "k8s.io/api/rbac/v1" + "k8s.io/apimachinery/pkg/api/equality" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/validation/field" + "k8s.io/client-go/discovery" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/pgaudit" + "github.com/crunchydata/postgres-operator/internal/pgbackrest" + "github.com/crunchydata/postgres-operator/internal/pgbouncer" + "github.com/crunchydata/postgres-operator/internal/pgmonitor" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/internal/registration" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // ControllerName is the name of the PostgresCluster controller + ControllerName = "postgrescluster-controller" +) + +// Reconciler holds resources for the PostgresCluster reconciler +type Reconciler struct { + Client client.Client + DiscoveryClient *discovery.DiscoveryClient + IsOpenShift bool + Owner client.FieldOwner + PodExec func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error + Recorder record.EventRecorder + Registration registration.Registration + Tracer trace.Tracer +} + +// +kubebuilder:rbac:groups="",resources="events",verbs={create,patch} +// +kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="postgresclusters",verbs={get,list,watch} +// +kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="postgresclusters/status",verbs={patch} + +// Reconcile reconciles a PostgresCluster in a namespace managed by the PostgreSQL Operator +func (r *Reconciler) Reconcile( + ctx context.Context, request reconcile.Request) (reconcile.Result, error, +) { + ctx, span := r.Tracer.Start(ctx, "Reconcile") + log := logging.FromContext(ctx) + defer span.End() + + // get the postgrescluster from the cache + cluster := &v1beta1.PostgresCluster{} + if err := r.Client.Get(ctx, request.NamespacedName, cluster); err != nil { + // NotFound cannot be fixed by requeuing so ignore it. During background + // deletion, we receive delete events from cluster's dependents after + // cluster is deleted. + if err = client.IgnoreNotFound(err); err != nil { + log.Error(err, "unable to fetch PostgresCluster") + span.RecordError(err) + } + return runtime.ErrorWithBackoff(err) + } + + // Set any defaults that may not have been stored in the API. No DeepCopy + // is necessary because controller-runtime makes a copy before returning + // from its cache.
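+ // These defaults exist only on the in-memory copy; the defaulted spec is never written back to the API server.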
+ cluster.Default() + + if cluster.Spec.OpenShift == nil { + cluster.Spec.OpenShift = &r.IsOpenShift + } + + // Keep a copy of cluster prior to any manipulations. + before := cluster.DeepCopy() + + // NOTE(cbandy): When a namespace is deleted, objects owned by a + // PostgresCluster may be deleted before the PostgresCluster is deleted. + // When this happens, any attempt to reconcile those objects is rejected + // as Forbidden: "unable to create new content in namespace … because it is + // being terminated". + + // Check for and handle deletion of cluster. Return early if it is being + // deleted or there was an error. + if result, err := r.handleDelete(ctx, cluster); err != nil { + span.RecordError(err) + log.Error(err, "deleting") + return runtime.ErrorWithBackoff(err) + + } else if result != nil { + if log := log.V(1); log.Enabled() { + log.Info("deleting", "result", fmt.Sprintf("%+v", *result)) + } + return *result, nil + } + + // Perform initial validation on a cluster + // TODO: Move this to a defaulting (mutating admission) webhook + // to leverage regular validation. + + // verify all needed image values are defined + if err := config.VerifyImageValues(cluster); err != nil { + // warning event with missing image information + r.Recorder.Event(cluster, corev1.EventTypeWarning, "MissingRequiredImage", + err.Error()) + // specifically allow reconciliation if the cluster is shutdown to + // facilitate upgrades, otherwise return + if !initialize.FromPointer(cluster.Spec.Shutdown) { + return runtime.ErrorWithBackoff(err) + } + } + + if cluster.Spec.Standby != nil && + cluster.Spec.Standby.Enabled && + cluster.Spec.Standby.Host == "" && + cluster.Spec.Standby.RepoName == "" { + // When a standby cluster is requested but a repoName or host is not provided + // the cluster will be created as a non-standby. 
Reject any clusters with + // this configuration and provide an event + path := field.NewPath("spec", "standby") + err := field.Invalid(path, cluster.Name, "Standby requires a host or repoName to be enabled") + r.Recorder.Event(cluster, corev1.EventTypeWarning, "InvalidStandbyConfiguration", err.Error()) + return runtime.ErrorWithBackoff(err) + } + + var ( + clusterConfigMap *corev1.ConfigMap + clusterReplicationSecret *corev1.Secret + clusterPodService *corev1.Service + clusterVolumes []corev1.PersistentVolumeClaim + instanceServiceAccount *corev1.ServiceAccount + instances *observedInstances + patroniLeaderService *corev1.Service + primaryCertificate *corev1.SecretProjection + primaryService *corev1.Service + replicaService *corev1.Service + rootCA *pki.RootCertificateAuthority + monitoringSecret *corev1.Secret + exporterQueriesConfig *corev1.ConfigMap + exporterWebConfig *corev1.ConfigMap + err error + backupsSpecFound bool + backupsReconciliationAllowed bool + dedicatedSnapshotPVC *corev1.PersistentVolumeClaim + ) + + patchClusterStatus := func() error { + if !equality.Semantic.DeepEqual(before.Status, cluster.Status) { + // NOTE(cbandy): Kubernetes prior to v1.16.10 and v1.17.6 does not track + // managed fields on the status subresource: https://issue.k8s.io/88901 + if err := r.Client.Status().Patch( + ctx, cluster, client.MergeFrom(before), r.Owner); err != nil { + log.Error(err, "patching cluster status") + return err + } + log.V(1).Info("patched cluster status") + } + return nil + } + + if r.Registration != nil && r.Registration.Required(r.Recorder, cluster, &cluster.Status.Conditions) { + registration.SetAdvanceWarning(r.Recorder, cluster, &cluster.Status.Conditions) + } + cluster.Status.RegistrationRequired = nil + cluster.Status.TokenRequired = "" + + // if the cluster is paused, set a condition and return + if cluster.Spec.Paused != nil && *cluster.Spec.Paused { + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + Type: v1beta1.PostgresClusterProgressing, + Status: metav1.ConditionFalse, + Reason: "Paused", + Message: "No spec changes will be applied and no other statuses will be updated.", + + ObservedGeneration: cluster.GetGeneration(), + }) + return runtime.ErrorWithBackoff(patchClusterStatus()) + } else { + meta.RemoveStatusCondition(&cluster.Status.Conditions, v1beta1.PostgresClusterProgressing) + } + + if err == nil { + backupsSpecFound, backupsReconciliationAllowed, err = r.BackupsEnabled(ctx, cluster) + + // If we cannot reconcile because the backup reconciliation is paused, set a condition and exit + if !backupsReconciliationAllowed { + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + Type: v1beta1.PostgresClusterProgressing, + Status: metav1.ConditionFalse, + Reason: "Paused", + Message: "Reconciliation is paused: please fill in spec.backups " + + "or add the postgres-operator.crunchydata.com/authorizeBackupRemoval " + + "annotation to authorize backup removal.", + + ObservedGeneration: cluster.GetGeneration(), + }) + return runtime.ErrorWithBackoff(patchClusterStatus()) + } else { + meta.RemoveStatusCondition(&cluster.Status.Conditions, v1beta1.PostgresClusterProgressing) + } + } + + pgHBAs := postgres.NewHBAs() + pgmonitor.PostgreSQLHBAs(cluster, &pgHBAs) + pgbouncer.PostgreSQL(cluster, &pgHBAs) + + pgParameters := postgres.NewParameters() + pgaudit.PostgreSQLParameters(&pgParameters) + pgbackrest.PostgreSQL(cluster, &pgParameters, backupsSpecFound) + pgmonitor.PostgreSQLParameters(cluster, &pgParameters) + + // Set 
huge_pages = try if a hugepages resource limit > 0, otherwise set "off" + postgres.SetHugePages(cluster, &pgParameters) + + if err == nil { + rootCA, err = r.reconcileRootCertificate(ctx, cluster) + } + + if err == nil { + // Since any existing data directories must be moved prior to bootstrapping the + // cluster, further reconciliation will not occur until the directory move Jobs + // (if configured) have completed. Func reconcileDirMoveJobs() will therefore + // return a bool indicating that the controller should return early while any + // required Jobs are running, after which it will indicate that an early + // return is no longer needed, and reconciliation can proceed normally. + returnEarly, err := r.reconcileDirMoveJobs(ctx, cluster) + if err != nil || returnEarly { + return runtime.ErrorWithBackoff(errors.Join(err, patchClusterStatus())) + } + } + if err == nil { + clusterVolumes, err = r.observePersistentVolumeClaims(ctx, cluster) + } + if err == nil { + clusterVolumes, err = r.configureExistingPVCs(ctx, cluster, clusterVolumes) + } + if err == nil { + instances, err = r.observeInstances(ctx, cluster) + } + + result := reconcile.Result{} + + if err == nil { + var requeue time.Duration + if requeue, err = r.reconcilePatroniStatus(ctx, cluster, instances); err == nil && requeue > 0 { + result.RequeueAfter = requeue + } + } + if err == nil { + err = r.reconcilePatroniSwitchover(ctx, cluster, instances) + } + // reconcile the Pod service before reconciling any data source in case it is necessary + // to start Pods during data source reconciliation that require network connections (e.g. + // if it is necessary to start a dedicated repo host to bootstrap a new cluster using its + // own existing backups). + if err == nil { + clusterPodService, err = r.reconcileClusterPodService(ctx, cluster) + } + // reconcile the RBAC resources before reconciling any data source in case + // restore/move Job pods require the ServiceAccount to access any data source. + // e.g., we are restoring from an S3 source using an IAM for access + // - https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-technical-overview.html + if err == nil { + instanceServiceAccount, err = r.reconcileRBACResources(ctx, cluster) + } + // First handle reconciling any data source configured for the PostgresCluster. This includes + // reconciling the data source defined to bootstrap a new cluster, as well as a reconciling + // a data source to perform restore in-place and re-bootstrap the cluster. + if err == nil { + // Since the PostgreSQL data source needs to be populated prior to bootstrapping the + // cluster, further reconciliation will not occur until the data source (if configured) is + // initialized. Func reconcileDataSource() will therefore return a bool indicating that + // the controller should return early while data initialization is in progress, after + // which it will indicate that an early return is no longer needed, and reconciliation + // can proceed normally. 
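+ // Note: err is shadowed by the short variable declaration below, which is safe only because this block returns immediately whenever that err is non-nil.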
+ returnEarly, err := r.reconcileDataSource(ctx, cluster, instances, clusterVolumes, rootCA, backupsSpecFound) + if err != nil || returnEarly { + return runtime.ErrorWithBackoff(errors.Join(err, patchClusterStatus())) + } + } + if err == nil { + clusterConfigMap, err = r.reconcileClusterConfigMap(ctx, cluster, pgHBAs, pgParameters) + } + if err == nil { + clusterReplicationSecret, err = r.reconcileReplicationSecret(ctx, cluster, rootCA) + } + if err == nil { + patroniLeaderService, err = r.reconcilePatroniLeaderLease(ctx, cluster) + } + if err == nil { + primaryService, err = r.reconcileClusterPrimaryService(ctx, cluster, patroniLeaderService) + } + if err == nil { + replicaService, err = r.reconcileClusterReplicaService(ctx, cluster) + } + if err == nil { + primaryCertificate, err = r.reconcileClusterCertificate(ctx, rootCA, cluster, primaryService, replicaService) + } + if err == nil { + err = r.reconcilePatroniDistributedConfiguration(ctx, cluster) + } + if err == nil { + err = r.reconcilePatroniDynamicConfiguration(ctx, cluster, instances, pgHBAs, pgParameters) + } + if err == nil { + monitoringSecret, err = r.reconcileMonitoringSecret(ctx, cluster) + } + if err == nil { + exporterQueriesConfig, err = r.reconcileExporterQueriesConfig(ctx, cluster) + } + if err == nil { + exporterWebConfig, err = r.reconcileExporterWebConfig(ctx, cluster) + } + if err == nil { + err = r.reconcileInstanceSets( + ctx, cluster, clusterConfigMap, clusterReplicationSecret, rootCA, + clusterPodService, instanceServiceAccount, instances, patroniLeaderService, + primaryCertificate, clusterVolumes, exporterQueriesConfig, exporterWebConfig, + backupsSpecFound, + ) + } + + if err == nil { + err = r.reconcilePostgresDatabases(ctx, cluster, instances) + } + if err == nil { + err = r.reconcilePostgresUsers(ctx, cluster, instances) + } + + if err == nil { + var next reconcile.Result + if next, err = r.reconcilePGBackRest(ctx, cluster, + instances, rootCA, backupsSpecFound); err == nil && !next.IsZero() { + result.Requeue = result.Requeue || next.Requeue + if next.RequeueAfter > 0 { + result.RequeueAfter = next.RequeueAfter + } + } + } + if err == nil { + dedicatedSnapshotPVC, err = r.reconcileDedicatedSnapshotVolume(ctx, cluster, clusterVolumes) + } + if err == nil { + err = r.reconcileVolumeSnapshots(ctx, cluster, dedicatedSnapshotPVC) + } + if err == nil { + err = r.reconcilePGBouncer(ctx, cluster, instances, primaryCertificate, rootCA) + } + if err == nil { + err = r.reconcilePGMonitor(ctx, cluster, instances, monitoringSecret) + } + if err == nil { + err = r.reconcileDatabaseInitSQL(ctx, cluster, instances) + } + if err == nil { + err = r.reconcilePGAdmin(ctx, cluster) + } + if err == nil { + // This is after [Reconciler.rolloutInstances] to ensure that recreating + // Pods takes precedence. + err = r.handlePatroniRestarts(ctx, cluster, instances) + } + + // at this point everything reconciled successfully, and we can update the + // observedGeneration + cluster.Status.ObservedGeneration = cluster.GetGeneration() + + log.V(1).Info("reconciled cluster") + + return result, errors.Join(err, patchClusterStatus()) +} + +// deleteControlled safely deletes object when it is controlled by cluster. 
+func (r *Reconciler) deleteControlled( + ctx context.Context, cluster *v1beta1.PostgresCluster, object client.Object, +) error { + if metav1.IsControlledBy(object, cluster) { + uid := object.GetUID() + version := object.GetResourceVersion() + exactly := client.Preconditions{UID: &uid, ResourceVersion: &version} + + return r.Client.Delete(ctx, object, exactly) + } + + return nil +} + +// patch sends patch to object's endpoint in the Kubernetes API and updates +// object with any returned content. The fieldManager is set to r.Owner, but +// can be overridden in options. +// - https://docs.k8s.io/reference/using-api/server-side-apply/#managers +func (r *Reconciler) patch( + ctx context.Context, object client.Object, + patch client.Patch, options ...client.PatchOption, +) error { + options = append([]client.PatchOption{r.Owner}, options...) + return r.Client.Patch(ctx, object, patch, options...) +} + +// The owner reference created by controllerutil.SetControllerReference blocks +// deletion. The OwnerReferencesPermissionEnforcement plugin requires that the +// creator of such a reference have either "delete" permission on the owner or +// "update" permission on the owner's "finalizers" subresource. +// - https://docs.k8s.io/reference/access-authn-authz/admission-controllers/ +// +kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="postgresclusters/finalizers",verbs={update} + +// setControllerReference sets owner as a Controller OwnerReference on controlled. +// Only one OwnerReference can be a controller, so it returns an error if another +// is already set. +func (r *Reconciler) setControllerReference( + owner *v1beta1.PostgresCluster, controlled client.Object, +) error { + return controllerutil.SetControllerReference(owner, controlled, r.Client.Scheme()) +} + +// setOwnerReference sets an OwnerReference on the object without setting the +// owner as a controller. This allows for multiple OwnerReferences on an object. 
+func (r *Reconciler) setOwnerReference( + owner *v1beta1.PostgresCluster, controlled client.Object, +) error { + return controllerutil.SetOwnerReference(owner, controlled, r.Client.Scheme()) +} + +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={get,list,watch} +// +kubebuilder:rbac:groups="",resources="endpoints",verbs={get,list,watch} +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={get,list,watch} +// +kubebuilder:rbac:groups="",resources="secrets",verbs={get,list,watch} +// +kubebuilder:rbac:groups="",resources="services",verbs={get,list,watch} +// +kubebuilder:rbac:groups="",resources="serviceaccounts",verbs={get,list,watch} +// +kubebuilder:rbac:groups="apps",resources="deployments",verbs={get,list,watch} +// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={get,list,watch} +// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={get,list,watch} +// +kubebuilder:rbac:groups="rbac.authorization.k8s.io",resources="roles",verbs={get,list,watch} +// +kubebuilder:rbac:groups="rbac.authorization.k8s.io",resources="rolebindings",verbs={get,list,watch} +// +kubebuilder:rbac:groups="batch",resources="cronjobs",verbs={get,list,watch} +// +kubebuilder:rbac:groups="policy",resources="poddisruptionbudgets",verbs={get,list,watch} + +// SetupWithManager adds the PostgresCluster controller to the provided runtime manager +func (r *Reconciler) SetupWithManager(mgr manager.Manager) error { + if r.PodExec == nil { + var err error + r.PodExec, err = runtime.NewPodExecutor(mgr.GetConfig()) + if err != nil { + return err + } + } + + if r.DiscoveryClient == nil { + var err error + r.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(mgr.GetConfig()) + if err != nil { + return err + } + } + + return builder.ControllerManagedBy(mgr). + For(&v1beta1.PostgresCluster{}). + Owns(&corev1.ConfigMap{}). + Owns(&corev1.Endpoints{}). + Owns(&corev1.PersistentVolumeClaim{}). + Owns(&corev1.Secret{}). + Owns(&corev1.Service{}). + Owns(&corev1.ServiceAccount{}). + Owns(&appsv1.Deployment{}). + Owns(&appsv1.StatefulSet{}). + Owns(&batchv1.Job{}). + Owns(&rbacv1.Role{}). + Owns(&rbacv1.RoleBinding{}). + Owns(&batchv1.CronJob{}). + Owns(&policyv1.PodDisruptionBudget{}). + Watches(&corev1.Pod{}, r.watchPods()). + Watches(&appsv1.StatefulSet{}, + r.controllerRefHandlerFuncs()). // watch all StatefulSets + Complete(r) +} + +// GroupVersionKindExists checks to see whether a given Kind for a given +// GroupVersion exists in the Kubernetes API Server. +func (r *Reconciler) GroupVersionKindExists(groupVersion, kind string) (*bool, error) { + if r.DiscoveryClient == nil { + return initialize.Bool(false), nil + } + + resourceList, err := r.DiscoveryClient.ServerResourcesForGroupVersion(groupVersion) + if err != nil { + if apierrors.IsNotFound(err) { + return initialize.Bool(false), nil + } + + return nil, err + } + + for _, resource := range resourceList.APIResources { + if resource.Kind == kind { + return initialize.Bool(true), nil + } + } + + return initialize.Bool(false), nil +} diff --git a/internal/controller/postgrescluster/controller_ref_manager.go b/internal/controller/postgrescluster/controller_ref_manager.go new file mode 100644 index 0000000000..8c4a34189f --- /dev/null +++ b/internal/controller/postgrescluster/controller_ref_manager.go @@ -0,0 +1,204 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + "k8s.io/client-go/util/workqueue" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/handler" + + "github.com/crunchydata/postgres-operator/internal/kubeapi" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// adoptObject adopts the provided Object by adding controller owner refs for the provided +// PostgresCluster. +func (r *Reconciler) adoptObject(ctx context.Context, postgresCluster *v1beta1.PostgresCluster, + obj client.Object) error { + + if err := controllerutil.SetControllerReference(postgresCluster, obj, + r.Client.Scheme()); err != nil { + return err + } + + patchBytes, err := kubeapi.NewMergePatch(). + Add("metadata", "ownerReferences")(obj.GetOwnerReferences()).Bytes() + if err != nil { + return err + } + + return r.Client.Patch(ctx, obj, client.RawPatch(types.StrategicMergePatchType, + patchBytes), &client.PatchOptions{ + FieldManager: ControllerName, + }) +} + +// claimObject is responsible for adopting or releasing Objects based on their current +// controller ownership and whether or not they meet the provided labeling requirements. +// This solution is modeled after the ControllerRefManager logic as found within the controller +// package in the Kubernetes source: +// https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/controller_ref_manager.go +// +// TODO do a non-cache based get of the PostgresCluster prior to adopting anything to prevent +// race conditions with the garbage collector (see +// https://github.com/kubernetes/kubernetes/issues/42639) +func (r *Reconciler) claimObject(ctx context.Context, postgresCluster *v1beta1.PostgresCluster, + obj client.Object) error { + + controllerRef := metav1.GetControllerOfNoCopy(obj) + if controllerRef != nil { + // if not owned by this postgrescluster then ignore + if controllerRef.UID != postgresCluster.GetUID() { + return nil + } + + // If owned by this PostgresCluster and the proper PostgresCluster label is present then + // return true. Additional labels checks can be added here as needed to determine whether + // or not a StatefulSet is part of a PostgreSQL cluster and should be adopted or released. + if _, ok := obj.GetLabels()[naming.LabelCluster]; ok { + return nil + } + + // If owned but selector doesn't match, then attempt to release. However, if the + // PostgresCluster is being deleted then simply return. + if postgresCluster.GetDeletionTimestamp() != nil { + return nil + } + + if err := r.releaseObject(ctx, postgresCluster, + obj); client.IgnoreNotFound(err) != nil { + return err + } + + // successfully released resource or resource no longer exists + return nil + } + + // At this point the resource has no controller ref and is therefore an orphan. 
Ignore if + // either the PostgresCluster resource or the orphaned resource is being deleted, or if the + // orphaned resource doesn't include the proper PostgresCluster label + _, hasPGClusterLabel := obj.GetLabels()[naming.LabelCluster] + if postgresCluster.GetDeletionTimestamp() != nil || !hasPGClusterLabel { + return nil + } + if obj.GetDeletionTimestamp() != nil { + return nil + } + if err := r.adoptObject(ctx, postgresCluster, obj); err != nil { + // If adopt attempt failed because the resource no longer exists, then simply + // ignore. Otherwise return the error. + if apierrors.IsNotFound(err) { + return nil + } + return err + } + + // successfully adopted resource + return nil +} + +// getPostgresClusterForObject is responsible for obtaining the PostgresCluster associated +// with an Object. +func (r *Reconciler) getPostgresClusterForObject(ctx context.Context, + obj client.Object) (bool, *v1beta1.PostgresCluster, error) { + + clusterName := "" + + // first see if it has a PostgresCluster ownership ref or a PostgresCluster label + controllerRef := metav1.GetControllerOfNoCopy(obj) + if controllerRef != nil && controllerRef.Kind == "PostgresCluster" { + clusterName = controllerRef.Name + } else if _, ok := obj.GetLabels()[naming.LabelCluster]; ok { + clusterName = obj.GetLabels()[naming.LabelCluster] + } + + if clusterName == "" { + return false, nil, nil + } + + postgresCluster := &v1beta1.PostgresCluster{} + if err := r.Client.Get(ctx, types.NamespacedName{ + Name: clusterName, + Namespace: obj.GetNamespace(), + }, postgresCluster); err != nil { + if apierrors.IsNotFound(err) { + return false, nil, nil + } + return false, nil, err + } + + return true, postgresCluster, nil +} + +// manageControllerRefs is responsible for determining whether or not an attempt should be made +// to adopt or release/orphan an Object. This includes obtaining the PostgresCluster for +// the Object and then calling the logic needed to either adopt or release it. +func (r *Reconciler) manageControllerRefs(ctx context.Context, + obj client.Object) error { + + found, postgresCluster, err := r.getPostgresClusterForObject(ctx, obj) + if err != nil { + return err + } + if !found { + return nil + } + + return r.claimObject(ctx, postgresCluster, obj) +} + +// releaseObject releases the provided Object from ownership by the provided +// PostgresCluster. This is done by removing the PostgresCluster's controller owner +// refs from the Object. +func (r *Reconciler) releaseObject(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, obj client.Object) error { + + // TODO create a strategic merge type in kubeapi instead of using Merge7386 + patch, err := kubeapi.NewMergePatch(). + Add("metadata", "ownerReferences")([]map[string]string{{ + "$patch": "delete", + "uid": string(postgresCluster.GetUID()), + }}).Bytes() + if err != nil { + return err + } + + return r.Client.Patch(ctx, obj, client.RawPatch(types.StrategicMergePatchType, patch)) +} + +// controllerRefHandlerFuncs returns the handler funcs that should be utilized to watch +// StatefulSets within the cluster as needed to manage controller ownership refs.
+func (r *Reconciler) controllerRefHandlerFuncs() *handler.Funcs { + + log := logging.FromContext(context.Background()) + errMsg := "managing StatefulSet controller refs" + + return &handler.Funcs{ + CreateFunc: func(ctx context.Context, updateEvent event.CreateEvent, workQueue workqueue.RateLimitingInterface) { + if err := r.manageControllerRefs(ctx, updateEvent.Object); err != nil { + log.Error(err, errMsg) + } + }, + UpdateFunc: func(ctx context.Context, updateEvent event.UpdateEvent, workQueue workqueue.RateLimitingInterface) { + if err := r.manageControllerRefs(ctx, updateEvent.ObjectNew); err != nil { + log.Error(err, errMsg) + } + }, + DeleteFunc: func(ctx context.Context, updateEvent event.DeleteEvent, workQueue workqueue.RateLimitingInterface) { + if err := r.manageControllerRefs(ctx, updateEvent.Object); err != nil { + log.Error(err, errMsg) + } + }, + } +} diff --git a/internal/controller/postgrescluster/controller_ref_manager_test.go b/internal/controller/postgrescluster/controller_ref_manager_test.go new file mode 100644 index 0000000000..8543fe390d --- /dev/null +++ b/internal/controller/postgrescluster/controller_ref_manager_test.go @@ -0,0 +1,162 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "testing" + + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/require" +) + +func TestManageControllerRefs(t *testing.T) { + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + ctx := context.Background() + r := &Reconciler{Client: tClient} + clusterName := "hippo" + + cluster := testCluster() + cluster.Namespace = setupNamespace(t, tClient).Name + + // create the test PostgresCluster + if err := tClient.Create(ctx, cluster); err != nil { + t.Fatal(err) + } + + // create a base StatefulSet that can be used by the various tests below + objBase := &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: cluster.Namespace, + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: map[string]string{"label1": "val1"}, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{"label1": "val1"}, + }, + }, + }, + } + + t.Run("adopt Object", func(t *testing.T) { + + obj := objBase.DeepCopy() + obj.Name = "adopt" + obj.Labels = map[string]string{naming.LabelCluster: clusterName} + + if err := r.Client.Create(ctx, obj); err != nil { + t.Error(err) + } + + if err := r.manageControllerRefs(ctx, obj); err != nil { + t.Error(err) + } + + if err := tClient.Get(ctx, client.ObjectKeyFromObject(obj), obj); err != nil { + t.Error(err) + } + + var foundControllerOwnerRef bool + for _, ref := range obj.GetOwnerReferences() { + if *ref.Controller && *ref.BlockOwnerDeletion && + ref.UID == cluster.GetUID() && + ref.Name == clusterName && ref.Kind == "PostgresCluster" { + foundControllerOwnerRef = true + break + } + } + + if !foundControllerOwnerRef { + t.Error("unable to find expected controller ref") + } + }) + + t.Run("release Object", func(t *testing.T) { + + isTrue := true + obj := objBase.DeepCopy() + obj.Name = "release" + obj.OwnerReferences = append(obj.OwnerReferences, metav1.OwnerReference{ + APIVersion: "group/version", + Kind: "PostgresCluster", + Name: clusterName, + 
UID: cluster.GetUID(), + Controller: &isTrue, + BlockOwnerDeletion: &isTrue, + }) + + if err := r.Client.Create(ctx, obj); err != nil { + t.Error(err) + } + + if err := r.manageControllerRefs(ctx, obj); err != nil { + t.Error(err) + } + + if err := tClient.Get(ctx, client.ObjectKeyFromObject(obj), obj); err != nil { + t.Error(err) + } + + if len(obj.GetOwnerReferences()) != 0 { + t.Error("expected orphaned Object but found controller ref") + } + }) + + t.Run("ignore Object: no matching labels or owner refs", func(t *testing.T) { + + obj := objBase.DeepCopy() + obj.Name = "ignore-no-labels-refs" + obj.Labels = map[string]string{"ignore-label": "ignore-value"} + + if err := r.Client.Create(ctx, obj); err != nil { + t.Error(err) + } + + if err := r.manageControllerRefs(ctx, obj); err != nil { + t.Error(err) + } + + if err := tClient.Get(ctx, client.ObjectKeyFromObject(obj), obj); err != nil { + t.Error(err) + } + + if len(obj.GetOwnerReferences()) != 0 { + t.Error("expected orphaned Object but found controller ref") + } + }) + + t.Run("ignore Object: PostgresCluster does not exist", func(t *testing.T) { + + obj := objBase.DeepCopy() + obj.Name = "ignore-no-postgrescluster" + obj.Labels = map[string]string{naming.LabelCluster: "nonexistent"} + + if err := r.Client.Create(ctx, obj); err != nil { + t.Error(err) + } + + if err := r.manageControllerRefs(ctx, obj); err != nil { + t.Error(err) + } + + if err := tClient.Get(ctx, client.ObjectKeyFromObject(obj), obj); err != nil { + t.Error(err) + } + + if len(obj.GetOwnerReferences()) != 0 { + t.Error("expected orphaned Object but found controller ref") + } + }) +} diff --git a/internal/controller/postgrescluster/controller_test.go b/internal/controller/postgrescluster/controller_test.go new file mode 100644 index 0000000000..e6fdc5cb86 --- /dev/null +++ b/internal/controller/postgrescluster/controller_test.go @@ -0,0 +1,559 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "strings" + "testing" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + . 
"github.com/onsi/gomega/gstruct" + "github.com/pkg/errors" + + "go.opentelemetry.io/otel" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/rand" + "k8s.io/apimachinery/pkg/util/version" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/registration" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestDeleteControlled(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + ns := setupNamespace(t, cc) + reconciler := Reconciler{Client: cc} + + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.Name = strings.ToLower(t.Name()) + assert.NilError(t, cc.Create(ctx, cluster)) + + t.Run("NoOwnership", func(t *testing.T) { + secret := &corev1.Secret{} + secret.Namespace = ns.Name + secret.Name = "solo" + + assert.NilError(t, cc.Create(ctx, secret)) + + // No-op when there's no ownership + assert.NilError(t, reconciler.deleteControlled(ctx, cluster, secret)) + assert.NilError(t, cc.Get(ctx, client.ObjectKeyFromObject(secret), secret)) + }) + + t.Run("Owned", func(t *testing.T) { + secret := &corev1.Secret{} + secret.Namespace = ns.Name + secret.Name = "owned" + + assert.NilError(t, reconciler.setOwnerReference(cluster, secret)) + assert.NilError(t, cc.Create(ctx, secret)) + + // No-op when not controlled by cluster. + assert.NilError(t, reconciler.deleteControlled(ctx, cluster, secret)) + assert.NilError(t, cc.Get(ctx, client.ObjectKeyFromObject(secret), secret)) + }) + + t.Run("Controlled", func(t *testing.T) { + secret := &corev1.Secret{} + secret.Namespace = ns.Name + secret.Name = "controlled" + + assert.NilError(t, reconciler.setControllerReference(cluster, secret)) + assert.NilError(t, cc.Create(ctx, secret)) + + // Deletes when controlled by cluster. 
+ assert.NilError(t, reconciler.deleteControlled(ctx, cluster, secret)) + + err := cc.Get(ctx, client.ObjectKeyFromObject(secret), secret) + assert.Assert(t, apierrors.IsNotFound(err), "expected NotFound, got %#v", err) + }) +} + +var olmClusterYAML = ` +metadata: + name: olm +spec: + postgresVersion: 13 + image: postgres + instances: + - name: register-now + dataVolumeClaimSpec: + accessModes: + - "ReadWriteMany" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: pgbackrest + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +` + +var _ = Describe("PostgresCluster Reconciler", func() { + var test struct { + Namespace *corev1.Namespace + Reconciler Reconciler + Recorder *record.FakeRecorder + } + + BeforeEach(func() { + ctx := context.Background() + + test.Namespace = &corev1.Namespace{} + test.Namespace.Name = "postgres-operator-test-" + rand.String(6) + Expect(suite.Client.Create(ctx, test.Namespace)).To(Succeed()) + + test.Recorder = record.NewFakeRecorder(100) + test.Recorder.IncludeObject = true + + test.Reconciler.Client = suite.Client + test.Reconciler.Owner = "asdf" + test.Reconciler.Recorder = test.Recorder + test.Reconciler.Registration = nil + test.Reconciler.Tracer = otel.Tracer("asdf") + }) + + AfterEach(func() { + ctx := context.Background() + + if test.Namespace != nil { + Expect(suite.Client.Delete(ctx, test.Namespace)).To(Succeed()) + } + }) + + create := func(clusterYAML string) *v1beta1.PostgresCluster { + ctx := context.Background() + + var cluster v1beta1.PostgresCluster + Expect(yaml.Unmarshal([]byte(clusterYAML), &cluster)).To(Succeed()) + + cluster.Namespace = test.Namespace.Name + Expect(suite.Client.Create(ctx, &cluster)).To(Succeed()) + + return &cluster + } + + reconcile := func(cluster *v1beta1.PostgresCluster) reconcile.Result { + ctx := context.Background() + + result, err := test.Reconciler.Reconcile(ctx, + reconcile.Request{NamespacedName: client.ObjectKeyFromObject(cluster)}, + ) + Expect(err).ToNot(HaveOccurred(), func() string { + var t interface{ StackTrace() errors.StackTrace } + if errors.As(err, &t) { + return fmt.Sprintf("[partial] error trace:%+v\n", t.StackTrace()[:1]) + } + return "" + }) + + return result + } + + Context("Cluster with Registration Requirement, no token", func() { + var cluster *v1beta1.PostgresCluster + + BeforeEach(func() { + test.Reconciler.Registration = registration.RegistrationFunc( + func(record.EventRecorder, client.Object, *[]metav1.Condition) bool { + return true + }) + + cluster = create(olmClusterYAML) + Expect(reconcile(cluster)).To(BeZero()) + }) + + AfterEach(func() { + ctx := context.Background() + + if cluster != nil { + Expect(client.IgnoreNotFound( + suite.Client.Delete(ctx, cluster), + )).To(Succeed()) + + // Remove finalizers, if any, so the namespace can terminate. 
+ Expect(client.IgnoreNotFound( + suite.Client.Patch(ctx, cluster, client.RawPatch( + client.Merge.Type(), []byte(`{"metadata":{"finalizers":[]}}`))), + )).To(Succeed()) + } + }) + + Specify("Cluster RegistrationRequired Status", func() { + existing := &v1beta1.PostgresCluster{} + Expect(suite.Client.Get( + context.Background(), client.ObjectKeyFromObject(cluster), existing, + )).To(Succeed()) + + Expect(meta.IsStatusConditionFalse(existing.Status.Conditions, v1beta1.Registered)).To(BeTrue()) + + event, ok := <-test.Recorder.Events + Expect(ok).To(BeTrue()) + Expect(event).To(ContainSubstring("Register Soon")) + }) + }) + + Context("Cluster", func() { + var cluster *v1beta1.PostgresCluster + + BeforeEach(func() { + cluster = create(` +metadata: + name: carlos +spec: + postgresVersion: 13 + image: postgres + instances: + - name: samba + dataVolumeClaimSpec: + accessModes: + - "ReadWriteMany" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: pgbackrest + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi +`) + Expect(reconcile(cluster)).To(BeZero()) + }) + + AfterEach(func() { + ctx := context.Background() + + if cluster != nil { + Expect(client.IgnoreNotFound( + suite.Client.Delete(ctx, cluster), + )).To(Succeed()) + + // Remove finalizers, if any, so the namespace can terminate. + Expect(client.IgnoreNotFound( + suite.Client.Patch(ctx, cluster, client.RawPatch( + client.Merge.Type(), []byte(`{"metadata":{"finalizers":[]}}`))), + )).To(Succeed()) + } + }) + + Specify("Cluster ConfigMap", func() { + ccm := &corev1.ConfigMap{} + Expect(suite.Client.Get(context.Background(), client.ObjectKey{ + Namespace: test.Namespace.Name, Name: "carlos-config", + }, ccm)).To(Succeed()) + + Expect(ccm.Labels[naming.LabelCluster]).To(Equal("carlos")) + Expect(ccm.OwnerReferences).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Controller": PointTo(BeTrue()), + "Name": Equal(cluster.Name), + "UID": Equal(cluster.UID), + }), + )) + Expect(ccm.ManagedFields).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Manager": Equal(string(test.Reconciler.Owner)), + "Operation": Equal(metav1.ManagedFieldsOperationApply), + }), + )) + + Expect(ccm.Data["patroni.yaml"]).ToNot(BeZero()) + }) + + Specify("Cluster Pod Service", func() { + cps := &corev1.Service{} + Expect(suite.Client.Get(context.Background(), client.ObjectKey{ + Namespace: test.Namespace.Name, Name: "carlos-pods", + }, cps)).To(Succeed()) + + Expect(cps.Labels[naming.LabelCluster]).To(Equal("carlos")) + Expect(cps.OwnerReferences).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Controller": PointTo(BeTrue()), + "Name": Equal(cluster.Name), + "UID": Equal(cluster.UID), + }), + )) + Expect(cps.ManagedFields).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Manager": Equal(string(test.Reconciler.Owner)), + "Operation": Equal(metav1.ManagedFieldsOperationApply), + }), + )) + + Expect(cps.Spec.ClusterIP).To(Equal("None"), "headless") + Expect(cps.Spec.PublishNotReadyAddresses).To(BeTrue()) + Expect(cps.Spec.Selector).To(Equal(map[string]string{ + naming.LabelCluster: "carlos", + })) + }) + + Specify("Cluster Status", func() { + existing := &v1beta1.PostgresCluster{} + Expect(suite.Client.Get( + context.Background(), client.ObjectKeyFromObject(cluster), existing, + )).To(Succeed()) + + Expect(existing.Status.ObservedGeneration).To(Equal(cluster.Generation)) + + // The interaction between server-side apply and subresources can 
have + // unexpected results. However we manipulate Status, the controller must + // only ever take ownership of the "status" field or fields within it-- + // never the "spec" field. Some known issues are: + // - https://issue.k8s.io/75564 + // - https://issue.k8s.io/82046 + // + // The "metadata.finalizers" field is also okay. + // - https://book.kubebuilder.io/reference/using-finalizers.html + // + // NOTE(cbandy): Kubernetes prior to v1.16.10 and v1.17.6 does not track + // managed fields on the status subresource: https://issue.k8s.io/88901 + switch { + case suite.ServerVersion.LessThan(version.MustParseGeneric("1.22")): + + // Kubernetes 1.22 began tracking subresources in managed fields. + // - https://pr.k8s.io/100970 + Expect(existing.ManagedFields).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Manager": Equal(string(test.Reconciler.Owner)), + "FieldsV1": PointTo(MatchAllFields(Fields{ + "Raw": WithTransform(func(in []byte) (out map[string]interface{}) { + Expect(yaml.Unmarshal(in, &out)).To(Succeed()) + return out + }, MatchAllKeys(Keys{ + "f:metadata": MatchAllKeys(Keys{ + "f:finalizers": Not(BeZero()), + }), + "f:status": Not(BeZero()), + })), + })), + }), + ), `controller should manage only "finalizers" and "status"`) + + default: + Expect(existing.ManagedFields).To(ContainElements( + MatchFields(IgnoreExtras, Fields{ + "Manager": Equal(string(test.Reconciler.Owner)), + "FieldsV1": PointTo(MatchAllFields(Fields{ + "Raw": WithTransform(func(in []byte) (out map[string]interface{}) { + Expect(yaml.Unmarshal(in, &out)).To(Succeed()) + return out + }, MatchAllKeys(Keys{ + "f:metadata": MatchAllKeys(Keys{ + "f:finalizers": Not(BeZero()), + }), + })), + })), + }), + MatchFields(IgnoreExtras, Fields{ + "Manager": Equal(string(test.Reconciler.Owner)), + "FieldsV1": PointTo(MatchAllFields(Fields{ + "Raw": WithTransform(func(in []byte) (out map[string]interface{}) { + Expect(yaml.Unmarshal(in, &out)).To(Succeed()) + return out + }, MatchAllKeys(Keys{ + "f:status": Not(BeZero()), + })), + })), + }), + ), `controller should manage only "finalizers" and "status"`) + } + }) + + Specify("Patroni Distributed Configuration", func() { + ds := &corev1.Service{} + Expect(suite.Client.Get(context.Background(), client.ObjectKey{ + Namespace: test.Namespace.Name, Name: "carlos-ha-config", + }, ds)).To(Succeed()) + + Expect(ds.Labels[naming.LabelCluster]).To(Equal("carlos")) + Expect(ds.Labels[naming.LabelPatroni]).To(Equal("carlos-ha")) + Expect(ds.OwnerReferences).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Controller": PointTo(BeTrue()), + "Name": Equal(cluster.Name), + "UID": Equal(cluster.UID), + }), + )) + Expect(ds.ManagedFields).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Manager": Equal(string(test.Reconciler.Owner)), + "Operation": Equal(metav1.ManagedFieldsOperationApply), + }), + )) + + Expect(ds.Spec.ClusterIP).To(Equal("None"), "headless") + Expect(ds.Spec.Selector).To(BeNil(), "no endpoints") + }) + }) + + Context("Instance", func() { + var ( + cluster *v1beta1.PostgresCluster + instances appsv1.StatefulSetList + instance appsv1.StatefulSet + ) + + BeforeEach(func() { + cluster = create(` +metadata: + name: carlos +spec: + postgresVersion: 13 + image: postgres + instances: + - name: samba + dataVolumeClaimSpec: + accessModes: + - "ReadWriteMany" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + image: pgbackrest + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + 
storage: 1Gi +`) + Expect(reconcile(cluster)).To(BeZero()) + + Expect(suite.Client.List(context.Background(), &instances, + client.InNamespace(test.Namespace.Name), + client.MatchingLabels{ + naming.LabelCluster: "carlos", + naming.LabelInstanceSet: "samba", + }, + )).To(Succeed()) + Expect(instances.Items).To(HaveLen(1)) + + instance = instances.Items[0] + }) + + AfterEach(func() { + ctx := context.Background() + + if cluster != nil { + Expect(client.IgnoreNotFound( + suite.Client.Delete(ctx, cluster), + )).To(Succeed()) + + // Remove finalizers, if any, so the namespace can terminate. + Expect(client.IgnoreNotFound( + suite.Client.Patch(ctx, cluster, client.RawPatch( + client.Merge.Type(), []byte(`{"metadata":{"finalizers":[]}}`))), + )).To(Succeed()) + } + }) + + Specify("Instance ConfigMap", func() { + icm := &corev1.ConfigMap{} + Expect(suite.Client.Get(context.Background(), client.ObjectKey{ + Namespace: test.Namespace.Name, Name: instance.Name + "-config", + }, icm)).To(Succeed()) + + Expect(icm.Labels[naming.LabelCluster]).To(Equal("carlos")) + Expect(icm.Labels[naming.LabelInstance]).To(Equal(instance.Name)) + Expect(icm.OwnerReferences).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Controller": PointTo(BeTrue()), + "Name": Equal(cluster.Name), + "UID": Equal(cluster.UID), + }), + )) + Expect(icm.ManagedFields).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Manager": Equal(string(test.Reconciler.Owner)), + "Operation": Equal(metav1.ManagedFieldsOperationApply), + }), + )) + + Expect(icm.Data["patroni.yaml"]).ToNot(BeZero()) + }) + + Specify("Instance StatefulSet", func() { + Expect(instance.Labels[naming.LabelCluster]).To(Equal("carlos")) + Expect(instance.Labels[naming.LabelInstanceSet]).To(Equal("samba")) + Expect(instance.Labels[naming.LabelInstance]).To(Equal(instance.Name)) + Expect(instance.OwnerReferences).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Controller": PointTo(BeTrue()), + "Name": Equal(cluster.Name), + "UID": Equal(cluster.UID), + }), + )) + Expect(instance.ManagedFields).To(ContainElement( + MatchFields(IgnoreExtras, Fields{ + "Manager": Equal(string(test.Reconciler.Owner)), + "Operation": Equal(metav1.ManagedFieldsOperationApply), + }), + )) + + Expect(instance.Spec).To(MatchFields(IgnoreExtras, Fields{ + "PodManagementPolicy": Equal(appsv1.OrderedReadyPodManagement), + "Replicas": PointTo(BeEquivalentTo(1)), + "RevisionHistoryLimit": PointTo(BeEquivalentTo(0)), + "ServiceName": Equal("carlos-pods"), + "UpdateStrategy": Equal(appsv1.StatefulSetUpdateStrategy{ + Type: appsv1.OnDeleteStatefulSetStrategyType, + }), + })) + }) + + It("resets Instance StatefulSet.Spec.Replicas", func() { + ctx := context.Background() + patch := client.MergeFrom(instance.DeepCopy()) + *instance.Spec.Replicas = 2 + + Expect(suite.Client.Patch(ctx, &instance, patch)).To(Succeed()) + Expect(instance.Spec.Replicas).To(PointTo(BeEquivalentTo(2))) + + Expect(reconcile(cluster)).To(BeZero()) + Expect(suite.Client.Get( + ctx, client.ObjectKeyFromObject(&instance), &instance, + )).To(Succeed()) + Expect(instance.Spec.Replicas).To(PointTo(BeEquivalentTo(1))) + }) + }) +}) diff --git a/internal/controller/postgrescluster/delete.go b/internal/controller/postgrescluster/delete.go new file mode 100644 index 0000000000..63fc007f40 --- /dev/null +++ b/internal/controller/postgrescluster/delete.go @@ -0,0 +1,104 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + + "github.com/pkg/errors" + "k8s.io/apimachinery/pkg/util/sets" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// +kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="postgresclusters",verbs={patch} + +// handleDelete sets a finalizer on cluster and performs the finalization of +// cluster when it is being deleted. It returns (nil, nil) when cluster is +// not being deleted. The caller is responsible for returning other values to +// controller-runtime. +func (r *Reconciler) handleDelete( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*reconcile.Result, error) { + finalizers := sets.NewString(cluster.Finalizers...) + + // An object with Finalizers does not go away when deleted in the Kubernetes + // API. Instead, it is given a DeletionTimestamp so that controllers can + // react before it goes away. The object will remain in this state until + // its Finalizers list is empty. Controllers are expected to remove their + // finalizer from this list when they have completed their work. + // - https://docs.k8s.io/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#finalizers + // - https://book.kubebuilder.io/reference/using-finalizers.html + + // TODO(cbandy): Foreground deletion also involves a finalizer. The garbage + // collector deletes dependents *before* their owner. + // - https://docs.k8s.io/concepts/workloads/controllers/garbage-collection/#foreground-cascading-deletion + + if cluster.DeletionTimestamp.IsZero() { + if finalizers.Has(naming.Finalizer) { + // The cluster is not being deleted and the finalizer is set. + // The caller can do what they like. + return nil, nil + } + + // The cluster is not being deleted and needs a finalizer; set it. + + // The Finalizers field is shared by multiple controllers, but the + // server-side merge strategy does not work on our custom resource due + // to a bug in Kubernetes. Build a merge-patch that includes the full + // list of Finalizers plus ResourceVersion to detect conflicts with + // other potential writers. + // - https://issue.k8s.io/99730 + before := cluster.DeepCopy() + // Make another copy so that Patch doesn't write back to cluster. + intent := before.DeepCopy() + intent.Finalizers = append(intent.Finalizers, naming.Finalizer) + err := errors.WithStack(r.patch(ctx, intent, + client.MergeFromWithOptions(before, client.MergeFromWithOptimisticLock{}))) + + // The caller can do what they like or requeue upon error. + return nil, err + } + + if !finalizers.Has(naming.Finalizer) { + // The cluster is being deleted and there is no finalizer. + // The caller should listen for another event. + return &reconcile.Result{}, nil + } + + // The cluster is being deleted and our finalizer is still set; run our + // finalizer logic. + + if result, err := r.deleteInstances(ctx, cluster); err != nil { + return nil, err + } else if result != nil { + return result, nil + } + + // Instances are stopped, now cleanup some Patroni stuff. + if err := r.deletePatroniArtifacts(ctx, cluster); err != nil { + return nil, err + } + + // Our finalizer logic is finished; remove our finalizer. 
+ // The Finalizers field is shared by multiple controllers, but the + // server-side merge strategy does not work on our custom resource due to a + // bug in Kubernetes. Build a merge-patch that includes the full list of + // Finalizers plus ResourceVersion to detect conflicts with other potential + // writers. + // - https://issue.k8s.io/99730 + before := cluster.DeepCopy() + // Make another copy so that Patch doesn't write back to cluster. + intent := before.DeepCopy() + intent.Finalizers = finalizers.Delete(naming.Finalizer).List() + err := errors.WithStack(r.patch(ctx, intent, + client.MergeFromWithOptions(before, client.MergeFromWithOptimisticLock{}))) + + // The caller should wait for further events or requeue upon error. + return &reconcile.Result{}, err +} diff --git a/internal/controller/postgrescluster/helpers_test.go b/internal/controller/postgrescluster/helpers_test.go new file mode 100644 index 0000000000..0536b466d4 --- /dev/null +++ b/internal/controller/postgrescluster/helpers_test.go @@ -0,0 +1,228 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "os" + "strconv" + "testing" + "time" + + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/rest" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +var ( + //TODO(tjmoore4): With the new RELATED_IMAGES defaulting behavior, tests could be refactored + // to reference those environment variables instead of hard coded image values + CrunchyPostgresHAImage = "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-13.6-1" + CrunchyPGBackRestImage = "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0" + CrunchyPGBouncerImage = "registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.16-2" +) + +// Scale extends d according to PGO_TEST_TIMEOUT_SCALE. +var Scale = func(d time.Duration) time.Duration { return d } + +func init() { + setting := os.Getenv("PGO_TEST_TIMEOUT_SCALE") + factor, _ := strconv.ParseFloat(setting, 64) + + if setting != "" { + if factor <= 0 { + panic("PGO_TEST_TIMEOUT_SCALE must be a fractional number greater than zero") + } + + Scale = func(d time.Duration) time.Duration { + return time.Duration(factor * float64(d)) + } + } +} + +// setupKubernetes starts or connects to a Kubernetes API and returns a client +// that uses it. See [require.Kubernetes] for more details. +func setupKubernetes(t testing.TB) (*rest.Config, client.Client) { + t.Helper() + + // Start and/or connect to a Kubernetes API, or Skip when that's not configured. + cfg, cc := require.Kubernetes2(t) + + // Log the status of any test namespaces after this test fails. 
+ t.Cleanup(func() { + if t.Failed() { + var namespaces corev1.NamespaceList + _ = cc.List(context.Background(), &namespaces, client.HasLabels{"postgres-operator-test"}) + + type shaped map[string]corev1.NamespaceStatus + result := make([]shaped, len(namespaces.Items)) + + for i, ns := range namespaces.Items { + result[i] = shaped{ns.Labels["postgres-operator-test"]: ns.Status} + } + + formatted, _ := yaml.Marshal(result) + t.Logf("Test Namespaces:\n%s", formatted) + } + }) + + return cfg, cc +} + +// setupNamespace creates a random namespace that will be deleted by t.Cleanup. +// +// Deprecated: Use [require.Namespace] instead. +func setupNamespace(t testing.TB, cc client.Client) *corev1.Namespace { + t.Helper() + return require.Namespace(t, cc) +} + +func testVolumeClaimSpec() corev1.PersistentVolumeClaimSpec { + // Defines a volume claim spec that can be used to create instances + return corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + Resources: corev1.VolumeResourceRequirements{ + Requests: map[corev1.ResourceName]resource.Quantity{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + } +} + +func testCluster() *v1beta1.PostgresCluster { + // Defines a base cluster spec that can be used by tests to generate a + // cluster with an expected number of instances + cluster := v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "hippo", + }, + Spec: v1beta1.PostgresClusterSpec{ + PostgresVersion: 13, + Image: CrunchyPostgresHAImage, + ImagePullSecrets: []corev1.LocalObjectReference{{ + Name: "myImagePullSecret"}, + }, + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + Name: "instance1", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }}, + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Image: CrunchyPGBackRestImage, + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + Volume: &v1beta1.RepoPVC{ + VolumeClaimSpec: testVolumeClaimSpec(), + }, + }}, + }, + }, + Proxy: &v1beta1.PostgresProxySpec{ + PGBouncer: &v1beta1.PGBouncerPodSpec{ + Image: CrunchyPGBouncerImage, + }, + }, + }, + } + return cluster.DeepCopy() +} + +func testBackupJob(cluster *v1beta1.PostgresCluster) *batchv1.Job { + job := batchv1.Job{ + TypeMeta: metav1.TypeMeta{ + APIVersion: batchv1.SchemeGroupVersion.String(), + Kind: "Job", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "backup-job-1", + Namespace: cluster.Namespace, + Labels: map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelPGBackRestBackup: "", + naming.LabelPGBackRestRepo: "repo1", + }, + }, + Spec: batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "test", Image: "test"}}, + RestartPolicy: corev1.RestartPolicyNever, + }, + }, + }, + } + + return job.DeepCopy() +} + +func testRestoreJob(cluster *v1beta1.PostgresCluster) *batchv1.Job { + job := batchv1.Job{ + TypeMeta: metav1.TypeMeta{ + APIVersion: batchv1.SchemeGroupVersion.String(), + Kind: "Job", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "restore-job-1", + Namespace: cluster.Namespace, + Labels: naming.PGBackRestRestoreJobLabels(cluster.Name), + }, + Spec: batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "test", Image: "test"}}, + RestartPolicy: corev1.RestartPolicyNever, + }, + }, + }, + } + + return job.DeepCopy() +} + +// setupManager creates the runtime manager used during controller testing +func 
setupManager(t *testing.T, cfg *rest.Config, + controllerSetup func(mgr manager.Manager)) (context.Context, context.CancelFunc) { + ctx, cancel := context.WithCancel(context.Background()) + + // Disable health endpoints + options := runtime.Options{} + options.HealthProbeBindAddress = "0" + options.Metrics.BindAddress = "0" + + mgr, err := runtime.NewManager(cfg, options) + if err != nil { + t.Fatal(err) + } + + controllerSetup(mgr) + + go func() { + if err := mgr.Start(ctx); err != nil { + t.Error(err) + } + }() + t.Log("Manager started") + + return ctx, cancel +} + +// teardownManager stops the runtime manager when the tests +// have completed +func teardownManager(cancel context.CancelFunc, t *testing.T) { + cancel() + t.Log("Manager stopped") +} diff --git a/internal/controller/postgrescluster/instance.go b/internal/controller/postgrescluster/instance.go new file mode 100644 index 0000000000..66321cc738 --- /dev/null +++ b/internal/controller/postgrescluster/instance.go @@ -0,0 +1,1525 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "io" + "sort" + "strings" + "time" + + "github.com/pkg/errors" + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/trace" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + policyv1 "k8s.io/api/policy/v1" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/util/intstr" + "k8s.io/apimachinery/pkg/util/sets" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/patroni" + "github.com/crunchydata/postgres-operator/internal/pgbackrest" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// Instance represents a single PostgreSQL instance of a PostgresCluster. +type Instance struct { + Name string + Pods []*corev1.Pod + Runner *appsv1.StatefulSet + Spec *v1beta1.PostgresInstanceSetSpec +} + +// IsAvailable is used to choose which instances to redeploy during rolling +// update. It combines information from metadata and status similar to the +// notion of "available" in corev1.Deployment and "healthy" in appsv1.StatefulSet. +func (i Instance) IsAvailable() (available bool, known bool) { + // StatefulSet will have its own notion of Available in the future. + // - https://docs.k8s.io/concepts/workloads/controllers/statefulset/#minimum-ready-seconds + + terminating, knownTerminating := i.IsTerminating() + ready, knownReady := i.IsReady() + + return ready && !terminating, knownReady && knownTerminating +} + +// IsPrimary returns whether or not this instance is the Patroni leader. 
+func (i Instance) IsPrimary() (primary bool, known bool) { + if len(i.Pods) != 1 { + return false, false + } + + return i.Pods[0].Labels[naming.LabelRole] == naming.RolePatroniLeader, true +} + +// IsReady returns whether or not this instance is ready to receive PostgreSQL +// connections. +func (i Instance) IsReady() (ready bool, known bool) { + if len(i.Pods) == 1 { + for _, condition := range i.Pods[0].Status.Conditions { + if condition.Type == corev1.PodReady { + return condition.Status == corev1.ConditionTrue, true + } + } + } + + return false, false +} + +// IsRunning returns whether or not container is running. +func (i Instance) IsRunning(container string) (running bool, known bool) { + if len(i.Pods) == 1 { + for _, status := range i.Pods[0].Status.ContainerStatuses { + if status.Name == container { + return status.State.Running != nil, true + } + } + for _, status := range i.Pods[0].Status.InitContainerStatuses { + if status.Name == container { + return status.State.Running != nil, true + } + } + } + + return false, false +} + +// IsTerminating returns whether or not this instance is in the process of not +// running. +func (i Instance) IsTerminating() (terminating bool, known bool) { + if len(i.Pods) != 1 { + return false, false + } + + // k8s.io/kubernetes/pkg/registry/core/pod.Strategy implements + // k8s.io/apiserver/pkg/registry/rest.RESTGracefulDeleteStrategy so that it + // can set DeletionTimestamp to corev1.PodSpec.TerminationGracePeriodSeconds + // in the future. + // - https://releases.k8s.io/v1.21.0/pkg/registry/core/pod/strategy.go#L135 + // - https://releases.k8s.io/v1.21.0/staging/src/k8s.io/apiserver/pkg/registry/rest/delete.go + return i.Pods[0].DeletionTimestamp != nil, true +} + +// IsWritable returns whether or not a PostgreSQL connection could write to its +// database. +func (i Instance) IsWritable() (writable, known bool) { + if len(i.Pods) != 1 { + return false, false + } + + member := i.Pods[0].Annotations["status"] + role := strings.Index(member, `"role":`) + + if role < 0 { + return false, false + } + + // TODO(cbandy): Update this to consider when Patroni is paused. + + return strings.HasPrefix(member[role:], `"role":"master"`), true +} + +// PodMatchesPodTemplate returns whether or not the Pod for this instance +// matches its specified PodTemplate. When it does not match, the Pod needs to +// be redeployed. +func (i Instance) PodMatchesPodTemplate() (matches bool, known bool) { + if i.Runner == nil || len(i.Pods) != 1 { + return false, false + } + + if i.Runner.Status.ObservedGeneration != i.Runner.Generation { + return false, false + } + + // When the Status is up-to-date, compare the revision of the Pod to that + // of the PodTemplate. + podRevision := i.Pods[0].Labels[appsv1.StatefulSetRevisionLabel] + return podRevision == i.Runner.Status.UpdateRevision, true +} + +// instanceSorter implements sort.Interface for some instance comparison. +type instanceSorter struct { + instances []*Instance + less func(i, j *Instance) bool +} + +func (s *instanceSorter) Len() int { + return len(s.instances) +} +func (s *instanceSorter) Less(i, j int) bool { + return s.less(s.instances[i], s.instances[j]) +} +func (s *instanceSorter) Swap(i, j int) { + s.instances[i], s.instances[j] = s.instances[j], s.instances[i] +} + +// byPriority returns a sort.Interface that sorts instances by how much we want +// each to keep running. The primary instance, when known, is always the highest +// priority. Two instances with otherwise-identical priority are ranked by Name. 
+func byPriority(instances []*Instance) sort.Interface { + return &instanceSorter{instances: instances, less: func(a, b *Instance) bool { + // The primary instance is the highest priority. + if primary, known := a.IsPrimary(); known && primary { + return false + } + if primary, known := b.IsPrimary(); known && primary { + return true + } + + // An available instance is a higher priority than not. + if available, known := a.IsAvailable(); known && available { + return false + } + if available, known := b.IsAvailable(); known && available { + return true + } + + return a.Name < b.Name + }} +} + +// observedInstances represents all the PostgreSQL instances of a single PostgresCluster. +type observedInstances struct { + byName map[string]*Instance + bySet map[string][]*Instance + forCluster []*Instance + setNames sets.Set[string] +} + +// newObservedInstances builds an observedInstances from Kubernetes API objects. +func newObservedInstances( + cluster *v1beta1.PostgresCluster, + runners []appsv1.StatefulSet, + pods []corev1.Pod, +) *observedInstances { + observed := observedInstances{ + byName: make(map[string]*Instance), + bySet: make(map[string][]*Instance), + setNames: make(sets.Set[string]), + } + + sets := make(map[string]*v1beta1.PostgresInstanceSetSpec) + for i := range cluster.Spec.InstanceSets { + name := cluster.Spec.InstanceSets[i].Name + sets[name] = &cluster.Spec.InstanceSets[i] + observed.setNames.Insert(name) + } + for i := range runners { + ri := runners[i].Name + rs := runners[i].Labels[naming.LabelInstanceSet] + + instance := &Instance{ + Name: ri, + Runner: &runners[i], + Spec: sets[rs], + } + + observed.byName[ri] = instance + observed.bySet[rs] = append(observed.bySet[rs], instance) + observed.forCluster = append(observed.forCluster, instance) + observed.setNames.Insert(rs) + } + for i := range pods { + pi := pods[i].Labels[naming.LabelInstance] + ps := pods[i].Labels[naming.LabelInstanceSet] + + instance := observed.byName[pi] + if instance == nil { + instance = &Instance{ + Name: pi, + Spec: sets[ps], + } + observed.byName[pi] = instance + observed.bySet[ps] = append(observed.bySet[ps], instance) + observed.forCluster = append(observed.forCluster, instance) + observed.setNames.Insert(ps) + } + instance.Pods = append(instance.Pods, &pods[i]) + } + + return &observed +} + +// writablePod looks at observedInstances and finds an instance that matches +// a few conditions. The instance should be non-terminating, running, and +// writable i.e. the instance with the primary. If such an instance exists, it +// is returned along with the instance pod. +func (observed *observedInstances) writablePod(container string) (*corev1.Pod, *Instance) { + if observed == nil { + return nil, nil + } + + for _, instance := range observed.forCluster { + if terminating, known := instance.IsTerminating(); terminating || !known { + continue + } + if writable, known := instance.IsWritable(); !writable || !known { + continue + } + running, known := instance.IsRunning(container) + if running && known && len(instance.Pods) > 0 { + return instance.Pods[0], instance + } + } + + return nil, nil +} + +// +kubebuilder:rbac:groups="",resources="pods",verbs={list} +// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={list} + +// observeInstances populates cluster.Status.InstanceSets with observations and +// builds an observedInstances by reading from the Kubernetes API. 
+func (r *Reconciler) observeInstances( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*observedInstances, error) { + pods := &corev1.PodList{} + runners := &appsv1.StatefulSetList{} + + autogrow := feature.Enabled(ctx, feature.AutoGrowVolumes) + + selector, err := naming.AsSelector(naming.ClusterInstances(cluster.Name)) + if err == nil { + err = errors.WithStack( + r.Client.List(ctx, pods, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector}, + )) + } + if err == nil { + err = errors.WithStack( + r.Client.List(ctx, runners, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector}, + )) + } + + observed := newObservedInstances(cluster, runners.Items, pods.Items) + + // Save desired volume size values in case the status is removed. + // This may happen in cases where the Pod is restarted, the cluster + // is shutdown, etc. Only save values for instances defined in the spec. + previousDesiredRequests := make(map[string]string) + if autogrow { + for _, statusIS := range cluster.Status.InstanceSets { + if statusIS.DesiredPGDataVolume != nil { + for k, v := range statusIS.DesiredPGDataVolume { + previousDesiredRequests[k] = v + } + } + } + } + + // Fill out status sorted by set name. + cluster.Status.InstanceSets = cluster.Status.InstanceSets[:0] + for _, name := range sets.List(observed.setNames) { + status := v1beta1.PostgresInstanceSetStatus{Name: name} + status.DesiredPGDataVolume = make(map[string]string) + + for _, instance := range observed.bySet[name] { + status.Replicas += int32(len(instance.Pods)) //nolint:gosec + + if ready, known := instance.IsReady(); known && ready { + status.ReadyReplicas++ + } + if matches, known := instance.PodMatchesPodTemplate(); known && matches { + status.UpdatedReplicas++ + } + if autogrow { + // Store desired pgData volume size for each instance Pod. + // The 'suggested-pgdata-pvc-size' annotation value is stored in the PostgresCluster + // status so that 1) it is available to the function 'reconcilePostgresDataVolume' + // and 2) so that the value persists after Pod restart and cluster shutdown events. + for _, pod := range instance.Pods { + // don't set an empty status + if pod.Annotations["suggested-pgdata-pvc-size"] != "" { + status.DesiredPGDataVolume[instance.Name] = pod.Annotations["suggested-pgdata-pvc-size"] + } + } + } + } + + // If autogrow is enabled, get the desired volume size for each instance. + if autogrow { + for _, instance := range observed.bySet[name] { + status.DesiredPGDataVolume[instance.Name] = r.storeDesiredRequest(ctx, cluster, + name, status.DesiredPGDataVolume[instance.Name], previousDesiredRequests[instance.Name]) + } + } + + cluster.Status.InstanceSets = append(cluster.Status.InstanceSets, status) + } + + return observed, err +} + +// storeDesiredRequest saves the appropriate request value to the PostgresCluster +// status. If the value has grown, create an Event. +func (r *Reconciler) storeDesiredRequest( + ctx context.Context, cluster *v1beta1.PostgresCluster, + instanceSetName, desiredRequest, desiredRequestBackup string, +) string { + var current resource.Quantity + var previous resource.Quantity + var err error + log := logging.FromContext(ctx) + + // Parse the desired request from the cluster's status. 
+ if desiredRequest != "" { + current, err = resource.ParseQuantity(desiredRequest) + if err != nil { + log.Error(err, "Unable to parse pgData volume request from status ("+ + desiredRequest+") for "+cluster.Name+"/"+instanceSetName) + // If there was an error parsing the value, treat as unset (equivalent to zero). + desiredRequest = "" + current, _ = resource.ParseQuantity("") + + } + } + + // Parse the desired request from the status backup. + if desiredRequestBackup != "" { + previous, err = resource.ParseQuantity(desiredRequestBackup) + if err != nil { + log.Error(err, "Unable to parse pgData volume request from status backup ("+ + desiredRequestBackup+") for "+cluster.Name+"/"+instanceSetName) + // If there was an error parsing the value, treat as unset (equivalent to zero). + desiredRequestBackup = "" + previous, _ = resource.ParseQuantity("") + + } + } + + // Determine if the limit is set for this instance set. + var limitSet bool + for _, specInstance := range cluster.Spec.InstanceSets { + if specInstance.Name == instanceSetName { + limitSet = !specInstance.DataVolumeClaimSpec.Resources.Limits.Storage().IsZero() + } + } + + if limitSet && current.Value() > previous.Value() { + r.Recorder.Eventf(cluster, corev1.EventTypeNormal, "VolumeAutoGrow", + "pgData volume expansion to %v requested for %s/%s.", + current.String(), cluster.Name, instanceSetName) + } + + // If the desired size was not observed, update with previously stored value. + // This can happen in scenarios where the annotation on the Pod is missing + // such as when the cluster is shutdown or a Pod is in the middle of a restart. + if desiredRequest == "" { + desiredRequest = desiredRequestBackup + } + + return desiredRequest +} + +// +kubebuilder:rbac:groups="",resources="pods",verbs={list} +// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={patch} + +// deleteInstances gracefully stops instances of cluster to avoid failovers and +// unclean shutdowns of PostgreSQL. It returns (nil, nil) when finished. +func (r *Reconciler) deleteInstances( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*reconcile.Result, error) { + // Find all instance pods to determine which to shutdown and in what order. + pods := &corev1.PodList{} + instances, err := naming.AsSelector(naming.ClusterInstances(cluster.Name)) + if err == nil { + err = errors.WithStack( + r.Client.List(ctx, pods, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: instances}, + )) + } + if err != nil { + return nil, err + } + + if len(pods.Items) == 0 { + // There are no instances, so there's nothing to do. + // The caller can do what they like. + return nil, nil + } + + // There are some instances, so the caller should at least wait for further + // events. + result := reconcile.Result{} + + // stop schedules pod for deletion by scaling its controller to zero. + stop := func(pod *corev1.Pod) error { + instance := &unstructured.Unstructured{} + instance.SetNamespace(cluster.Namespace) + + switch owner := metav1.GetControllerOfNoCopy(pod); { + case owner == nil: + return errors.Errorf("pod %q has no owner", client.ObjectKeyFromObject(pod)) + + case owner.Kind == "StatefulSet": + instance.SetAPIVersion(owner.APIVersion) + instance.SetKind(owner.Kind) + instance.SetName(owner.Name) + + default: + return errors.Errorf("unexpected kind %q", owner.Kind) + } + + // apps/v1.Deployment, apps/v1.ReplicaSet, and apps/v1.StatefulSet all + // have a "spec.replicas" field with the same meaning. 
+ patch := client.RawPatch(client.Merge.Type(), []byte(`{"spec":{"replicas":0}}`)) + err := errors.WithStack(r.patch(ctx, instance, patch)) + + // When the pod controller is missing, requeue rather than return an + // error. The garbage collector will stop the pod, and it is not our + // mistake that something else is deleting objects. Use RequeueAfter to + // avoid being rate-limited due to a deluge of delete events. + if err != nil { + result = runtime.RequeueWithoutBackoff(10 * time.Second) + } + return client.IgnoreNotFound(err) + } + + if len(pods.Items) == 1 { + // There's one instance; stop it. + return &result, stop(&pods.Items[0]) + } + + // There are multiple instances; stop the replicas. When none are found, + // requeue to try again. + + result.Requeue = true + for i := range pods.Items { + role := pods.Items[i].Labels[naming.LabelRole] + if err == nil && role == naming.RolePatroniReplica { + err = stop(&pods.Items[i]) + result.Requeue = false + } + + // An instance without a role label is not participating in the Patroni + // cluster. It may be unhealthy or has not yet (re-)joined. Go ahead and + // stop these as well. + if err == nil && len(role) == 0 { + err = stop(&pods.Items[i]) + result.Requeue = false + } + } + + return &result, err +} + +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={delete,list} +// +kubebuilder:rbac:groups="",resources="secrets",verbs={delete,list} +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={delete,list} +// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={delete,list} + +// deleteInstance will delete all resources related to a single instance +func (r *Reconciler) deleteInstance( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + instanceName string, +) error { + gvks := []schema.GroupVersionKind{{ + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "ConfigMapList", + }, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "SecretList", + }, { + Group: appsv1.SchemeGroupVersion.Group, + Version: appsv1.SchemeGroupVersion.Version, + Kind: "StatefulSetList", + }, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "PersistentVolumeClaimList", + }} + + selector, err := naming.AsSelector(naming.ClusterInstance(cluster.Name, instanceName)) + for _, gvk := range gvks { + if err == nil { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + + err = errors.WithStack( + r.Client.List(ctx, uList, + client.InNamespace(cluster.GetNamespace()), + client.MatchingLabelsSelector{Selector: selector}, + )) + + for i := range uList.Items { + if err == nil { + err = errors.WithStack(client.IgnoreNotFound( + r.deleteControlled(ctx, cluster, &uList.Items[i]))) + } + } + } + } + + return err +} + +// reconcileInstanceSets reconciles instance sets in the environment to match +// the current spec. 
This is done by scaling up or down instances where necessary +func (r *Reconciler) reconcileInstanceSets( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + clusterConfigMap *corev1.ConfigMap, + clusterReplicationSecret *corev1.Secret, + rootCA *pki.RootCertificateAuthority, + clusterPodService *corev1.Service, + instanceServiceAccount *corev1.ServiceAccount, + instances *observedInstances, + patroniLeaderService *corev1.Service, + primaryCertificate *corev1.SecretProjection, + clusterVolumes []corev1.PersistentVolumeClaim, + exporterQueriesConfig, exporterWebConfig *corev1.ConfigMap, + backupsSpecFound bool, +) error { + + // Go through the observed instances and check if a primary has been determined. + // If the cluster is being shutdown and this instance is the primary, store + // the instance name as the startup instance. If the primary can be determined + // from the instance and the cluster is not being shutdown, clear any stored + // startup instance values. + for _, instance := range instances.forCluster { + if primary, known := instance.IsPrimary(); primary && known { + if cluster.Spec.Shutdown != nil && *cluster.Spec.Shutdown { + cluster.Status.StartupInstance = instance.Name + cluster.Status.StartupInstanceSet = instance.Spec.Name + } else { + cluster.Status.StartupInstance = "" + cluster.Status.StartupInstanceSet = "" + } + } + } + + // get the number of instance pods from the observedInstance information + var numInstancePods int + for i := range instances.forCluster { + numInstancePods += len(instances.forCluster[i].Pods) + } + + // Range over instance sets to scale up and ensure that each set has + // at least the number of replicas defined in the spec. The set can + // have more replicas than defined + for i := range cluster.Spec.InstanceSets { + set := &cluster.Spec.InstanceSets[i] + _, err := r.scaleUpInstances( + ctx, cluster, instances, set, + clusterConfigMap, clusterReplicationSecret, + rootCA, clusterPodService, instanceServiceAccount, + patroniLeaderService, primaryCertificate, + findAvailableInstanceNames(*set, instances, clusterVolumes), + numInstancePods, clusterVolumes, exporterQueriesConfig, exporterWebConfig, + backupsSpecFound, + ) + + if err == nil { + err = r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, set) + } + if err != nil { + return err + } + } + + // Scaledown is called on the whole cluster in order to consider all + // instances. This is necessary because we have no way to determine + // which instance or instance set contains the primary pod. + err := r.scaleDownInstances(ctx, cluster, instances) + if err != nil { + return err + } + + // Cleanup Instance Set resources that are no longer needed + err = r.cleanupPodDisruptionBudgets(ctx, cluster) + if err != nil { + return err + } + + // Rollout changes to instances by calling rolloutInstance. 
+ err = r.rolloutInstances(ctx, cluster, instances, + func(ctx context.Context, instance *Instance) error { + return r.rolloutInstance(ctx, cluster, instances, instance) + }) + + return err +} + +// +kubebuilder:rbac:groups="policy",resources="poddisruptionbudgets",verbs={list} + +// cleanupPodDisruptionBudgets removes pdbs that do not have an +// associated Instance Set +func (r *Reconciler) cleanupPodDisruptionBudgets( + ctx context.Context, + cluster *v1beta1.PostgresCluster, +) error { + selector, err := naming.AsSelector(naming.ClusterInstanceSets(cluster.Name)) + + pdbList := &policyv1.PodDisruptionBudgetList{} + if err == nil { + err = r.Client.List(ctx, pdbList, + client.InNamespace(cluster.Namespace), client.MatchingLabelsSelector{ + Selector: selector, + }) + } + + if err == nil { + setNames := sets.Set[string]{} + for _, set := range cluster.Spec.InstanceSets { + setNames.Insert(set.Name) + } + for i := range pdbList.Items { + pdb := pdbList.Items[i] + if err == nil && !setNames.Has(pdb.Labels[naming.LabelInstanceSet]) { + err = client.IgnoreNotFound(r.deleteControlled(ctx, cluster, &pdb)) + } + } + } + + return client.IgnoreNotFound(err) +} + +// TODO (andrewlecuyer): If relevant instance volume (PVC) information is captured for each +// Instance contained within observedInstances, this function might no longer be necessary. +// Instead, available names could be derived by looking at observed Instances that have data +// volumes, but no associated runner. + +// findAvailableInstanceNames finds any instance names that are available for reuse within a +// specific instance set. Available instance names are determined by finding any instance PVCs +// for the instance set specified that are not currently associated with an instance, and then +// returning the instance names associated with those PVC's. +func findAvailableInstanceNames(set v1beta1.PostgresInstanceSetSpec, + observedInstances *observedInstances, clusterVolumes []corev1.PersistentVolumeClaim) []string { + + availableInstanceNames := []string{} + + // first identify any PGDATA volumes for the instance set specified + setVolumes := []corev1.PersistentVolumeClaim{} + for _, pvc := range clusterVolumes { + // ignore PGDATA PVCs that are terminating + if pvc.GetDeletionTimestamp() != nil { + continue + } + pvcSet := pvc.GetLabels()[naming.LabelInstanceSet] + pvcRole := pvc.GetLabels()[naming.LabelRole] + if pvcRole == naming.RolePostgresData && pvcSet == set.Name { + setVolumes = append(setVolumes, pvc) + } + } + + // If there is a WAL volume defined for the instance set, then a matching WAL volume + // must also be found in order for the volumes to be reused. Therefore, filter out + // any available PGDATA volumes for the instance set that have no corresponding WAL + // volumes (which means new PVCs will simply be reconciled instead). 
+	if set.WALVolumeClaimSpec != nil {
+		setVolumesWithWAL := []corev1.PersistentVolumeClaim{}
+		for _, setVol := range setVolumes {
+			setVolInstance := setVol.GetLabels()[naming.LabelInstance]
+			for _, pvc := range clusterVolumes {
+				// ignore WAL PVCs that are terminating
+				if pvc.GetDeletionTimestamp() != nil {
+					continue
+				}
+				pvcSet := pvc.GetLabels()[naming.LabelInstanceSet]
+				pvcInstance := pvc.GetLabels()[naming.LabelInstance]
+				pvcRole := pvc.GetLabels()[naming.LabelRole]
+				if pvcRole == naming.RolePostgresWAL && pvcSet == set.Name &&
+					pvcInstance == setVolInstance {
+					setVolumesWithWAL = append(setVolumesWithWAL, pvc)
+				}
+			}
+		}
+		setVolumes = setVolumesWithWAL
+	}
+
+	// Determine whether or not the PVC is associated with an existing instance within the same
+	// instance set. If not, then the instance name associated with that PVC can be reused.
+	for _, pvc := range setVolumes {
+		pvcInstanceName := pvc.GetLabels()[naming.LabelInstance]
+		instance := observedInstances.byName[pvcInstanceName]
+		if instance == nil || instance.Runner == nil {
+			availableInstanceNames = append(availableInstanceNames, pvcInstanceName)
+		}
+	}
+
+	return availableInstanceNames
+}
+
+// +kubebuilder:rbac:groups="",resources="pods",verbs={delete}
+
+// rolloutInstance redeploys the Pod of instance by deleting it. Its StatefulSet
+// will recreate it according to its current PodTemplate. When instance is the
+// primary of a cluster with failover, it is demoted instead.
+func (r *Reconciler) rolloutInstance(
+	ctx context.Context, cluster *v1beta1.PostgresCluster,
+	instances *observedInstances, instance *Instance,
+) error {
+	// The StatefulSet and number of Pods should have already been verified, but
+	// check again rather than panic.
+	// TODO(cbandy): The check for StatefulSet can go away if we watch Pod deletes.
+	if instance.Runner == nil || len(instance.Pods) != 1 {
+		return errors.Errorf(
+			"unexpected instance state during rollout: %v has %v pods",
+			instance.Name, len(instance.Pods))
+	}
+
+	pod := instance.Pods[0]
+	exec := func(_ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string) error {
+		return r.PodExec(ctx, pod.Namespace, pod.Name, naming.ContainerDatabase, stdin, stdout, stderr, command...)
+	}
+
+	primary, known := instance.IsPrimary()
+	primary = primary && known
+
+	// When the cluster has more than one instance participating in failover,
+	// perform a controlled switchover to one of those instances. Patroni will
+	// choose the best candidate and demote the primary. It stops PostgreSQL
+	// using what it calls "graceful" mode: it takes an immediate checkpoint in
+	// the background then uses "pg_ctl" to perform a "fast" shutdown when the
+	// checkpoint completes.
+	// - https://github.com/zalando/patroni/blob/v2.0.2/patroni/ha.py#L815
+	// - https://www.postgresql.org/docs/current/sql-checkpoint.html
+	//
+	// NOTE(cbandy): The StatefulSet controlling this Pod reflects this change
+	// in its Status and triggers another reconcile.
+ if primary && len(instances.forCluster) > 1 { + var span trace.Span + ctx, span = r.Tracer.Start(ctx, "patroni-change-primary") + defer span.End() + + success, err := patroni.Executor(exec).ChangePrimaryAndWait(ctx, pod.Name, "") + if err = errors.WithStack(err); err == nil && !success { + err = errors.New("unable to switchover") + } + + span.RecordError(err) + return err + } + + // When the cluster has only one instance for failover, perform a series of + // immediate checkpoints to increase the likelihood that a "fast" shutdown + // will complete before the SIGKILL near TerminationGracePeriodSeconds. + // - https://docs.k8s.io/concepts/workloads/pods/pod-lifecycle/#pod-termination + if primary { + graceSeconds := int64(corev1.DefaultTerminationGracePeriodSeconds) + if pod.Spec.TerminationGracePeriodSeconds != nil { + graceSeconds = *pod.Spec.TerminationGracePeriodSeconds + } + + checkpoint := func(ctx context.Context) (time.Duration, error) { + ctx, span := r.Tracer.Start(ctx, "postgresql-checkpoint") + defer span.End() + + start := time.Now() + stdout, stderr, err := postgres.Executor(exec). + ExecInDatabasesFromQuery(ctx, `SELECT pg_catalog.current_database()`, + `SET statement_timeout = :'timeout'; CHECKPOINT;`, + map[string]string{ + "timeout": fmt.Sprintf("%ds", graceSeconds), + "ON_ERROR_STOP": "on", // Abort when any one statement fails. + "QUIET": "on", // Do not print successful statements to stdout. + }) + err = errors.WithStack(err) + elapsed := time.Since(start) + + logging.FromContext(ctx).V(1).Info("attempted checkpoint", + "duration", elapsed, "stdout", stdout, "stderr", stderr) + + span.RecordError(err) + return elapsed, err + } + + duration, err := checkpoint(ctx) + threshold := time.Duration(graceSeconds/2) * time.Second + + // The first checkpoint could be flushing up to "checkpoint_timeout" + // or "max_wal_size" worth of data. Try once more to get a sense of + // how long "fast" shutdown might take. + if err == nil && duration > threshold { + duration, err = checkpoint(ctx) + } + + // Communicate the lack or slowness of CHECKPOINT and shutdown anyway. + if err != nil { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "NoCheckpoint", + "Unable to checkpoint primary before shutdown: %v", err) + } else if duration > threshold { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "SlowCheckpoint", + "Shutting down primary despite checkpoint taking over %v", duration) + } + } + + // Delete the Pod so its controlling StatefulSet will recreate it. Patroni + // will receive a SIGTERM and use "pg_ctl" to perform a "fast" shutdown of + // PostgreSQL without taking a checkpoint. + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/ha.py#L1465 + // + // NOTE(cbandy): This could return an apierrors.IsConflict() which should be + // retried by another reconcile (not ignored). + return errors.WithStack( + r.Client.Delete(ctx, pod, client.Preconditions{ + UID: &pod.UID, + ResourceVersion: &pod.ResourceVersion, + })) +} + +// rolloutInstances compares instances to cluster and calls redeploy on those +// that need their Pod recreated. It considers the overall availability of +// cluster and minimizes Patroni failovers. 
+func (r *Reconciler) rolloutInstances( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + instances *observedInstances, + redeploy func(context.Context, *Instance) error, +) error { + var err error + var consider []*Instance + var numAvailable int + var numSpecified int + + ctx, span := r.Tracer.Start(ctx, "rollout-instances") + defer span.End() + + for _, set := range cluster.Spec.InstanceSets { + numSpecified += int(*set.Replicas) + } + + for _, instance := range instances.forCluster { + // Skip instances that have no set in cluster spec. They should not be + // redeployed and should not count toward availability. + if instance.Spec == nil { + continue + } + + // Skip instances that are or might be terminating. They should not be + // redeployed right now and cannot count toward availability. + if terminating, known := instance.IsTerminating(); !known || terminating { + continue + } + + if available, known := instance.IsAvailable(); known && available { + numAvailable++ + } + + if matches, known := instance.PodMatchesPodTemplate(); known && !matches { + consider = append(consider, instance) + continue + } + } + + const maxUnavailable = 1 + numUnavailable := numSpecified - numAvailable + + // When multiple instances need to redeploy, sort them so the lowest + // priority instances are first. + if len(consider) > 1 { + sort.Sort(byPriority(consider)) + } + + span.SetAttributes( + attribute.Int("instances", len(instances.forCluster)), + attribute.Int("specified", numSpecified), + attribute.Int("available", numAvailable), + attribute.Int("considering", len(consider)), + ) + + // Redeploy instances up to the allowed maximum while "rolling over" any + // unavailable instances. + // - https://issue.k8s.io/67250 + for _, instance := range consider { + if err == nil { + if available, known := instance.IsAvailable(); known && !available { + err = redeploy(ctx, instance) + } else if numUnavailable < maxUnavailable { + err = redeploy(ctx, instance) + numUnavailable++ + } + } + } + + span.RecordError(err) + return err +} + +// scaleDownInstances removes extra instances from a cluster until it matches +// the spec. 
This function can delete the primary instance and force the +// cluster to failover under two conditions: +// - If the instance set that contains the primary instance is removed from +// the spec +// - If the instance set that contains the primary instance is updated to +// have 0 replicas +// +// If either of these conditions are met then the primary instance will be +// marked for deletion and deleted after all other instances +func (r *Reconciler) scaleDownInstances( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + observedInstances *observedInstances, +) error { + + // want defines the number of replicas we want for each instance set + want := map[string]int{} + for _, set := range cluster.Spec.InstanceSets { + want[set.Name] = int(*set.Replicas) + } + + // grab all pods for the cluster using the observed instances + pods := []corev1.Pod{} + for instanceIndex := range observedInstances.forCluster { + for podIndex := range observedInstances.forCluster[instanceIndex].Pods { + pods = append(pods, *observedInstances.forCluster[instanceIndex].Pods[podIndex]) + } + } + + // namesToKeep defines the names of any instances that should be kept + namesToKeep := sets.NewString() + for _, pod := range podsToKeep(pods, want) { + namesToKeep.Insert(pod.Labels[naming.LabelInstance]) + } + + for _, instance := range observedInstances.forCluster { + for _, pod := range instance.Pods { + if !namesToKeep.Has(pod.Labels[naming.LabelInstance]) { + err := r.deleteInstance(ctx, cluster, pod.Labels[naming.LabelInstance]) + if err != nil { + return err + } + } + } + } + + return nil +} + +// podsToKeep takes a list of pods and a map containing +// the number of replicas we want for each instance set +// then returns a list of the pods that we want to keep +func podsToKeep(instances []corev1.Pod, want map[string]int) []corev1.Pod { + + f := func(instances []corev1.Pod, want int) []corev1.Pod { + keep := []corev1.Pod{} + + if want > 0 { + for _, instance := range instances { + if instance.Labels[naming.LabelRole] == "master" { + keep = append(keep, instance) + } + } + } + + for _, instance := range instances { + if instance.Labels[naming.LabelRole] != "master" && len(keep) < want { + keep = append(keep, instance) + } + } + + return keep + } + + keepPodList := []corev1.Pod{} + for name, num := range want { + list := []corev1.Pod{} + for _, instance := range instances { + if instance.Labels[naming.LabelInstanceSet] == name { + list = append(list, instance) + } + } + keepPodList = append(keepPodList, f(list, num)...) 
+ } + + return keepPodList + +} + +// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={list} + +// scaleUpInstances updates the cluster until the number of instances matches +// the cluster spec +func (r *Reconciler) scaleUpInstances( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + observed *observedInstances, + set *v1beta1.PostgresInstanceSetSpec, + clusterConfigMap *corev1.ConfigMap, + clusterReplicationSecret *corev1.Secret, + rootCA *pki.RootCertificateAuthority, + clusterPodService *corev1.Service, + instanceServiceAccount *corev1.ServiceAccount, + patroniLeaderService *corev1.Service, + primaryCertificate *corev1.SecretProjection, + availableInstanceNames []string, + numInstancePods int, + clusterVolumes []corev1.PersistentVolumeClaim, + exporterQueriesConfig, exporterWebConfig *corev1.ConfigMap, + backupsSpecFound bool, +) ([]*appsv1.StatefulSet, error) { + log := logging.FromContext(ctx) + + instanceNames := sets.NewString() + instances := []*appsv1.StatefulSet{} + for i := range observed.bySet[set.Name] { + oi := observed.bySet[set.Name][i] + // an instance might not have a runner if it was deleted + if oi.Runner != nil { + instanceNames.Insert(oi.Name) + instances = append(instances, oi.Runner) + } + } + // While there are fewer instances than specified, generate another empty one + // and append it. + for len(instances) < int(*set.Replicas) { + var span trace.Span + ctx, span = r.Tracer.Start(ctx, "generateInstanceName") + next := naming.GenerateInstance(cluster, set) + // if there are any available instance names (as determined by observing any PVCs for the + // instance set that are not currently associated with an instance, e.g. in the event the + // instance STS was deleted), then reuse them instead of generating a new name + if len(availableInstanceNames) > 0 { + next.Name = availableInstanceNames[0] + availableInstanceNames = availableInstanceNames[1:] + } else { + for instanceNames.Has(next.Name) { + next = naming.GenerateInstance(cluster, set) + } + } + span.End() + + instanceNames.Insert(next.Name) + instances = append(instances, &appsv1.StatefulSet{ObjectMeta: next}) + } + + var err error + for i := range instances { + err = r.reconcileInstance( + ctx, cluster, observed.byName[instances[i].Name], set, + clusterConfigMap, clusterReplicationSecret, + rootCA, clusterPodService, instanceServiceAccount, + patroniLeaderService, primaryCertificate, instances[i], + numInstancePods, clusterVolumes, exporterQueriesConfig, exporterWebConfig, + backupsSpecFound, + ) + } + if err == nil { + log.V(1).Info("reconciled instance set", "instance-set", set.Name) + } + + return instances, err +} + +// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={create,patch} + +// reconcileInstance writes instance according to spec of cluster. +// See Reconciler.reconcileInstanceSet. 
+func (r *Reconciler) reconcileInstance( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + observed *Instance, + spec *v1beta1.PostgresInstanceSetSpec, + clusterConfigMap *corev1.ConfigMap, + clusterReplicationSecret *corev1.Secret, + rootCA *pki.RootCertificateAuthority, + clusterPodService *corev1.Service, + instanceServiceAccount *corev1.ServiceAccount, + patroniLeaderService *corev1.Service, + primaryCertificate *corev1.SecretProjection, + instance *appsv1.StatefulSet, + numInstancePods int, + clusterVolumes []corev1.PersistentVolumeClaim, + exporterQueriesConfig, exporterWebConfig *corev1.ConfigMap, + backupsSpecFound bool, +) error { + log := logging.FromContext(ctx).WithValues("instance", instance.Name) + ctx = logging.NewContext(ctx, log) + + existing := instance.DeepCopy() + *instance = appsv1.StatefulSet{} + instance.SetGroupVersionKind(appsv1.SchemeGroupVersion.WithKind("StatefulSet")) + instance.Namespace, instance.Name = existing.Namespace, existing.Name + err := errors.WithStack(r.setControllerReference(cluster, instance)) + if err == nil { + generateInstanceStatefulSetIntent(ctx, cluster, spec, + clusterPodService.Name, instanceServiceAccount.Name, instance, + numInstancePods) + } + + var ( + instanceConfigMap *corev1.ConfigMap + instanceCertificates *corev1.Secret + postgresDataVolume *corev1.PersistentVolumeClaim + postgresWALVolume *corev1.PersistentVolumeClaim + tablespaceVolumes []*corev1.PersistentVolumeClaim + ) + + if err == nil { + instanceConfigMap, err = r.reconcileInstanceConfigMap(ctx, cluster, spec, instance) + } + if err == nil { + instanceCertificates, err = r.reconcileInstanceCertificates( + ctx, cluster, spec, instance, rootCA) + } + if err == nil { + postgresDataVolume, err = r.reconcilePostgresDataVolume(ctx, cluster, spec, instance, clusterVolumes, nil) + } + if err == nil { + postgresWALVolume, err = r.reconcilePostgresWALVolume(ctx, cluster, spec, instance, observed, clusterVolumes) + } + if err == nil { + tablespaceVolumes, err = r.reconcileTablespaceVolumes(ctx, cluster, spec, instance, clusterVolumes) + } + if err == nil { + postgres.InstancePod( + ctx, cluster, spec, + primaryCertificate, replicationCertSecretProjection(clusterReplicationSecret), + postgresDataVolume, postgresWALVolume, tablespaceVolumes, + &instance.Spec.Template.Spec) + + if backupsSpecFound { + addPGBackRestToInstancePodSpec( + ctx, cluster, instanceCertificates, &instance.Spec.Template.Spec) + } + + err = patroni.InstancePod( + ctx, cluster, clusterConfigMap, clusterPodService, patroniLeaderService, + spec, instanceCertificates, instanceConfigMap, &instance.Spec.Template) + } + + // Add pgMonitor resources to the instance Pod spec + if err == nil { + err = addPGMonitorToInstancePodSpec(ctx, cluster, &instance.Spec.Template, exporterQueriesConfig, exporterWebConfig) + } + + // add nss_wrapper init container and add nss_wrapper env vars to the database and pgbackrest + // containers + if err == nil { + addNSSWrapper( + config.PostgresContainerImage(cluster), + cluster.Spec.ImagePullPolicy, + &instance.Spec.Template) + + } + // add an emptyDir volume to the PodTemplateSpec and an associated '/tmp' volume mount to + // all containers included within that spec + if err == nil { + addTMPEmptyDir(&instance.Spec.Template) + } + + // mount shared memory to the Postgres instance + if err == nil { + addDevSHM(&instance.Spec.Template) + } + + if err == nil { + err = errors.WithStack(r.apply(ctx, instance)) + } + if err == nil { + log.V(1).Info("reconciled instance", "instance", 
instance.Name) + } + + return err +} + +func generateInstanceStatefulSetIntent(_ context.Context, + cluster *v1beta1.PostgresCluster, + spec *v1beta1.PostgresInstanceSetSpec, + clusterPodServiceName string, + instanceServiceAccountName string, + sts *appsv1.StatefulSet, + numInstancePods int, +) { + sts.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + spec.Metadata.GetAnnotationsOrNil()) + sts.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: spec.Name, + naming.LabelInstance: sts.Name, + naming.LabelData: naming.DataPostgres, + }) + sts.Spec.Selector = &metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: spec.Name, + naming.LabelInstance: sts.Name, + }, + } + sts.Spec.Template.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + spec.Metadata.GetAnnotationsOrNil(), + ) + sts.Spec.Template.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: spec.Name, + naming.LabelInstance: sts.Name, + naming.LabelData: naming.DataPostgres, + }) + + // Don't clutter the namespace with extra ControllerRevisions. + // The "controller-revision-hash" label still exists on the Pod. + sts.Spec.RevisionHistoryLimit = initialize.Int32(0) + + // Give the Pod a stable DNS record based on its name. + // - https://docs.k8s.io/concepts/workloads/controllers/statefulset/#stable-network-id + // - https://docs.k8s.io/concepts/services-networking/dns-pod-service/#pods + sts.Spec.ServiceName = clusterPodServiceName + + // Disable StatefulSet's "RollingUpdate" strategy. The rolloutInstances + // method considers Pods across the entire PostgresCluster and deletes + // them to trigger updates. + // - https://docs.k8s.io/concepts/workloads/controllers/statefulset/#on-delete + sts.Spec.UpdateStrategy.Type = appsv1.OnDeleteStatefulSetStrategyType + + // Use scheduling constraints from the cluster spec. + sts.Spec.Template.Spec.Affinity = spec.Affinity + sts.Spec.Template.Spec.Tolerations = spec.Tolerations + sts.Spec.Template.Spec.TopologySpreadConstraints = spec.TopologySpreadConstraints + sts.Spec.Template.Spec.PriorityClassName = initialize.FromPointer(spec.PriorityClassName) + + // if default pod scheduling is not explicitly disabled, add the default + // pod topology spread constraints + if !initialize.FromPointer(cluster.Spec.DisableDefaultPodScheduling) { + sts.Spec.Template.Spec.TopologySpreadConstraints = append( + sts.Spec.Template.Spec.TopologySpreadConstraints, + defaultTopologySpreadConstraints( + naming.ClusterDataForPostgresAndPGBackRest(cluster.Name), + )...) + } + + // Though we use a StatefulSet to keep an instance running, we only ever + // want one Pod from it. This means that Replicas should only ever be + // 1, the default case for a running cluster, or 0, if the existing replicas + // value is set to 0 due to being 'shutdown'. + // The logic below is designed to make sure that the primary/leader instance + // is always the first to startup and the last to shutdown. + if cluster.Status.StartupInstance == "" { + // there is no designated startup instance; all instances should run. 
+ sts.Spec.Replicas = initialize.Int32(1) + } else if cluster.Status.StartupInstance != sts.Name { + // there is a startup instance defined, but not this instance; do not run. + sts.Spec.Replicas = initialize.Int32(0) + } else if cluster.Spec.Shutdown != nil && *cluster.Spec.Shutdown && + numInstancePods <= 1 { + // this is the last instance of the shutdown sequence; do not run. + sts.Spec.Replicas = initialize.Int32(0) + } else { + // this is the designated instance, but + // - others are still running during shutdown, or + // - it is time to startup. + sts.Spec.Replicas = initialize.Int32(1) + } + + // Restart containers any time they stop, die, are killed, etc. + // - https://docs.k8s.io/concepts/workloads/pods/pod-lifecycle/#restart-policy + sts.Spec.Template.Spec.RestartPolicy = corev1.RestartPolicyAlways + + // ShareProcessNamespace makes Kubernetes' pause process PID 1 and lets + // containers see each other's processes. + // - https://docs.k8s.io/tasks/configure-pod-container/share-process-namespace/ + sts.Spec.Template.Spec.ShareProcessNamespace = initialize.Bool(true) + + // Patroni calls the Kubernetes API and pgBackRest may interact with a cloud + // storage provider. Use the instance ServiceAccount and automatically mount + // its Kubernetes credentials. + // - https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity + // - https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html + sts.Spec.Template.Spec.ServiceAccountName = instanceServiceAccountName + + // Disable environment variables for services other than the Kubernetes API. + // - https://docs.k8s.io/concepts/services-networking/connect-applications-service/#accessing-the-service + // - https://releases.k8s.io/v1.23.0/pkg/kubelet/kubelet_pods.go#L553-L563 + sts.Spec.Template.Spec.EnableServiceLinks = initialize.Bool(false) + + sts.Spec.Template.Spec.SecurityContext = postgres.PodSecurityContext(cluster) + + // Set the image pull secrets, if any exist. + // This is set here rather than using the service account due to the lack + // of propagation to existing pods when the CRD is updated: + // https://github.com/kubernetes/kubernetes/issues/88456 + sts.Spec.Template.Spec.ImagePullSecrets = cluster.Spec.ImagePullSecrets +} + +// addPGBackRestToInstancePodSpec adds pgBackRest configurations and sidecars +// to the PodSpec. +func addPGBackRestToInstancePodSpec( + ctx context.Context, cluster *v1beta1.PostgresCluster, + instanceCertificates *corev1.Secret, instancePod *corev1.PodSpec, +) { + pgbackrest.AddServerToInstancePod(ctx, cluster, instancePod, + instanceCertificates.Name) + + pgbackrest.AddConfigToInstancePod(cluster, instancePod) +} + +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={create,patch} + +// reconcileInstanceConfigMap writes the ConfigMap that contains generated +// files (etc) that apply to instance of cluster. +func (r *Reconciler) reconcileInstanceConfigMap( + ctx context.Context, cluster *v1beta1.PostgresCluster, spec *v1beta1.PostgresInstanceSetSpec, + instance *appsv1.StatefulSet, +) (*corev1.ConfigMap, error) { + instanceConfigMap := &corev1.ConfigMap{ObjectMeta: naming.InstanceConfigMap(instance)} + instanceConfigMap.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + + // TODO(cbandy): Instance StatefulSet as owner? 
+ err := errors.WithStack(r.setControllerReference(cluster, instanceConfigMap)) + + instanceConfigMap.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + spec.Metadata.GetAnnotationsOrNil()) + instanceConfigMap.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: spec.Name, + naming.LabelInstance: instance.Name, + }) + + if err == nil { + err = patroni.InstanceConfigMap(ctx, cluster, spec, instanceConfigMap) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, instanceConfigMap)) + } + + return instanceConfigMap, err +} + +// +kubebuilder:rbac:groups="",resources="secrets",verbs={get} +// +kubebuilder:rbac:groups="",resources="secrets",verbs={create,patch} + +// reconcileInstanceCertificates writes the Secret that contains certificates +// and private keys for instance of cluster. +func (r *Reconciler) reconcileInstanceCertificates( + ctx context.Context, cluster *v1beta1.PostgresCluster, + spec *v1beta1.PostgresInstanceSetSpec, instance *appsv1.StatefulSet, + root *pki.RootCertificateAuthority, +) (*corev1.Secret, error) { + existing := &corev1.Secret{ObjectMeta: naming.InstanceCertificates(instance)} + err := errors.WithStack(client.IgnoreNotFound( + r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing))) + + instanceCerts := &corev1.Secret{ObjectMeta: naming.InstanceCertificates(instance)} + instanceCerts.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + + // TODO(cbandy): Instance StatefulSet as owner? + if err == nil { + err = errors.WithStack(r.setControllerReference(cluster, instanceCerts)) + } + + instanceCerts.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + spec.Metadata.GetAnnotationsOrNil()) + instanceCerts.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: spec.Name, + naming.LabelInstance: instance.Name, + }) + + // This secret is holding certificates, but the "kubernetes.io/tls" type + // expects an *unencrypted* private key. We're also adding other values and + // other formats, so indicate that with the "Opaque" type. + // - https://docs.k8s.io/concepts/configuration/secret/#secret-types + instanceCerts.Type = corev1.SecretTypeOpaque + instanceCerts.Data = make(map[string][]byte) + + var leafCert *pki.LeafCertificate + + if err == nil { + leafCert, err = r.instanceCertificate(ctx, instance, existing, instanceCerts, root) + } + if err == nil { + err = patroni.InstanceCertificates(ctx, + root.Certificate, leafCert.Certificate, + leafCert.PrivateKey, instanceCerts) + } + if err == nil { + err = pgbackrest.InstanceCertificates(ctx, cluster, + root.Certificate, leafCert.Certificate, leafCert.PrivateKey, + instanceCerts) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, instanceCerts)) + } + + return instanceCerts, err +} + +// +kubebuilder:rbac:groups="policy",resources="poddisruptionbudgets",verbs={create,patch,get,delete} + +// reconcileInstanceSetPodDisruptionBudget creates a PDB for an instance set. A +// PDB will be created when the minAvailable is determined to be greater than 0. +// MinAvailable can be defined in the spec or a default value will be set based +// on the number of replicas in the instance set. 
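As a brief aside for readers of this patch: the doc comment above describes `minAvailable` in terms of the instance set's `Replicas`, and the function that follows resolves it with `intstr.GetScaledValueFromIntOrPercent`. Below is a minimal standalone sketch of that scaling with made-up values; it is illustration only and not part of the patch.

```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := 2

	// An absolute minAvailable passes through unchanged.
	one := intstr.FromInt32(1)
	scaled, _ := intstr.GetScaledValueFromIntOrPercent(&one, replicas, true)
	fmt.Println(scaled) // 1 -> a PDB is applied

	// A percentage is scaled against Replicas and rounded up: 50% of 2 -> 1.
	half := intstr.FromString("50%")
	scaled, _ = intstr.GetScaledValueFromIntOrPercent(&half, replicas, true)
	fmt.Println(scaled) // 1 -> a PDB is applied

	// A value that resolves to zero means no disruption protection.
	zero := intstr.FromInt32(0)
	scaled, _ = intstr.GetScaledValueFromIntOrPercent(&zero, replicas, true)
	fmt.Println(scaled) // 0 -> any existing PDB is removed instead
}
```

When the resolved value is zero or less, the function below deletes any PodDisruptionBudget it previously created rather than applying one.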
+func (r *Reconciler) reconcileInstanceSetPodDisruptionBudget( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + spec *v1beta1.PostgresInstanceSetSpec, +) error { + if spec.Replicas == nil { + // Replicas should always have a value because of defaults in the spec + return errors.New("Replicas should be defined") + } + minAvailable := getMinAvailable(spec.MinAvailable, *spec.Replicas) + + meta := naming.InstanceSet(cluster, spec) + meta.Labels = naming.Merge(cluster.Spec.Metadata.GetLabelsOrNil(), + spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: spec.Name, + }) + meta.Annotations = naming.Merge(cluster.Spec.Metadata.GetAnnotationsOrNil(), + spec.Metadata.GetAnnotationsOrNil()) + + selector := naming.ClusterInstanceSet(cluster.Name, spec.Name) + pdb, err := r.generatePodDisruptionBudget(cluster, meta, minAvailable, selector) + + // If 'minAvailable' is set to '0', we will not reconcile the PDB. If one + // already exists, we will remove it. + var scaled int + if err == nil { + scaled, err = intstr.GetScaledValueFromIntOrPercent(minAvailable, int(*spec.Replicas), true) + } + if err == nil && scaled <= 0 { + err := errors.WithStack(r.Client.Get(ctx, client.ObjectKeyFromObject(pdb), pdb)) + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, pdb)) + } + return client.IgnoreNotFound(err) + } + + if err == nil { + err = errors.WithStack(r.apply(ctx, pdb)) + } + return err +} diff --git a/internal/controller/postgrescluster/instance.md b/internal/controller/postgrescluster/instance.md new file mode 100644 index 0000000000..f0de4c5d7a --- /dev/null +++ b/internal/controller/postgrescluster/instance.md @@ -0,0 +1,112 @@ + + +## Shutdown and Startup Logic Detail + +The Shutdown/Startup process used by the `postgresclusters` is somewhat nuanced +and may be a bit difficult to understand by just reviewing the code and +associated comments. To help clarify, here is a brief explanation of the logic +being used. + +### Startup Instance Value + +The first code block to consider is found in the `observeInstances` function: + +``` +// Go through the observed instances and check if a primary has been determined. +// If the cluster is being shutdown and this instance is the primary, store +// the instance name as the startup instance. If the primary can be determined +// from the instance and the cluster is not being shutdown, clear any stored +// startup instance values. +for _, instance := range observed.forCluster { + if primary, known := instance.IsPrimary(); primary && known { + if cluster.Spec.Shutdown != nil && *cluster.Spec.Shutdown { + cluster.Status.StartupInstance = instance.Name + } else { + cluster.Status.StartupInstance = "" + } + } +} +``` + +This sets the `StartupInstance` status value, which stores the primary/leader +instance name during a `postgrescluster` shutdown. When the cluster is restarted, +this value is cleared so that it only appears in the `postgrescluster` status +while the cluster is shutdown. + +### Other Key Values + +Besides the stored `StartupInstance` name, the two other values used to set +the replica count are the `Shutdown` value from the `postgrescluster` spec +and the current pod count per cluster. 
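Before the boolean form below, the same decision can be sketched as a small standalone function. This is illustration only; `replicasFor` is a hypothetical helper and does not exist in the operator.

```
// replicasFor mirrors the replica decision described in this document.
// It is a sketch for illustration, not code from the operator.
func replicasFor(startupInstance, instanceName string, shutdown, singlePod bool) int32 {
	switch {
	case startupInstance == "":
		return 1 // no designated startup instance; every instance runs
	case startupInstance != instanceName:
		return 0 // another instance is designated; this one stays down
	case shutdown && singlePod:
		return 0 // designated instance, shutdown requested, last Pod standing; stop it
	default:
		return 1 // designated instance: starting up, or waiting for others to stop
	}
}
```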
With these values, the solution used
+in the code can be represented by:
+
+`Replicas = (SSI match & ~Single Pod) | (SSI match & ~Shutdown) | (SSI blank)`
+
+where
+`Replicas` is the number of replica pods to be created, either zero or one
+
+`SSI` refers to the status value for `StartupInstance`, either matching the
+instance name or set to blank ("")
+
+`Single Pod` refers to whether the cluster has a single pod left running, i.e.
+the primary/leader
+
+`Shutdown` is whether the cluster is configured to be shut down
+
+### Logic Map
+
+With this, the grid below shows the expected replica count for each
+combination of values. The letters represent the following:
+
+M = StartupInstance matches the instance name
+
+E = StartupInstance is empty
+
+S = cluster is configured to Shutdown
+
+P = a single pod exists
+
+When a letter is capitalized, the statement is `true`; when lowercase, the
+statement is `false`.
+
+|    | em | eM | EM | Em |
+|----|----|----|----|----|
+| sp | 0  | 1  | 1  | 1  |
+| sP | 0  | 1  | 1  | 1  |
+| SP | 0  | 0  | 1  | 1  |
+| Sp | 0  | 1  | 1  | 1  |
+
+### Implementation
+
+Following this, we have the `if/else` block as found in the
+`generateInstanceStatefulSetIntent` function:
+
+```
+if cluster.Status.StartupInstance == "" {
+	// there is no designated startup instance; all instances should run.
+	sts.Spec.Replicas = initialize.Int32(1)
+} else if cluster.Status.StartupInstance != sts.Name {
+	// there is a startup instance defined, but not this instance; do not run.
+	sts.Spec.Replicas = initialize.Int32(0)
+} else if cluster.Spec.Shutdown != nil && *cluster.Spec.Shutdown &&
+	numInstancePods <= 1 {
+	// this is the last instance of the shutdown sequence; do not run.
+	sts.Spec.Replicas = initialize.Int32(0)
+} else {
+	// this is the designated instance, but
+	// - others are still running during shutdown, or
+	// - it is time to startup.
+	sts.Spec.Replicas = initialize.Int32(1)
+}
+```
+
+This allows the correct replica count to be set during both startup and
+shutdown. During a shutdown, all pods other than the primary begin
+termination first, followed by the primary; on startup, the process is
+reversed. In cases where the `StartupInstance` value is not set, all pods
+are allowed to start at the same time.
diff --git a/internal/controller/postgrescluster/instance_rollout_test.go b/internal/controller/postgrescluster/instance_rollout_test.go
new file mode 100644
index 0000000000..e668907497
--- /dev/null
+++ b/internal/controller/postgrescluster/instance_rollout_test.go
@@ -0,0 +1,643 @@
+// Copyright 2021 - 2024 Crunchy Data Solutions, Inc.
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "io" + "strings" + "testing" + + "go.opentelemetry.io/otel" + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/sdk/trace" + "go.opentelemetry.io/otel/sdk/trace/tracetest" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/sets" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestReconcilerRolloutInstance(t *testing.T) { + ctx := context.Background() + cluster := new(v1beta1.PostgresCluster) + + t.Run("Singleton", func(t *testing.T) { + instances := []*Instance{ + { + Name: "one", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "ns1", + Name: "one-pod-bruh", + Labels: map[string]string{ + "controller-revision-hash": "gamma", + "postgres-operator.crunchydata.com/role": "master", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + } + observed := &observedInstances{forCluster: instances} + + key := client.ObjectKey{Namespace: "ns1", Name: "one-pod-bruh"} + reconciler := &Reconciler{} + reconciler.Client = fake.NewClientBuilder().WithObjects(instances[0].Pods[0]).Build() + reconciler.Tracer = otel.Tracer(t.Name()) + + execCalls := 0 + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + execCalls++ + + // Execute on the Pod. + assert.Equal(t, namespace, "ns1") + assert.Equal(t, pod, "one-pod-bruh") + assert.Equal(t, container, "database") + + // Checkpoint with timeout. 
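+			// rolloutInstance is expected to send this bounded CHECKPOINT to the
+			// "database" container via psql before the Pod is removed; the assertions
+			// below check the SQL on stdin and the psql invocation.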
+ b, _ := io.ReadAll(stdin) + assert.Equal(t, string(b), "SET statement_timeout = :'timeout'; CHECKPOINT;") + commandString := strings.Join(command, " ") + assert.Assert(t, cmp.Contains(commandString, "psql")) + assert.Assert(t, cmp.Contains(commandString, "--set=timeout=")) + + return nil + } + + assert.NilError(t, reconciler.Client.Get(ctx, key, &corev1.Pod{}), + "bug in test: expected pod to exist") + + assert.NilError(t, reconciler.rolloutInstance(ctx, cluster, observed, instances[0])) + assert.Equal(t, execCalls, 1, "expected PodExec to be called") + + err := reconciler.Client.Get(ctx, key, &corev1.Pod{}) + assert.Assert(t, apierrors.IsNotFound(err), + "expected pod to be deleted, got: %#v", err) + }) + + t.Run("Multiple", func(t *testing.T) { + instances := []*Instance{ + { + Name: "primary", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "ns1", + Name: "the-pod", + Labels: map[string]string{ + "controller-revision-hash": "gamma", + "postgres-operator.crunchydata.com/role": "master", + }, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + { + Name: "other", + Pods: []*corev1.Pod{{}}, + Runner: &appsv1.StatefulSet{}, + }, + } + observed := &observedInstances{forCluster: instances} + + t.Run("Success", func(t *testing.T) { + execCalls := 0 + reconciler := &Reconciler{} + reconciler.Tracer = otel.Tracer(t.Name()) + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, _ io.Reader, stdout, _ io.Writer, command ...string, + ) error { + execCalls++ + + // Execute on the Pod. + assert.Equal(t, namespace, "ns1") + assert.Equal(t, pod, "the-pod") + assert.Equal(t, container, "database") + + // A switchover to any viable candidate. + assert.DeepEqual(t, command[:2], []string{"patronictl", "switchover"}) + assert.Assert(t, sets.NewString(command...).Has("--master=the-pod")) + assert.Assert(t, sets.NewString(command...).Has("--candidate=")) + + // Indicate success through stdout. + _, _ = stdout.Write([]byte("switched over")) + + return nil + } + + assert.NilError(t, reconciler.rolloutInstance(ctx, cluster, observed, instances[0])) + assert.Equal(t, execCalls, 1, "expected PodExec to be called") + }) + + t.Run("Failure", func(t *testing.T) { + reconciler := &Reconciler{} + reconciler.Tracer = otel.Tracer(t.Name()) + reconciler.PodExec = func( + ctx context.Context, _, _, _ string, _ io.Reader, _, _ io.Writer, _ ...string, + ) error { + // Nothing useful in stdout. + return nil + } + + err := reconciler.rolloutInstance(ctx, cluster, observed, instances[0]) + assert.ErrorContains(t, err, "switchover") + }) + }) +} + +func TestReconcilerRolloutInstances(t *testing.T) { + ctx := context.Background() + reconciler := &Reconciler{Tracer: otel.Tracer(t.Name())} + + accumulate := func(on *[]*Instance) func(context.Context, *Instance) error { + return func(_ context.Context, i *Instance) error { *on = append(*on, i); return nil } + } + + logSpanAttributes := func(t testing.TB) { + recorder := tracetest.NewSpanRecorder() + provider := trace.NewTracerProvider(trace.WithSpanProcessor(recorder)) + + former := reconciler.Tracer + reconciler.Tracer = provider.Tracer(t.Name()) + + t.Cleanup(func() { + reconciler.Tracer = former + for _, span := range recorder.Ended() { + attr := attribute.NewSet(span.Attributes()...) + t.Log(span.Name(), attr.Encoded(attribute.DefaultEncoder())) + } + }) + } + + // Nothing specified, nothing observed, nothing to do. 
+ t.Run("Empty", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + observed := new(observedInstances) + + logSpanAttributes(t) + assert.NilError(t, reconciler.rolloutInstances(ctx, cluster, observed, + func(context.Context, *Instance) error { + t.Fatal("expected no redeploys") + return nil + })) + }) + + // Single healthy instance; nothing to do. + t.Run("Steady", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{ + {Name: "00", Replicas: initialize.Int32(1)}, + } + instances := []*Instance{ + { + Name: "one", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "gamma", + "postgres-operator.crunchydata.com/role": "master", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + } + observed := &observedInstances{forCluster: instances} + + logSpanAttributes(t) + assert.NilError(t, reconciler.rolloutInstances(ctx, cluster, observed, + func(context.Context, *Instance) error { + t.Fatal("expected no redeploys") + return nil + })) + }) + + // Single healthy instance, Pod does not match PodTemplate. + t.Run("SingletonOutdated", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{ + {Name: "00", Replicas: initialize.Int32(1)}, + } + instances := []*Instance{ + { + Name: "one", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "beta", + "postgres-operator.crunchydata.com/role": "master", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + } + observed := &observedInstances{forCluster: instances} + + var redeploys []*Instance + + logSpanAttributes(t) + assert.NilError(t, reconciler.rolloutInstances(ctx, cluster, observed, accumulate(&redeploys))) + assert.Equal(t, len(redeploys), 1) + assert.Equal(t, redeploys[0].Name, "one") + }) + + // Two ready instances do not match PodTemplate, no primary. 
+ t.Run("ManyOutdated", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{ + {Name: "00", Replicas: initialize.Int32(2)}, + } + instances := []*Instance{ + { + Name: "one", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "beta", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + { + Name: "two", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "beta", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + } + observed := &observedInstances{forCluster: instances} + + var redeploys []*Instance + + logSpanAttributes(t) + assert.NilError(t, reconciler.rolloutInstances(ctx, cluster, observed, accumulate(&redeploys))) + assert.Equal(t, len(redeploys), 1) + assert.Equal(t, redeploys[0].Name, "one", `expected the "lowest" name`) + }) + + // Two ready instances do not match PodTemplate, with primary. The replica is redeployed. + t.Run("ManyOutdatedWithPrimary", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{ + {Name: "00", Replicas: initialize.Int32(2)}, + } + instances := []*Instance{ + { + Name: "outdated", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "beta", + "postgres-operator.crunchydata.com/role": "master", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + { + Name: "not-primary", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "beta", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + } + observed := &observedInstances{forCluster: instances} + + var redeploys []*Instance + + logSpanAttributes(t) + assert.NilError(t, reconciler.rolloutInstances(ctx, cluster, observed, accumulate(&redeploys))) + assert.Equal(t, len(redeploys), 1) + assert.Equal(t, redeploys[0].Name, "not-primary") + }) + + // Two instances do not match PodTemplate, one is not ready. Redeploy that one. 
+ t.Run("ManyOutdatedWithNotReady", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{ + {Name: "00", Replicas: initialize.Int32(2)}, + } + instances := []*Instance{ + { + Name: "outdated", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "beta", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + { + Name: "not-ready", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "beta", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionFalse, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + } + observed := &observedInstances{forCluster: instances} + + var redeploys []*Instance + + logSpanAttributes(t) + assert.NilError(t, reconciler.rolloutInstances(ctx, cluster, observed, accumulate(&redeploys))) + assert.Equal(t, len(redeploys), 1) + assert.Equal(t, redeploys[0].Name, "not-ready") + }) + + // Two instances do not match PodTemplate, one is terminating. Do nothing. + t.Run("ManyOutdatedWithTerminating", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{ + {Name: "00", Replicas: initialize.Int32(2)}, + } + instances := []*Instance{ + { + Name: "outdated", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "beta", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + { + Name: "terminating", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + DeletionTimestamp: new(metav1.Time), + Labels: map[string]string{ + "controller-revision-hash": "beta", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + } + observed := &observedInstances{forCluster: instances} + + logSpanAttributes(t) + assert.NilError(t, reconciler.rolloutInstances(ctx, cluster, observed, + func(context.Context, *Instance) error { + t.Fatal("expected no redeploys") + return nil + })) + }) + + // Two instances do not match PodTemplate, one is orphaned. Do nothing. 
+ t.Run("ManyOutdatedWithOrphan", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{ + {Name: "00", Replicas: initialize.Int32(2)}, + } + instances := []*Instance{ + { + Name: "outdated", + Spec: &cluster.Spec.InstanceSets[0], + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "beta", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + { + Name: "orphan", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "controller-revision-hash": "beta", + }, + }, + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Generation: 1, + }, + Status: appsv1.StatefulSetStatus{ + ObservedGeneration: 1, + UpdateRevision: "gamma", + }, + }, + }, + } + observed := &observedInstances{forCluster: instances} + + logSpanAttributes(t) + assert.NilError(t, reconciler.rolloutInstances(ctx, cluster, observed, + func(context.Context, *Instance) error { + t.Fatal("expected no redeploys") + return nil + })) + }) +} diff --git a/internal/controller/postgrescluster/instance_test.go b/internal/controller/postgrescluster/instance_test.go new file mode 100644 index 0000000000..f7f59f50a5 --- /dev/null +++ b/internal/controller/postgrescluster/instance_test.go @@ -0,0 +1,2152 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "os" + "sort" + "strings" + "testing" + "time" + + "github.com/go-logr/logr/funcr" + "github.com/google/go-cmp/cmp/cmpopts" + "github.com/pkg/errors" + "go.opentelemetry.io/otel" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + policyv1 "k8s.io/api/policy/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/intstr" + "k8s.io/apimachinery/pkg/util/sets" + "k8s.io/apimachinery/pkg/util/wait" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/events" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestInstanceIsRunning(t *testing.T) { + var instance Instance + var known, running bool + + // No pods + running, known = instance.IsRunning("any") + assert.Assert(t, !known) + assert.Assert(t, !running) + + // No statuses + instance.Pods = []*corev1.Pod{{}} + running, known = instance.IsRunning("any") + assert.Assert(t, !known) + assert.Assert(t, !running) + + // No states + instance.Pods[0].Status.ContainerStatuses = []corev1.ContainerStatus{{ + Name: "c1", + }} + running, known = instance.IsRunning("c1") + assert.Assert(t, known) + assert.Assert(t, !running) + + running, known = instance.IsRunning("missing") + assert.Assert(t, !known) + assert.Assert(t, !running) + + // Running state + // - https://releases.k8s.io/v1.21.0/staging/src/k8s.io/kubectl/pkg/cmd/debug/debug.go#L668 + instance.Pods[0].Status.ContainerStatuses[0].State.Running = + new(corev1.ContainerStateRunning) + + running, known = instance.IsRunning("c1") + assert.Assert(t, known) + assert.Assert(t, running) + + running, known = instance.IsRunning("missing") + assert.Assert(t, !known) + assert.Assert(t, !running) + + // Init containers + instance.Pods[0].Status.InitContainerStatuses = []corev1.ContainerStatus{{ + Name: "i1", + State: corev1.ContainerState{ + Running: new(corev1.ContainerStateRunning), + }, + }} + + running, known = instance.IsRunning("i1") + assert.Assert(t, known) + assert.Assert(t, running) +} + +func TestInstanceIsWritable(t *testing.T) { + var instance Instance + var known, writable bool + + // No pods + writable, known = instance.IsWritable() + assert.Assert(t, !known) + assert.Assert(t, !writable) + + // No annotations + instance.Pods = []*corev1.Pod{{}} + writable, known = instance.IsWritable() + assert.Assert(t, !known) + assert.Assert(t, !writable) + + // No role + instance.Pods[0].Annotations = map[string]string{"status": `{}`} + writable, known = instance.IsWritable() + assert.Assert(t, !known) + assert.Assert(t, !writable) + + // Patroni leader + instance.Pods[0].Annotations["status"] = `{"role":"master"}` + writable, known = 
instance.IsWritable() + assert.Assert(t, known) + assert.Assert(t, writable) + + // Patroni replica + instance.Pods[0].Annotations["status"] = `{"role":"replica"}` + writable, known = instance.IsWritable() + assert.Assert(t, known) + assert.Assert(t, !writable) + + // Patroni standby leader + instance.Pods[0].Annotations["status"] = `{"role":"standby_leader"}` + writable, known = instance.IsWritable() + assert.Assert(t, known) + assert.Assert(t, !writable) +} + +func TestNewObservedInstances(t *testing.T) { + t.Run("Empty", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + observed := newObservedInstances(cluster, nil, nil) + + assert.Equal(t, len(observed.forCluster), 0) + assert.Equal(t, len(observed.byName), 0) + assert.Equal(t, len(observed.bySet), 0) + }) + + t.Run("PodMissingOthers", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + observed := newObservedInstances( + cluster, + nil, + []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "some-pod-name", + Labels: map[string]string{ + "postgres-operator.crunchydata.com/instance-set": "missing", + "postgres-operator.crunchydata.com/instance": "the-name", + }, + }, + }, + }) + + // Registers as an instance. + assert.Equal(t, len(observed.forCluster), 1) + assert.Equal(t, len(observed.byName), 1) + assert.Equal(t, len(observed.bySet), 1) + + instance := observed.forCluster[0] + assert.Equal(t, instance.Name, "the-name") + assert.Equal(t, len(instance.Pods), 1) // The Pod + assert.Assert(t, instance.Runner == nil) // No matching StatefulSet + assert.Assert(t, instance.Spec == nil) // No matching PostgresInstanceSetSpec + + // Lookup based on its labels. + assert.Equal(t, observed.byName["the-name"], instance) + assert.DeepEqual(t, observed.bySet["missing"], []*Instance{instance}) + assert.DeepEqual(t, sets.List(observed.setNames), []string{"missing"}) + }) + + t.Run("RunnerMissingOthers", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + observed := newObservedInstances( + cluster, + []appsv1.StatefulSet{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "the-name", + Labels: map[string]string{ + "postgres-operator.crunchydata.com/instance-set": "missing", + }, + }, + }, + }, + nil) + + // Registers as an instance. + assert.Equal(t, len(observed.forCluster), 1) + assert.Equal(t, len(observed.byName), 1) + assert.Equal(t, len(observed.bySet), 1) + + instance := observed.forCluster[0] + assert.Equal(t, instance.Name, "the-name") + assert.Equal(t, len(instance.Pods), 0) // No matching Pods + assert.Assert(t, instance.Runner != nil) // The StatefulSet + assert.Assert(t, instance.Spec == nil) // No matching PostgresInstanceSetSpec + + // Lookup based on its name and labels. 
+ assert.Equal(t, observed.byName["the-name"], instance) + assert.DeepEqual(t, observed.bySet["missing"], []*Instance{instance}) + assert.DeepEqual(t, sets.List(observed.setNames), []string{"missing"}) + }) + + t.Run("Matching", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{Name: "00"}} + + observed := newObservedInstances( + cluster, + []appsv1.StatefulSet{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "the-name", + Labels: map[string]string{ + "postgres-operator.crunchydata.com/instance-set": "00", + }, + }, + }, + }, + []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "some-pod-name", + Labels: map[string]string{ + "postgres-operator.crunchydata.com/instance-set": "00", + "postgres-operator.crunchydata.com/instance": "the-name", + }, + }, + }, + }) + + // Registers as one instance. + assert.Equal(t, len(observed.forCluster), 1) + assert.Equal(t, len(observed.byName), 1) + assert.Equal(t, len(observed.bySet), 1) + + instance := observed.forCluster[0] + assert.Equal(t, instance.Name, "the-name") + assert.Equal(t, len(instance.Pods), 1) // The Pod + assert.Assert(t, instance.Runner != nil) // The StatefulSet + assert.Assert(t, instance.Spec != nil) // The PostgresInstanceSetSpec + + // Lookup based on its name and labels. + assert.Equal(t, observed.byName["the-name"], instance) + assert.DeepEqual(t, observed.bySet["00"], []*Instance{instance}) + assert.DeepEqual(t, sets.List(observed.setNames), []string{"00"}) + }) +} + +func TestStoreDesiredRequest(t *testing.T) { + ctx := context.Background() + + setupLogCapture := func(ctx context.Context) (context.Context, *[]string) { + calls := []string{} + testlog := funcr.NewJSON(func(object string) { + calls = append(calls, object) + }, funcr.Options{ + Verbosity: 1, + }) + return logging.NewContext(ctx, testlog), &calls + } + + cluster := v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "rhino", + Namespace: "test-namespace", + }, + Spec: v1beta1.PostgresClusterSpec{ + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + Name: "red", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + Resources: corev1.VolumeResourceRequirements{ + Limits: map[corev1.ResourceName]resource.Quantity{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }}}, + }, { + Name: "blue", + Replicas: initialize.Int32(1), + }}}} + + t.Run("BadRequestNoBackup", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + value := reconciler.storeDesiredRequest(ctx, &cluster, "red", "woot", "") + + assert.Equal(t, value, "") + assert.Equal(t, len(recorder.Events), 0) + assert.Equal(t, len(*logs), 1) + assert.Assert(t, cmp.Contains((*logs)[0], "Unable to parse pgData volume request from status")) + }) + + t.Run("BadRequestWithBackup", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + value := reconciler.storeDesiredRequest(ctx, &cluster, "red", "foo", "1Gi") + + assert.Equal(t, value, "1Gi") + assert.Equal(t, len(recorder.Events), 0) + assert.Equal(t, len(*logs), 1) + assert.Assert(t, cmp.Contains((*logs)[0], "Unable to parse pgData volume request from status (foo) for rhino/red")) + }) + + t.Run("NoLimitNoEvent", func(t *testing.T) { + recorder := 
events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + value := reconciler.storeDesiredRequest(ctx, &cluster, "blue", "1Gi", "") + + assert.Equal(t, value, "1Gi") + assert.Equal(t, len(*logs), 0) + assert.Equal(t, len(recorder.Events), 0) + }) + + t.Run("BadBackupRequest", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + value := reconciler.storeDesiredRequest(ctx, &cluster, "red", "2Gi", "bar") + + assert.Equal(t, value, "2Gi") + assert.Equal(t, len(*logs), 1) + assert.Assert(t, cmp.Contains((*logs)[0], "Unable to parse pgData volume request from status backup (bar) for rhino/red")) + assert.Equal(t, len(recorder.Events), 1) + assert.Equal(t, recorder.Events[0].Regarding.Name, cluster.Name) + assert.Equal(t, recorder.Events[0].Reason, "VolumeAutoGrow") + assert.Equal(t, recorder.Events[0].Note, "pgData volume expansion to 2Gi requested for rhino/red.") + }) + + t.Run("ValueUpdateWithEvent", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + value := reconciler.storeDesiredRequest(ctx, &cluster, "red", "1Gi", "") + + assert.Equal(t, value, "1Gi") + assert.Equal(t, len(*logs), 0) + assert.Equal(t, len(recorder.Events), 1) + assert.Equal(t, recorder.Events[0].Regarding.Name, cluster.Name) + assert.Equal(t, recorder.Events[0].Reason, "VolumeAutoGrow") + assert.Equal(t, recorder.Events[0].Note, "pgData volume expansion to 1Gi requested for rhino/red.") + }) + + t.Run("NoLimitNoEvent", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + value := reconciler.storeDesiredRequest(ctx, &cluster, "blue", "1Gi", "") + + assert.Equal(t, value, "1Gi") + assert.Equal(t, len(*logs), 0) + assert.Equal(t, len(recorder.Events), 0) + }) +} + +func TestWritablePod(t *testing.T) { + container := "container" + + t.Run("empty observed", func(t *testing.T) { + observed := &observedInstances{} + + pod, instance := observed.writablePod("container") + assert.Assert(t, pod == nil) + assert.Assert(t, instance == nil) + }) + t.Run("terminating", func(t *testing.T) { + instances := []*Instance{ + { + Name: "instance", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "namespace", + Name: "pod", + Annotations: map[string]string{ + "status": `{"role":"master"}`, + }, + DeletionTimestamp: &metav1.Time{}, + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: container, + State: corev1.ContainerState{ + Running: new(corev1.ContainerStateRunning), + }, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + } + observed := &observedInstances{forCluster: instances} + + terminating, known := observed.forCluster[0].IsTerminating() + assert.Assert(t, terminating && known) + + pod, instance := observed.writablePod("container") + assert.Assert(t, pod == nil) + assert.Assert(t, instance == nil) + }) + t.Run("not running", func(t *testing.T) { + instances := []*Instance{ + { + Name: "instance", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "namespace", + Name: "pod", + Annotations: map[string]string{ + "status": `{"role":"master"}`, + }, + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: container, + State: 
corev1.ContainerState{ + Waiting: new(corev1.ContainerStateWaiting)}, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + } + observed := &observedInstances{forCluster: instances} + + running, known := observed.forCluster[0].IsRunning(container) + assert.Check(t, !running && known) + + pod, instance := observed.writablePod("container") + assert.Assert(t, pod == nil) + assert.Assert(t, instance == nil) + }) + t.Run("not writable", func(t *testing.T) { + instances := []*Instance{ + { + Name: "instance", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "namespace", + Name: "pod", + Annotations: map[string]string{ + "status": `{"role":"replica"}`, + }, + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: container, + State: corev1.ContainerState{ + Running: new(corev1.ContainerStateRunning), + }, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + } + observed := &observedInstances{forCluster: instances} + + writable, known := observed.forCluster[0].IsWritable() + assert.Check(t, !writable && known) + + pod, instance := observed.writablePod("container") + assert.Assert(t, pod == nil) + assert.Assert(t, instance == nil) + }) + t.Run("writable instance exists", func(t *testing.T) { + instances := []*Instance{ + { + Name: "instance", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "namespace", + Name: "pod", + Annotations: map[string]string{ + "status": `{"role":"master"}`, + }, + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: container, + State: corev1.ContainerState{ + Running: new(corev1.ContainerStateRunning), + }, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + } + observed := &observedInstances{forCluster: instances} + + terminating, known := observed.forCluster[0].IsTerminating() + assert.Check(t, !terminating && known) + writable, known := observed.forCluster[0].IsWritable() + assert.Check(t, writable && known) + running, known := observed.forCluster[0].IsRunning(container) + assert.Check(t, running && known) + + pod, instance := observed.writablePod("container") + assert.Assert(t, pod != nil) + assert.Assert(t, instance != nil) + }) +} + +func TestAddPGBackRestToInstancePodSpec(t *testing.T) { + t.Parallel() + + ctx := context.Background() + cluster := v1beta1.PostgresCluster{} + cluster.Name = "hippo" + cluster.Default() + + certificates := corev1.Secret{} + certificates.Name = "some-secret" + + pod := corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "database"}, + {Name: "other"}, + }, + Volumes: []corev1.Volume{ + {Name: "other"}, + {Name: "postgres-data"}, + {Name: "postgres-wal"}, + }, + } + + t.Run("NoVolumeRepo", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.Repos = nil + + out := pod.DeepCopy() + addPGBackRestToInstancePodSpec(ctx, cluster, &certificates, out) + + // Only Containers and Volumes fields have changed. + assert.DeepEqual(t, pod, *out, cmpopts.IgnoreFields(pod, "Containers", "Volumes")) + + // Only database container has mounts. + // Other containers are ignored. 
+ assert.Assert(t, cmp.MarshalMatches(out.Containers, ` +- name: database + resources: {} + volumeMounts: + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true +- name: other + resources: {} +- command: + - pgbackrest + - server + livenessProbe: + exec: + command: + - pgbackrest + - server-ping + name: pgbackrest + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + - mountPath: /pgdata + name: postgres-data + - mountPath: /pgwal + name: postgres-wal + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true +- command: + - bash + - -ceu + - -- + - |- + monitor() { + exec {fd}<> <(:||:) + until read -r -t 5 -u "${fd}"; do + if + [[ "${filename}" -nt "/proc/self/fd/${fd}" ]] && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --dereference --format='Loaded configuration dated %y' "${filename}" + elif + { [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] || + [[ "${authority}" -nt "/proc/self/fd/${fd}" ]] + } && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded certificates dated %y' "${directory}" + fi + done + }; export directory="$1" authority="$2" filename="$3"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgbackrest-config + - /etc/pgbackrest/server + - /etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt + - /etc/pgbackrest/conf.d/~postgres-operator_server.conf + name: pgbackrest-config + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true + `)) + + // Instance configuration files with certificates. + // Other volumes are ignored. + assert.Assert(t, cmp.MarshalMatches(out.Volumes, ` +- name: other +- name: postgres-data +- name: postgres-wal +- name: pgbackrest-server + projected: + sources: + - secret: + items: + - key: pgbackrest-server.crt + path: server-tls.crt + - key: pgbackrest-server.key + mode: 384 + path: server-tls.key + name: some-secret +- name: pgbackrest-config + projected: + sources: + - configMap: + items: + - key: pgbackrest_instance.conf + path: pgbackrest_instance.conf + - key: config-hash + path: config-hash + - key: pgbackrest-server.conf + path: ~postgres-operator_server.conf + name: hippo-pgbackrest-config + - secret: + items: + - key: pgbackrest.ca-roots + path: ~postgres-operator/tls-ca.crt + - key: pgbackrest-client.crt + path: ~postgres-operator/client-tls.crt + - key: pgbackrest-client.key + mode: 384 + path: ~postgres-operator/client-tls.key + name: hippo-pgbackrest + `)) + }) + + t.Run("OneVolumeRepo", func(t *testing.T) { + alwaysExpect := func(t testing.TB, result *corev1.PodSpec) { + // Only Containers and Volumes fields have changed. + assert.DeepEqual(t, pod, *result, cmpopts.IgnoreFields(pod, "Containers", "Volumes")) + + // Instance configuration files plus client and server certificates. + // The server certificate comes from the instance Secret. + // Other volumes are untouched. 
+ assert.Assert(t, cmp.MarshalMatches(result.Volumes, ` +- name: other +- name: postgres-data +- name: postgres-wal +- name: pgbackrest-server + projected: + sources: + - secret: + items: + - key: pgbackrest-server.crt + path: server-tls.crt + - key: pgbackrest-server.key + mode: 384 + path: server-tls.key + name: some-secret +- name: pgbackrest-config + projected: + sources: + - configMap: + items: + - key: pgbackrest_instance.conf + path: pgbackrest_instance.conf + - key: config-hash + path: config-hash + - key: pgbackrest-server.conf + path: ~postgres-operator_server.conf + name: hippo-pgbackrest-config + - secret: + items: + - key: pgbackrest.ca-roots + path: ~postgres-operator/tls-ca.crt + - key: pgbackrest-client.crt + path: ~postgres-operator/client-tls.crt + - key: pgbackrest-client.key + mode: 384 + path: ~postgres-operator/client-tls.key + name: hippo-pgbackrest + `)) + } + + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.Repos = []v1beta1.PGBackRestRepo{ + { + Name: "repo1", + Volume: new(v1beta1.RepoPVC), + }, + } + + out := pod.DeepCopy() + addPGBackRestToInstancePodSpec(ctx, cluster, &certificates, out) + alwaysExpect(t, out) + + // The TLS server is added and configuration mounted. + // It has PostgreSQL volumes mounted while other volumes are ignored. + assert.Assert(t, cmp.MarshalMatches(out.Containers, ` +- name: database + resources: {} + volumeMounts: + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true +- name: other + resources: {} +- command: + - pgbackrest + - server + livenessProbe: + exec: + command: + - pgbackrest + - server-ping + name: pgbackrest + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + - mountPath: /pgdata + name: postgres-data + - mountPath: /pgwal + name: postgres-wal + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true +- command: + - bash + - -ceu + - -- + - |- + monitor() { + exec {fd}<> <(:||:) + until read -r -t 5 -u "${fd}"; do + if + [[ "${filename}" -nt "/proc/self/fd/${fd}" ]] && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --dereference --format='Loaded configuration dated %y' "${filename}" + elif + { [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] || + [[ "${authority}" -nt "/proc/self/fd/${fd}" ]] + } && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded certificates dated %y' "${directory}" + fi + done + }; export directory="$1" authority="$2" filename="$3"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgbackrest-config + - /etc/pgbackrest/server + - /etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt + - /etc/pgbackrest/conf.d/~postgres-operator_server.conf + name: pgbackrest-config + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true + `)) + + t.Run("CustomResources", func(t *testing.T) { + cluster := cluster.DeepCopy() + 
cluster.Spec.Backups.PGBackRest.Sidecars = &v1beta1.PGBackRestSidecars{ + PGBackRest: &v1beta1.Sidecar{ + Resources: &corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("5m"), + }, + Limits: corev1.ResourceList{ + corev1.ResourceMemory: resource.MustParse("9Mi"), + }, + }, + }, + } + + before := out.DeepCopy() + out := pod.DeepCopy() + addPGBackRestToInstancePodSpec(ctx, cluster, &certificates, out) + alwaysExpect(t, out) + + // Only the TLS server container changed. + assert.Equal(t, len(before.Containers), len(out.Containers)) + assert.Assert(t, len(before.Containers) > 2) + assert.DeepEqual(t, before.Containers[:2], out.Containers[:2]) + + // It has the custom resources. + assert.Assert(t, cmp.MarshalMatches(out.Containers[2:], ` +- command: + - pgbackrest + - server + livenessProbe: + exec: + command: + - pgbackrest + - server-ping + name: pgbackrest + resources: + limits: + memory: 9Mi + requests: + cpu: 5m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + - mountPath: /pgdata + name: postgres-data + - mountPath: /pgwal + name: postgres-wal + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true +- command: + - bash + - -ceu + - -- + - |- + monitor() { + exec {fd}<> <(:||:) + until read -r -t 5 -u "${fd}"; do + if + [[ "${filename}" -nt "/proc/self/fd/${fd}" ]] && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --dereference --format='Loaded configuration dated %y' "${filename}" + elif + { [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] || + [[ "${authority}" -nt "/proc/self/fd/${fd}" ]] + } && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded certificates dated %y' "${directory}" + fi + done + }; export directory="$1" authority="$2" filename="$3"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgbackrest-config + - /etc/pgbackrest/server + - /etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt + - /etc/pgbackrest/conf.d/~postgres-operator_server.conf + name: pgbackrest-config + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true + `)) + }) + }) + +} + +func TestPodsToKeep(t *testing.T) { + for _, test := range []struct { + name string + instances []corev1.Pod + want map[string]int + checks func(*testing.T, []corev1.Pod) + }{ + { + name: "RemoveSetWithMasterOnly", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-asdf", + Labels: map[string]string{ + naming.LabelRole: "master", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{}, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 0) + }, + }, { + name: "RemoveSetWithReplicaOnly", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-asdf", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: 
map[string]int{}, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 0) + }, + }, { + name: "KeepMasterOnly", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-asdf", + Labels: map[string]string{ + naming.LabelRole: "master", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{ + "daisy": 1, + }, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 1) + }, + }, { + name: "KeepNoRoleLabels", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-asdf", + Labels: map[string]string{ + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{ + "daisy": 1, + }, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 1) + }, + }, { + name: "RemoveSetWithNoRoleLabels", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-asdf", + Labels: map[string]string{ + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{}, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 0) + }, + }, { + name: "KeepUnknownRoleLabel", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-asdf", + Labels: map[string]string{ + naming.LabelRole: "unknownLabelRole", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{ + "daisy": 1, + }, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 1) + }, + }, { + name: "RemoveSetWithUnknownRoleLabel", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-asdf", + Labels: map[string]string{ + naming.LabelRole: "unknownLabelRole", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{}, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 0) + }, + }, { + name: "MasterLastInSet", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-asdf", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-poih", + Labels: map[string]string{ + naming.LabelRole: "master", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{ + "daisy": 1, + }, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 1) + assert.Equal(t, p[0].Labels[naming.LabelRole], "master") + }, + }, { + name: "ScaleDownSetWithMaster", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "max-asdf", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "max", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-poih", + Labels: map[string]string{ + naming.LabelRole: "master", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-dogs", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "max-dogs", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{ + "max": 1, + "daisy": 1, + }, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 2) + assert.Equal(t, p[0].Labels[naming.LabelRole], "master") + assert.Equal(t, p[0].Labels[naming.LabelInstanceSet], "daisy") + assert.Equal(t, p[1].Labels[naming.LabelRole], "replica") + assert.Equal(t, p[1].Labels[naming.LabelInstanceSet], 
"max") + }, + }, { + name: "ScaleDownSetWithoutMaster", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "max-asdf", + Labels: map[string]string{ + naming.LabelRole: "master", + naming.LabelInstanceSet: "max", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-poih", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-dogs", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "max-dogs", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{ + "max": 1, + "daisy": 2, + }, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 3) + assert.Equal(t, p[0].Labels[naming.LabelRole], "master") + assert.Equal(t, p[0].Labels[naming.LabelInstanceSet], "max") + assert.Equal(t, p[1].Labels[naming.LabelInstanceSet], "daisy") + assert.Equal(t, p[1].Labels[naming.LabelRole], "replica") + assert.Equal(t, p[2].Labels[naming.LabelInstanceSet], "daisy") + assert.Equal(t, p[2].Labels[naming.LabelRole], "replica") + }, + }, { + name: "ScaleMasterSetToZero", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "max-asdf", + Labels: map[string]string{ + naming.LabelRole: "master", + naming.LabelInstanceSet: "max", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-poih", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-dogs", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{ + "max": 0, + "daisy": 2, + }, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 2) + assert.Equal(t, p[0].Labels[naming.LabelRole], "replica") + assert.Equal(t, p[0].Labels[naming.LabelInstanceSet], "daisy") + assert.Equal(t, p[1].Labels[naming.LabelRole], "replica") + assert.Equal(t, p[1].Labels[naming.LabelInstanceSet], "daisy") + }, + }, { + name: "RemoveMasterInstanceSet", + instances: []corev1.Pod{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "max-asdf", + Labels: map[string]string{ + naming.LabelRole: "master", + naming.LabelInstanceSet: "max", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-poih", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-dogs", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "max-dogs", + Labels: map[string]string{ + naming.LabelRole: "replica", + naming.LabelInstanceSet: "daisy", + }, + }, + }, + }, + want: map[string]int{ + "daisy": 3, + }, + checks: func(t *testing.T, p []corev1.Pod) { + assert.Equal(t, len(p), 3) + assert.Equal(t, p[0].Labels[naming.LabelRole], "replica") + assert.Equal(t, p[0].Labels[naming.LabelInstanceSet], "daisy") + assert.Equal(t, p[1].Labels[naming.LabelRole], "replica") + assert.Equal(t, p[1].Labels[naming.LabelInstanceSet], "daisy") + assert.Equal(t, p[2].Labels[naming.LabelRole], "replica") + assert.Equal(t, p[2].Labels[naming.LabelInstanceSet], "daisy") + }, + }, + } { + t.Run(test.name, func(t *testing.T) { + keep := 
podsToKeep(test.instances, test.want) + sort.Slice(keep, func(i, j int) bool { + return keep[i].Labels[naming.LabelRole] == "master" + }) + test.checks(t, keep) + }) + } +} + +func TestDeleteInstance(t *testing.T) { + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("FLAKE: other controllers (PVC, STS) update objects causing conflicts when we deleteControlled") + } + + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + reconciler := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + Recorder: new(record.FakeRecorder), + Tracer: otel.Tracer(t.Name()), + } + + // Define, Create, and Reconcile a cluster to get an instance running in kube + cluster := testCluster() + cluster.Namespace = setupNamespace(t, cc).Name + + assert.NilError(t, errors.WithStack(reconciler.Client.Create(ctx, cluster))) + t.Cleanup(func() { + // Remove finalizers, if any, so the namespace can terminate. + assert.Check(t, client.IgnoreNotFound( + reconciler.Client.Patch(ctx, cluster, client.RawPatch( + client.Merge.Type(), []byte(`{"metadata":{"finalizers":[]}}`))))) + }) + + // Reconcile the entire cluster so that we don't have to create all the + // resources needed to reconcile a single instance (cm,secrets,svc, etc.) + result, err := reconciler.Reconcile(ctx, reconcile.Request{ + NamespacedName: client.ObjectKeyFromObject(cluster), + }) + assert.NilError(t, err) + assert.Assert(t, result.Requeue == false) + + stsList := &appsv1.StatefulSetList{} + assert.NilError(t, reconciler.Client.List(ctx, stsList, + client.InNamespace(cluster.Namespace), + client.MatchingLabels{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: cluster.Spec.InstanceSets[0].Name, + })) + + // Grab the instance name off of the instance set at index0 + instanceName := stsList.Items[0].Labels[naming.LabelInstance] + + // Use the instance name to delete the single instance + assert.NilError(t, reconciler.deleteInstance(ctx, cluster, instanceName)) + + gvks := []schema.GroupVersionKind{ + corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim"), + corev1.SchemeGroupVersion.WithKind("ConfigMap"), + corev1.SchemeGroupVersion.WithKind("Secret"), + appsv1.SchemeGroupVersion.WithKind("StatefulSet"), + } + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstance: instanceName, + }}) + assert.NilError(t, err) + + for _, gvk := range gvks { + t.Run(gvk.Kind, func(t *testing.T) { + ctx := context.Background() + err := wait.PollUntilContextTimeout(ctx, time.Second*3, Scale(time.Second*30), false, func(ctx context.Context) (bool, error) { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + assert.NilError(t, errors.WithStack(reconciler.Client.List(ctx, uList, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector}))) + + if len(uList.Items) == 0 { + return true, nil + } + + // Check existing objects for deletionTimestamp ensuring they + // are staged for delete + deleted := true + for i := range uList.Items { + u := uList.Items[i] + if u.GetDeletionTimestamp() == nil { + deleted = false + } + } + + // We have found objects that are not staged for delete + // so deleteInstance has failed + return deleted, nil + }) + assert.NilError(t, err) + }) + } +} + +func TestGenerateInstanceStatefulSetIntent(t *testing.T) { + type intentParams struct { + cluster *v1beta1.PostgresCluster + spec 
*v1beta1.PostgresInstanceSetSpec + clusterPodServiceName string + instanceServiceAccountName string + sts *appsv1.StatefulSet + shutdown bool + startupInstance string + numInstancePods int + } + + for _, test := range []struct { + name string + ip intentParams + run func(*testing.T, *appsv1.StatefulSet) + }{{ + name: "cluster pod service name", + ip: intentParams{ + clusterPodServiceName: "daisy-svc", + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, ss.Spec.ServiceName, "daisy-svc") + }, + }, { + name: "instance service account name", + ip: intentParams{ + instanceServiceAccountName: "daisy-sa", + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, ss.Spec.Template.Spec.ServiceAccountName, "daisy-sa") + }, + }, { + name: "custom affinity", + ip: intentParams{ + spec: &v1beta1.PostgresInstanceSetSpec{ + Affinity: &corev1.Affinity{}, + }, + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Assert(t, ss.Spec.Template.Spec.Affinity != nil) + }, + }, { + name: "custom tolerations", + ip: intentParams{ + spec: &v1beta1.PostgresInstanceSetSpec{ + Tolerations: []corev1.Toleration{}, + }, + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Assert(t, ss.Spec.Template.Spec.Tolerations != nil) + }, + }, { + name: "custom topology spread constraints", + ip: intentParams{ + spec: &v1beta1.PostgresInstanceSetSpec{ + TopologySpreadConstraints: []corev1.TopologySpreadConstraint{}, + }, + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Assert(t, ss.Spec.Template.Spec.TopologySpreadConstraints != nil) + }, + }, { + name: "shutdown replica", + ip: intentParams{ + shutdown: true, + numInstancePods: 2, + startupInstance: "testInstance1", + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, *ss.Spec.Replicas, int32(0)) + }, + }, { + name: "shutdown primary", + ip: intentParams{ + shutdown: true, + numInstancePods: 1, + startupInstance: "testInstance1", + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, *ss.Spec.Replicas, int32(0)) + }, + }, { + name: "startup primary", + ip: intentParams{ + shutdown: false, + numInstancePods: 0, + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, *ss.Spec.Replicas, int32(1)) + }, + }, { + name: "startup replica", + ip: intentParams{ + shutdown: false, + numInstancePods: 1, + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, *ss.Spec.Replicas, int32(1)) + }, + }, { + name: "do not startup replica", + ip: intentParams{ + shutdown: false, + numInstancePods: 0, + startupInstance: "testInstance1", + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, *ss.Spec.Replicas, int32(0)) + }, + }, { + name: "do not shutdown primary", + ip: intentParams{ + shutdown: true, + numInstancePods: 2, + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, *ss.Spec.Replicas, int32(1)) + }, + }, { + name: "check imagepullsecret", + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Assert(t, ss.Spec.Template.Spec.ImagePullSecrets != nil) + assert.Equal(t, ss.Spec.Template.Spec.ImagePullSecrets[0].Name, + "myImagePullSecret") + }, + }, { + name: "check pod priority", + ip: intentParams{ + spec: &v1beta1.PostgresInstanceSetSpec{ + PriorityClassName: initialize.String("some-priority-class"), + }, + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, ss.Spec.Template.Spec.PriorityClassName, + "some-priority-class") + }, + }, { + name: "check default 
scheduling constraints are added", + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, len(ss.Spec.Template.Spec.TopologySpreadConstraints), 2) + assert.Assert(t, cmp.MarshalMatches(ss.Spec.Template.Spec.TopologySpreadConstraints, ` +- labelSelector: + matchExpressions: + - key: postgres-operator.crunchydata.com/data + operator: In + values: + - postgres + - pgbackrest + matchLabels: + postgres-operator.crunchydata.com/cluster: hippo + maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: ScheduleAnyway +- labelSelector: + matchExpressions: + - key: postgres-operator.crunchydata.com/data + operator: In + values: + - postgres + - pgbackrest + matchLabels: + postgres-operator.crunchydata.com/cluster: hippo + maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway + `)) + }, + }, { + name: "check default scheduling constraints are appended to existing", + ip: intentParams{ + spec: &v1beta1.PostgresInstanceSetSpec{ + Name: "instance1", + TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{ + MaxSkew: int32(1), + TopologyKey: "kubernetes.io/hostname", + WhenUnsatisfiable: corev1.ScheduleAnyway, + LabelSelector: &metav1.LabelSelector{ + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: naming.LabelCluster, Operator: "In", Values: []string{"somename"}}, + {Key: naming.LabelData, Operator: "Exists"}, + }, + }, + }}, + }, + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, len(ss.Spec.Template.Spec.TopologySpreadConstraints), 3) + assert.Assert(t, cmp.MarshalMatches(ss.Spec.Template.Spec.TopologySpreadConstraints, ` +- labelSelector: + matchExpressions: + - key: postgres-operator.crunchydata.com/cluster + operator: In + values: + - somename + - key: postgres-operator.crunchydata.com/data + operator: Exists + maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: ScheduleAnyway +- labelSelector: + matchExpressions: + - key: postgres-operator.crunchydata.com/data + operator: In + values: + - postgres + - pgbackrest + matchLabels: + postgres-operator.crunchydata.com/cluster: hippo + maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: ScheduleAnyway +- labelSelector: + matchExpressions: + - key: postgres-operator.crunchydata.com/data + operator: In + values: + - postgres + - pgbackrest + matchLabels: + postgres-operator.crunchydata.com/cluster: hippo + maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway + `)) + }, + }, { + name: "check defined constraint when defaults disabled", + ip: intentParams{ + cluster: &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "hippo", + }, + Spec: v1beta1.PostgresClusterSpec{ + PostgresVersion: 13, + DisableDefaultPodScheduling: initialize.Bool(true), + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + Name: "instance1", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{ + MaxSkew: int32(1), + TopologyKey: "kubernetes.io/hostname", + WhenUnsatisfiable: corev1.ScheduleAnyway, + LabelSelector: &metav1.LabelSelector{ + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: naming.LabelCluster, Operator: "In", Values: []string{"somename"}}, + {Key: naming.LabelData, Operator: "Exists"}, + }, + }, + }}, + }}, + }, + }, + spec: &v1beta1.PostgresInstanceSetSpec{ + Name: "instance1", + TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{ + MaxSkew: int32(1), + TopologyKey: 
"kubernetes.io/hostname", + WhenUnsatisfiable: corev1.ScheduleAnyway, + LabelSelector: &metav1.LabelSelector{ + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: naming.LabelCluster, Operator: "In", Values: []string{"somename"}}, + {Key: naming.LabelData, Operator: "Exists"}, + }, + }, + }}, + }, + }, + run: func(t *testing.T, ss *appsv1.StatefulSet) { + assert.Equal(t, len(ss.Spec.Template.Spec.TopologySpreadConstraints), 1) + assert.Assert(t, cmp.MarshalMatches(ss.Spec.Template.Spec.TopologySpreadConstraints, + `- labelSelector: + matchExpressions: + - key: postgres-operator.crunchydata.com/cluster + operator: In + values: + - somename + - key: postgres-operator.crunchydata.com/data + operator: Exists + maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: ScheduleAnyway +`)) + }, + }} { + test := test + t.Run(test.name, func(t *testing.T) { + + cluster := test.ip.cluster + if cluster == nil { + cluster = testCluster() + } + + cluster.Default() + cluster.UID = types.UID("hippouid") + cluster.Namespace = test.name + "-ns" + cluster.Spec.Shutdown = &test.ip.shutdown + cluster.Status.StartupInstance = test.ip.startupInstance + + spec := test.ip.spec + if spec == nil { + spec = new(v1beta1.PostgresInstanceSetSpec) + spec.Default(0) + } + + clusterPodServiceName := test.ip.clusterPodServiceName + instanceServiceAccountName := test.ip.instanceServiceAccountName + sts := test.ip.sts + if sts == nil { + sts = &appsv1.StatefulSet{} + } + + generateInstanceStatefulSetIntent(context.Background(), + cluster, spec, + clusterPodServiceName, + instanceServiceAccountName, + sts, + test.ip.numInstancePods, + ) + + test.run(t, sts) + + if assert.Check(t, sts.Spec.Template.Spec.EnableServiceLinks != nil) { + assert.Equal(t, *sts.Spec.Template.Spec.EnableServiceLinks, false) + } + }) + } +} + +func TestFindAvailableInstanceNames(t *testing.T) { + + testCases := []struct { + set v1beta1.PostgresInstanceSetSpec + fakeObservedInstances *observedInstances + fakeClusterVolumes []corev1.PersistentVolumeClaim + expectedInstanceNames []string + }{{ + set: v1beta1.PostgresInstanceSetSpec{Name: "instance1"}, + fakeObservedInstances: newObservedInstances( + &v1beta1.PostgresCluster{Spec: v1beta1.PostgresClusterSpec{ + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{}}, + }}, + []appsv1.StatefulSet{{}}, + []corev1.Pod{}, + ), + fakeClusterVolumes: []corev1.PersistentVolumeClaim{{}}, + expectedInstanceNames: []string{}, + }, { + set: v1beta1.PostgresInstanceSetSpec{Name: "instance1"}, + fakeObservedInstances: newObservedInstances( + &v1beta1.PostgresCluster{Spec: v1beta1.PostgresClusterSpec{ + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{Name: "instance1"}}, + }}, + []appsv1.StatefulSet{{ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc", + Labels: map[string]string{ + naming.LabelInstanceSet: "instance1"}}}}, + []corev1.Pod{}, + ), + fakeClusterVolumes: []corev1.PersistentVolumeClaim{{ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc-def", + Labels: map[string]string{ + naming.LabelRole: naming.RolePostgresData, + naming.LabelInstanceSet: "instance1", + naming.LabelInstance: "instance1-abc"}}}}, + expectedInstanceNames: []string{}, + }, { + set: v1beta1.PostgresInstanceSetSpec{Name: "instance1"}, + fakeObservedInstances: newObservedInstances( + &v1beta1.PostgresCluster{Spec: v1beta1.PostgresClusterSpec{ + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{Name: "instance1"}}, + }}, + []appsv1.StatefulSet{{ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc", + Labels: 
map[string]string{ + naming.LabelInstanceSet: "instance1"}}}}, + []corev1.Pod{}, + ), + fakeClusterVolumes: []corev1.PersistentVolumeClaim{}, + expectedInstanceNames: []string{}, + }, { + set: v1beta1.PostgresInstanceSetSpec{Name: "instance1"}, + fakeObservedInstances: newObservedInstances( + &v1beta1.PostgresCluster{Spec: v1beta1.PostgresClusterSpec{ + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{Name: "instance1"}}, + }}, + []appsv1.StatefulSet{{ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc", + Labels: map[string]string{ + naming.LabelInstanceSet: "instance1"}}}}, + []corev1.Pod{}, + ), + fakeClusterVolumes: []corev1.PersistentVolumeClaim{ + {ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc-def", + Labels: map[string]string{ + naming.LabelRole: naming.RolePostgresData, + naming.LabelInstanceSet: "instance1", + naming.LabelInstance: "instance1-abc"}}}, + {ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc-efg", + Labels: map[string]string{ + naming.LabelRole: naming.RolePostgresData, + naming.LabelInstanceSet: "instance1", + naming.LabelInstance: "instance1-def"}}}, + }, + expectedInstanceNames: []string{"instance1-def"}, + }, { + set: v1beta1.PostgresInstanceSetSpec{Name: "instance1"}, + fakeObservedInstances: newObservedInstances( + &v1beta1.PostgresCluster{Spec: v1beta1.PostgresClusterSpec{ + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{Name: "instance1"}}, + }}, + []appsv1.StatefulSet{{ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc", + Labels: map[string]string{ + naming.LabelInstanceSet: "instance1"}}}}, + []corev1.Pod{}, + ), + fakeClusterVolumes: []corev1.PersistentVolumeClaim{{ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc-def", + Labels: map[string]string{ + naming.LabelRole: naming.RolePostgresData, + naming.LabelInstanceSet: "instance1", + naming.LabelInstance: "instance1-def"}}}}, + expectedInstanceNames: []string{"instance1-def"}, + }, { + set: v1beta1.PostgresInstanceSetSpec{Name: "instance1", + WALVolumeClaimSpec: &corev1.PersistentVolumeClaimSpec{}}, + fakeObservedInstances: newObservedInstances( + &v1beta1.PostgresCluster{Spec: v1beta1.PostgresClusterSpec{ + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{Name: "instance1"}}, + }}, + []appsv1.StatefulSet{{ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc", + Labels: map[string]string{ + naming.LabelInstanceSet: "instance1"}}}}, + []corev1.Pod{}, + ), + fakeClusterVolumes: []corev1.PersistentVolumeClaim{ + {ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc-def", + Labels: map[string]string{ + naming.LabelRole: naming.RolePostgresData, + naming.LabelInstanceSet: "instance1", + naming.LabelInstance: "instance1-abc"}}}, + {ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-abc-def", + Labels: map[string]string{ + naming.LabelRole: naming.RolePostgresWAL, + naming.LabelInstanceSet: "instance1", + naming.LabelInstance: "instance1-abc"}}}}, + expectedInstanceNames: []string{}, + }, { + set: v1beta1.PostgresInstanceSetSpec{Name: "instance1", + WALVolumeClaimSpec: &corev1.PersistentVolumeClaimSpec{}}, + fakeObservedInstances: newObservedInstances( + &v1beta1.PostgresCluster{Spec: v1beta1.PostgresClusterSpec{ + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{Name: "instance1"}}, + }}, + []appsv1.StatefulSet{}, + []corev1.Pod{}, + ), + fakeClusterVolumes: []corev1.PersistentVolumeClaim{ + {ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-def-ghi", + Labels: map[string]string{ + naming.LabelRole: naming.RolePostgresData, + naming.LabelInstanceSet: "instance1", + naming.LabelInstance: 
"instance1-def"}}}, + {ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-def-jkl", + Labels: map[string]string{ + naming.LabelRole: naming.RolePostgresWAL, + naming.LabelInstanceSet: "instance1", + naming.LabelInstance: "instance1-def"}}}}, + expectedInstanceNames: []string{"instance1-def"}, + }, { + set: v1beta1.PostgresInstanceSetSpec{Name: "instance1", + WALVolumeClaimSpec: &corev1.PersistentVolumeClaimSpec{}}, + fakeObservedInstances: newObservedInstances( + &v1beta1.PostgresCluster{Spec: v1beta1.PostgresClusterSpec{ + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{Name: "instance1"}}, + }}, + []appsv1.StatefulSet{}, + []corev1.Pod{}, + ), + fakeClusterVolumes: []corev1.PersistentVolumeClaim{{ObjectMeta: metav1.ObjectMeta{ + Name: "instance1-def-ghi", + Labels: map[string]string{ + naming.LabelRole: naming.RolePostgresData, + naming.LabelInstanceSet: "instance1", + naming.LabelInstance: "instance1-def"}}}}, + expectedInstanceNames: []string{}, + }} + + for _, tc := range testCases { + var walEnabled string + if tc.set.WALVolumeClaimSpec != nil { + walEnabled = ", WAL volume enabled" + } + name := fmt.Sprintf("%d set(s), %d volume(s)%s: expect %d instance names(s)", + len(tc.fakeObservedInstances.setNames), len(tc.fakeClusterVolumes), walEnabled, + len(tc.expectedInstanceNames)) + t.Run(name, func(t *testing.T) { + assert.DeepEqual(t, findAvailableInstanceNames(tc.set, tc.fakeObservedInstances, + tc.fakeClusterVolumes), tc.expectedInstanceNames) + }) + } +} + +func TestReconcileInstanceSetPodDisruptionBudget(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } + + foundPDB := func( + cluster *v1beta1.PostgresCluster, + spec *v1beta1.PostgresInstanceSetSpec, + ) bool { + got := &policyv1.PodDisruptionBudget{} + err := r.Client.Get(ctx, + naming.AsObjectKey(naming.InstanceSet(cluster, spec)), + got) + return !apierrors.IsNotFound(err) + + } + + ns := setupNamespace(t, cc) + + t.Run("empty", func(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + spec := &v1beta1.PostgresInstanceSetSpec{} + + assert.Error(t, r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec), + "Replicas should be defined") + }) + + t.Run("not created", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns.Name + spec := &cluster.Spec.InstanceSets[0] + spec.MinAvailable = initialize.Pointer(intstr.FromInt32(0)) + assert.NilError(t, r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec)) + assert.Assert(t, !foundPDB(cluster, spec)) + }) + + t.Run("int created", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns.Name + spec := &cluster.Spec.InstanceSets[0] + spec.MinAvailable = initialize.Pointer(intstr.FromInt32(1)) + + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + assert.NilError(t, r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec)) + assert.Assert(t, foundPDB(cluster, spec)) + + t.Run("deleted", func(t *testing.T) { + spec.MinAvailable = initialize.Pointer(intstr.FromInt32(0)) + err := r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec) + if apierrors.IsConflict(err) { + // When running in an existing environment another controller will sometimes update + // the object. This leads to an error where the ResourceVersion of the object does + // not match what we expect. 
When we run into this conflict, try to reconcile the + // object again. + err = r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec) + } + assert.NilError(t, err, errors.Unwrap(err)) + assert.Assert(t, !foundPDB(cluster, spec)) + }) + }) + + t.Run("str created", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns.Name + spec := &cluster.Spec.InstanceSets[0] + spec.MinAvailable = initialize.Pointer(intstr.FromString("50%")) + + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + assert.NilError(t, r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec)) + assert.Assert(t, foundPDB(cluster, spec)) + + t.Run("deleted", func(t *testing.T) { + spec.MinAvailable = initialize.Pointer(intstr.FromString("0%")) + err := r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec) + if apierrors.IsConflict(err) { + // When running in an existing environment another controller will sometimes update + // the object. This leads to an error where the ResourceVersion of the object does + // not match what we expect. When we run into this conflict, try to reconcile the + // object again. + err = r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec) + } + assert.NilError(t, err, errors.Unwrap(err)) + assert.Assert(t, !foundPDB(cluster, spec)) + }) + + t.Run("delete with 00%", func(t *testing.T) { + spec.MinAvailable = initialize.Pointer(intstr.FromString("50%")) + + assert.NilError(t, r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec)) + assert.Assert(t, foundPDB(cluster, spec)) + + t.Run("deleted", func(t *testing.T) { + spec.MinAvailable = initialize.Pointer(intstr.FromString("00%")) + err := r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec) + if apierrors.IsConflict(err) { + // When running in an existing environment another controller will sometimes update + // the object. This leads to an error where the ResourceVersion of the object does + // not match what we expect. When we run into this conflict, try to reconcile the + // object again. 
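+ // Record the conflict in the test output, then retry the reconcile once.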
+ t.Log("conflict:", err) + err = r.reconcileInstanceSetPodDisruptionBudget(ctx, cluster, spec) + } + assert.NilError(t, err, "\n%#v", errors.Unwrap(err)) + assert.Assert(t, !foundPDB(cluster, spec)) + }) + }) + }) +} + +func TestCleanupDisruptionBudgets(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } + + ns := setupNamespace(t, cc) + + generatePDB := func( + t *testing.T, + cluster *v1beta1.PostgresCluster, + spec *v1beta1.PostgresInstanceSetSpec, + minAvailable *intstr.IntOrString, + ) *policyv1.PodDisruptionBudget { + meta := naming.InstanceSet(cluster, spec) + meta.Labels = map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: spec.Name, + } + pdb, err := r.generatePodDisruptionBudget( + cluster, + meta, + minAvailable, + naming.ClusterInstanceSet(cluster.Name, spec.Name), + ) + assert.NilError(t, err) + return pdb + } + + createPDB := func( + pdb *policyv1.PodDisruptionBudget, + ) error { + return r.Client.Create(ctx, pdb) + } + + foundPDB := func( + pdb *policyv1.PodDisruptionBudget, + ) bool { + return !apierrors.IsNotFound( + r.Client.Get(ctx, client.ObjectKeyFromObject(pdb), + &policyv1.PodDisruptionBudget{})) + } + + t.Run("pdbs not found", func(t *testing.T) { + cluster := testCluster() + assert.NilError(t, r.cleanupPodDisruptionBudgets(ctx, cluster)) + }) + + t.Run("pdbs found", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns.Name + spec := &cluster.Spec.InstanceSets[0] + spec.MinAvailable = initialize.Pointer(intstr.FromInt32(1)) + + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + expectedPDB := generatePDB(t, cluster, spec, + initialize.Pointer(intstr.FromInt32(1))) + assert.NilError(t, createPDB(expectedPDB)) + + t.Run("no instances were removed", func(t *testing.T) { + assert.Assert(t, foundPDB(expectedPDB)) + assert.NilError(t, r.cleanupPodDisruptionBudgets(ctx, cluster)) + assert.Assert(t, foundPDB(expectedPDB)) + }) + + t.Run("cleanup leftover pdb", func(t *testing.T) { + leftoverPDB := generatePDB(t, cluster, &v1beta1.PostgresInstanceSetSpec{ + Name: "old-instance", + Replicas: initialize.Int32(1), + }, initialize.Pointer(intstr.FromInt32(1))) + assert.NilError(t, createPDB(leftoverPDB)) + + assert.Assert(t, foundPDB(expectedPDB)) + assert.Assert(t, foundPDB(leftoverPDB)) + err := r.cleanupPodDisruptionBudgets(ctx, cluster) + + // The disruption controller updates the status of a PDB any time a + // related Pod changes. When this happens, the resourceVersion of + // the PDB does not match what we expect and we get a conflict. Retry. + if apierrors.IsConflict(err) { + t.Log("conflict:", err) + err = r.cleanupPodDisruptionBudgets(ctx, cluster) + } + + assert.NilError(t, err, "\n%#v", errors.Unwrap(err)) + assert.Assert(t, foundPDB(expectedPDB)) + assert.Assert(t, !foundPDB(leftoverPDB)) + }) + }) +} diff --git a/internal/controller/postgrescluster/patroni.go b/internal/controller/postgrescluster/patroni.go new file mode 100644 index 0000000000..1c5ac93eed --- /dev/null +++ b/internal/controller/postgrescluster/patroni.go @@ -0,0 +1,604 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "io" + "time" + + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/patroni" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// +kubebuilder:rbac:groups="",resources="endpoints",verbs={deletecollection} + +func (r *Reconciler) deletePatroniArtifacts( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) error { + // TODO(cbandy): This could also be accomplished by adopting the Endpoints + // as Patroni creates them. Would their events cause too many reconciles? + // Foreground deletion may force us to adopt and set finalizers anyway. + + selector, err := naming.AsSelector(naming.ClusterPatronis(cluster)) + if err == nil { + err = errors.WithStack( + r.Client.DeleteAllOf(ctx, &corev1.Endpoints{}, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector}, + )) + } + + return err +} + +func (r *Reconciler) handlePatroniRestarts( + ctx context.Context, cluster *v1beta1.PostgresCluster, instances *observedInstances, +) error { + const container = naming.ContainerDatabase + var primaryNeedsRestart, replicaNeedsRestart *Instance + + // Look for one primary and one replica that need to restart. Ignore + // containers that are terminating or not running; Kubernetes will start + // them again, and calls to their Patroni API will likely be interrupted anyway. + for _, instance := range instances.forCluster { + if len(instance.Pods) > 0 && patroni.PodRequiresRestart(instance.Pods[0]) { + if terminating, known := instance.IsTerminating(); terminating || !known { + continue + } + if running, known := instance.IsRunning(container); !running || !known { + continue + } + + if primary, _ := instance.IsPrimary(); primary { + primaryNeedsRestart = instance + } else { + replicaNeedsRestart = instance + } + if primaryNeedsRestart != nil && replicaNeedsRestart != nil { + break + } + } + } + + // When the primary instance needs to restart, restart it and return early. + // Some PostgreSQL settings must be changed on the primary before any + // progress can be made on the replicas, e.g. decreasing "max_connections". + // Another reconcile will trigger when an instance with pending restarts + // updates its status in DCS. See [Reconciler.watchPods]. + // + // NOTE: In Patroni v2.1.1, regardless of the PostgreSQL parameter, the + // primary indicates it needs to restart one "loop_wait" *after* the + // replicas indicate it. So, even though we consider the primary ahead of + // replicas here, replicas will typically restart first because we see them + // first. + if primaryNeedsRestart != nil { + exec := patroni.Executor(func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + pod := primaryNeedsRestart.Pods[0] + return r.PodExec(ctx, pod.Namespace, pod.Name, container, stdin, stdout, stderr, command...) 
+ }) + + return errors.WithStack(exec.RestartPendingMembers(ctx, "master", naming.PatroniScope(cluster))) + } + + // When the primary does not need to restart but a replica does, restart all + // replicas that still need it. + // + // NOTE: This does not always clear the "needs restart" indicator on a replica. + // Patroni sets that when a parameter must be increased to match the minimum + // required of data on disk. When that happens, restarts occur (i.e. downtime) + // but the affected parameter cannot change until the replica has replayed + // the new minimum from the primary, e.g. decreasing "max_connections". + // - https://github.com/zalando/patroni/blob/v2.1.1/patroni/postgresql/config.py#L1069 + // + // TODO(cbandy): The above could interact badly with delayed replication. + // When we offer per-instance PostgreSQL configuration, we may need to revisit + // how we decide when to restart. + // - https://www.postgresql.org/docs/current/runtime-config-replication.html + if replicaNeedsRestart != nil { + exec := patroni.Executor(func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + pod := replicaNeedsRestart.Pods[0] + return r.PodExec(ctx, pod.Namespace, pod.Name, container, stdin, stdout, stderr, command...) + }) + + return errors.WithStack(exec.RestartPendingMembers(ctx, "replica", naming.PatroniScope(cluster))) + } + + // Nothing needs to restart. + return nil +} + +// +kubebuilder:rbac:groups="",resources="services",verbs={create,patch} + +// reconcilePatroniDistributedConfiguration sets labels and ownership on the +// objects Patroni creates for its distributed configuration. +func (r *Reconciler) reconcilePatroniDistributedConfiguration( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) error { + // When using Endpoints for DCS, Patroni needs a Service to ensure that the + // Endpoints object is not removed by Kubernetes at startup. Patroni will + // create this object if it has permission to do so, but it won't set any + // ownership. + // - https://releases.k8s.io/v1.16.0/pkg/controller/endpoint/endpoints_controller.go#L547 + // - https://releases.k8s.io/v1.20.0/pkg/controller/endpoint/endpoints_controller.go#L580 + // - https://github.com/zalando/patroni/blob/v2.0.1/patroni/dcs/kubernetes.py#L865-L881 + dcsService := &corev1.Service{ObjectMeta: naming.PatroniDistributedConfiguration(cluster)} + dcsService.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Service")) + + err := errors.WithStack(r.setControllerReference(cluster, dcsService)) + + dcsService.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil()) + dcsService.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelPatroni: naming.PatroniScope(cluster), + }) + + // Allocate no IP address (headless) and create no Endpoints. + // - https://docs.k8s.io/concepts/services-networking/service/#headless-services + dcsService.Spec.ClusterIP = corev1.ClusterIPNone + dcsService.Spec.Selector = nil + + if err == nil { + err = errors.WithStack(r.apply(ctx, dcsService)) + } + + // TODO(cbandy): DCS "failover_path"; `failover` and `switchover` create "{scope}-failover" endpoints. + // TODO(cbandy): DCS "sync_path"; `synchronous_mode` uses "{scope}-sync" endpoints. 
+ + return err +} + +// +kubebuilder:rbac:resources="pods",verbs={get,list} + +func (r *Reconciler) reconcilePatroniDynamicConfiguration( + ctx context.Context, cluster *v1beta1.PostgresCluster, instances *observedInstances, + pgHBAs postgres.HBAs, pgParameters postgres.Parameters, +) error { + if !patroni.ClusterBootstrapped(cluster) { + // Patroni has not yet bootstrapped. Dynamic configuration happens through + // configuration files during bootstrap, so there's nothing to do here. + return nil + } + + var pod *corev1.Pod + for _, instance := range instances.forCluster { + if terminating, known := instance.IsTerminating(); !terminating && known { + running, known := instance.IsRunning(naming.ContainerDatabase) + + if running && known && len(instance.Pods) > 0 { + pod = instance.Pods[0] + break + } + } + } + if pod == nil { + // There are no running Patroni containers; nothing to do. + return nil + } + + // NOTE(cbandy): Despite the guards above, calling PodExec may still fail + // due to a missing or stopped container. + + exec := func(ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string) error { + return r.PodExec(ctx, pod.Namespace, pod.Name, naming.ContainerDatabase, stdin, stdout, stderr, command...) + } + + var configuration map[string]any + if cluster.Spec.Patroni != nil { + configuration = cluster.Spec.Patroni.DynamicConfiguration + } + configuration = patroni.DynamicConfiguration(cluster, configuration, pgHBAs, pgParameters) + + return errors.WithStack( + patroni.Executor(exec).ReplaceConfiguration(ctx, configuration)) +} + +// generatePatroniLeaderLeaseService returns a v1.Service that exposes the +// Patroni leader when Patroni is using Endpoints for its leader elections. +func (r *Reconciler) generatePatroniLeaderLeaseService( + cluster *v1beta1.PostgresCluster) (*corev1.Service, error, +) { + service := &corev1.Service{ObjectMeta: naming.PatroniLeaderEndpoints(cluster)} + service.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Service")) + + service.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil()) + service.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil()) + + if spec := cluster.Spec.Service; spec != nil { + service.Annotations = naming.Merge(service.Annotations, + spec.Metadata.GetAnnotationsOrNil()) + service.Labels = naming.Merge(service.Labels, + spec.Metadata.GetLabelsOrNil()) + } + + // add our labels last so they aren't overwritten + service.Labels = naming.Merge(service.Labels, + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelPatroni: naming.PatroniScope(cluster), + }) + + // Allocate an IP address and/or node port and let Patroni manage the Endpoints. + // Patroni will ensure that they always route to the elected leader. + // - https://docs.k8s.io/concepts/services-networking/service/#services-without-selectors + service.Spec.Selector = nil + + // The TargetPort must be the name (not the number) of the PostgreSQL + // ContainerPort. This name allows the port number to differ between + // instances, which can happen during a rolling update. 
+ servicePort := corev1.ServicePort{ + Name: naming.PortPostgreSQL, + Port: *cluster.Spec.Port, + Protocol: corev1.ProtocolTCP, + TargetPort: intstr.FromString(naming.PortPostgreSQL), + } + + if spec := cluster.Spec.Service; spec == nil { + service.Spec.Type = corev1.ServiceTypeClusterIP + } else { + service.Spec.Type = corev1.ServiceType(spec.Type) + if spec.NodePort != nil { + if service.Spec.Type == corev1.ServiceTypeClusterIP { + // The NodePort can only be set when the Service type is NodePort or + // LoadBalancer. However, due to a known issue prior to Kubernetes + // 1.20, we clear these errors during our apply. To preserve the + // appropriate behavior, we log an Event and return an error. + // TODO(tjmoore4): Once Validation Rules are available, this check + // and event could potentially be removed in favor of that validation + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "MisconfiguredClusterIP", + "NodePort cannot be set with type ClusterIP on Service %q", service.Name) + return nil, fmt.Errorf("NodePort cannot be set with type ClusterIP on Service %q", service.Name) + } + servicePort.NodePort = *spec.NodePort + } + service.Spec.ExternalTrafficPolicy = initialize.FromPointer(spec.ExternalTrafficPolicy) + service.Spec.InternalTrafficPolicy = spec.InternalTrafficPolicy + } + service.Spec.Ports = []corev1.ServicePort{servicePort} + + err := errors.WithStack(r.setControllerReference(cluster, service)) + return service, err +} + +// +kubebuilder:rbac:groups="",resources="services",verbs={create,patch} + +// reconcilePatroniLeaderLease sets labels and ownership on the objects Patroni +// creates for its leader elections. When Patroni is using Endpoints for this, +// the returned Service resolves to the elected leader. Otherwise, it is nil. +func (r *Reconciler) reconcilePatroniLeaderLease( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*corev1.Service, error) { + // When using Endpoints for DCS, Patroni needs a Service to ensure that the + // Endpoints object is not removed by Kubernetes at startup. + // - https://releases.k8s.io/v1.16.0/pkg/controller/endpoint/endpoints_controller.go#L547 + // - https://releases.k8s.io/v1.20.0/pkg/controller/endpoint/endpoints_controller.go#L580 + service, err := r.generatePatroniLeaderLeaseService(cluster) + if err == nil { + err = errors.WithStack(r.apply(ctx, service)) + } + return service, err +} + +// +kubebuilder:rbac:groups="",resources="endpoints",verbs={get} + +// reconcilePatroniStatus populates cluster.Status.Patroni with observations. +func (r *Reconciler) reconcilePatroniStatus( + ctx context.Context, cluster *v1beta1.PostgresCluster, + observedInstances *observedInstances, +) (time.Duration, error) { + var requeue time.Duration + log := logging.FromContext(ctx) + + var readyInstance bool + for _, instance := range observedInstances.forCluster { + if r, _ := instance.IsReady(); r { + readyInstance = true + } + } + + dcs := &corev1.Endpoints{ObjectMeta: naming.PatroniDistributedConfiguration(cluster)} + err := errors.WithStack(client.IgnoreNotFound( + r.Client.Get(ctx, client.ObjectKeyFromObject(dcs), dcs))) + + if err == nil { + if dcs.Annotations["initialize"] != "" { + // After bootstrap, Patroni writes the cluster system identifier to DCS. 
+ cluster.Status.Patroni.SystemIdentifier = dcs.Annotations["initialize"] + } else if readyInstance { + // While we typically expect a value for the initialize key to be present in the + // Endpoints above by the time the StatefulSet for any instance indicates "ready" + // (since Patroni writes this value after successful cluster bootstrap, at which time + // the initial primary should transition to "ready"), sometimes this is not the case + // and the "initialize" key is not yet present. Therefore, if a "ready" instance + // is detected in the cluster we assume this is the case, and simply log a message and + // requeue in order to try again until the expected value is found. + log.Info("detected ready instance but no initialize value") + requeue = time.Second + } + } + + return requeue, err +} + +// reconcileReplicationSecret creates a secret containing the TLS +// certificate, key and CA certificate for use with the replication and +// pg_rewind accounts in Postgres. +// TODO: As part of future work we will use this secret to setup a superuser +// account and enable cert authentication for that user +func (r *Reconciler) reconcileReplicationSecret( + ctx context.Context, cluster *v1beta1.PostgresCluster, + root *pki.RootCertificateAuthority, +) (*corev1.Secret, error) { + + // if a custom postgrescluster secret is provided, just return it + if cluster.Spec.CustomReplicationClientTLSSecret != nil { + custom := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{ + Name: cluster.Spec.CustomReplicationClientTLSSecret.Name, + Namespace: cluster.Namespace, + }} + err := errors.WithStack(r.Client.Get(ctx, + client.ObjectKeyFromObject(custom), custom)) + return custom, err + } + + existing := &corev1.Secret{ObjectMeta: naming.ReplicationClientCertSecret(cluster)} + err := errors.WithStack(client.IgnoreNotFound( + r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing))) + + leaf := &pki.LeafCertificate{} + commonName := postgres.ReplicationUser + dnsNames := []string{commonName} + + if err == nil { + // Unmarshal and validate the stored leaf. These first errors can + // be ignored because they result in an invalid leaf which is then + // correctly regenerated. 
+ _ = leaf.Certificate.UnmarshalText(existing.Data[naming.ReplicationCert]) + _ = leaf.PrivateKey.UnmarshalText(existing.Data[naming.ReplicationPrivateKey]) + + leaf, err = root.RegenerateLeafWhenNecessary(leaf, commonName, dnsNames) + err = errors.WithStack(err) + } + + intent := &corev1.Secret{ObjectMeta: naming.ReplicationClientCertSecret(cluster)} + intent.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + intent.Data = make(map[string][]byte) + + // set labels and annotations + intent.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil()) + intent.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelClusterCertificate: "replication-client-tls", + }) + + if err := errors.WithStack(r.setControllerReference(cluster, intent)); err != nil { + return nil, err + } + if err == nil { + intent.Data[naming.ReplicationCert], err = leaf.Certificate.MarshalText() + err = errors.WithStack(err) + } + if err == nil { + intent.Data[naming.ReplicationPrivateKey], err = leaf.PrivateKey.MarshalText() + err = errors.WithStack(err) + } + if err == nil { + intent.Data[naming.ReplicationCACert], err = root.Certificate.MarshalText() + err = errors.WithStack(err) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, intent)) + } + return intent, err +} + +// replicationCertSecretProjection returns a secret projection of the postgrescluster's +// client certificate and key to include in the instance configuration volume. +func replicationCertSecretProjection(certificate *corev1.Secret) *corev1.SecretProjection { + return &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: certificate.Name, + }, + Items: []corev1.KeyToPath{ + { + Key: naming.ReplicationCert, + Path: naming.ReplicationCertPath, + }, + { + Key: naming.ReplicationPrivateKey, + Path: naming.ReplicationPrivateKeyPath, + }, + { + Key: naming.ReplicationCACert, + Path: naming.ReplicationCACertPath, + }, + }, + } +} + +func (r *Reconciler) reconcilePatroniSwitchover(ctx context.Context, + cluster *v1beta1.PostgresCluster, instances *observedInstances) error { + log := logging.FromContext(ctx) + + // If switchover is not enabled, clear out the Patroni switchover status fields + // which might have been set by previous switchovers. + // This also gives the user a way to easily recover and try again: if the operator + // runs into a problem with a switchover, turning `cluster.Spec.Patroni.Switchover` + // to `false` will clear the fields before another attempt + if cluster.Spec.Patroni == nil || + cluster.Spec.Patroni.Switchover == nil || + !cluster.Spec.Patroni.Switchover.Enabled { + cluster.Status.Patroni.Switchover = nil + cluster.Status.Patroni.SwitchoverTimeline = nil + return nil + } + + annotation := cluster.GetAnnotations()[naming.PatroniSwitchover] + spec := cluster.Spec.Patroni.Switchover + status := cluster.Status.Patroni.Switchover + + // If the status has been updated with the trigger annotation, the requested + // switchover has been successful, and the `SwitchoverTimeline` field can be cleared + if annotation == "" || (status != nil && *status == annotation) { + cluster.Status.Patroni.SwitchoverTimeline = nil + return nil + } + + // If we've reached this point, we assume a switchover request or in progress + // and need to make sure the prerequisites are met, e.g., more than one pod, + // a running instance to issue the switchover command to, etc. 
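+ // A switchover cannot proceed with a single instance; there must be at
+ // least one other member available to take over as primary.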
+ if len(instances.forCluster) <= 1 { + // TODO: event + // TODO: Possible webhook validation + return errors.New("Need more than one instance to switchover") + } + + // TODO: Add webhook validation that requires a targetInstance when requesting failover + if spec.Type == v1beta1.PatroniSwitchoverTypeFailover { + if spec.TargetInstance == nil || *spec.TargetInstance == "" { + // TODO: event + return errors.New("TargetInstance required when running failover") + } + } + + // Determine if user is specifying a target instance. Validate the + // provided instance has been observed in the cluster. + var targetInstance *Instance + if spec.TargetInstance != nil && *spec.TargetInstance != "" { + for _, instance := range instances.forCluster { + if *spec.TargetInstance == instance.Name { + targetInstance = instance + } + } + if targetInstance == nil { + // TODO: event + return errors.New("TargetInstance was specified but not found in the cluster") + } + if len(targetInstance.Pods) != 1 { + // We expect that a target instance should have one associated pod. + return errors.Errorf( + "TargetInstance should have one pod. Pods (%d)", len(targetInstance.Pods)) + } + } else { + log.V(1).Info("TargetInstance not provided") + } + + // Find a running Pod that can be used to define a PodExec function. + var runningPod *corev1.Pod + for _, instance := range instances.forCluster { + if running, known := instance.IsRunning(naming.ContainerDatabase); running && + known && len(instance.Pods) == 1 { + + runningPod = instance.Pods[0] + break + } + } + if runningPod == nil { + return errors.New("Could not find a running pod when attempting switchover.") + } + exec := func(_ context.Context, stdin io.Reader, stdout, stderr io.Writer, + command ...string) error { + return r.PodExec(ctx, runningPod.Namespace, runningPod.Name, naming.ContainerDatabase, stdin, + stdout, stderr, command...) + } + + // To ensure idempotency, the operator verifies that the timeline reported by Patroni + // matches the timeline that was present when the switchover was first requested. + // TODO(benjaminjb): consider pulling the timeline from the pod annotation; manual experiments + // have shown that the annotation on the Leader pod is up to date during a switchover, but + // missing from the Replica pods. + timeline, err := patroni.Executor(exec).GetTimeline(ctx) + + if err != nil { + return err + } + + if timeline == 0 { + return errors.New("error getting and parsing current timeline") + } + + statusTimeline := cluster.Status.Patroni.SwitchoverTimeline + + // If the `SwitchoverTimeline` field is empty, this is the first reconcile after + // a switchover has been requested and we need to fill in the field with the current TL + // as reported by Patroni. + // We return from here without calling for an explicit requeue, but since we're updating + // the object, we will reconcile this again for the actual switchover/failover action. + if statusTimeline == nil || (statusTimeline != nil && *statusTimeline == 0) { + log.V(1).Info("Setting SwitchoverTimeline", "timeline", timeline) + cluster.Status.Patroni.SwitchoverTimeline = &timeline + return nil + } + + // If the `SwitchoverTimeline` field does not match the current timeline as reported by Patroni, + // then we assume a switchover has been completed, and we have reached this point because the + // cache does not yet have the updated `cluster.Status.Patroni.Switchover` field. 
+ if statusTimeline != nil && *statusTimeline != timeline { + log.V(1).Info("SwitchoverTimeline does not match current timeline, assuming already completed switchover") + cluster.Status.Patroni.Switchover = initialize.String(annotation) + cluster.Status.Patroni.SwitchoverTimeline = nil + return nil + } + + // We have the pod executor, now we need to figure out which API call to use + // In the default case we will be using SwitchoverAndWait. This API call uses + // a Patronictl switchover to move to the target instance. + action := func(ctx context.Context, exec patroni.Executor, next string) (bool, error) { + success, err := exec.SwitchoverAndWait(ctx, next) + return success, errors.WithStack(err) + } + + if spec.Type == v1beta1.PatroniSwitchoverTypeFailover { + // When a failover has been requested we use FailoverAndWait to change the primary. + action = func(ctx context.Context, exec patroni.Executor, next string) (bool, error) { + success, err := exec.FailoverAndWait(ctx, next) + return success, errors.WithStack(err) + } + } + + // If target instance has not been provided, we will pass in an empty string to patronictl + nextPrimary := "" + if targetInstance != nil { + nextPrimary = targetInstance.Pods[0].Name + } + + success, err := action(ctx, exec, nextPrimary) + if err = errors.WithStack(err); err == nil && !success { + err = errors.New("unable to switchover") + } + + // If we've reached this point, a switchover has successfully been triggered + // and we set the status accordingly. + if err == nil { + cluster.Status.Patroni.Switchover = initialize.String(annotation) + cluster.Status.Patroni.SwitchoverTimeline = nil + } + + return err +} diff --git a/internal/controller/postgrescluster/patroni_test.go b/internal/controller/postgrescluster/patroni_test.go new file mode 100644 index 0000000000..b2a457685b --- /dev/null +++ b/internal/controller/postgrescluster/patroni_test.go @@ -0,0 +1,1005 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "io" + "os" + "strconv" + "strings" + "testing" + "time" + + "github.com/pkg/errors" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestGeneratePatroniLeaderLeaseService(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &Reconciler{ + Client: cc, + Recorder: new(record.FakeRecorder), + } + + cluster := &v1beta1.PostgresCluster{} + cluster.Namespace = "ns1" + cluster.Name = "pg2" + cluster.Spec.Port = initialize.Int32(9876) + + alwaysExpect := func(t testing.TB, service *corev1.Service) { + assert.Assert(t, cmp.MarshalMatches(service.TypeMeta, ` +apiVersion: v1 +kind: Service + `)) + assert.Assert(t, cmp.MarshalMatches(service.ObjectMeta, ` +creationTimestamp: null +labels: + postgres-operator.crunchydata.com/cluster: pg2 + postgres-operator.crunchydata.com/patroni: pg2-ha +name: pg2-ha +namespace: ns1 +ownerReferences: +- apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: pg2 + uid: "" + `)) + + // Always gets a ClusterIP (never None). + assert.Equal(t, service.Spec.ClusterIP, "") + assert.Assert(t, service.Spec.Selector == nil, + "got %v", service.Spec.Selector) + } + + t.Run("NoServiceSpec", func(t *testing.T) { + service, err := reconciler.generatePatroniLeaderLeaseService(cluster) + assert.NilError(t, err) + alwaysExpect(t, service) + // Defaults to ClusterIP. + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeClusterIP) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: postgres + port: 9876 + protocol: TCP + targetPort: postgres + `)) + }) + + t.Run("AnnotationsLabels", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{"a": "v1"}, + Labels: map[string]string{"b": "v2"}, + } + + service, err := reconciler.generatePatroniLeaderLeaseService(cluster) + assert.NilError(t, err) + + // Annotations present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Annotations, map[string]string{ + "a": "v1", + }) + + // Labels present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Labels, map[string]string{ + "b": "v2", + "postgres-operator.crunchydata.com/cluster": "pg2", + "postgres-operator.crunchydata.com/patroni": "pg2-ha", + }) + + // Labels not in the selector. 
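+		// The leader lease Service is expected to have no selector at all,
+		// presumably because Patroni itself maintains the Endpoints that point
+		// at the current primary; a label selector would conflict with that.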
+ assert.Assert(t, service.Spec.Selector == nil, + "got %v", service.Spec.Selector) + + // Add metadata to individual service + cluster.Spec.Service = &v1beta1.ServiceSpec{ + Metadata: &v1beta1.Metadata{ + Annotations: map[string]string{"c": "v3"}, + Labels: map[string]string{"d": "v4", + "postgres-operator.crunchydata.com/cluster": "wrongName"}, + }, + } + + service, err = reconciler.generatePatroniLeaderLeaseService(cluster) + assert.NilError(t, err) + + // Annotations present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Annotations, map[string]string{ + "a": "v1", + "c": "v3", + }) + + // Labels present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Labels, map[string]string{ + "b": "v2", + "d": "v4", + "postgres-operator.crunchydata.com/cluster": "pg2", + "postgres-operator.crunchydata.com/patroni": "pg2-ha", + }) + + // Labels not in the selector. + assert.Assert(t, service.Spec.Selector == nil, + "got %v", service.Spec.Selector) + }) + + types := []struct { + Type string + Expect func(testing.TB, *corev1.Service) + }{ + {Type: "ClusterIP", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeClusterIP) + }}, + {Type: "NodePort", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeNodePort) + }}, + {Type: "LoadBalancer", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeLoadBalancer) + }}, + } + + for _, test := range types { + t.Run(test.Type, func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Service = &v1beta1.ServiceSpec{Type: test.Type} + + service, err := reconciler.generatePatroniLeaderLeaseService(cluster) + assert.NilError(t, err) + alwaysExpect(t, service) + test.Expect(t, service) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: postgres + port: 9876 + protocol: TCP + targetPort: postgres + `)) + }) + } + + typesAndPort := []struct { + Description string + Type string + NodePort *int32 + Expect func(testing.TB, *corev1.Service, error) + }{ + {Description: "ClusterIP with Port 32000", Type: "ClusterIP", + NodePort: initialize.Int32(32000), Expect: func(t testing.TB, service *corev1.Service, err error) { + assert.ErrorContains(t, err, "NodePort cannot be set with type ClusterIP on Service \"pg2-ha\"") + assert.Assert(t, service == nil) + }}, + {Description: "NodePort with Port 32001", Type: "NodePort", + NodePort: initialize.Int32(32001), Expect: func(t testing.TB, service *corev1.Service, err error) { + assert.NilError(t, err) + alwaysExpect(t, service) + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeNodePort) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: postgres + nodePort: 32001 + port: 9876 + protocol: TCP + targetPort: postgres +`)) + }}, + {Description: "LoadBalancer with Port 32002", Type: "LoadBalancer", + NodePort: initialize.Int32(32002), Expect: func(t testing.TB, service *corev1.Service, err error) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeLoadBalancer) + assert.NilError(t, err) + alwaysExpect(t, service) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: postgres + nodePort: 32002 + port: 9876 + protocol: TCP + targetPort: postgres +`)) + }}, + } + + for _, test := range typesAndPort { + t.Run(test.Description, func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Service = &v1beta1.ServiceSpec{Type: test.Type, NodePort: test.NodePort} + + service, err := 
reconciler.generatePatroniLeaderLeaseService(cluster) + test.Expect(t, service, err) + }) + } +} + +func TestReconcilePatroniLeaderLease(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + ns := setupNamespace(t, cc) + reconciler := &Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + + cluster := testCluster() + cluster.Namespace = ns.Name + assert.NilError(t, cc.Create(ctx, cluster)) + + t.Run("NoServiceSpec", func(t *testing.T) { + service, err := reconciler.reconcilePatroniLeaderLease(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, service != nil) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, service)) }) + + assert.Assert(t, service.Spec.ClusterIP != "", + "expected to be assigned a ClusterIP") + }) + + serviceTypes := []string{"ClusterIP", "NodePort", "LoadBalancer"} + + // Confirm that each ServiceType can be reconciled. + for _, serviceType := range serviceTypes { + t.Run(serviceType, func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Service = &v1beta1.ServiceSpec{Type: serviceType} + + service, err := reconciler.reconcilePatroniLeaderLease(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, service != nil) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, service)) }) + + assert.Assert(t, service.Spec.ClusterIP != "", + "expected to be assigned a ClusterIP") + }) + } + + // CRD validation looks only at the new/incoming value of fields. Confirm + // that each ServiceType can change to any other ServiceType. Forbidding + // certain transitions requires a validating webhook. + serviceTypeChangeClusterCounter := 0 + for _, beforeType := range serviceTypes { + for _, changeType := range serviceTypes { + t.Run(beforeType+"To"+changeType, func(t *testing.T) { + // Creating fresh clusters for these tests + cluster := testCluster() + cluster.Namespace = ns.Name + + // Note (dsessler): Adding a number to each cluster name to make cluster/service + // names unique to work around an intermittent race condition where a service + // from a prior test has not been deleted yet when the next test runs, causing + // the test to fail due to non-matching IP addresses. + cluster.Name += "-" + strconv.Itoa(serviceTypeChangeClusterCounter) + assert.NilError(t, cc.Create(ctx, cluster)) + + cluster.Spec.Service = &v1beta1.ServiceSpec{Type: beforeType} + + before, err := reconciler.reconcilePatroniLeaderLease(ctx, cluster) + assert.NilError(t, err) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, before)) }) + + cluster.Spec.Service.Type = changeType + + after, err := reconciler.reconcilePatroniLeaderLease(ctx, cluster) + + // LoadBalancers are provisioned by a separate controller that + // updates the Service soon after creation. The API may return + // a conflict error when we race to update it, even though we + // don't send a resourceVersion in our payload. Retry. 
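+				// A conflict here is an HTTP 409 from the API server (optimistic
+				// concurrency), so reconciling once more against the freshly
+				// read object is normally enough to resolve it.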
+ if apierrors.IsConflict(err) { + t.Log("conflict:", err) + after, err = reconciler.reconcilePatroniLeaderLease(ctx, cluster) + } + + assert.NilError(t, err, "\n%#v", errors.Unwrap(err)) + assert.Equal(t, after.Spec.ClusterIP, before.Spec.ClusterIP, + "expected to keep the same ClusterIP") + serviceTypeChangeClusterCounter++ + }) + } + } +} + +func TestPatroniReplicationSecret(t *testing.T) { + // Garbage collector cleans up test resources before the test completes + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("USE_EXISTING_CLUSTER: Test fails due to garbage collection") + } + + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ctx := context.Background() + r := &Reconciler{Client: tClient, Owner: client.FieldOwner(t.Name())} + + // test postgrescluster values + var ( + clusterName = "hippocluster" + clusterUID = types.UID("hippouid") + ) + + // create a PostgresCluster to test with + postgresCluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName, + Namespace: setupNamespace(t, tClient).Name, + UID: clusterUID, + }, + } + + rootCA, err := r.reconcileRootCertificate(ctx, postgresCluster) + assert.NilError(t, err) + + t.Run("reconcile", func(t *testing.T) { + _, err = r.reconcileReplicationSecret(ctx, postgresCluster, rootCA) + assert.NilError(t, err) + }) + + t.Run("validate", func(t *testing.T) { + + patroniReplicationSecret := &corev1.Secret{ObjectMeta: naming.ReplicationClientCertSecret(postgresCluster)} + patroniReplicationSecret.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + err = r.Client.Get(ctx, client.ObjectKeyFromObject(patroniReplicationSecret), patroniReplicationSecret) + assert.NilError(t, err) + + t.Run("ca.crt", func(t *testing.T) { + + clientCert, ok := patroniReplicationSecret.Data["ca.crt"] + assert.Assert(t, ok) + + assert.Assert(t, strings.HasPrefix(string(clientCert), "-----BEGIN CERTIFICATE-----")) + assert.Assert(t, strings.HasSuffix(string(clientCert), "-----END CERTIFICATE-----\n")) + }) + + t.Run("tls.crt", func(t *testing.T) { + + clientCert, ok := patroniReplicationSecret.Data["tls.crt"] + assert.Assert(t, ok) + + assert.Assert(t, strings.HasPrefix(string(clientCert), "-----BEGIN CERTIFICATE-----")) + assert.Assert(t, strings.HasSuffix(string(clientCert), "-----END CERTIFICATE-----\n")) + }) + + t.Run("tls.key", func(t *testing.T) { + + clientKey, ok := patroniReplicationSecret.Data["tls.key"] + assert.Assert(t, ok) + + assert.Assert(t, strings.HasPrefix(string(clientKey), "-----BEGIN EC PRIVATE KEY-----")) + assert.Assert(t, strings.HasSuffix(string(clientKey), "-----END EC PRIVATE KEY-----\n")) + }) + + }) + + t.Run("check replication certificate secret projection", func(t *testing.T) { + // example auto-generated secret projection + testSecretProjection := &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: naming.ReplicationClientCertSecret(postgresCluster).Name, + }, + Items: []corev1.KeyToPath{ + { + Key: naming.ReplicationCert, + Path: naming.ReplicationCertPath, + }, + { + Key: naming.ReplicationPrivateKey, + Path: naming.ReplicationPrivateKeyPath, + }, + { + Key: naming.ReplicationCACert, + Path: naming.ReplicationCACertPath, + }, + }, + } + + rootCA, err := r.reconcileRootCertificate(ctx, postgresCluster) + assert.NilError(t, err) + + testReplicationSecret, err := r.reconcileReplicationSecret(ctx, postgresCluster, rootCA) + assert.NilError(t, err) + + t.Run("check standard secret projection", func(t *testing.T) { 
+ secretCertProj := replicationCertSecretProjection(testReplicationSecret) + + assert.DeepEqual(t, testSecretProjection, secretCertProj) + }) + }) + +} + +func TestReconcilePatroniStatus(t *testing.T) { + ctx := context.Background() + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient) + r := &Reconciler{Client: tClient, Owner: client.FieldOwner(t.Name())} + + systemIdentifier := "6952526174828511264" + createResources := func(index, readyReplicas int, + writeAnnotation bool) (*v1beta1.PostgresCluster, *observedInstances) { + + i := strconv.Itoa(index) + clusterName := "patroni-status-" + i + instanceName := "test-instance-" + i + instanceSet := "set-" + i + + labels := map[string]string{ + naming.LabelCluster: clusterName, + naming.LabelInstanceSet: instanceSet, + naming.LabelInstance: instanceName, + } + + postgresCluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName, + Namespace: ns.Name, + }, + } + + runner := &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: ns.Name, + Name: instanceName, + Labels: labels, + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: labels, + }, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: labels, + }, + }, + }, + } + + endpoints := &corev1.Endpoints{ + ObjectMeta: naming.PatroniDistributedConfiguration(postgresCluster), + } + if writeAnnotation { + endpoints.ObjectMeta.Annotations = make(map[string]string) + endpoints.ObjectMeta.Annotations["initialize"] = systemIdentifier + } + assert.NilError(t, tClient.Create(ctx, endpoints, &client.CreateOptions{})) + + instance := &Instance{ + Name: instanceName, Runner: runner, + } + for i := 0; i < readyReplicas; i++ { + instance.Pods = append(instance.Pods, &corev1.Pod{ + Status: corev1.PodStatus{ + Conditions: []corev1.PodCondition{{ + Type: corev1.PodReady, + Status: corev1.ConditionTrue, + Reason: "test", + Message: "test", + }}, + }, + }) + } + observedInstances := &observedInstances{} + observedInstances.forCluster = []*Instance{instance} + + return postgresCluster, observedInstances + } + + testsCases := []struct { + requeueExpected bool + readyReplicas int + writeAnnotation bool + }{ + {requeueExpected: false, readyReplicas: 1, writeAnnotation: true}, + {requeueExpected: true, readyReplicas: 1, writeAnnotation: false}, + {requeueExpected: false, readyReplicas: 0, writeAnnotation: false}, + {requeueExpected: false, readyReplicas: 0, writeAnnotation: false}, + } + + for i, tc := range testsCases { + t.Run(fmt.Sprintf("%+v", tc), func(t *testing.T) { + postgresCluster, observedInstances := createResources(i, tc.readyReplicas, + tc.writeAnnotation) + requeue, err := r.reconcilePatroniStatus(ctx, postgresCluster, observedInstances) + if tc.requeueExpected { + assert.NilError(t, err) + assert.Equal(t, requeue, time.Second) + } else { + assert.NilError(t, err) + assert.Equal(t, requeue, time.Duration(0)) + } + }) + } +} + +func TestReconcilePatroniSwitchover(t *testing.T) { + _, client := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + var called, failover, callError, callFails bool + var timelineCallNoLeader, timelineCall bool + r := Reconciler{ + Client: client, + PodExec: func(ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string) error { + called = true + switch { + case timelineCall: + timelineCall = false + stdout.Write([]byte(`[{"Cluster": "hippo-ha", "Member": 
"hippo-instance1-67mc-0", "Host": "hippo-instance1-67mc-0.hippo-pods", "Role": "Leader", "State": "running", "TL": 4}, {"Cluster": "hippo-ha", "Member": "hippo-instance1-ltcf-0", "Host": "hippo-instance1-ltcf-0.hippo-pods", "Role": "Replica", "State": "running", "TL": 4, "Lag in MB": 0}]`)) + case timelineCallNoLeader: + stdout.Write([]byte(`[{"Cluster": "hippo-ha", "Member": "hippo-instance1-ltcf-0", "Host": "hippo-instance1-ltcf-0.hippo-pods", "Role": "Replica", "State": "running", "TL": 4, "Lag in MB": 0}]`)) + case callError: + return errors.New("boom") + case callFails: + stdout.Write([]byte("bang")) + case failover: + stdout.Write([]byte("failed over")) + default: + stdout.Write([]byte("switched over")) + } + return nil + }, + } + + ctx := context.Background() + + getObserved := func() *observedInstances { + instances := []*Instance{{ + Name: "target", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "pod", + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + State: corev1.ContainerState{ + Running: new(corev1.ContainerStateRunning), + }, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, { + Name: "other", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "pod", + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + State: corev1.ContainerState{ + Running: new(corev1.ContainerStateRunning), + }, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }} + return &observedInstances{forCluster: instances} + } + + t.Run("empty", func(t *testing.T) { + cluster := testCluster() + observed := newObservedInstances(cluster, nil, nil) + assert.NilError(t, r.reconcilePatroniSwitchover(ctx, cluster, observed)) + }) + + t.Run("early validation", func(t *testing.T) { + for _, test := range []struct { + desc string + enabled bool + trigger string + status string + soType string + target string + check func(*testing.T, error, *v1beta1.PostgresCluster) + }{ + { + desc: "Switchover not enabled", + enabled: false, + check: func(t *testing.T, err error, cluster *v1beta1.PostgresCluster) { + assert.NilError(t, err) + assert.Assert(t, cluster.Status.Patroni.SwitchoverTimeline == nil) + assert.Assert(t, cluster.Status.Patroni.Switchover == nil) + }, + }, + { + desc: "Switchover trigger annotation not found", + enabled: true, + check: func(t *testing.T, err error, cluster *v1beta1.PostgresCluster) { + assert.NilError(t, err) + assert.Assert(t, cluster.Status.Patroni.SwitchoverTimeline == nil) + assert.Assert(t, cluster.Status.Patroni.Switchover == nil) + }, + }, + { + desc: "Status matches trigger annotation", + enabled: true, trigger: "triggered", status: "triggered", + check: func(t *testing.T, err error, cluster *v1beta1.PostgresCluster) { + assert.NilError(t, err) + assert.Assert(t, cluster.Status.Patroni.SwitchoverTimeline == nil) + assert.Equal(t, *cluster.Status.Patroni.Switchover, "triggered") + }, + }, + { + desc: "failover requested without a target", + enabled: true, trigger: "triggered", soType: "Failover", + check: func(t *testing.T, err error, cluster *v1beta1.PostgresCluster) { + assert.Error(t, err, "TargetInstance required when running failover") + assert.Equal(t, *cluster.Status.Patroni.SwitchoverTimeline, int64(2)) + assert.Assert(t, cluster.Status.Patroni.Switchover == nil) + }, + }, + { + desc: "target instance was specified but not found", + enabled: true, trigger: "triggered", target: "bad-target", + check: func(t *testing.T, 
err error, cluster *v1beta1.PostgresCluster) { + assert.Error(t, err, "TargetInstance was specified but not found in the cluster") + assert.Equal(t, *cluster.Status.Patroni.SwitchoverTimeline, int64(2)) + assert.Assert(t, cluster.Status.Patroni.Switchover == nil) + }, + }, + } { + t.Run(test.desc, func(t *testing.T) { + cluster := testCluster() + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + if test.enabled { + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + }, + } + } + if test.trigger != "" { + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: test.trigger, + } + } + if test.status != "" { + cluster.Status = v1beta1.PostgresClusterStatus{ + Patroni: v1beta1.PatroniStatus{ + Switchover: initialize.String(test.status), + }, + } + } + if test.soType != "" { + cluster.Spec.Patroni.Switchover.Type = test.soType + } + if test.target != "" { + cluster.Spec.Patroni.Switchover.TargetInstance = initialize.String(test.target) + } + cluster.Status.Patroni.SwitchoverTimeline = initialize.Int64(2) + test.check(t, r.reconcilePatroniSwitchover(ctx, cluster, getObserved()), cluster) + }) + } + }) + + t.Run("validate target instance", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + TargetInstance: initialize.String("target"), + }, + } + + t.Run("has no pods", func(t *testing.T) { + instances := []*Instance{{ + Name: "target", + }, { + Name: "target2", + }} + observed := &observedInstances{forCluster: instances} + + assert.Error(t, r.reconcilePatroniSwitchover(ctx, cluster, observed), + "TargetInstance should have one pod. 
Pods (0)") + }) + + t.Run("not running", func(t *testing.T) { + instances := []*Instance{{ + Name: "target", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "pod", + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, {Name: "other"}} + instances[0].Pods[0].Status = corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + State: corev1.ContainerState{ + Terminated: new(corev1.ContainerStateTerminated), + }, + }}, + } + observed := &observedInstances{forCluster: instances} + + assert.Error(t, r.reconcilePatroniSwitchover(ctx, cluster, observed), + "Could not find a running pod when attempting switchover.") + }) + }) + + t.Run("need replica to switch", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + TargetInstance: initialize.String("target"), + }, + } + + observed := &observedInstances{forCluster: []*Instance{{ + Name: "target", + }}} + assert.Error(t, r.reconcilePatroniSwitchover(ctx, cluster, observed), + "Need more than one instance to switchover") + }) + + t.Run("timeline getting call errors", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + }, + } + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + timelineCall, timelineCallNoLeader = false, false + called, failover, callError, callFails = false, false, true, false + err := r.reconcilePatroniSwitchover(ctx, cluster, getObserved()) + assert.Error(t, err, "boom") + assert.Assert(t, called) + assert.Assert(t, cluster.Status.Patroni.Switchover == nil) + }) + + t.Run("timeline getting call returns no leader", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + }, + } + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + timelineCall, timelineCallNoLeader = false, true + called, failover, callError, callFails = false, false, false, false + err := r.reconcilePatroniSwitchover(ctx, cluster, getObserved()) + assert.Error(t, err, "error getting and parsing current timeline") + assert.Assert(t, called) + assert.Assert(t, cluster.Status.Patroni.Switchover == nil) + }) + + t.Run("timeline set", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + }, + } + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + timelineCall, timelineCallNoLeader = true, false + called, failover, callError, callFails = false, false, false, false + err := r.reconcilePatroniSwitchover(ctx, cluster, getObserved()) + assert.NilError(t, err) + assert.Assert(t, called) + assert.Equal(t, 
*cluster.Status.Patroni.SwitchoverTimeline, int64(4)) + }) + + t.Run("timeline mismatch, timeline cleared", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + }, + } + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + cluster.Status.Patroni.SwitchoverTimeline = initialize.Int64(11) + timelineCall, timelineCallNoLeader = true, false + called, failover, callError, callFails = false, false, false, false + err := r.reconcilePatroniSwitchover(ctx, cluster, getObserved()) + assert.NilError(t, err) + assert.Assert(t, called) + assert.Assert(t, cluster.Status.Patroni.SwitchoverTimeline == nil) + }) + + t.Run("timeline cleared when status is updated", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + }, + } + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + cluster.Status.Patroni.SwitchoverTimeline = initialize.Int64(11) + timelineCall, timelineCallNoLeader = true, false + called, failover, callError, callFails = false, false, false, false + err := r.reconcilePatroniSwitchover(ctx, cluster, getObserved()) + assert.NilError(t, err) + assert.Assert(t, called) + assert.Assert(t, cluster.Status.Patroni.SwitchoverTimeline == nil) + }) + + t.Run("switchover call fails", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + }, + } + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + cluster.Status.Patroni.SwitchoverTimeline = initialize.Int64(4) + timelineCall, timelineCallNoLeader = true, false + called, failover, callError, callFails = false, false, false, true + err := r.reconcilePatroniSwitchover(ctx, cluster, getObserved()) + assert.Error(t, err, "unable to switchover") + assert.Assert(t, called) + assert.Assert(t, cluster.Status.Patroni.Switchover == nil) + assert.Equal(t, *cluster.Status.Patroni.SwitchoverTimeline, int64(4)) + }) + + t.Run("switchover call errors", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + }, + } + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + cluster.Status.Patroni.SwitchoverTimeline = initialize.Int64(4) + timelineCall, timelineCallNoLeader = true, false + called, failover, callError, callFails = false, false, true, false + err := r.reconcilePatroniSwitchover(ctx, cluster, getObserved()) + assert.Error(t, err, "boom") + assert.Assert(t, called) + assert.Assert(t, cluster.Status.Patroni.Switchover == nil) + }) + + t.Run("switchover called", func(t 
*testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + }, + } + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + cluster.Status.Patroni.SwitchoverTimeline = initialize.Int64(4) + timelineCall, timelineCallNoLeader = true, false + called, failover, callError, callFails = false, false, false, false + assert.NilError(t, r.reconcilePatroniSwitchover(ctx, cluster, getObserved())) + assert.Assert(t, called) + assert.Equal(t, *cluster.Status.Patroni.Switchover, "trigger") + assert.Assert(t, cluster.Status.Patroni.SwitchoverTimeline == nil) + }) + + t.Run("targeted switchover called", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + TargetInstance: initialize.String("target"), + }, + } + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + cluster.Status.Patroni.SwitchoverTimeline = initialize.Int64(4) + timelineCall, timelineCallNoLeader = true, false + called, failover, callError, callFails = false, false, false, false + assert.NilError(t, r.reconcilePatroniSwitchover(ctx, cluster, getObserved())) + assert.Assert(t, called) + assert.Equal(t, *cluster.Status.Patroni.Switchover, "trigger") + assert.Assert(t, cluster.Status.Patroni.SwitchoverTimeline == nil) + }) + + t.Run("targeted failover called", func(t *testing.T) { + cluster := testCluster() + cluster.Annotations = map[string]string{ + naming.PatroniSwitchover: "trigger", + } + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + Switchover: &v1beta1.PatroniSwitchover{ + Enabled: true, + Type: "Failover", + TargetInstance: initialize.String("target"), + }, + } + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "target", + Replicas: initialize.Int32(2), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }} + cluster.Status.Patroni.SwitchoverTimeline = initialize.Int64(4) + timelineCall, timelineCallNoLeader = true, false + called, failover, callError, callFails = false, true, false, false + assert.NilError(t, r.reconcilePatroniSwitchover(ctx, cluster, getObserved())) + assert.Assert(t, called) + assert.Equal(t, *cluster.Status.Patroni.Switchover, "trigger") + assert.Assert(t, cluster.Status.Patroni.SwitchoverTimeline == nil) + }) +} diff --git a/internal/controller/postgrescluster/pgadmin.go b/internal/controller/postgrescluster/pgadmin.go new file mode 100644 index 0000000000..c0a936ba1f --- /dev/null +++ b/internal/controller/postgrescluster/pgadmin.go @@ -0,0 +1,503 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "io" + + "github.com/pkg/errors" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pgadmin" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// reconcilePGAdmin writes the objects necessary to run a pgAdmin Pod. +func (r *Reconciler) reconcilePGAdmin( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) error { + // NOTE: [Reconciler.reconcilePGAdminUsers] is called in [Reconciler.reconcilePostgresUsers]. + + // TODO(tjmoore4): Currently, the returned service is only used in tests, + // but it may be useful during upcoming feature enhancements. If not, we + // may consider removing the service return altogether and refactoring + // this function to only return errors. + _, err := r.reconcilePGAdminService(ctx, cluster) + + var configmap *corev1.ConfigMap + var dataVolume *corev1.PersistentVolumeClaim + + if err == nil { + configmap, err = r.reconcilePGAdminConfigMap(ctx, cluster) + } + if err == nil { + dataVolume, err = r.reconcilePGAdminDataVolume(ctx, cluster) + } + if err == nil { + err = r.reconcilePGAdminStatefulSet(ctx, cluster, configmap, dataVolume) + } + return err +} + +// generatePGAdminConfigMap returns a v1.ConfigMap for pgAdmin. +func (r *Reconciler) generatePGAdminConfigMap( + cluster *v1beta1.PostgresCluster) (*corev1.ConfigMap, bool, error, +) { + configmap := &corev1.ConfigMap{ObjectMeta: naming.ClusterPGAdmin(cluster)} + configmap.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + + if cluster.Spec.UserInterface == nil || cluster.Spec.UserInterface.PGAdmin == nil { + return configmap, false, nil + } + + configmap.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.UserInterface.PGAdmin.Metadata.GetAnnotationsOrNil()) + configmap.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.UserInterface.PGAdmin.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGAdmin, + }) + + err := errors.WithStack(pgadmin.ConfigMap(cluster, configmap)) + if err == nil { + err = errors.WithStack(r.setControllerReference(cluster, configmap)) + } + + return configmap, true, err +} + +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={get} +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={create,delete,patch} + +// reconcilePGAdminConfigMap writes the ConfigMap for pgAdmin. +func (r *Reconciler) reconcilePGAdminConfigMap( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*corev1.ConfigMap, error) { + configmap, specified, err := r.generatePGAdminConfigMap(cluster) + + if err == nil && !specified { + // pgAdmin is disabled; delete the ConfigMap if it exists. Check the + // client cache first using Get. 
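+		// The Get avoids sending DELETE requests for objects that were never
+		// created, and deleteControlled presumably only removes objects that
+		// carry this cluster's controller ownerReference.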
+ key := client.ObjectKeyFromObject(configmap) + err := errors.WithStack(r.Client.Get(ctx, key, configmap)) + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, configmap)) + } + return nil, client.IgnoreNotFound(err) + } + + if err == nil { + err = errors.WithStack(r.apply(ctx, configmap)) + } + return configmap, err +} + +// generatePGAdminService returns a v1.Service that exposes pgAdmin pods. +// The ServiceType comes from the cluster user interface spec. +func (r *Reconciler) generatePGAdminService( + cluster *v1beta1.PostgresCluster) (*corev1.Service, bool, error, +) { + service := &corev1.Service{ObjectMeta: naming.ClusterPGAdmin(cluster)} + service.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Service")) + + if cluster.Spec.UserInterface == nil || cluster.Spec.UserInterface.PGAdmin == nil { + return service, false, nil + } + + service.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.UserInterface.PGAdmin.Metadata.GetAnnotationsOrNil()) + service.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.UserInterface.PGAdmin.Metadata.GetLabelsOrNil()) + + if spec := cluster.Spec.UserInterface.PGAdmin.Service; spec != nil { + service.Annotations = naming.Merge(service.Annotations, + spec.Metadata.GetAnnotationsOrNil()) + service.Labels = naming.Merge(service.Labels, + spec.Metadata.GetLabelsOrNil()) + } + + // add our labels last so they aren't overwritten + service.Labels = naming.Merge(service.Labels, + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGAdmin, + }) + + // Allocate an IP address and/or node port and let Kubernetes manage the + // Endpoints by selecting Pods with the pgAdmin role. + // - https://docs.k8s.io/concepts/services-networking/service/#defining-a-service + service.Spec.Selector = map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGAdmin, + } + + // The TargetPort must be the name (not the number) of the pgAdmin + // ContainerPort. This name allows the port number to differ between Pods, + // which can happen during a rolling update. + // + // TODO(tjmoore4): A custom service port is not currently supported as this + // requires updates to the pgAdmin service configuration. + servicePort := corev1.ServicePort{ + Name: naming.PortPGAdmin, + Port: 5050, + Protocol: corev1.ProtocolTCP, + TargetPort: intstr.FromString(naming.PortPGAdmin), + } + + if spec := cluster.Spec.UserInterface.PGAdmin.Service; spec == nil { + service.Spec.Type = corev1.ServiceTypeClusterIP + } else { + service.Spec.Type = corev1.ServiceType(spec.Type) + if spec.NodePort != nil { + if service.Spec.Type == corev1.ServiceTypeClusterIP { + // The NodePort can only be set when the Service type is NodePort or + // LoadBalancer. However, due to a known issue prior to Kubernetes + // 1.20, we clear these errors during our apply. To preserve the + // appropriate behavior, we log an Event and return an error. 
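+				// A hypothetical spec that would reach this branch (field names
+				// assume the lowerCamelCase JSON form of the Go fields):
+				//
+				//   userInterface:
+				//     pgAdmin:
+				//       service:
+				//         type: ClusterIP
+				//         nodePort: 32000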
+ // TODO(tjmoore4): Once Validation Rules are available, this check + // and event could potentially be removed in favor of that validation + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "MisconfiguredClusterIP", + "NodePort cannot be set with type ClusterIP on Service %q", service.Name) + return nil, true, fmt.Errorf("NodePort cannot be set with type ClusterIP on Service %q", service.Name) + } + servicePort.NodePort = *spec.NodePort + } + service.Spec.ExternalTrafficPolicy = initialize.FromPointer(spec.ExternalTrafficPolicy) + service.Spec.InternalTrafficPolicy = spec.InternalTrafficPolicy + } + service.Spec.Ports = []corev1.ServicePort{servicePort} + + err := errors.WithStack(r.setControllerReference(cluster, service)) + + return service, true, err +} + +// +kubebuilder:rbac:groups="",resources="services",verbs={get} +// +kubebuilder:rbac:groups="",resources="services",verbs={create,delete,patch} + +// reconcilePGAdminService writes the Service that resolves to pgAdmin. +func (r *Reconciler) reconcilePGAdminService( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*corev1.Service, error) { + service, specified, err := r.generatePGAdminService(cluster) + + if err == nil && !specified { + // pgAdmin is disabled; delete the Service if it exists. Check the client + // cache first using Get. + key := client.ObjectKeyFromObject(service) + err := errors.WithStack(r.Client.Get(ctx, key, service)) + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, service)) + } + return nil, client.IgnoreNotFound(err) + } + + if err == nil { + err = errors.WithStack(r.apply(ctx, service)) + } + return service, err +} + +// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={get} +// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={create,delete,patch} + +// reconcilePGAdminStatefulSet writes the StatefulSet that runs pgAdmin. +func (r *Reconciler) reconcilePGAdminStatefulSet( + ctx context.Context, cluster *v1beta1.PostgresCluster, + configmap *corev1.ConfigMap, dataVolume *corev1.PersistentVolumeClaim, +) error { + sts := &appsv1.StatefulSet{ObjectMeta: naming.ClusterPGAdmin(cluster)} + sts.SetGroupVersionKind(appsv1.SchemeGroupVersion.WithKind("StatefulSet")) + + if cluster.Spec.UserInterface == nil || cluster.Spec.UserInterface.PGAdmin == nil { + // pgAdmin is disabled; delete the Deployment if it exists. Check the + // client cache first using Get. 
+ key := client.ObjectKeyFromObject(sts) + err := errors.WithStack(r.Client.Get(ctx, key, sts)) + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, sts)) + } + return client.IgnoreNotFound(err) + } + + sts.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.UserInterface.PGAdmin.Metadata.GetAnnotationsOrNil()) + sts.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.UserInterface.PGAdmin.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGAdmin, + naming.LabelData: naming.DataPGAdmin, + }) + sts.Spec.Selector = &metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGAdmin, + }, + } + sts.Spec.Template.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.UserInterface.PGAdmin.Metadata.GetAnnotationsOrNil()) + sts.Spec.Template.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.UserInterface.PGAdmin.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGAdmin, + naming.LabelData: naming.DataPGAdmin, + }) + + // if the shutdown flag is set, set pgAdmin replicas to 0 + if cluster.Spec.Shutdown != nil && *cluster.Spec.Shutdown { + sts.Spec.Replicas = initialize.Int32(0) + } else { + sts.Spec.Replicas = cluster.Spec.UserInterface.PGAdmin.Replicas + } + + // Don't clutter the namespace with extra ControllerRevisions. + sts.Spec.RevisionHistoryLimit = initialize.Int32(0) + + // Give the Pod a stable DNS record based on its name. + // - https://docs.k8s.io/concepts/workloads/controllers/statefulset/#stable-network-id + // - https://docs.k8s.io/concepts/services-networking/dns-pod-service/#pods + sts.Spec.ServiceName = naming.ClusterPodService(cluster).Name + + // Use StatefulSet's "RollingUpdate" strategy and "Parallel" policy to roll + // out changes to pods even when not Running or not Ready. + // - https://docs.k8s.io/concepts/workloads/controllers/statefulset/#rolling-updates + // - https://docs.k8s.io/concepts/workloads/controllers/statefulset/#forced-rollback + // - https://kep.k8s.io/3541 + sts.Spec.PodManagementPolicy = appsv1.ParallelPodManagement + sts.Spec.UpdateStrategy.Type = appsv1.RollingUpdateStatefulSetStrategyType + + // Use scheduling constraints from the cluster spec. + sts.Spec.Template.Spec.Affinity = cluster.Spec.UserInterface.PGAdmin.Affinity + sts.Spec.Template.Spec.Tolerations = cluster.Spec.UserInterface.PGAdmin.Tolerations + sts.Spec.Template.Spec.PriorityClassName = + initialize.FromPointer(cluster.Spec.UserInterface.PGAdmin.PriorityClassName) + sts.Spec.Template.Spec.TopologySpreadConstraints = + cluster.Spec.UserInterface.PGAdmin.TopologySpreadConstraints + + // Restart containers any time they stop, die, are killed, etc. + // - https://docs.k8s.io/concepts/workloads/pods/pod-lifecycle/#restart-policy + sts.Spec.Template.Spec.RestartPolicy = corev1.RestartPolicyAlways + + // pgAdmin does not make any Kubernetes API calls. Use the default + // ServiceAccount and do not mount its credentials. + sts.Spec.Template.Spec.AutomountServiceAccountToken = initialize.Bool(false) + + // Do not add environment variables describing services in this namespace. 
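+	// With service links enabled, Kubernetes would inject variables such as
+	// FOO_SERVICE_HOST and FOO_SERVICE_PORT for every Service in the
+	// namespace into the pgAdmin containers.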
+ sts.Spec.Template.Spec.EnableServiceLinks = initialize.Bool(false) + + sts.Spec.Template.Spec.SecurityContext = postgres.PodSecurityContext(cluster) + + // set the image pull secrets, if any exist + sts.Spec.Template.Spec.ImagePullSecrets = cluster.Spec.ImagePullSecrets + + // Previous versions of PGO used a StatefulSet Pod Management Policy that could leave the Pod + // in a failed state. When we see that it has the wrong policy, we will delete the StatefulSet + // and then recreate it with the correct policy, as this is not a property that can be patched. + // When we delete the StatefulSet, we will leave its Pods in place. They will be claimed by + // the StatefulSet that gets created in the next reconcile. + existing := &appsv1.StatefulSet{} + if err := errors.WithStack(r.Client.Get(ctx, client.ObjectKeyFromObject(sts), existing)); err != nil { + if !apierrors.IsNotFound(err) { + return err + } + } else { + if existing.Spec.PodManagementPolicy != sts.Spec.PodManagementPolicy { + // We want to delete the STS without affecting the Pods, so we set the PropagationPolicy to Orphan. + // The orphaned Pods will be claimed by the StatefulSet that will be created in the next reconcile. + uid := existing.GetUID() + version := existing.GetResourceVersion() + exactly := client.Preconditions{UID: &uid, ResourceVersion: &version} + propagate := client.PropagationPolicy(metav1.DeletePropagationOrphan) + + return errors.WithStack(client.IgnoreNotFound(r.Client.Delete(ctx, existing, exactly, propagate))) + } + } + + if err := errors.WithStack(r.setControllerReference(cluster, sts)); err != nil { + return err + } + + pgadmin.Pod(cluster, configmap, &sts.Spec.Template.Spec, dataVolume) + + // add nss_wrapper init container and add nss_wrapper env vars to the pgAdmin + // container + addNSSWrapper( + config.PGAdminContainerImage(cluster), + cluster.Spec.ImagePullPolicy, + &sts.Spec.Template) + + // add an emptyDir volume to the PodTemplateSpec and an associated '/tmp' + // volume mount to all containers included within that spec + addTMPEmptyDir(&sts.Spec.Template) + + return errors.WithStack(r.apply(ctx, sts)) +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,patch} + +// reconcilePGAdminDataVolume writes the PersistentVolumeClaim for instance's +// pgAdmin data volume. +func (r *Reconciler) reconcilePGAdminDataVolume( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*corev1.PersistentVolumeClaim, error) { + + labelMap := map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGAdmin, + naming.LabelData: naming.DataPGAdmin, + } + + pvc := &corev1.PersistentVolumeClaim{ObjectMeta: naming.ClusterPGAdmin(cluster)} + pvc.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim")) + + if cluster.Spec.UserInterface == nil || cluster.Spec.UserInterface.PGAdmin == nil { + // pgAdmin is disabled; delete the PVC if it exists. Check the client + // cache first using Get. 
+ key := client.ObjectKeyFromObject(pvc) + err := errors.WithStack(r.Client.Get(ctx, key, pvc)) + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, pvc)) + } + return nil, client.IgnoreNotFound(err) + } + + pvc.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + ) + pvc.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + labelMap, + ) + pvc.Spec = cluster.Spec.UserInterface.PGAdmin.DataVolumeClaimSpec + + err := errors.WithStack(r.setControllerReference(cluster, pvc)) + + if err == nil { + err = r.handlePersistentVolumeClaimError(cluster, + errors.WithStack(r.apply(ctx, pvc))) + } + + return pvc, err +} + +// +kubebuilder:rbac:groups="",resources="pods",verbs={get} + +// reconcilePGAdminUsers creates users inside of pgAdmin. +func (r *Reconciler) reconcilePGAdminUsers( + ctx context.Context, cluster *v1beta1.PostgresCluster, + specUsers []v1beta1.PostgresUserSpec, userSecrets map[string]*corev1.Secret, +) error { + const container = naming.ContainerPGAdmin + var podExecutor pgadmin.Executor + + if cluster.Spec.UserInterface == nil || cluster.Spec.UserInterface.PGAdmin == nil { + // pgAdmin is disabled; clear its status. + // TODO(cbandy): Revisit this approach when there is more than one UI. + cluster.Status.UserInterface = nil + return nil + } + + // Find the running pgAdmin container. When there is none, return early. + + pod := &corev1.Pod{ObjectMeta: naming.ClusterPGAdmin(cluster)} + pod.Name += "-0" + + err := errors.WithStack(r.Client.Get(ctx, client.ObjectKeyFromObject(pod), pod)) + if err != nil { + return client.IgnoreNotFound(err) + } + + var running bool + for _, status := range pod.Status.ContainerStatuses { + if status.Name == container { + running = status.State.Running != nil + } + } + if terminating := pod.DeletionTimestamp != nil; running && !terminating { + ctx = logging.NewContext(ctx, logging.FromContext(ctx).WithValues("pod", pod.Name)) + + podExecutor = func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return r.PodExec(ctx, pod.Namespace, pod.Name, container, stdin, stdout, stderr, command...) + } + } + if podExecutor == nil { + return nil + } + + // Calculate a hash of the commands that should be executed in pgAdmin. + + passwords := make(map[string]string, len(userSecrets)) + for userName := range userSecrets { + passwords[userName] = string(userSecrets[userName].Data["password"]) + } + + write := func(ctx context.Context, exec pgadmin.Executor) error { + return pgadmin.WriteUsersInPGAdmin(ctx, cluster, exec, specUsers, passwords) + } + + revision, err := safeHash32(func(hasher io.Writer) error { + // Discard log messages about executing. + return write(logging.NewContext(ctx, logging.Discard()), func( + _ context.Context, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + _, err := fmt.Fprint(hasher, command) + if err == nil && stdin != nil { + _, err = io.Copy(hasher, stdin) + } + return err + }) + }) + + if err == nil && + cluster.Status.UserInterface != nil && + cluster.Status.UserInterface.PGAdmin.UsersRevision == revision { + // The necessary commands have already been run; there's nothing more to do. + + // TODO(cbandy): Give the user a way to trigger execution regardless. + // The value of an annotation could influence the hash, for example. + return nil + } + + // Run the necessary commands and record their hash in cluster.Status. + // Include the hash in any log messages. 
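+	// In other words: the write closure above was first run against a
+	// hash-only executor that records the would-be commands and stdin without
+	// touching the Pod. When that digest matched the stored UsersRevision the
+	// function already returned; otherwise the commands run for real here and
+	// the new digest is saved into the status.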
+ + if err == nil { + log := logging.FromContext(ctx).WithValues("revision", revision) + err = errors.WithStack(write(logging.NewContext(ctx, log), podExecutor)) + } + if err == nil { + if cluster.Status.UserInterface == nil { + cluster.Status.UserInterface = new(v1beta1.PostgresUserInterfaceStatus) + } + cluster.Status.UserInterface.PGAdmin.UsersRevision = revision + } + + return err +} diff --git a/internal/controller/postgrescluster/pgadmin_test.go b/internal/controller/postgrescluster/pgadmin_test.go new file mode 100644 index 0000000000..92ec6f42f1 --- /dev/null +++ b/internal/controller/postgrescluster/pgadmin_test.go @@ -0,0 +1,881 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "io" + "strconv" + "testing" + + "github.com/pkg/errors" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestGeneratePGAdminConfigMap(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &Reconciler{Client: cc} + + cluster := &v1beta1.PostgresCluster{} + cluster.Namespace = "some-ns" + cluster.Name = "pg1" + + t.Run("Unspecified", func(t *testing.T) { + for _, spec := range []*v1beta1.UserInterfaceSpec{ + nil, new(v1beta1.UserInterfaceSpec), + } { + cluster := cluster.DeepCopy() + cluster.Spec.UserInterface = spec + + configmap, specified, err := reconciler.generatePGAdminConfigMap(cluster) + assert.NilError(t, err) + assert.Assert(t, !specified) + + assert.Equal(t, configmap.Namespace, cluster.Namespace) + assert.Equal(t, configmap.Name, "pg1-pgadmin") + } + }) + + cluster.Spec.UserInterface = &v1beta1.UserInterfaceSpec{ + PGAdmin: &v1beta1.PGAdminPodSpec{}, + } + + t.Run("Data,ObjectMeta,TypeMeta", func(t *testing.T) { + cluster := cluster.DeepCopy() + + configmap, specified, err := reconciler.generatePGAdminConfigMap(cluster) + assert.NilError(t, err) + assert.Assert(t, specified) + + assert.Assert(t, cmp.MarshalMatches(configmap.TypeMeta, ` +apiVersion: v1 +kind: ConfigMap + `)) + assert.Assert(t, cmp.MarshalMatches(configmap.ObjectMeta, ` +creationTimestamp: null +labels: + postgres-operator.crunchydata.com/cluster: pg1 + postgres-operator.crunchydata.com/role: pgadmin +name: pg1-pgadmin +namespace: some-ns +ownerReferences: +- apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: pg1 + uid: "" + `)) + + assert.Assert(t, len(configmap.Data) > 0, "expected some configuration") + }) + + t.Run("Annotations,Labels", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{"a": "v1", "b": "v2"}, + Labels: map[string]string{"c": "v3", "d": "v4"}, + } + cluster.Spec.UserInterface.PGAdmin.Metadata = &v1beta1.Metadata{ + Annotations: 
map[string]string{"a": "v5", "e": "v6"}, + Labels: map[string]string{"c": "v7", "f": "v8"}, + } + + configmap, specified, err := reconciler.generatePGAdminConfigMap(cluster) + assert.NilError(t, err) + assert.Assert(t, specified) + + // Annotations present in the metadata. + assert.DeepEqual(t, configmap.ObjectMeta.Annotations, map[string]string{ + "a": "v5", "b": "v2", "e": "v6", + }) + + // Labels present in the metadata. + assert.DeepEqual(t, configmap.ObjectMeta.Labels, map[string]string{ + "c": "v7", "d": "v4", "f": "v8", + "postgres-operator.crunchydata.com/cluster": "pg1", + "postgres-operator.crunchydata.com/role": "pgadmin", + }) + }) +} + +func TestGeneratePGAdminService(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &Reconciler{ + Client: cc, + Recorder: new(record.FakeRecorder), + } + + cluster := &v1beta1.PostgresCluster{} + cluster.Namespace = "my-ns" + cluster.Name = "my-cluster" + + t.Run("Unspecified", func(t *testing.T) { + for _, spec := range []*v1beta1.UserInterfaceSpec{ + nil, new(v1beta1.UserInterfaceSpec), + } { + cluster := cluster.DeepCopy() + cluster.Spec.UserInterface = spec + + service, specified, err := reconciler.generatePGAdminService(cluster) + assert.NilError(t, err) + assert.Assert(t, !specified) + + assert.Assert(t, cmp.MarshalMatches(service.ObjectMeta, ` +creationTimestamp: null +name: my-cluster-pgadmin +namespace: my-ns + `)) + } + }) + + cluster.Spec.UserInterface = &v1beta1.UserInterfaceSpec{ + PGAdmin: &v1beta1.PGAdminPodSpec{}, + } + + alwaysExpect := func(t testing.TB, service *corev1.Service) { + assert.Assert(t, cmp.MarshalMatches(service.TypeMeta, ` +apiVersion: v1 +kind: Service + `)) + assert.Assert(t, cmp.MarshalMatches(service.ObjectMeta, ` +creationTimestamp: null +labels: + postgres-operator.crunchydata.com/cluster: my-cluster + postgres-operator.crunchydata.com/role: pgadmin +name: my-cluster-pgadmin +namespace: my-ns +ownerReferences: +- apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: my-cluster + uid: "" + `)) + + // Always gets a ClusterIP (never None). + assert.Equal(t, service.Spec.ClusterIP, "") + assert.DeepEqual(t, service.Spec.Selector, map[string]string{ + "postgres-operator.crunchydata.com/cluster": "my-cluster", + "postgres-operator.crunchydata.com/role": "pgadmin", + }) + } + + t.Run("AnnotationsLabels", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{"a": "v1"}, + Labels: map[string]string{"b": "v2"}, + } + + service, specified, err := reconciler.generatePGAdminService(cluster) + assert.NilError(t, err) + assert.Assert(t, specified) + + // Annotations present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Annotations, map[string]string{ + "a": "v1", + }) + + // Labels present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Labels, map[string]string{ + "b": "v2", + "postgres-operator.crunchydata.com/cluster": "my-cluster", + "postgres-operator.crunchydata.com/role": "pgadmin", + }) + + // Labels not in the selector. 
+ assert.DeepEqual(t, service.Spec.Selector, map[string]string{ + "postgres-operator.crunchydata.com/cluster": "my-cluster", + "postgres-operator.crunchydata.com/role": "pgadmin", + }) + + // Add metadata to individual service + cluster.Spec.UserInterface.PGAdmin.Service = &v1beta1.ServiceSpec{ + Metadata: &v1beta1.Metadata{ + Annotations: map[string]string{"c": "v3"}, + Labels: map[string]string{"d": "v4", + "postgres-operator.crunchydata.com/cluster": "wrongName"}, + }, + } + + service, specified, err = reconciler.generatePGAdminService(cluster) + assert.NilError(t, err) + assert.Assert(t, specified) + + // Annotations present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Annotations, map[string]string{ + "a": "v1", + "c": "v3", + }) + + // Labels present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Labels, map[string]string{ + "b": "v2", + "d": "v4", + "postgres-operator.crunchydata.com/cluster": "my-cluster", + "postgres-operator.crunchydata.com/role": "pgadmin", + }) + + // Labels not in the selector. + assert.DeepEqual(t, service.Spec.Selector, map[string]string{ + "postgres-operator.crunchydata.com/cluster": "my-cluster", + "postgres-operator.crunchydata.com/role": "pgadmin", + }) + }) + + t.Run("NoServiceSpec", func(t *testing.T) { + service, specified, err := reconciler.generatePGAdminService(cluster) + assert.NilError(t, err) + assert.Assert(t, specified) + alwaysExpect(t, service) + // Defaults to ClusterIP. + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeClusterIP) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: pgadmin + port: 5050 + protocol: TCP + targetPort: pgadmin +`)) + }) + + types := []struct { + Type string + Expect func(testing.TB, *corev1.Service) + }{ + {Type: "ClusterIP", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeClusterIP) + }}, + {Type: "NodePort", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeNodePort) + }}, + {Type: "LoadBalancer", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeLoadBalancer) + }}, + } + + for _, test := range types { + t.Run(test.Type, func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.UserInterface.PGAdmin.Service = &v1beta1.ServiceSpec{Type: test.Type} + + service, specified, err := reconciler.generatePGAdminService(cluster) + assert.NilError(t, err) + assert.Assert(t, specified) + alwaysExpect(t, service) + test.Expect(t, service) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: pgadmin + port: 5050 + protocol: TCP + targetPort: pgadmin +`)) + }) + } + + typesAndPort := []struct { + Description string + Type string + NodePort *int32 + Expect func(testing.TB, *corev1.Service, error) + }{ + {Description: "ClusterIP with Port 32000", Type: "ClusterIP", + NodePort: initialize.Int32(32000), Expect: func(t testing.TB, service *corev1.Service, err error) { + assert.ErrorContains(t, err, "NodePort cannot be set with type ClusterIP on Service \"my-cluster-pgadmin\"") + assert.Assert(t, service == nil) + }}, + {Description: "NodePort with Port 32001", Type: "NodePort", + NodePort: initialize.Int32(32001), Expect: func(t testing.TB, service *corev1.Service, err error) { + assert.NilError(t, err) + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeNodePort) + alwaysExpect(t, service) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: pgadmin + nodePort: 
32001 + port: 5050 + protocol: TCP + targetPort: pgadmin +`)) + }}, + {Description: "LoadBalancer with Port 32002", Type: "LoadBalancer", + NodePort: initialize.Int32(32002), Expect: func(t testing.TB, service *corev1.Service, err error) { + assert.NilError(t, err) + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeLoadBalancer) + alwaysExpect(t, service) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: pgadmin + nodePort: 32002 + port: 5050 + protocol: TCP + targetPort: pgadmin +`)) + }}, + } + + for _, test := range typesAndPort { + t.Run(test.Description, func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.UserInterface.PGAdmin.Service = + &v1beta1.ServiceSpec{Type: test.Type, NodePort: test.NodePort} + + service, specified, err := reconciler.generatePGAdminService(cluster) + test.Expect(t, service, err) + // whether or not an error is encountered, 'specified' is true because + // the service *should* exist + assert.Assert(t, specified) + + }) + } +} + +func TestReconcilePGAdminService(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + reconciler := &Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + + cluster := testCluster() + cluster.Namespace = setupNamespace(t, cc).Name + assert.NilError(t, cc.Create(ctx, cluster)) + + t.Run("Unspecified", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.UserInterface = nil + + service, err := reconciler.reconcilePGAdminService(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, service == nil) + }) + + cluster.Spec.UserInterface = &v1beta1.UserInterfaceSpec{ + PGAdmin: &v1beta1.PGAdminPodSpec{}, + } + + t.Run("NoServiceSpec", func(t *testing.T) { + service, err := reconciler.reconcilePGAdminService(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, service != nil) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, service)) }) + + assert.Assert(t, service.Spec.ClusterIP != "", + "expected to be assigned a ClusterIP") + }) + + serviceTypes := []string{"ClusterIP", "NodePort", "LoadBalancer"} + + // Confirm that each ServiceType can be reconciled. + for _, serviceType := range serviceTypes { + t.Run(serviceType, func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.UserInterface.PGAdmin.Service = &v1beta1.ServiceSpec{Type: serviceType} + + service, err := reconciler.reconcilePGAdminService(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, service != nil) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, service)) }) + + assert.Assert(t, service.Spec.ClusterIP != "", + "expected to be assigned a ClusterIP") + }) + } + + // CRD validation looks only at the new/incoming value of fields. Confirm + // that each ServiceType can change to any other ServiceType. Forbidding + // certain transitions requires a validating webhook. + serviceTypeChangeClusterCounter := 0 + for _, beforeType := range serviceTypes { + for _, changeType := range serviceTypes { + t.Run(beforeType+"To"+changeType, func(t *testing.T) { + // Creating fresh clusters for these tests + clusterNamespace := cluster.Namespace + cluster := testCluster() + cluster.Namespace = clusterNamespace + + // Note (dsessler): Adding a number to each cluster name to make cluster/service + // names unique to work around an intermittent race condition where a service + // from a prior test has not been deleted yet when the next test runs, causing + // the test to fail due to non-matching IP addresses. 
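+				// The generated names look roughly like "<cluster-name>-0", "<cluster-name>-1",
+				// and so on, one per ServiceType transition exercised below.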
+ cluster.Name += "-" + strconv.Itoa(serviceTypeChangeClusterCounter) + assert.NilError(t, cc.Create(ctx, cluster)) + + cluster.Spec.UserInterface = &v1beta1.UserInterfaceSpec{ + PGAdmin: &v1beta1.PGAdminPodSpec{}, + } + cluster.Spec.UserInterface.PGAdmin.Service = &v1beta1.ServiceSpec{Type: beforeType} + + before, err := reconciler.reconcilePGAdminService(ctx, cluster) + assert.NilError(t, err) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, before)) }) + + cluster.Spec.UserInterface.PGAdmin.Service.Type = changeType + + after, err := reconciler.reconcilePGAdminService(ctx, cluster) + + // LoadBalancers are provisioned by a separate controller that + // updates the Service soon after creation. The API may return + // a conflict error when we race to update it, even though we + // don't send a resourceVersion in our payload. Retry. + if apierrors.IsConflict(err) { + t.Log("conflict:", err) + after, err = reconciler.reconcilePGAdminService(ctx, cluster) + } + + assert.NilError(t, err, "\n%#v", errors.Unwrap(err)) + assert.Equal(t, after.Spec.ClusterIP, before.Spec.ClusterIP, + "expected to keep the same ClusterIP") + serviceTypeChangeClusterCounter++ + }) + } + } +} + +func TestReconcilePGAdminStatefulSet(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + reconciler := &Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + + ns := setupNamespace(t, cc) + cluster := pgAdminTestCluster(*ns) + + assert.NilError(t, cc.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, cluster)) }) + + configmap := &corev1.ConfigMap{} + configmap.Name = "test-cm" + + pvc := &corev1.PersistentVolumeClaim{} + pvc.Name = "test-pvc" + + t.Run("verify StatefulSet", func(t *testing.T) { + err := reconciler.reconcilePGAdminStatefulSet(ctx, cluster, configmap, pvc) + assert.NilError(t, err) + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + }, + }) + assert.NilError(t, err) + + list := appsv1.StatefulSetList{} + assert.NilError(t, cc.List(ctx, &list, client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + assert.Equal(t, len(list.Items), 1) + assert.Equal(t, list.Items[0].Spec.ServiceName, "test-cluster-pods") + + template := list.Items[0].Spec.Template.DeepCopy() + + // Containers and Volumes should be populated. + assert.Assert(t, len(template.Spec.Containers) != 0) + assert.Assert(t, len(template.Spec.InitContainers) != 0) + assert.Assert(t, len(template.Spec.Volumes) != 0) + + // Ignore Containers and Volumes in the comparison below. 
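+		// This keeps the remaining pod spec small enough to compare as YAML.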
+ template.Spec.Containers = nil + template.Spec.InitContainers = nil + template.Spec.Volumes = nil + + assert.Assert(t, cmp.MarshalMatches(template.ObjectMeta, ` +creationTimestamp: null +labels: + postgres-operator.crunchydata.com/cluster: test-cluster + postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/role: pgadmin + `)) + + compare := ` +automountServiceAccountToken: false +containers: null +dnsPolicy: ClusterFirst +enableServiceLinks: false +restartPolicy: Always +schedulerName: default-scheduler +securityContext: + fsGroup: 26 + fsGroupChangePolicy: OnRootMismatch +terminationGracePeriodSeconds: 30 + ` + + assert.Assert(t, cmp.MarshalMatches(template.Spec, compare)) + }) + + t.Run("verify customized deployment", func(t *testing.T) { + + customcluster := pgAdminTestCluster(*ns) + + // add pod level customizations + customcluster.Name = "custom-cluster" + + // annotation and label + customcluster.Spec.UserInterface.PGAdmin.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{ + "annotation1": "annotationvalue", + }, + Labels: map[string]string{ + "label1": "labelvalue", + }, + } + + // scheduling constraints + customcluster.Spec.UserInterface.PGAdmin.Affinity = &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{{ + MatchExpressions: []corev1.NodeSelectorRequirement{{ + Key: "key", + Operator: "Exists", + }}, + }}, + }, + }, + } + customcluster.Spec.UserInterface.PGAdmin.Tolerations = []corev1.Toleration{ + {Key: "sometoleration"}, + } + + if cluster.Spec.UserInterface.PGAdmin.PriorityClassName != nil { + customcluster.Spec.UserInterface.PGAdmin.PriorityClassName = initialize.String("testpriorityclass") + } + + customcluster.Spec.UserInterface.PGAdmin.TopologySpreadConstraints = []corev1.TopologySpreadConstraint{ + { + MaxSkew: int32(1), + TopologyKey: "fakekey", + WhenUnsatisfiable: corev1.ScheduleAnyway, + LabelSelector: &metav1.LabelSelector{ + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: naming.LabelCluster, Operator: "In", Values: []string{"somename"}}, + {Key: naming.LabelData, Operator: "Exists"}, + }, + }, + }, + } + + // set an image pull secret + customcluster.Spec.ImagePullSecrets = []corev1.LocalObjectReference{{ + Name: "myImagePullSecret"}} + + assert.NilError(t, cc.Create(ctx, customcluster)) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, customcluster)) }) + + err := reconciler.reconcilePGAdminStatefulSet(ctx, customcluster, configmap, pvc) + assert.NilError(t, err) + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: customcluster.Name, + }, + }) + assert.NilError(t, err) + + list := appsv1.StatefulSetList{} + assert.NilError(t, cc.List(ctx, &list, client.InNamespace(customcluster.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + assert.Equal(t, len(list.Items), 1) + assert.Equal(t, list.Items[0].Spec.ServiceName, "custom-cluster-pods") + + template := list.Items[0].Spec.Template.DeepCopy() + + // Containers and Volumes should be populated. + assert.Assert(t, len(template.Spec.Containers) != 0) + assert.Assert(t, len(template.Spec.InitContainers) != 0) + assert.Assert(t, len(template.Spec.Volumes) != 0) + + // Ignore Containers and Volumes in the comparison below. 
+ template.Spec.Containers = nil + template.Spec.InitContainers = nil + template.Spec.Volumes = nil + + assert.Assert(t, cmp.MarshalMatches(template.ObjectMeta, ` +annotations: + annotation1: annotationvalue +creationTimestamp: null +labels: + label1: labelvalue + postgres-operator.crunchydata.com/cluster: custom-cluster + postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/role: pgadmin + `)) + + compare := ` +affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: key + operator: Exists +automountServiceAccountToken: false +containers: null +dnsPolicy: ClusterFirst +enableServiceLinks: false +imagePullSecrets: +- name: myImagePullSecret +restartPolicy: Always +schedulerName: default-scheduler +securityContext: + fsGroup: 26 + fsGroupChangePolicy: OnRootMismatch +terminationGracePeriodSeconds: 30 +tolerations: +- key: sometoleration +topologySpreadConstraints: +- labelSelector: + matchExpressions: + - key: postgres-operator.crunchydata.com/cluster + operator: In + values: + - somename + - key: postgres-operator.crunchydata.com/data + operator: Exists + maxSkew: 1 + topologyKey: fakekey + whenUnsatisfiable: ScheduleAnyway +` + + assert.Assert(t, cmp.MarshalMatches(template.Spec, compare)) + }) +} + +func TestReconcilePGAdminDataVolume(t *testing.T) { + ctx := context.Background() + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + reconciler := &Reconciler{ + Client: tClient, + Owner: client.FieldOwner(t.Name()), + } + + ns := setupNamespace(t, tClient) + cluster := pgAdminTestCluster(*ns) + + assert.NilError(t, tClient.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, tClient.Delete(ctx, cluster)) }) + + t.Run("DataVolume", func(t *testing.T) { + pvc, err := reconciler.reconcilePGAdminDataVolume(ctx, cluster) + assert.NilError(t, err) + + assert.Assert(t, metav1.IsControlledBy(pvc, cluster)) + + assert.Equal(t, pvc.Labels[naming.LabelCluster], cluster.Name) + assert.Equal(t, pvc.Labels[naming.LabelRole], naming.RolePGAdmin) + assert.Equal(t, pvc.Labels[naming.LabelData], naming.DataPGAdmin) + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + requests: + storage: 1Gi +storageClassName: storage-class-for-data +volumeMode: Filesystem + `)) + }) +} + +func TestReconcilePGAdminUsers(t *testing.T) { + ctx := context.Background() + + t.Run("Disabled", func(t *testing.T) { + r := new(Reconciler) + cluster := new(v1beta1.PostgresCluster) + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, nil, nil)) + }) + + // pgAdmin enabled + cluster := &v1beta1.PostgresCluster{} + cluster.Namespace = "ns1" + cluster.Name = "pgc1" + cluster.Spec.Port = initialize.Int32(5432) + cluster.Spec.UserInterface = + &v1beta1.UserInterfaceSpec{PGAdmin: &v1beta1.PGAdminPodSpec{}} + + t.Run("NoPods", func(t *testing.T) { + r := new(Reconciler) + r.Client = fake.NewClientBuilder().Build() + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, nil, nil)) + }) + + // Pod in the namespace + pod := corev1.Pod{} + pod.Namespace = cluster.Namespace + pod.Name = cluster.Name + "-pgadmin-0" + + t.Run("ContainerNotRunning", func(t *testing.T) { + pod := pod.DeepCopy() + + pod.DeletionTimestamp = nil + pod.Status.ContainerStatuses = nil + + r := new(Reconciler) + r.Client = fake.NewClientBuilder().WithObjects(pod).Build() + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, nil, nil)) + }) + + t.Run("PodTerminating", func(t *testing.T) { + 
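+		// A Pod that is terminating (non-nil deletionTimestamp) should be ignored even
+		// though its pgAdmin container still reports a running state.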
pod := pod.DeepCopy() + + // Must add finalizer when adding deletion timestamp otherwise fake client will panic: + // https://github.com/kubernetes-sigs/controller-runtime/pull/2316 + pod.Finalizers = append(pod.Finalizers, "some-finalizer") + + pod.DeletionTimestamp = new(metav1.Time) + *pod.DeletionTimestamp = metav1.Now() + pod.Status.ContainerStatuses = + []corev1.ContainerStatus{{Name: naming.ContainerPGAdmin}} + pod.Status.ContainerStatuses[0].State.Running = + new(corev1.ContainerStateRunning) + + r := new(Reconciler) + r.Client = fake.NewClientBuilder().WithObjects(pod).Build() + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, nil, nil)) + }) + + t.Run("PodHealthy", func(t *testing.T) { + cluster := cluster.DeepCopy() + pod := pod.DeepCopy() + + pod.DeletionTimestamp = nil + pod.Status.ContainerStatuses = + []corev1.ContainerStatus{{Name: naming.ContainerPGAdmin}} + pod.Status.ContainerStatuses[0].State.Running = + new(corev1.ContainerStateRunning) + + r := new(Reconciler) + r.Client = fake.NewClientBuilder().WithObjects(pod).Build() + + calls := 0 + r.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + assert.Equal(t, pod, "pgc1-pgadmin-0") + assert.Equal(t, namespace, cluster.Namespace) + assert.Equal(t, container, naming.ContainerPGAdmin) + + return nil + } + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, nil, nil)) + assert.Equal(t, calls, 1, "PodExec should be called once") + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, nil, nil)) + assert.Equal(t, calls, 1, "PodExec should not be called again") + + // Do the thing when users change. + users := []v1beta1.PostgresUserSpec{{Name: "u1"}} + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, users, nil)) + assert.Equal(t, calls, 2, "PodExec should be called once") + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, users, nil)) + assert.Equal(t, calls, 2, "PodExec should not be called again") + + // Do the thing when passwords change. + passwords := map[string]*corev1.Secret{ + "u1": {Data: map[string][]byte{"password": []byte(`something`)}}, + } + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, users, passwords)) + assert.Equal(t, calls, 3, "PodExec should be called once") + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, users, passwords)) + assert.Equal(t, calls, 3, "PodExec should not be called again") + + passwords["u1"].Data["password"] = []byte(`rotated`) + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, users, passwords)) + assert.Equal(t, calls, 4, "PodExec should be called once") + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, users, passwords)) + assert.Equal(t, calls, 4, "PodExec should not be called again") + + t.Run("ThenDisabled", func(t *testing.T) { + // TODO(cbandy): Revisit this when there is more than one UI. 
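+			// Removing the user interface from the spec should also clear the stored status.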
+ cluster := cluster.DeepCopy() + cluster.Spec.UserInterface = nil + + assert.Assert(t, cluster.Status.UserInterface != nil, "expected some status") + + r := new(Reconciler) + assert.NilError(t, r.reconcilePGAdminUsers(ctx, cluster, users, passwords)) + assert.Assert(t, cluster.Status.UserInterface == nil, "expected no status") + }) + }) +} + +func pgAdminTestCluster(ns corev1.Namespace) *v1beta1.PostgresCluster { + return &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-cluster", + Namespace: ns.Name, + }, + Spec: v1beta1.PostgresClusterSpec{ + PostgresVersion: 13, + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + DataVolumeClaimSpec: testVolumeClaimSpec(), + }}, + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + Volume: &v1beta1.RepoPVC{ + VolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + Resources: corev1.VolumeResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + }, + }}, + }, + }, + UserInterface: &v1beta1.UserInterfaceSpec{ + PGAdmin: &v1beta1.PGAdminPodSpec{ + Image: "test-image", + DataVolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + Resources: corev1.VolumeResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + StorageClassName: initialize.String("storage-class-for-data"), + }, + }, + }, + }, + } +} diff --git a/internal/controller/postgrescluster/pgbackrest.go b/internal/controller/postgrescluster/pgbackrest.go new file mode 100644 index 0000000000..836df047fc --- /dev/null +++ b/internal/controller/postgrescluster/pgbackrest.go @@ -0,0 +1,3101 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "io" + "reflect" + "regexp" + "sort" + "strings" + "time" + + "github.com/pkg/errors" + appsv1 "k8s.io/api/apps/v1" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + rbacv1 "k8s.io/api/rbac/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/types" + utilerrors "k8s.io/apimachinery/pkg/util/errors" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/patroni" + "github.com/crunchydata/postgres-operator/internal/pgbackrest" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // ConditionPostgresDataInitialized is the type used in a condition to indicate whether or not the + // PostgresCluster's PostgreSQL data directory has been initialized (e.g. via a restore) + ConditionPostgresDataInitialized = "PostgresDataInitialized" + + // ConditionManualBackupSuccessful is the type used in a condition to indicate whether or not + // the manual backup for the current backup ID (as provided via annotation) was successful + ConditionManualBackupSuccessful = "PGBackRestManualBackupSuccessful" + + // ConditionReplicaCreate is the type used in a condition to indicate whether or not + // pgBackRest can be utilized for replica creation + ConditionReplicaCreate = "PGBackRestReplicaCreate" + + // ConditionReplicaRepoReady is the type used in a condition to indicate whether or not + // the pgBackRest repository for creating replicas is ready + ConditionReplicaRepoReady = "PGBackRestReplicaRepoReady" + + // ConditionRepoHostReady is the type used in a condition to indicate whether or not a + // pgBackRest repository host PostgresCluster is ready + ConditionRepoHostReady = "PGBackRestRepoHostReady" + + // ConditionPGBackRestRestoreProgressing is the type used in a condition to indicate that + // and in-place pgBackRest restore is in progress + ConditionPGBackRestRestoreProgressing = "PGBackRestoreProgressing" + + // EventRepoHostNotFound is used to indicate that a pgBackRest repository was not + // found when reconciling + EventRepoHostNotFound = "RepoDeploymentNotFound" + + // EventRepoHostCreated is the event reason utilized when a pgBackRest repository host is + // created + EventRepoHostCreated = "RepoHostCreated" + + // EventUnableToCreateStanzas is the event reason utilized when pgBackRest is unable to create + // stanzas for the repositories in a PostgreSQL cluster + EventUnableToCreateStanzas = "UnableToCreateStanzas" + + // EventStanzasCreated is the event reason utilized when a pgBackRest stanza create command + // completes successfully + EventStanzasCreated = "StanzasCreated" + + // 
EventUnableToCreatePGBackRestCronJob is the event reason utilized when a pgBackRest backup + // CronJob fails to create successfully + EventUnableToCreatePGBackRestCronJob = "UnableToCreatePGBackRestCronJob" + + // ReasonReadyForRestore is the reason utilized within ConditionPGBackRestRestoreProgressing + // to indicate that the restore Job can proceed because the cluster is now ready to be + // restored (i.e. it has been properly prepared for a restore). + ReasonReadyForRestore = "ReadyForRestore" +) + +// backup types +const ( + full = "full" + differential = "diff" + incremental = "incr" +) + +// regexRepoIndex is the regex used to obtain the repo index from a pgBackRest repo name +var regexRepoIndex = regexp.MustCompile(`\d+`) + +// RepoResources is used to store various resources for pgBackRest repositories and +// repository hosts +type RepoResources struct { + hosts []*appsv1.StatefulSet + cronjobs []*batchv1.CronJob + manualBackupJobs []*batchv1.Job + replicaCreateBackupJobs []*batchv1.Job + pvcs []*corev1.PersistentVolumeClaim + sas []*corev1.ServiceAccount + roles []*rbacv1.Role + rolebindings []*rbacv1.RoleBinding +} + +// applyRepoHostIntent ensures the pgBackRest repository host StatefulSet is synchronized with the +// proper configuration according to the provided PostgresCluster custom resource. This is done by +// applying the PostgresCluster controller's fully specified intent for the repository host +// StatefulSet. Any changes to the deployment spec as a result of synchronization will result in a +// rollout of the pgBackRest repository host StatefulSet in accordance with its configured +// strategy. +func (r *Reconciler) applyRepoHostIntent(ctx context.Context, postgresCluster *v1beta1.PostgresCluster, + repoHostName string, repoResources *RepoResources, + observedInstances *observedInstances) (*appsv1.StatefulSet, error) { + + repo, err := r.generateRepoHostIntent(ctx, postgresCluster, repoHostName, repoResources, observedInstances) + if err != nil { + return nil, err + } + + // Previous versions of PGO used a StatefulSet Pod Management Policy that could leave the Pod + // in a failed state. When we see that it has the wrong policy, we will delete the StatefulSet + // and then recreate it with the correct policy, as this is not a property that can be patched. + // When we delete the StatefulSet, we will leave its Pods in place. They will be claimed by + // the StatefulSet that gets created in the next reconcile. + existing := &appsv1.StatefulSet{} + if err := errors.WithStack(r.Client.Get(ctx, client.ObjectKeyFromObject(repo), existing)); err != nil { + if !apierrors.IsNotFound(err) { + return nil, err + } + } else { + if existing.Spec.PodManagementPolicy != repo.Spec.PodManagementPolicy { + // We want to delete the STS without affecting the Pods, so we set the PropagationPolicy to Orphan. + // The orphaned Pods will be claimed by the new StatefulSet that gets created in the next reconcile. 
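+			// The UID and resourceVersion preconditions below ensure that only the object
+			// read above is deleted, not one that was recreated or modified in the meantime.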
+			uid := existing.GetUID()
+			version := existing.GetResourceVersion()
+			exactly := client.Preconditions{UID: &uid, ResourceVersion: &version}
+			propagate := client.PropagationPolicy(metav1.DeletePropagationOrphan)
+
+			return repo, errors.WithStack(r.Client.Delete(ctx, existing, exactly, propagate))
+		}
+	}
+
+	if err := r.apply(ctx, repo); err != nil {
+		return nil, err
+	}
+
+	return repo, nil
+}
+
+// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,patch}
+
+// applyRepoVolumeIntent ensures the pgBackRest repository PersistentVolumeClaim is synchronized
+// with the proper configuration according to the provided PostgresCluster custom resource. This
+// is done by applying the PostgresCluster controller's fully specified intent for the
+// PersistentVolumeClaim representing a repository.
+func (r *Reconciler) applyRepoVolumeIntent(ctx context.Context,
+	postgresCluster *v1beta1.PostgresCluster, spec corev1.PersistentVolumeClaimSpec,
+	repoName string, repoResources *RepoResources) (*corev1.PersistentVolumeClaim, error) {
+
+	repo, err := r.generateRepoVolumeIntent(postgresCluster, spec, repoName, repoResources)
+	if err != nil {
+		return nil, errors.WithStack(err)
+	}
+
+	if err := r.apply(ctx, repo); err != nil {
+		return nil, r.handlePersistentVolumeClaimError(postgresCluster,
+			errors.WithStack(err))
+	}
+
+	return repo, nil
+}
+
+// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={list}
+// +kubebuilder:rbac:groups="batch",resources="cronjobs",verbs={list}
+// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={list}
+// +kubebuilder:rbac:groups="",resources="configmaps",verbs={list}
+// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={list}
+// +kubebuilder:rbac:groups="",resources="secrets",verbs={list}
+// +kubebuilder:rbac:groups="",resources="serviceaccounts",verbs={list}
+// +kubebuilder:rbac:groups="rbac.authorization.k8s.io",resources="roles",verbs={list}
+// +kubebuilder:rbac:groups="rbac.authorization.k8s.io",resources="rolebindings",verbs={list}
+
+// getPGBackRestResources returns the existing pgBackRest resources that should be utilized by the
+// PostgresCluster controller during reconciliation. Any items returned are verified to be owned
+// by the PostgresCluster controller and still applicable per the current PostgresCluster spec.
+// Additionally, any resources identified that no longer correspond to any current configuration
+// are deleted.
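+// Resources are matched using the cluster's pgBackRest label selector and then filtered
+// down to those still called for by the current spec.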
+func (r *Reconciler) getPGBackRestResources(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, + backupsSpecFound bool, +) (*RepoResources, error) { + + repoResources := &RepoResources{} + + gvks := []schema.GroupVersionKind{{ + Group: appsv1.SchemeGroupVersion.Group, + Version: appsv1.SchemeGroupVersion.Version, + Kind: "StatefulSetList", + }, { + Group: batchv1.SchemeGroupVersion.Group, + Version: batchv1.SchemeGroupVersion.Version, + Kind: "CronJobList", + }, { + Group: batchv1.SchemeGroupVersion.Group, + Version: batchv1.SchemeGroupVersion.Version, + Kind: "JobList", + }, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "ConfigMapList", + }, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "PersistentVolumeClaimList", + }, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "SecretList", + }, { + Group: corev1.SchemeGroupVersion.Group, + Version: corev1.SchemeGroupVersion.Version, + Kind: "ServiceAccountList", + }, { + Group: rbacv1.SchemeGroupVersion.Group, + Version: rbacv1.SchemeGroupVersion.Version, + Kind: "RoleList", + }, { + Group: rbacv1.SchemeGroupVersion.Group, + Version: rbacv1.SchemeGroupVersion.Version, + Kind: "RoleBindingList", + }} + + selector := naming.PGBackRestSelector(postgresCluster.GetName()) + for _, gvk := range gvks { + uList := &unstructured.UnstructuredList{} + uList.SetGroupVersionKind(gvk) + if err := r.Client.List(ctx, uList, + client.InNamespace(postgresCluster.GetNamespace()), + client.MatchingLabelsSelector{Selector: selector}); err != nil { + return nil, errors.WithStack(err) + } + if len(uList.Items) == 0 { + continue + } + + owned, err := r.cleanupRepoResources(ctx, postgresCluster, uList.Items, backupsSpecFound) + if err != nil { + return nil, errors.WithStack(err) + } + uList.Items = owned + if err := unstructuredToRepoResources(gvk.Kind, repoResources, + uList); err != nil { + return nil, errors.WithStack(err) + } + + // if the current objects are Jobs, update the status for the Jobs + // created by the pgBackRest scheduled backup CronJobs + if gvk.Kind == "JobList" { + r.setScheduledJobStatus(ctx, postgresCluster, uList.Items) + } + + } + + return repoResources, nil +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={delete} +// +kubebuilder:rbac:groups="",resources="serviceaccounts",verbs={delete} +// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={delete} +// +kubebuilder:rbac:groups="batch",resources="cronjobs",verbs={delete} +// +kubebuilder:rbac:groups="rbac.authorization.k8s.io",resources="roles",verbs={delete} +// +kubebuilder:rbac:groups="rbac.authorization.k8s.io",resources="rolebindings",verbs={delete} + +// cleanupRepoResources cleans up pgBackRest repository resources that should no longer be +// reconciled by deleting them. This includes deleting repos (i.e. PersistentVolumeClaims) that +// are no longer associated with any repository configured within the PostgresCluster spec, or any +// pgBackRest repository host resources if a repository host is no longer configured. 
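+// When the backups section has been removed from the spec entirely, every owned pgBackRest
+// resource becomes a candidate for deletion.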
+func (r *Reconciler) cleanupRepoResources(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, + ownedResources []unstructured.Unstructured, + backupsSpecFound bool, +) ([]unstructured.Unstructured, error) { + + // stores the resources that should not be deleted + ownedNoDelete := []unstructured.Unstructured{} + for i, owned := range ownedResources { + delete := true + + // helper to determine if a label is present in the PostgresCluster + hasLabel := func(label string) bool { _, ok := owned.GetLabels()[label]; return ok } + + // this switch identifies the type of pgBackRest resource via its labels, and then + // determines whether or not it should be deleted according to the current PostgresCluster + // spec + switch { + case hasLabel(naming.LabelPGBackRestConfig): + if !backupsSpecFound { + break + } + // Simply add the things we never want to delete (e.g. the pgBackRest configuration) + // to the slice and do not delete + ownedNoDelete = append(ownedNoDelete, owned) + delete = false + case hasLabel(naming.LabelPGBackRestDedicated): + if !backupsSpecFound { + break + } + // Any resources from before 5.1 that relate to the previously required + // SSH configuration should be deleted. + // TODO(tjmoore4): This can be removed once 5.0 is EOL. + if owned.GetName() != naming.PGBackRestSSHConfig(postgresCluster).Name && + owned.GetName() != naming.PGBackRestSSHSecret(postgresCluster).Name { + // If a dedicated repo host resource and a dedicated repo host is enabled, then + // add to the slice and do not delete. + ownedNoDelete = append(ownedNoDelete, owned) + delete = false + } + case hasLabel(naming.LabelPGBackRestRepoVolume): + if !backupsSpecFound { + break + } + // If a volume (PVC) is identified for a repo that no longer exists in the + // spec then delete it. Otherwise add it to the slice and continue. + for _, repo := range postgresCluster.Spec.Backups.PGBackRest.Repos { + // we only care about cleaning up local repo volumes (PVCs), and ignore other repo + // types (e.g. for external Azure, GCS or S3 repositories) + if repo.Volume != nil && + (repo.Name == owned.GetLabels()[naming.LabelPGBackRestRepo]) { + ownedNoDelete = append(ownedNoDelete, owned) + delete = false + } + } + case hasLabel(naming.LabelPGBackRestBackup): + if !backupsSpecFound { + break + } + // If a Job is identified for a repo that no longer exists in the spec then + // delete it. Otherwise add it to the slice and continue. + for _, repo := range postgresCluster.Spec.Backups.PGBackRest.Repos { + if repo.Name == owned.GetLabels()[naming.LabelPGBackRestRepo] { + ownedNoDelete = append(ownedNoDelete, owned) + delete = false + } + } + case hasLabel(naming.LabelPGBackRestCronJob): + if !backupsSpecFound { + break + } + for _, repo := range postgresCluster.Spec.Backups.PGBackRest.Repos { + if repo.Name == owned.GetLabels()[naming.LabelPGBackRestRepo] { + if backupScheduleFound(repo, + owned.GetLabels()[naming.LabelPGBackRestCronJob]) { + delete = false + ownedNoDelete = append(ownedNoDelete, owned) + } + break + } + } + case hasLabel(naming.LabelPGBackRestRestore): + if !backupsSpecFound { + break + } + + // If the restore job has the PGBackRestBackupJobCompletion annotation, it is + // used for volume snapshots and should not be deleted (volume snapshots code + // will clean it up when appropriate). 
+ if _, ok := owned.GetAnnotations()[naming.PGBackRestBackupJobCompletion]; ok { + ownedNoDelete = append(ownedNoDelete, owned) + delete = false + } + + // When a cluster is prepared for restore, the system identifier is removed from status + // and the cluster is therefore no longer bootstrapped. Only once the restore Job is + // complete will the cluster then be bootstrapped again, which means by the time we + // detect a restore Job here and a bootstrapped cluster, the Job and any associated + // configuration resources can be safely removed. + if !patroni.ClusterBootstrapped(postgresCluster) { + ownedNoDelete = append(ownedNoDelete, owned) + delete = false + } + case hasLabel(naming.LabelPGBackRest): + if !backupsSpecFound { + break + } + ownedNoDelete = append(ownedNoDelete, owned) + delete = false + } + + // If nothing has specified that the resource should not be deleted, then delete + if delete { + if err := r.Client.Delete(ctx, &ownedResources[i], + client.PropagationPolicy(metav1.DeletePropagationBackground)); err != nil { + return []unstructured.Unstructured{}, errors.WithStack(err) + } + } + } + + // return the remaining resources after properly cleaning up any that should no longer exist + return ownedNoDelete, nil +} + +// backupScheduleFound returns true if the CronJob in question should be created as +// defined by the postgrescluster CRD, otherwise it returns false. +func backupScheduleFound(repo v1beta1.PGBackRestRepo, backupType string) bool { + if repo.BackupSchedules != nil { + switch backupType { + case full: + return repo.BackupSchedules.Full != nil + case differential: + return repo.BackupSchedules.Differential != nil + case incremental: + return repo.BackupSchedules.Incremental != nil + default: + return false + } + } + return false +} + +// unstructuredToRepoResources converts unstructured pgBackRest repository resources (specifically +// unstructured StatefulSetLists and PersistentVolumeClaimList) into their structured equivalent. +func unstructuredToRepoResources(kind string, repoResources *RepoResources, + uList *unstructured.UnstructuredList) error { + + switch kind { + case "StatefulSetList": + var stsList appsv1.StatefulSetList + if err := runtime.DefaultUnstructuredConverter. + FromUnstructured(uList.UnstructuredContent(), &stsList); err != nil { + return errors.WithStack(err) + } + for i := range stsList.Items { + repoResources.hosts = append(repoResources.hosts, &stsList.Items[i]) + } + case "CronJobList": + var cronList batchv1.CronJobList + if err := runtime.DefaultUnstructuredConverter. + FromUnstructured(uList.UnstructuredContent(), &cronList); err != nil { + return errors.WithStack(err) + } + for i := range cronList.Items { + repoResources.cronjobs = append(repoResources.cronjobs, &cronList.Items[i]) + } + case "JobList": + var jobList batchv1.JobList + if err := runtime.DefaultUnstructuredConverter. 
+ FromUnstructured(uList.UnstructuredContent(), &jobList); err != nil { + return errors.WithStack(err) + } + // we care about replica create backup jobs and manual backup jobs + for i, job := range jobList.Items { + switch job.GetLabels()[naming.LabelPGBackRestBackup] { + case string(naming.BackupReplicaCreate): + repoResources.replicaCreateBackupJobs = + append(repoResources.replicaCreateBackupJobs, &jobList.Items[i]) + case string(naming.BackupManual): + repoResources.manualBackupJobs = + append(repoResources.manualBackupJobs, &jobList.Items[i]) + } + } + case "ConfigMapList": + // Repository host now uses mTLS for encryption, authentication, and authorization. + // Configmaps for SSHD are no longer managed here. + case "PersistentVolumeClaimList": + var pvcList corev1.PersistentVolumeClaimList + if err := runtime.DefaultUnstructuredConverter. + FromUnstructured(uList.UnstructuredContent(), &pvcList); err != nil { + return errors.WithStack(err) + } + for i := range pvcList.Items { + repoResources.pvcs = append(repoResources.pvcs, &pvcList.Items[i]) + } + case "SecretList": + // Repository host now uses mTLS for encryption, authentication, and authorization. + // Secrets for SSHD are no longer managed here. + // TODO(tjmoore4): Consider adding all pgBackRest secrets to RepoResources to + // observe all pgBackRest secrets in one place. + case "ServiceAccountList": + var saList corev1.ServiceAccountList + if err := runtime.DefaultUnstructuredConverter. + FromUnstructured(uList.UnstructuredContent(), &saList); err != nil { + return errors.WithStack(err) + } + for i := range saList.Items { + repoResources.sas = append(repoResources.sas, &saList.Items[i]) + } + case "RoleList": + var roleList rbacv1.RoleList + if err := runtime.DefaultUnstructuredConverter. + FromUnstructured(uList.UnstructuredContent(), &roleList); err != nil { + return errors.WithStack(err) + } + for i := range roleList.Items { + repoResources.roles = append(repoResources.roles, &roleList.Items[i]) + } + case "RoleBindingList": + var rb rbacv1.RoleBindingList + if err := runtime.DefaultUnstructuredConverter. + FromUnstructured(uList.UnstructuredContent(), &rb); err != nil { + return errors.WithStack(err) + } + for i := range rb.Items { + repoResources.rolebindings = append(repoResources.rolebindings, &rb.Items[i]) + } + default: + return fmt.Errorf("unexpected kind %q", kind) + } + + return nil +} + +// setScheduledJobStatus sets the status of the scheduled pgBackRest backup Jobs +// on the postgres cluster CRD +func (r *Reconciler) setScheduledJobStatus(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, + items []unstructured.Unstructured) { + log := logging.FromContext(ctx) + + uList := &unstructured.UnstructuredList{Items: items} + var jobList batchv1.JobList + if err := runtime.DefaultUnstructuredConverter. 
+ FromUnstructured(uList.UnstructuredContent(), &jobList); err != nil { + // as this is only setting a status that is not otherwise used + // by the Operator, simply log an error and return rather than + // bubble this up to the other functions + log.Error(err, "unable to convert unstructured objects to jobs, "+ + "unable to set scheduled backup status") + return + } + + // TODO(tjmoore4): PGBackRestScheduledBackupStatus can likely be combined with + // PGBackRestJobStatus as they both contain most of the same information + scheduledStatus := []v1beta1.PGBackRestScheduledBackupStatus{} + for _, job := range jobList.Items { + // we only care about the scheduled backup Jobs created by the + // associated CronJobs + if job.GetLabels()[naming.LabelPGBackRestCronJob] != "" { + sbs := v1beta1.PGBackRestScheduledBackupStatus{} + + if len(job.OwnerReferences) > 0 { + sbs.CronJobName = job.OwnerReferences[0].Name + } + sbs.RepoName = job.GetLabels()[naming.LabelPGBackRestRepo] + sbs.Type = job.GetLabels()[naming.LabelPGBackRestCronJob] + sbs.StartTime = job.Status.StartTime + sbs.CompletionTime = job.Status.CompletionTime + sbs.Active = job.Status.Active + sbs.Succeeded = job.Status.Succeeded + sbs.Failed = job.Status.Failed + + scheduledStatus = append(scheduledStatus, sbs) + } + } + + // if nil, create the pgBackRest status + if postgresCluster.Status.PGBackRest == nil { + postgresCluster.Status.PGBackRest = &v1beta1.PGBackRestStatus{} + } + postgresCluster.Status.PGBackRest.ScheduledBackups = scheduledStatus +} + +// generateRepoHostIntent creates and populates StatefulSet with the PostgresCluster's full intent +// as needed to create and reconcile a pgBackRest dedicated repository host within the kubernetes +// cluster. +func (r *Reconciler) generateRepoHostIntent(ctx context.Context, postgresCluster *v1beta1.PostgresCluster, + repoHostName string, repoResources *RepoResources, observedInstances *observedInstances, +) (*appsv1.StatefulSet, error) { + + annotations := naming.Merge( + postgresCluster.Spec.Metadata.GetAnnotationsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil()) + labels := naming.Merge( + postgresCluster.Spec.Metadata.GetLabelsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestDedicatedLabels(postgresCluster.GetName()), + map[string]string{ + naming.LabelData: naming.DataPGBackRest, + }) + + repo := &appsv1.StatefulSet{ + TypeMeta: metav1.TypeMeta{ + APIVersion: appsv1.SchemeGroupVersion.String(), + Kind: "StatefulSet", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: repoHostName, + Namespace: postgresCluster.GetNamespace(), + Labels: labels, + Annotations: annotations, + }, + Spec: appsv1.StatefulSetSpec{ + Selector: &metav1.LabelSelector{ + MatchLabels: naming.PGBackRestDedicatedLabels(postgresCluster.GetName()), + }, + ServiceName: naming.ClusterPodService(postgresCluster).Name, + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: labels, + Annotations: annotations, + }, + }, + }, + } + + if repoHost := postgresCluster.Spec.Backups.PGBackRest.RepoHost; repoHost != nil { + repo.Spec.Template.Spec.Affinity = repoHost.Affinity + repo.Spec.Template.Spec.Tolerations = repoHost.Tolerations + repo.Spec.Template.Spec.TopologySpreadConstraints = repoHost.TopologySpreadConstraints + repo.Spec.Template.Spec.PriorityClassName = initialize.FromPointer(repoHost.PriorityClassName) + } + + // if default pod scheduling is not explicitly disabled, add the default + // pod topology spread 
constraints + if !initialize.FromPointer(postgresCluster.Spec.DisableDefaultPodScheduling) { + repo.Spec.Template.Spec.TopologySpreadConstraints = append( + repo.Spec.Template.Spec.TopologySpreadConstraints, + defaultTopologySpreadConstraints( + naming.ClusterDataForPostgresAndPGBackRest(postgresCluster.Name), + )...) + } + + // Set the image pull secrets, if any exist. + // This is set here rather than using the service account due to the lack + // of propagation to existing pods when the CRD is updated: + // https://github.com/kubernetes/kubernetes/issues/88456 + repo.Spec.Template.Spec.ImagePullSecrets = postgresCluster.Spec.ImagePullSecrets + + // determine if any PG Pods still exist + var instancePodExists bool + for _, instance := range observedInstances.forCluster { + if len(instance.Pods) > 0 { + instancePodExists = true + break + } + } + + // if the cluster is set to be shutdown and no instance Pods remain, stop the repohost pod + if postgresCluster.Spec.Shutdown != nil && *postgresCluster.Spec.Shutdown && + !instancePodExists { + repo.Spec.Replicas = initialize.Int32(0) + } else { + // the cluster should not be shutdown, set this value to 1 + repo.Spec.Replicas = initialize.Int32(1) + } + + // Use StatefulSet's "RollingUpdate" strategy and "Parallel" policy to roll + // out changes to pods even when not Running or not Ready. + // - https://docs.k8s.io/concepts/workloads/controllers/statefulset/#rolling-updates + // - https://docs.k8s.io/concepts/workloads/controllers/statefulset/#forced-rollback + // - https://kep.k8s.io/3541 + repo.Spec.PodManagementPolicy = appsv1.ParallelPodManagement + repo.Spec.UpdateStrategy.Type = appsv1.RollingUpdateStatefulSetStrategyType + + // Restart containers any time they stop, die, are killed, etc. + // - https://docs.k8s.io/concepts/workloads/pods/pod-lifecycle/#restart-policy + repo.Spec.Template.Spec.RestartPolicy = corev1.RestartPolicyAlways + + // When ShareProcessNamespace is enabled, Kubernetes' pause process becomes + // PID 1 and reaps those processes when they complete. + // - https://github.com/kubernetes/kubernetes/commit/81d27aa23969b77f + // + // The pgBackRest TLS server must be signaled when its configuration or + // certificates change. Let containers see each other's processes. + // - https://docs.k8s.io/tasks/configure-pod-container/share-process-namespace/ + repo.Spec.Template.Spec.ShareProcessNamespace = initialize.Bool(true) + + // pgBackRest does not make any Kubernetes API calls. Use the default + // ServiceAccount and do not mount its credentials. + repo.Spec.Template.Spec.AutomountServiceAccountToken = initialize.Bool(false) + + // Do not add environment variables describing services in this namespace. 
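+	// Kubernetes would otherwise inject *_SERVICE_HOST and *_SERVICE_PORT variables
+	// for every Service in the namespace.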
+ repo.Spec.Template.Spec.EnableServiceLinks = initialize.Bool(false) + + repo.Spec.Template.Spec.SecurityContext = postgres.PodSecurityContext(postgresCluster) + + pgbackrest.AddServerToRepoPod(ctx, postgresCluster, &repo.Spec.Template.Spec) + + if pgbackrest.RepoHostVolumeDefined(postgresCluster) { + // add the init container to make the pgBackRest repo volume log directory + pgbackrest.MakePGBackrestLogDir(&repo.Spec.Template, postgresCluster) + + // add pgBackRest repo volumes to pod + if err := pgbackrest.AddRepoVolumesToPod(postgresCluster, &repo.Spec.Template, + getRepoPVCNames(postgresCluster, repoResources.pvcs), + naming.PGBackRestRepoContainerName); err != nil { + return nil, errors.WithStack(err) + } + } + // add configs to pod + pgbackrest.AddConfigToRepoPod(postgresCluster, &repo.Spec.Template.Spec) + + // add nss_wrapper init container and add nss_wrapper env vars to the pgbackrest + // container + addNSSWrapper( + config.PGBackRestContainerImage(postgresCluster), + postgresCluster.Spec.ImagePullPolicy, + &repo.Spec.Template) + + addTMPEmptyDir(&repo.Spec.Template) + + // set ownership references + if err := controllerutil.SetControllerReference(postgresCluster, repo, + r.Client.Scheme()); err != nil { + return nil, err + } + + return repo, nil +} + +func (r *Reconciler) generateRepoVolumeIntent(postgresCluster *v1beta1.PostgresCluster, + spec corev1.PersistentVolumeClaimSpec, repoName string, + repoResources *RepoResources) (*corev1.PersistentVolumeClaim, error) { + + annotations := naming.Merge( + postgresCluster.Spec.Metadata.GetAnnotationsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil()) + labels := naming.Merge( + postgresCluster.Spec.Metadata.GetLabelsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestRepoVolumeLabels(postgresCluster.GetName(), repoName), + ) + + // generate the default metadata + meta := naming.PGBackRestRepoVolume(postgresCluster, repoName) + + // but if there is an existing volume for this PVC, use it + repoPVCNames := getRepoPVCNames(postgresCluster, repoResources.pvcs) + if repoPVCNames[repoName] != "" { + meta = metav1.ObjectMeta{ + Name: repoPVCNames[repoName], + Namespace: postgresCluster.GetNamespace(), + } + } + + meta.Labels = labels + meta.Annotations = annotations + + repoVol := &corev1.PersistentVolumeClaim{ + TypeMeta: metav1.TypeMeta{ + APIVersion: corev1.SchemeGroupVersion.String(), + Kind: "PersistentVolumeClaim", + }, + ObjectMeta: meta, + Spec: spec, + } + + // set ownership references + if err := controllerutil.SetControllerReference(postgresCluster, repoVol, + r.Client.Scheme()); err != nil { + return nil, err + } + + return repoVol, nil +} + +// generateBackupJobSpecIntent generates a JobSpec for a pgBackRest backup job +func generateBackupJobSpecIntent(ctx context.Context, postgresCluster *v1beta1.PostgresCluster, + repo v1beta1.PGBackRestRepo, serviceAccountName string, + labels, annotations map[string]string, opts ...string) *batchv1.JobSpec { + + repoIndex := regexRepoIndex.FindString(repo.Name) + cmdOpts := []string{ + "--stanza=" + pgbackrest.DefaultStanzaName, + "--repo=" + repoIndex, + } + // If VolumeSnapshots are enabled, use archive-copy and archive-check options + if postgresCluster.Spec.Backups.Snapshots != nil && feature.Enabled(ctx, feature.VolumeSnapshots) { + cmdOpts = append(cmdOpts, "--archive-copy=y", "--archive-check=y") + } + + cmdOpts = append(cmdOpts, opts...) 
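+	// For a scheduled full backup of repo1, for example, the resulting options look
+	// roughly like "--stanza=db --repo=1 --type=full".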
+ + container := corev1.Container{ + Command: []string{"/opt/crunchy/bin/pgbackrest"}, + Env: []corev1.EnvVar{ + {Name: "COMMAND", Value: "backup"}, + {Name: "COMMAND_OPTS", Value: strings.Join(cmdOpts, " ")}, + {Name: "COMPARE_HASH", Value: "true"}, + {Name: "CONTAINER", Value: naming.PGBackRestRepoContainerName}, + {Name: "NAMESPACE", Value: postgresCluster.GetNamespace()}, + {Name: "SELECTOR", Value: naming.PGBackRestDedicatedSelector(postgresCluster.GetName()).String()}, + }, + Image: config.PGBackRestContainerImage(postgresCluster), + ImagePullPolicy: postgresCluster.Spec.ImagePullPolicy, + Name: naming.PGBackRestRepoContainerName, + SecurityContext: initialize.RestrictedSecurityContext(), + } + + if postgresCluster.Spec.Backups.PGBackRest.Jobs != nil { + container.Resources = postgresCluster.Spec.Backups.PGBackRest.Jobs.Resources + } + + jobSpec := &batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{Labels: labels, Annotations: annotations}, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{container}, + + // Disable environment variables for services other than the Kubernetes API. + // - https://docs.k8s.io/concepts/services-networking/connect-applications-service/#accessing-the-service + // - https://releases.k8s.io/v1.23.0/pkg/kubelet/kubelet_pods.go#L553-L563 + EnableServiceLinks: initialize.Bool(false), + + // Set RestartPolicy to "Never" since we want a new Pod to be created by the Job + // controller when there is a failure (instead of the container simply restarting). + // This will ensure the Job always has the latest configs mounted following a + // failure as needed to successfully verify config hashes and run the Job. + RestartPolicy: corev1.RestartPolicyNever, + SecurityContext: initialize.PodSecurityContext(), + ServiceAccountName: serviceAccountName, + }, + }, + } + + if jobs := postgresCluster.Spec.Backups.PGBackRest.Jobs; jobs != nil { + jobSpec.TTLSecondsAfterFinished = jobs.TTLSecondsAfterFinished + } + + // set the priority class name, tolerations, and affinity, if they exist + if postgresCluster.Spec.Backups.PGBackRest.Jobs != nil { + jobSpec.Template.Spec.Tolerations = postgresCluster.Spec.Backups.PGBackRest.Jobs.Tolerations + jobSpec.Template.Spec.Affinity = postgresCluster.Spec.Backups.PGBackRest.Jobs.Affinity + jobSpec.Template.Spec.PriorityClassName = + initialize.FromPointer(postgresCluster.Spec.Backups.PGBackRest.Jobs.PriorityClassName) + } + + // Set the image pull secrets, if any exist. + // This is set here rather than using the service account due to the lack + // of propagation to existing pods when the CRD is updated: + // https://github.com/kubernetes/kubernetes/issues/88456 + jobSpec.Template.Spec.ImagePullSecrets = postgresCluster.Spec.ImagePullSecrets + + // add pgBackRest configs to template + pgbackrest.AddConfigToRepoPod(postgresCluster, &jobSpec.Template.Spec) + + return jobSpec +} + +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={delete,list} +// +kubebuilder:rbac:groups="",resources="secrets",verbs={list,delete} +// +kubebuilder:rbac:groups="",resources="endpoints",verbs={get} +// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={list} + +// observeRestoreEnv observes the current Kubernetes environment to obtain any resources applicable +// to performing pgBackRest restores (e.g. when initializing a new cluster using an existing +// pgBackRest backup, or when restoring in-place). This includes finding any existing Endpoints +// created by Patroni (i.e. 
DCS, leader and failover Endpoints), while then also finding any existing +// restore Jobs and then updating pgBackRest restore status accordingly. +func (r *Reconciler) observeRestoreEnv(ctx context.Context, + cluster *v1beta1.PostgresCluster) ([]corev1.Endpoints, *batchv1.Job, error) { + + // lookup the various patroni endpoints + leaderEP, dcsEP, failoverEP := corev1.Endpoints{}, corev1.Endpoints{}, corev1.Endpoints{} + currentEndpoints := []corev1.Endpoints{} + if err := r.Client.Get(ctx, naming.AsObjectKey(naming.PatroniLeaderEndpoints(cluster)), + &leaderEP); err != nil { + if !apierrors.IsNotFound(err) { + return nil, nil, errors.WithStack(err) + } + } else { + currentEndpoints = append(currentEndpoints, leaderEP) + } + if err := r.Client.Get(ctx, naming.AsObjectKey(naming.PatroniDistributedConfiguration(cluster)), + &dcsEP); err != nil { + if !apierrors.IsNotFound(err) { + return nil, nil, errors.WithStack(err) + } + } else { + currentEndpoints = append(currentEndpoints, dcsEP) + } + if err := r.Client.Get(ctx, naming.AsObjectKey(naming.PatroniTrigger(cluster)), + &failoverEP); err != nil { + if !apierrors.IsNotFound(err) { + return nil, nil, errors.WithStack(err) + } + } else { + currentEndpoints = append(currentEndpoints, failoverEP) + } + + restoreJobs := &batchv1.JobList{} + if err := r.Client.List(ctx, restoreJobs, &client.ListOptions{ + Namespace: cluster.Namespace, + LabelSelector: naming.PGBackRestRestoreJobSelector(cluster.GetName()), + }); err != nil { + return nil, nil, errors.WithStack(err) + } + var restoreJob *batchv1.Job + if len(restoreJobs.Items) > 1 { + return nil, nil, errors.WithStack( + errors.New("invalid number of restore Jobs found when attempting to reconcile a " + + "pgBackRest data source")) + } else if len(restoreJobs.Items) == 1 { + restoreJob = &restoreJobs.Items[0] + } + + if restoreJob != nil { + + completed := jobCompleted(restoreJob) + failed := jobFailed(restoreJob) + + if cluster.Status.PGBackRest != nil && cluster.Status.PGBackRest.Restore != nil { + cluster.Status.PGBackRest.Restore.StartTime = restoreJob.Status.StartTime + cluster.Status.PGBackRest.Restore.CompletionTime = restoreJob.Status.CompletionTime + cluster.Status.PGBackRest.Restore.Succeeded = restoreJob.Status.Succeeded + cluster.Status.PGBackRest.Restore.Failed = restoreJob.Status.Failed + cluster.Status.PGBackRest.Restore.Active = restoreJob.Status.Active + if completed || failed { + cluster.Status.PGBackRest.Restore.Finished = true + } + } + + // update the data source initialized condition if the Job has finished running, and is + // therefore in a completed or failed + if completed { + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: cluster.GetGeneration(), + Type: ConditionPostgresDataInitialized, + Status: metav1.ConditionTrue, + Reason: "PGBackRestRestoreComplete", + Message: "pgBackRest restore completed successfully", + }) + meta.RemoveStatusCondition(&cluster.Status.Conditions, + ConditionPGBackRestRestoreProgressing) + + // The clone process used to create resources that were used only + // by the restore job. Clean them up if they still exist. 
+ selector := naming.PGBackRestRestoreConfigSelector(cluster.GetName()) + restoreConfigMaps := &corev1.ConfigMapList{} + if err := r.Client.List(ctx, restoreConfigMaps, &client.ListOptions{ + Namespace: cluster.Namespace, + LabelSelector: selector, + }); err != nil { + return nil, nil, errors.WithStack(err) + } + for i := range restoreConfigMaps.Items { + if err := r.Client.Delete(ctx, &restoreConfigMaps.Items[i]); err != nil { + return nil, nil, errors.WithStack(err) + } + } + restoreSecrets := &corev1.SecretList{} + if err := r.Client.List(ctx, restoreSecrets, &client.ListOptions{ + Namespace: cluster.Namespace, + LabelSelector: selector, + }); err != nil { + return nil, nil, errors.WithStack(err) + } + for i := range restoreSecrets.Items { + if err := r.Client.Delete(ctx, &restoreSecrets.Items[i]); err != nil { + return nil, nil, errors.WithStack(err) + } + } + } else if failed { + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: cluster.GetGeneration(), + Type: ConditionPostgresDataInitialized, + Status: metav1.ConditionFalse, + Reason: "PGBackRestRestoreFailed", + Message: "pgBackRest restore failed", + }) + } + } + + return currentEndpoints, restoreJob, nil +} + +// +kubebuilder:rbac:groups="",resources="endpoints",verbs={delete} +// +kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={delete} +// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={delete} + +// prepareForRestore is responsible for reconciling an in place restore for the PostgresCluster. +// This includes setting a "PreparingForRestore" condition, and then removing all existing +// instance runners, as well as any Endpoints created by Patroni. And once the cluster is no +// longer running, the "PostgresDataInitialized" condition is removed, which will cause the +// cluster to re-bootstrap using a restored data directory. +func (r *Reconciler) prepareForRestore(ctx context.Context, + cluster *v1beta1.PostgresCluster, observed *observedInstances, + currentEndpoints []corev1.Endpoints, restoreJob *batchv1.Job, restoreID string) error { + + setPreparingClusterCondition := func(resource string) { + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: cluster.GetGeneration(), + Type: ConditionPGBackRestRestoreProgressing, + Status: metav1.ConditionTrue, + Reason: "RestoreInPlaceRequested", + Message: fmt.Sprintf("Preparing cluster to restore in-place: %s", + resource), + }) + } + + cluster.Status.PGBackRest = &v1beta1.PGBackRestStatus{} + cluster.Status.PGBackRest.Restore = &v1beta1.PGBackRestJobStatus{ + ID: restoreID, + } + + // find all runners, the primary, and determine if the cluster is still running + var clusterRunning bool + runners := []*appsv1.StatefulSet{} + var primary *Instance + for i, instance := range observed.forCluster { + if !clusterRunning { + clusterRunning, _ = instance.IsRunning(naming.ContainerDatabase) + } + if instance.Runner != nil { + runners = append(runners, instance.Runner) + } + if isPrimary, _ := instance.IsPrimary(); isPrimary { + primary = observed.forCluster[i] + } + } + + // Set the proper startup instance for the restore. This specifically enables a delta + // restore by attempting to find an existing instance whose PVC (if it exists, e.g. as + // in the case of an in-place restore where all PVCs are kept in place) can be utilized + // for the restore. The primary is preferred, but otherwise we will just grab the first + // runner we find. 
+	// If no runner can be identified, then a new instance name is
+	// generated, which means a non-delta restore will occur into an empty data volume (note that
+	// a new name/empty volume is always used when the restore is to bootstrap a new cluster).
+	if cluster.Status.StartupInstance == "" {
+		if primary != nil {
+			cluster.Status.StartupInstance = primary.Name
+			cluster.Status.StartupInstanceSet = primary.Spec.Name
+		} else if len(runners) > 0 {
+			cluster.Status.StartupInstance = runners[0].GetName()
+			cluster.Status.StartupInstanceSet =
+				runners[0].GetLabels()[naming.LabelInstanceSet]
+		} else if len(cluster.Spec.InstanceSets) > 0 {
+			// Generate a hash that will be used to make sure that the startup
+			// instance is named consistently
+			cluster.Status.StartupInstance = naming.GenerateStartupInstance(cluster,
+				&cluster.Spec.InstanceSets[0]).Name
+			cluster.Status.StartupInstanceSet = cluster.Spec.InstanceSets[0].Name
+		} else {
+			return errors.New("unable to determine startup instance for restore")
+		}
+	}
+
+	// remove any existing restore Jobs
+	if restoreJob != nil {
+		setPreparingClusterCondition("removing restore job")
+		if err := r.Client.Delete(ctx, restoreJob,
+			client.PropagationPolicy(metav1.DeletePropagationBackground)); err != nil {
+			return errors.WithStack(err)
+		}
+		return nil
+	}
+
+	if clusterRunning {
+		setPreparingClusterCondition("removing runners")
+		for _, runner := range runners {
+			err := r.Client.Delete(ctx, runner,
+				client.PropagationPolicy(metav1.DeletePropagationForeground))
+			if client.IgnoreNotFound(err) != nil {
+				return errors.WithStack(err)
+			}
+		}
+		return nil
+	}
+
+	// if everything is gone, proceed with re-bootstrapping the cluster via an in-place restore
+	if len(currentEndpoints) == 0 {
+		meta.RemoveStatusCondition(&cluster.Status.Conditions, ConditionPostgresDataInitialized)
+		meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{
+			ObservedGeneration: cluster.GetGeneration(),
+			Type:               ConditionPGBackRestRestoreProgressing,
+			Status:             metav1.ConditionTrue,
+			Reason:             ReasonReadyForRestore,
+			Message:            "Restoring cluster in-place",
+		})
+		// the cluster is no longer bootstrapped
+		cluster.Status.Patroni.SystemIdentifier = ""
+		// the restore will change the contents of the database, so the pgbouncer and exporter hashes
+		// are no longer valid
+		cluster.Status.Proxy.PGBouncer.PostgreSQLRevision = ""
+		cluster.Status.Monitoring.ExporterConfiguration = ""
+		return nil
+	}
+
+	setPreparingClusterCondition("removing DCS")
+	// delete any Endpoints
+	for i := range currentEndpoints {
+		if err := r.Client.Delete(ctx, &currentEndpoints[i]); client.IgnoreNotFound(err) != nil {
+			return errors.WithStack(err)
+		}
+	}
+
+	return nil
+}
+
+// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={patch}
+
+// reconcileRestoreJob is responsible for reconciling a Job that performs a pgBackRest restore in
+// order to populate a PGDATA directory.
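+// For illustration only: once the user-provided options are merged with the operator-set
+// options below, the resulting pgBackRest invocation is roughly of the form (stanza name,
+// paths, and point-in-time target are hypothetical examples, not values fixed by this code):
+//
+//	pgbackrest restore --stanza=db --pg1-path=/pgdata/pg16 --repo=1 --delta \
+//	    --link-map=pg_wal=/pgdata/pg16_wal \
+//	    --type=time --target="2024-01-01 00:00:00+00" --target-action=promote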
+func (r *Reconciler) reconcileRestoreJob(ctx context.Context, + cluster *v1beta1.PostgresCluster, sourceCluster *v1beta1.PostgresCluster, + pgdataVolume, pgwalVolume *corev1.PersistentVolumeClaim, + pgtablespaceVolumes []*corev1.PersistentVolumeClaim, + dataSource *v1beta1.PostgresClusterDataSource, + instanceName, instanceSetName, configHash, stanzaName string) error { + + repoName := dataSource.RepoName + options := dataSource.Options + + // ensure options are properly set + // TODO (andrewlecuyer): move validation logic to a webhook + for _, opt := range options { + var msg string + switch { + // Since '--repo' can be set with or without an equals ('=') sign, we check for both + // usage patterns. + case strings.Contains(opt, "--repo=") || strings.Contains(opt, "--repo "): + msg = "Option '--repo' is not allowed: please use the 'repoName' field instead." + case strings.Contains(opt, "--stanza"): + msg = "Option '--stanza' is not allowed: the operator will automatically set this " + + "option" + case strings.Contains(opt, "--pg1-path"): + msg = "Option '--pg1-path' is not allowed: the operator will automatically set this " + + "option" + case strings.Contains(opt, "--target-action"): + msg = "Option '--target-action' is not allowed: the operator will automatically set this " + + "option " + case strings.Contains(opt, "--link-map"): + msg = "Option '--link-map' is not allowed: the operator will automatically set this " + + "option " + } + if msg != "" { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "InvalidDataSource", msg, repoName) + return nil + } + } + + pgdata := postgres.DataDirectory(cluster) + // combine options provided by user in the spec with those populated by the operator for a + // successful restore + opts := append(options, []string{ + "--stanza=" + stanzaName, + "--pg1-path=" + pgdata, + "--repo=" + regexRepoIndex.FindString(repoName)}...) + + var deltaOptFound, foundTarget bool + for _, opt := range opts { + switch { + case strings.Contains(opt, "--target"): + foundTarget = true + case strings.Contains(opt, "--delta"): + deltaOptFound = true + } + } + if !deltaOptFound { + opts = append(opts, "--delta") + } + + // Note on the pgBackRest option `--target-action` in the restore job: + // (a) `--target-action` is only allowed if `--target` and `type` are set; + // TODO(benjaminjb): ensure that `type` is set as well before accepting `target-action` + // (b) our restore job assumes the `hot_standby: on` default, which is true of Postgres >= 10; + // (c) pgBackRest passes the `--target-action` setting as `recovery-target-action` + // in PostgreSQL versions >=9.5 and as `pause_at_recovery_target` on earlier 9.x versions. 
+ // But note, pgBackRest may assume a default action of `pause` and may not pass any setting + // - https://pgbackrest.org/command.html#command-restore/category-command/option-type + // - https://www.postgresql.org/docs/14/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET + // - https://github.com/pgbackrest/pgbackrest/blob/bb03b3f41942d0b781931092a76877ad309001ef/src/command/restore/restore.c#L1623 + // - https://github.com/pgbackrest/pgbackrest/issues/1314 + // - https://github.com/pgbackrest/pgbackrest/issues/987 + if foundTarget { + opts = append(opts, "--target-action=promote") + } + + for i, instanceSpec := range cluster.Spec.InstanceSets { + if instanceSpec.Name == instanceSetName { + opts = append(opts, "--link-map=pg_wal="+postgres.WALDirectory(cluster, + &cluster.Spec.InstanceSets[i])) + } + } + + // Check to see if huge pages have been requested in the spec. If they have, include 'huge_pages = try' + // in the restore command. If they haven't, include 'huge_pages = off'. + hugePagesSetting := "off" + if postgres.HugePagesRequested(cluster) { + hugePagesSetting = "try" + } + + // NOTE (andrewlecuyer): Forcing users to put each argument separately might prevent the need + // to do any escaping or use eval. + cmd := pgbackrest.RestoreCommand(pgdata, hugePagesSetting, config.FetchKeyCommand(&cluster.Spec), + pgtablespaceVolumes, strings.Join(opts, " ")) + + // create the volume resources required for the postgres data directory + dataVolumeMount := postgres.DataVolumeMount() + dataVolume := corev1.Volume{ + Name: dataVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: pgdataVolume.GetName(), + }, + }, + } + volumes := []corev1.Volume{dataVolume} + volumeMounts := []corev1.VolumeMount{dataVolumeMount} + + if pgwalVolume != nil { + walVolumeMount := postgres.WALVolumeMount() + walVolume := corev1.Volume{ + Name: walVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: pgwalVolume.GetName(), + }, + }, + } + volumes = append(volumes, walVolume) + volumeMounts = append(volumeMounts, walVolumeMount) + } + + for _, pgtablespaceVolume := range pgtablespaceVolumes { + tablespaceVolumeMount := postgres.TablespaceVolumeMount( + pgtablespaceVolume.Labels[naming.LabelData]) + tablespaceVolume := corev1.Volume{ + Name: tablespaceVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: pgtablespaceVolume.GetName(), + }, + }, + } + volumes = append(volumes, tablespaceVolume) + volumeMounts = append(volumeMounts, tablespaceVolumeMount) + } + + restoreJob := &batchv1.Job{} + if err := r.generateRestoreJobIntent(cluster, configHash, instanceName, cmd, + volumeMounts, volumes, dataSource, restoreJob); err != nil { + return errors.WithStack(err) + } + + // add pgBackRest configs to template + pgbackrest.AddConfigToRestorePod(cluster, sourceCluster, &restoreJob.Spec.Template.Spec) + + // add nss_wrapper init container and add nss_wrapper env vars to the pgbackrest restore + // container + addNSSWrapper( + config.PGBackRestContainerImage(cluster), + cluster.Spec.ImagePullPolicy, + &restoreJob.Spec.Template) + + addTMPEmptyDir(&restoreJob.Spec.Template) + + return errors.WithStack(r.apply(ctx, restoreJob)) +} + +func (r *Reconciler) generateRestoreJobIntent(cluster *v1beta1.PostgresCluster, + configHash, instanceName string, cmd []string, + volumeMounts 
[]corev1.VolumeMount, volumes []corev1.Volume, + dataSource *v1beta1.PostgresClusterDataSource, job *batchv1.Job) error { + + meta := naming.PGBackRestRestoreJob(cluster) + + annotations := naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil(), + map[string]string{naming.PGBackRestConfigHash: configHash}) + labels := naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestRestoreJobLabels(cluster.Name), + map[string]string{naming.LabelStartupInstance: instanceName}, + ) + meta.Annotations = annotations + meta.Labels = labels + + job.ObjectMeta = meta + job.Spec = batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: annotations, + Labels: labels, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{ + Command: cmd, + Image: config.PostgresContainerImage(cluster), + ImagePullPolicy: cluster.Spec.ImagePullPolicy, + Name: naming.PGBackRestRestoreContainerName, + VolumeMounts: volumeMounts, + Env: []corev1.EnvVar{{Name: "PGHOST", Value: "/tmp"}}, + SecurityContext: initialize.RestrictedSecurityContext(), + Resources: dataSource.Resources, + }}, + RestartPolicy: corev1.RestartPolicyNever, + Volumes: volumes, + Affinity: dataSource.Affinity, + Tolerations: dataSource.Tolerations, + }, + }, + } + + // Set the image pull secrets, if any exist. + // This is set here rather than using the service account due to the lack + // of propagation to existing pods when the CRD is updated: + // https://github.com/kubernetes/kubernetes/issues/88456 + job.Spec.Template.Spec.ImagePullSecrets = cluster.Spec.ImagePullSecrets + + // pgBackRest does not make any Kubernetes API calls, but it may interact + // with a cloud storage provider. Use the instance ServiceAccount for its + // possible cloud identity without mounting its Kubernetes API credentials. + // - https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity + // - https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html + job.Spec.Template.Spec.AutomountServiceAccountToken = initialize.Bool(false) + job.Spec.Template.Spec.ServiceAccountName = naming.ClusterInstanceRBAC(cluster).Name + + // Do not add environment variables describing services in this namespace. + job.Spec.Template.Spec.EnableServiceLinks = initialize.Bool(false) + + job.Spec.Template.Spec.SecurityContext = postgres.PodSecurityContext(cluster) + + // set the priority class name, if it exists + job.Spec.Template.Spec.PriorityClassName = initialize.FromPointer(dataSource.PriorityClassName) + + job.SetGroupVersionKind(batchv1.SchemeGroupVersion.WithKind("Job")) + if err := errors.WithStack(r.setControllerReference(cluster, job)); err != nil { + return err + } + + return nil +} + +// reconcilePGBackRest is responsible for reconciling any/all pgBackRest resources owned by a +// specific PostgresCluster (e.g. Deployments, ConfigMaps, Secrets, etc.). This function will +// ensure various reconciliation logic is run as needed for each pgBackRest resource, while then +// also generating the proper Result as needed to ensure proper event requeuing according to +// the results of any attempts to properly reconcile these resources. 
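+// The overall flow below is roughly: reconcile the dedicated repository host, the pgBackRest
+// Secret, the repositories, the pgBackRest configuration, and RBAC; create stanzas; and then
+// reconcile scheduled, replica-create, and manual backups. Most failures are logged and
+// surfaced through the returned Result rather than returned as errors, e.g. (illustrative only):
+//
+//	result := reconcile.Result{}
+//	result.Requeue = true                  // requeue and try again
+//	result.RequeueAfter = 10 * time.Second // or requeue after a delay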
+func (r *Reconciler) reconcilePGBackRest(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, + instances *observedInstances, + rootCA *pki.RootCertificateAuthority, + backupsSpecFound bool, +) (reconcile.Result, error) { + + // add some additional context about what component is being reconciled + log := logging.FromContext(ctx).WithValues("reconciler", "pgBackRest") + + // if nil, create the pgBackRest status that will be updated when + // reconciling various pgBackRest resources + if postgresCluster.Status.PGBackRest == nil { + postgresCluster.Status.PGBackRest = &v1beta1.PGBackRestStatus{} + } + + // create the Result that will be updated while reconciling any/all pgBackRest resources + result := reconcile.Result{} + + // Get all currently owned pgBackRest resources in the environment as needed for + // reconciliation. This includes deleting resources that should no longer exist per the + // current spec (e.g. if repos, repo hosts, etc. have been removed). + repoResources, err := r.getPGBackRestResources(ctx, postgresCluster, backupsSpecFound) + if err != nil { + // exit early if can't get and clean existing resources as needed to reconcile + return reconcile.Result{}, errors.WithStack(err) + } + + // At this point, reconciliation is allowed, so if no backups spec is found + // clear the status and exit + if !backupsSpecFound { + postgresCluster.Status.PGBackRest = &v1beta1.PGBackRestStatus{} + return result, nil + } + + var repoHost *appsv1.StatefulSet + var repoHostName string + // reconcile the pgbackrest repository host + repoHost, err = r.reconcileDedicatedRepoHost(ctx, postgresCluster, repoResources, instances) + if err != nil { + log.Error(err, "unable to reconcile pgBackRest repo host") + result.Requeue = true + return result, nil + } + repoHostName = repoHost.GetName() + + if err := r.reconcilePGBackRestSecret(ctx, postgresCluster, repoHost, rootCA); err != nil { + log.Error(err, "unable to reconcile pgBackRest secret") + result.Requeue = true + } + + // calculate hashes for the external repository configurations in the spec (e.g. for Azure, + // GCS and/or S3 repositories) as needed to properly detect changes to external repository + // configuration (and then execute stanza create commands accordingly) + configHashes, configHash, err := pgbackrest.CalculateConfigHashes(postgresCluster) + if err != nil { + log.Error(err, "unable to calculate config hashes") + result.Requeue = true + return result, nil + } + + // reconcile all pgbackrest repository repos + replicaCreateRepo, err := r.reconcileRepos(ctx, postgresCluster, configHashes, repoResources) + if err != nil { + log.Error(err, "unable to reconcile pgBackRest repo host") + result.Requeue = true + return result, nil + } + + // gather instance names and reconcile all pgbackrest configuration and secrets + instanceNames := []string{} + for _, instance := range instances.forCluster { + instanceNames = append(instanceNames, instance.Name) + } + // sort to ensure consistent ordering of hosts when creating pgBackRest configs + sort.Strings(instanceNames) + if err := r.reconcilePGBackRestConfig(ctx, postgresCluster, repoHostName, + configHash, naming.ClusterPodService(postgresCluster).Name, + postgresCluster.GetNamespace(), instanceNames); err != nil { + log.Error(err, "unable to reconcile pgBackRest configuration") + result.Requeue = true + } + + // reconcile the RBAC required to run pgBackRest Jobs (e.g. 
for backups) + sa, err := r.reconcilePGBackRestRBAC(ctx, postgresCluster) + if err != nil { + log.Error(err, "unable to create replica creation backup") + result.Requeue = true + return result, nil + } + + // reconcile the pgBackRest stanza for all configuration pgBackRest repos + configHashMismatch, err := r.reconcileStanzaCreate(ctx, postgresCluster, instances, configHash) + // If a stanza create error then requeue but don't return the error. This prevents + // stanza-create errors from bubbling up to the main Reconcile() function, which would + // prevent subsequent reconciles from occurring. Also, this provides a better chance + // that the pgBackRest status will be updated at the end of the Reconcile() function, + // e.g. to set the "stanzaCreated" indicator to false for any repos failing stanza creation + // (assuming no other reconcile errors bubble up to the Reconcile() function and block the + // status update). And finally, add some time to each requeue to slow down subsequent + // stanza create attempts in order to prevent pgBackRest mis-configuration (e.g. due to + // custom configuration) from spamming the logs, while also ensuring stanza creation is + // re-attempted until successful (e.g. allowing users to correct mis-configurations in + // custom configuration and ensure stanzas are still created). + if err != nil { + log.Error(err, "unable to create stanza") + result.RequeueAfter = 10 * time.Second + } + // If a config hash mismatch, then log an info message and requeue to try again. Add some time + // to the requeue to give the pgBackRest configuration changes a chance to propagate to the + // container. + if configHashMismatch { + log.Info("pgBackRest config hash mismatch detected, requeuing to reattempt stanza create") + result.RequeueAfter = 10 * time.Second + } + // reconcile the pgBackRest backup CronJobs + requeue := r.reconcileScheduledBackups(ctx, postgresCluster, sa, repoResources.cronjobs) + // If the pgBackRest backup CronJob reconciliation function has encountered an error, requeue + // after 10 seconds. The error will not bubble up to allow the reconcile loop to continue. + // An error is not logged because an event was already created. + // TODO(tjmoore4): Is this the desired eventing/logging/reconciliation strategy? + // A potential option to handle this proactively would be to use a webhook: + // https://book.kubebuilder.io/cronjob-tutorial/webhook-implementation.html + if requeue { + result.RequeueAfter = 10 * time.Second + } + + // Reconcile the initial backup that is needed to enable replica creation using pgBackRest. + // This is done once stanza creation is successful + if err := r.reconcileReplicaCreateBackup(ctx, postgresCluster, instances, + repoResources.replicaCreateBackupJobs, sa, configHash, replicaCreateRepo); err != nil { + log.Error(err, "unable to reconcile replica creation backup") + result.Requeue = true + } + + // Reconcile a manual backup as defined in the spec, and triggered by the end-user via + // annotation + if err := r.reconcileManualBackup(ctx, postgresCluster, repoResources.manualBackupJobs, + sa, instances); err != nil { + log.Error(err, "unable to reconcile manual backup") + result.Requeue = true + } + + return result, nil +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,patch} +// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={create,patch,delete} + +// reconcilePostgresClusterDataSource is responsible for reconciling a PostgresCluster data source. 
+// This is specifically done by running a pgBackRest restore to populate a PostgreSQL data volume +// for the PostgresCluster being reconciled using the backups of another PostgresCluster. +func (r *Reconciler) reconcilePostgresClusterDataSource(ctx context.Context, + cluster *v1beta1.PostgresCluster, dataSource *v1beta1.PostgresClusterDataSource, + configHash string, clusterVolumes []corev1.PersistentVolumeClaim, + rootCA *pki.RootCertificateAuthority, + backupsSpecFound bool, +) error { + + // grab cluster, namespaces and repo name information from the data source + sourceClusterName := dataSource.ClusterName + // if the data source name is empty then we're restoring in-place and use the current cluster + // as the source cluster + if sourceClusterName == "" { + sourceClusterName = cluster.GetName() + } + // if data source namespace is empty then use the same namespace as the current cluster + sourceClusterNamespace := dataSource.ClusterNamespace + if sourceClusterNamespace == "" { + sourceClusterNamespace = cluster.GetNamespace() + } + // repo name is required by the api, so RepoName should be populated + sourceRepoName := dataSource.RepoName + + // Ensure the proper instance and instance set can be identified via the status. The + // StartupInstance and StartupInstanceSet values should be populated when the cluster + // is being prepared for a restore, and should therefore always exist at this point. + // Therefore, if either are not found it is treated as an error. + instanceName := cluster.Status.StartupInstance + if instanceName == "" { + return errors.WithStack( + errors.New("unable to find instance name for pgBackRest restore Job")) + } + instanceSetName := cluster.Status.StartupInstanceSet + if instanceSetName == "" { + return errors.WithStack( + errors.New("unable to find instance set name for pgBackRest restore Job")) + } + + // Ensure an instance set can be found in the current spec that corresponds to the + // instanceSetName. A valid instance spec is needed to reconcile and cluster volumes + // below (e.g. the PGDATA and/or WAL volumes). + var instanceSet *v1beta1.PostgresInstanceSetSpec + for i, set := range cluster.Spec.InstanceSets { + if set.Name == instanceSetName { + instanceSet = &cluster.Spec.InstanceSets[i] + break + } + } + if instanceSet == nil { + return errors.WithStack( + errors.New("unable to determine the proper instance set for the restore")) + } + + // If the cluster is already bootstrapped, or if the bootstrap Job is complete, then + // nothing to do. However, also ensure the "data sources initialized" condition is set + // to true if for some reason it doesn't exist (e.g. if it was deleted since the + // data source for the cluster was initialized). + if patroni.ClusterBootstrapped(cluster) { + condition := meta.FindStatusCondition(cluster.Status.Conditions, + ConditionPostgresDataInitialized) + if condition == nil || (condition.Status != metav1.ConditionTrue) { + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: cluster.GetGeneration(), + Type: ConditionPostgresDataInitialized, + Status: metav1.ConditionTrue, + Reason: "ClusterAlreadyBootstrapped", + Message: "The cluster is already bootstrapped", + }) + } + return nil + } + + // Identify the proper source cluster. If the source cluster configured matches the current + // cluster, then we do not need to lookup a cluster and simply copy the current PostgresCluster. 
+ // Additionally, pgBackRest is reconciled to ensure any configuration needed to bootstrap the + // cluster exists (specifically since it may not yet exist, e.g. if we're initializing the + // data directory for a brand new PostgresCluster using existing backups for that cluster). + // If the source cluster is not the same as the current cluster, then look it up. + sourceCluster := &v1beta1.PostgresCluster{} + if sourceClusterName == cluster.GetName() && sourceClusterNamespace == cluster.GetNamespace() { + sourceCluster = cluster.DeepCopy() + instance := &Instance{Name: instanceName} + // Reconciling pgBackRest here will ensure a pgBackRest instance config file exists (since + // the cluster hasn't bootstrapped yet, and pgBackRest configs therefore have not yet been + // reconciled) as needed to properly configure the pgBackRest restore Job. + // Note that function reconcilePGBackRest only uses forCluster in observedInstances. + result, err := r.reconcilePGBackRest(ctx, cluster, &observedInstances{ + forCluster: []*Instance{instance}, + }, rootCA, backupsSpecFound) + if err != nil || result != (reconcile.Result{}) { + return fmt.Errorf("unable to reconcile pgBackRest as needed to initialize "+ + "PostgreSQL data for the cluster: %w", err) + } + } else { + if err := r.Client.Get(ctx, + client.ObjectKey{Name: sourceClusterName, Namespace: sourceClusterNamespace}, + sourceCluster); err != nil { + if apierrors.IsNotFound(err) { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "InvalidDataSource", + "PostgresCluster %q does not exist", sourceClusterName) + return nil + } + return errors.WithStack(err) + } + + // Copy repository definitions and credentials from the source cluster. + // A copy is the only way to get this information across namespaces. + if err := r.copyRestoreConfiguration(ctx, cluster, sourceCluster); err != nil { + return err + } + } + + // verify the repo defined in the data source exists in the source cluster + var foundRepo bool + for _, repo := range sourceCluster.Spec.Backups.PGBackRest.Repos { + if repo.Name == sourceRepoName { + foundRepo = true + break + } + } + if !foundRepo { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "InvalidDataSource", + "PostgresCluster %q does not have a repo named %q defined", + sourceClusterName, sourceRepoName) + return nil + } + + // Define a fake STS to use when calling the reconcile functions below since when + // bootstrapping the cluster it will not exist until after the restore is complete. + fakeSTS := &appsv1.StatefulSet{ObjectMeta: metav1.ObjectMeta{ + Name: instanceName, + Namespace: cluster.GetNamespace(), + }} + // Reconcile the PGDATA and WAL volumes for the restore + pgdata, err := r.reconcilePostgresDataVolume(ctx, cluster, instanceSet, fakeSTS, clusterVolumes, sourceCluster) + if err != nil { + return errors.WithStack(err) + } + pgwal, err := r.reconcilePostgresWALVolume(ctx, cluster, instanceSet, fakeSTS, nil, clusterVolumes) + if err != nil { + return errors.WithStack(err) + } + + pgtablespaces, err := r.reconcileTablespaceVolumes(ctx, cluster, instanceSet, fakeSTS, clusterVolumes) + if err != nil { + return errors.WithStack(err) + } + + // TODO(snapshots): If pgdata is being sourced by a VolumeSnapshot then don't perform a typical restore job; + // we only want to replay the WAL. 
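+	// For illustration only: the dataSource handled here and passed to reconcileRestoreJob
+	// below roughly corresponds to a struct such as (all values hypothetical):
+	//
+	//	&v1beta1.PostgresClusterDataSource{
+	//		ClusterName: "hippo",
+	//		RepoName:    "repo1",
+	//		Options:     []string{"--type=time", "--target=2024-01-01 00:00:00+00"},
+	//	}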
+ + // reconcile the pgBackRest restore Job to populate the cluster's data directory + if err := r.reconcileRestoreJob(ctx, cluster, sourceCluster, pgdata, pgwal, pgtablespaces, + dataSource, instanceName, instanceSetName, configHash, pgbackrest.DefaultStanzaName); err != nil { + return errors.WithStack(err) + } + + return nil +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,patch} +// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={create,patch,delete} + +// reconcileCloudBasedDataSource is responsible for reconciling a cloud-based PostgresCluster +// data source, i.e., S3, etc. +func (r *Reconciler) reconcileCloudBasedDataSource(ctx context.Context, + cluster *v1beta1.PostgresCluster, dataSource *v1beta1.PGBackRestDataSource, + configHash string, clusterVolumes []corev1.PersistentVolumeClaim) error { + + // Ensure the proper instance and instance set can be identified via the status. The + // StartupInstance and StartupInstanceSet values should be populated when the cluster + // is being prepared for a restore, and should therefore always exist at this point. + // Therefore, if either are not found it is treated as an error. + instanceName := cluster.Status.StartupInstance + if instanceName == "" { + return errors.WithStack( + errors.New("unable to find instance name for pgBackRest restore Job")) + } + instanceSetName := cluster.Status.StartupInstanceSet + if instanceSetName == "" { + return errors.WithStack( + errors.New("unable to find instance set name for pgBackRest restore Job")) + } + + // Ensure an instance set can be found in the current spec that corresponds to the + // instanceSetName. A valid instance spec is needed to reconcile and cluster volumes + // below (e.g. the PGDATA and/or WAL volumes). + var instanceSet *v1beta1.PostgresInstanceSetSpec + for i, set := range cluster.Spec.InstanceSets { + if set.Name == instanceSetName { + instanceSet = &cluster.Spec.InstanceSets[i] + break + } + } + if instanceSet == nil { + return errors.WithStack( + errors.New("unable to determine the proper instance set for the restore")) + } + + // If the cluster is already bootstrapped, or if the bootstrap Job is complete, then + // nothing to do. However, also ensure the "data sources initialized" condition is set + // to true if for some reason it doesn't exist (e.g. if it was deleted since the + // data source for the cluster was initialized). + if patroni.ClusterBootstrapped(cluster) { + condition := meta.FindStatusCondition(cluster.Status.Conditions, + ConditionPostgresDataInitialized) + if condition == nil || (condition.Status != metav1.ConditionTrue) { + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: cluster.GetGeneration(), + Type: ConditionPostgresDataInitialized, + Status: metav1.ConditionTrue, + Reason: "ClusterAlreadyBootstrapped", + Message: "The cluster is already bootstrapped", + }) + } + return nil + } + + if err := r.createRestoreConfig(ctx, cluster, configHash); err != nil { + return err + } + + // TODO(benjaminjb): Is there a way to check that a repo exists outside of spinning + // up a pod with pgBackRest and checking? + + // Define a fake STS to use when calling the reconcile functions below since when + // bootstrapping the cluster it will not exist until after the restore is complete. 
+ fakeSTS := &appsv1.StatefulSet{ObjectMeta: metav1.ObjectMeta{ + Name: instanceName, + Namespace: cluster.GetNamespace(), + }} + // Reconcile the PGDATA and WAL volumes for the restore + pgdata, err := r.reconcilePostgresDataVolume(ctx, cluster, instanceSet, fakeSTS, clusterVolumes, nil) + if err != nil { + return errors.WithStack(err) + } + pgwal, err := r.reconcilePostgresWALVolume(ctx, cluster, instanceSet, fakeSTS, nil, clusterVolumes) + if err != nil { + return errors.WithStack(err) + } + + // TODO(benjaminjb): do we really need this for cloud-based datasources? + pgtablespaces, err := r.reconcileTablespaceVolumes(ctx, cluster, instanceSet, fakeSTS, clusterVolumes) + if err != nil { + return errors.WithStack(err) + } + + // The `reconcileRestoreJob` was originally designed to take a PostgresClusterDataSource + // and rather than reconfigure that func's signature, we translate the PGBackRestDataSource + tmpDataSource := &v1beta1.PostgresClusterDataSource{ + RepoName: dataSource.Repo.Name, + Options: dataSource.Options, + Resources: dataSource.Resources, + Affinity: dataSource.Affinity, + Tolerations: dataSource.Tolerations, + PriorityClassName: dataSource.PriorityClassName, + } + + // reconcile the pgBackRest restore Job to populate the cluster's data directory + // Note that the 'source cluster' is nil as this is not used by this restore type. + if err := r.reconcileRestoreJob(ctx, cluster, nil, pgdata, pgwal, pgtablespaces, tmpDataSource, + instanceName, instanceSetName, configHash, dataSource.Stanza); err != nil { + return errors.WithStack(err) + } + + return nil +} + +// createRestoreConfig creates a configmap struct with pgBackRest pgbackrest.conf settings +// in the data field, for use with restoring from cloud-based data sources +func (r *Reconciler) createRestoreConfig(ctx context.Context, postgresCluster *v1beta1.PostgresCluster, + configHash string) error { + + postgresClusterWithMockedBackups := postgresCluster.DeepCopy() + postgresClusterWithMockedBackups.Spec.Backups.PGBackRest.Global = postgresCluster.Spec. + DataSource.PGBackRest.Global + postgresClusterWithMockedBackups.Spec.Backups.PGBackRest.Repos = []v1beta1.PGBackRestRepo{ + postgresCluster.Spec.DataSource.PGBackRest.Repo, + } + + return r.reconcilePGBackRestConfig(ctx, postgresClusterWithMockedBackups, + "", configHash, "", "", []string{}) +} + +// copyRestoreConfiguration copies pgBackRest configuration from another cluster for use by +// the current PostgresCluster (e.g. when restoring across namespaces, and the configuration +// for the source cluster needs to be copied into the PostgresCluster's local namespace). +func (r *Reconciler) copyRestoreConfiguration(ctx context.Context, + cluster, sourceCluster *v1beta1.PostgresCluster, +) error { + var err error + + sourceConfig := &corev1.ConfigMap{ObjectMeta: naming.PGBackRestConfig(sourceCluster)} + if err == nil { + err = errors.WithStack( + r.Client.Get(ctx, client.ObjectKeyFromObject(sourceConfig), sourceConfig)) + } + + // Retrieve the pgBackRest Secret of the source cluster if it has one. When + // it does not, indicate that with a nil pointer. + sourceSecret := &corev1.Secret{ObjectMeta: naming.PGBackRestSecret(sourceCluster)} + if err == nil { + err = errors.WithStack( + r.Client.Get(ctx, client.ObjectKeyFromObject(sourceSecret), sourceSecret)) + + if apierrors.IsNotFound(err) { + sourceSecret, err = nil, nil + } + } + + // See also [pgbackrest.CreatePGBackRestConfigMapIntent]. 
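+	// The intent built below mirrors the source cluster's pgBackRest configuration: the
+	// contents of the source ConfigMap/Secret are rewritten for this cluster by
+	// pgbackrest.RestoreConfig further down, then applied. As with the lookups above, each
+	// step runs only while err is still nil, so the first failure is returned at the end.
+	// A minimal sketch of that chaining pattern (step names are hypothetical):
+	//
+	//	var err error
+	//	if err == nil { err = stepOne() }
+	//	if err == nil { err = stepTwo() }
+	//	return err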
+ config := &corev1.ConfigMap{ObjectMeta: naming.PGBackRestConfig(cluster)} + config.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + + config.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil(), + ) + config.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestConfigLabels(cluster.GetName()), + ) + if err == nil { + err = r.setControllerReference(cluster, config) + } + + // See also [Reconciler.reconcilePGBackRestSecret]. + secret := &corev1.Secret{ObjectMeta: naming.PGBackRestSecret(cluster)} + secret.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + secret.Type = corev1.SecretTypeOpaque + + secret.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil(), + ) + secret.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestConfigLabels(cluster.Name), + ) + if err == nil { + err = r.setControllerReference(cluster, secret) + } + if err == nil { + pgbackrest.RestoreConfig( + sourceConfig, config, + sourceSecret, secret, + ) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, config)) + } + + // Write the Secret when there is something we want to keep in it. + if err == nil && len(secret.Data) != 0 { + err = errors.WithStack(r.apply(ctx, secret)) + } + + // copy any needed projected Secrets or ConfigMaps + if err == nil { + err = r.copyConfigurationResources(ctx, cluster, sourceCluster) + } + + return err +} + +// copyConfigurationResources copies all pgBackRest configuration ConfigMaps and +// Secrets used by the source cluster when bootstrapping the new cluster using +// pgBackRest restore. This ensures those configuration resources mounted as +// VolumeProjections by the source cluster can be used by the new cluster during +// bootstrapping. +func (r *Reconciler) copyConfigurationResources(ctx context.Context, cluster, + sourceCluster *v1beta1.PostgresCluster) error { + + for i := range sourceCluster.Spec.Backups.PGBackRest.Configuration { + // While all volume projections from .Configuration will be carried over to + // the pgBackRest restore Job, we only explicitly copy the relevant ConfigMaps + // and Secrets. Any DownwardAPI or ServiceAccountToken projections will need + // to be handled manually. + // - https://kubernetes.io/docs/concepts/storage/projected-volumes/ + if sourceCluster.Spec.Backups.PGBackRest.Configuration[i].Secret != nil { + secretProjection := sourceCluster.Spec.Backups.PGBackRest.Configuration[i].Secret + secretCopy := &corev1.Secret{} + secretName := types.NamespacedName{ + Name: secretProjection.Name, + Namespace: sourceCluster.Namespace, + } + // Get the existing Secret for the copy, if it exists. It **must** + // exist if not configured as optional. + if secretProjection.Optional != nil && *secretProjection.Optional { + if err := errors.WithStack(r.Client.Get(ctx, secretName, + secretCopy)); apierrors.IsNotFound(err) { + continue + } else { + return err + } + } else { + if err := errors.WithStack( + r.Client.Get(ctx, secretName, secretCopy)); err != nil { + return err + } + } + // Set a unique name for the Secret copy using the original Secret + // name and the Secret projection index number. 
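+	// For example (illustrative only; the exact format is defined by
+	// naming.RestoreConfigCopySuffix): a projected Secret named "pgbackrest-creds" at
+	// projection index 0 might be copied into this cluster's namespace under a name
+	// along the lines of "pgbackrest-creds-restorecopy-0".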
+ secretCopyName := fmt.Sprintf(naming.RestoreConfigCopySuffix, secretProjection.Name, i) + + // set the new name and namespace + secretCopy.ObjectMeta = metav1.ObjectMeta{ + Name: secretCopyName, + Namespace: cluster.Namespace, + } + secretCopy.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + secretCopy.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil(), + ) + secretCopy.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + // this label allows for cleanup when the restore completes + naming.PGBackRestRestoreJobLabels(cluster.Name), + ) + if err := r.setControllerReference(cluster, secretCopy); err != nil { + return err + } + + if err := errors.WithStack(r.apply(ctx, secretCopy)); err != nil { + return err + } + // update the copy of the source PostgresCluster to add the new Secret + // projection(s) to the restore Job + sourceCluster.Spec.Backups.PGBackRest.Configuration[i].Secret.Name = secretCopyName + } + + if sourceCluster.Spec.Backups.PGBackRest.Configuration[i].ConfigMap != nil { + configMapProjection := sourceCluster.Spec.Backups.PGBackRest.Configuration[i].ConfigMap + configMapCopy := &corev1.ConfigMap{} + configMapName := types.NamespacedName{ + Name: configMapProjection.Name, + Namespace: sourceCluster.Namespace, + } + // Get the existing ConfigMap for the copy, if it exists. It **must** + // exist if not configured as optional. + if configMapProjection.Optional != nil && *configMapProjection.Optional { + if err := errors.WithStack(r.Client.Get(ctx, configMapName, + configMapCopy)); apierrors.IsNotFound(err) { + continue + } else { + return err + } + } else { + if err := errors.WithStack( + r.Client.Get(ctx, configMapName, configMapCopy)); err != nil { + return err + } + } + // Set a unique name for the ConfigMap copy using the original ConfigMap + // name and the ConfigMap projection index number. + configMapCopyName := fmt.Sprintf(naming.RestoreConfigCopySuffix, configMapProjection.Name, i) + + // set the new name and namespace + configMapCopy.ObjectMeta = metav1.ObjectMeta{ + Name: configMapCopyName, + Namespace: cluster.Namespace, + } + configMapCopy.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + configMapCopy.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil(), + ) + configMapCopy.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + // this label allows for cleanup when the restore completes + naming.PGBackRestRestoreJobLabels(cluster.Name), + ) + if err := r.setControllerReference(cluster, configMapCopy); err != nil { + return err + } + if err := errors.WithStack(r.apply(ctx, configMapCopy)); err != nil { + return err + } + // update the copy of the source PostgresCluster to add the new ConfigMap + // projection(s) to the restore Job + sourceCluster.Spec.Backups.PGBackRest.Configuration[i].ConfigMap.Name = configMapCopyName + } + } + return nil +} + +// reconcilePGBackRestConfig is responsible for reconciling the pgBackRest ConfigMaps and Secrets. 
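+// Note: only the pgBackRest configuration ConfigMap is applied here; the pgBackRest Secret is
+// reconciled separately by reconcilePGBackRestSecret below. The ConfigMap intent is generated by
+// pgbackrest.CreatePGBackRestConfigMapIntent and applied with server-side apply under controller
+// ownership.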
+func (r *Reconciler) reconcilePGBackRestConfig(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, + repoHostName, configHash, serviceName, serviceNamespace string, + instanceNames []string) error { + + backrestConfig := pgbackrest.CreatePGBackRestConfigMapIntent(postgresCluster, repoHostName, + configHash, serviceName, serviceNamespace, instanceNames) + if err := controllerutil.SetControllerReference(postgresCluster, backrestConfig, + r.Client.Scheme()); err != nil { + return err + } + if err := r.apply(ctx, backrestConfig); err != nil { + return errors.WithStack(err) + } + + return nil +} + +// +kubebuilder:rbac:groups="",resources="secrets",verbs={get} +// +kubebuilder:rbac:groups="",resources="secrets",verbs={create,delete,patch} + +// reconcilePGBackRestSecret reconciles the pgBackRest Secret. +func (r *Reconciler) reconcilePGBackRestSecret(ctx context.Context, + cluster *v1beta1.PostgresCluster, repoHost *appsv1.StatefulSet, + rootCA *pki.RootCertificateAuthority) error { + + intent := &corev1.Secret{ObjectMeta: naming.PGBackRestSecret(cluster)} + intent.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + intent.Type = corev1.SecretTypeOpaque + + intent.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil()) + intent.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestConfigLabels(cluster.Name), + ) + + existing := &corev1.Secret{} + err := errors.WithStack(client.IgnoreNotFound( + r.Client.Get(ctx, client.ObjectKeyFromObject(intent), existing))) + + if err == nil { + err = r.setControllerReference(cluster, intent) + } + if err == nil { + err = pgbackrest.Secret(ctx, cluster, repoHost, rootCA, existing, intent) + } + + // Delete the Secret when it exists and there is nothing we want to keep in it. + if err == nil && len(existing.UID) != 0 && len(intent.Data) == 0 { + err = errors.WithStack(client.IgnoreNotFound( + r.deleteControlled(ctx, cluster, existing))) + } + + // Write the Secret when there is something we want to keep in it. 
+ if err == nil && len(intent.Data) != 0 { + err = errors.WithStack(r.apply(ctx, intent)) + } + return err +} + +// +kubebuilder:rbac:groups="",resources="serviceaccounts",verbs={create,patch} +// +kubebuilder:rbac:groups="rbac.authorization.k8s.io",resources="roles",verbs={create,patch} +// +kubebuilder:rbac:groups="rbac.authorization.k8s.io",resources="rolebindings",verbs={create,patch} + +// reconcileInstanceRBAC reconciles the Role, RoleBinding, and ServiceAccount for +// pgBackRest +func (r *Reconciler) reconcilePGBackRestRBAC(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster) (*corev1.ServiceAccount, error) { + + sa := &corev1.ServiceAccount{ObjectMeta: naming.PGBackRestRBAC(postgresCluster)} + sa.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ServiceAccount")) + + role := &rbacv1.Role{ObjectMeta: naming.PGBackRestRBAC(postgresCluster)} + role.SetGroupVersionKind(rbacv1.SchemeGroupVersion.WithKind("Role")) + + binding := &rbacv1.RoleBinding{ObjectMeta: naming.PGBackRestRBAC(postgresCluster)} + binding.SetGroupVersionKind(rbacv1.SchemeGroupVersion.WithKind("RoleBinding")) + + if err := r.setControllerReference(postgresCluster, sa); err != nil { + return nil, errors.WithStack(err) + } + if err := r.setControllerReference(postgresCluster, binding); err != nil { + return nil, errors.WithStack(err) + } + if err := r.setControllerReference(postgresCluster, role); err != nil { + return nil, errors.WithStack(err) + } + + sa.Annotations = naming.Merge(postgresCluster.Spec.Metadata.GetAnnotationsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil()) + sa.Labels = naming.Merge(postgresCluster.Spec.Metadata.GetLabelsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestLabels(postgresCluster.GetName())) + binding.Annotations = naming.Merge(postgresCluster.Spec.Metadata.GetAnnotationsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil()) + binding.Labels = naming.Merge(postgresCluster.Spec.Metadata.GetLabelsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestLabels(postgresCluster.GetName())) + role.Annotations = naming.Merge(postgresCluster.Spec.Metadata.GetAnnotationsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil()) + role.Labels = naming.Merge(postgresCluster.Spec.Metadata.GetLabelsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestLabels(postgresCluster.GetName())) + + binding.RoleRef = rbacv1.RoleRef{ + APIGroup: rbacv1.SchemeGroupVersion.Group, + Kind: role.Kind, + Name: role.Name, + } + binding.Subjects = []rbacv1.Subject{{ + Kind: sa.Kind, + Name: sa.Name, + }} + role.Rules = pgbackrest.Permissions(postgresCluster) + + if err := r.apply(ctx, sa); err != nil { + return nil, errors.WithStack(err) + } + if err := r.apply(ctx, role); err != nil { + return nil, errors.WithStack(err) + } + if err := r.apply(ctx, binding); err != nil { + return nil, errors.WithStack(err) + } + + return sa, nil +} + +// reconcileDedicatedRepoHost is responsible for reconciling a pgBackRest dedicated repository host +// StatefulSet according to a specific PostgresCluster custom resource. 
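+// For a cluster named "hippo" (a hypothetical example), the dedicated repository host created
+// here is a StatefulSet named "hippo-repo-host", and its readiness is surfaced through the
+// ConditionRepoHostReady status condition set in the deferred function below.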
+func (r *Reconciler) reconcileDedicatedRepoHost(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, + repoResources *RepoResources, + observedInstances *observedInstances) (*appsv1.StatefulSet, error) { + + log := logging.FromContext(ctx).WithValues("reconcileResource", "repoHost") + + // ensure conditions are set before returning as needed by subsequent reconcile functions + defer func() { + repoHostReady := metav1.Condition{ + ObservedGeneration: postgresCluster.GetGeneration(), + Type: ConditionRepoHostReady, + } + if postgresCluster.Status.PGBackRest.RepoHost == nil { + repoHostReady.Status = metav1.ConditionUnknown + repoHostReady.Reason = "RepoHostStatusMissing" + repoHostReady.Message = "pgBackRest dedicated repository host status is missing" + } else if postgresCluster.Status.PGBackRest.RepoHost.Ready { + repoHostReady.Status = metav1.ConditionTrue + repoHostReady.Reason = "RepoHostReady" + repoHostReady.Message = "pgBackRest dedicated repository host is ready" + } else { + repoHostReady.Status = metav1.ConditionFalse + repoHostReady.Reason = "RepoHostNotReady" + repoHostReady.Message = "pgBackRest dedicated repository host is not ready" + } + meta.SetStatusCondition(&postgresCluster.Status.Conditions, repoHostReady) + }() + var isCreate bool + if len(repoResources.hosts) == 0 { + name := fmt.Sprintf("%s-%s", postgresCluster.GetName(), "repo-host") + repoResources.hosts = append(repoResources.hosts, &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: name, + }}) + isCreate = true + } else { + sort.Slice(repoResources.hosts, func(i, j int) bool { + return repoResources.hosts[i].CreationTimestamp.Before( + &repoResources.hosts[j].CreationTimestamp) + }) + } + repoHostName := repoResources.hosts[0].Name + repoHost, err := r.applyRepoHostIntent(ctx, postgresCluster, repoHostName, repoResources, + observedInstances) + if err != nil { + log.Error(err, "reconciling repository host") + return nil, err + } + + postgresCluster.Status.PGBackRest.RepoHost = getRepoHostStatus(repoHost) + + if isCreate { + r.Recorder.Eventf(postgresCluster, corev1.EventTypeNormal, EventRepoHostCreated, + "created pgBackRest repository host %s/%s", repoHost.TypeMeta.Kind, repoHostName) + } + + return repoHost, nil +} + +// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={create,patch,delete} + +// reconcileManualBackup is responsible for reconciling pgBackRest backups that are initiated +// manually by the end-user +func (r *Reconciler) reconcileManualBackup(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, manualBackupJobs []*batchv1.Job, + serviceAccount *corev1.ServiceAccount, instances *observedInstances) error { + + manualAnnotation := postgresCluster.GetAnnotations()[naming.PGBackRestBackup] + manualStatus := postgresCluster.Status.PGBackRest.ManualBackup + + // first update status and cleanup according to any existing manual backup Jobs observed in + // the environment + var currentBackupJob *batchv1.Job + if len(manualBackupJobs) > 0 { + + currentBackupJob = manualBackupJobs[0] + completed := jobCompleted(currentBackupJob) + failed := jobFailed(currentBackupJob) + backupID := currentBackupJob.GetAnnotations()[naming.PGBackRestBackup] + + if manualStatus != nil && manualStatus.ID == backupID { + if completed { + meta.SetStatusCondition(&postgresCluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: postgresCluster.GetGeneration(), + Type: ConditionManualBackupSuccessful, + Status: metav1.ConditionTrue, + Reason: "ManualBackupComplete", + Message: 
"Manual backup completed successfully", + }) + } else if failed { + meta.SetStatusCondition(&postgresCluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: postgresCluster.GetGeneration(), + Type: ConditionManualBackupSuccessful, + Status: metav1.ConditionFalse, + Reason: "ManualBackupFailed", + Message: "Manual backup did not complete successfully", + }) + } + + // update the manual backup status based on the current status of the manual backup Job + manualStatus.StartTime = currentBackupJob.Status.StartTime + manualStatus.CompletionTime = currentBackupJob.Status.CompletionTime + manualStatus.Succeeded = currentBackupJob.Status.Succeeded + manualStatus.Failed = currentBackupJob.Status.Failed + manualStatus.Active = currentBackupJob.Status.Active + if completed || failed { + manualStatus.Finished = true + } + } + + // If the Job is finished with a "completed" or "failure" condition, and the Job is not + // annotated per the current value of the "pgbackrest-backup" annotation, then delete it so + // that a new Job can be generated with the proper (i.e. new) backup ID. This means any + // Jobs that are in progress will complete before being deleted to trigger a new backup + // per a new value for the annotation (unless the user manually deletes the Job). + if completed || failed { + if manualAnnotation != "" && backupID != manualAnnotation { + return errors.WithStack(r.Client.Delete(ctx, currentBackupJob, + client.PropagationPolicy(metav1.DeletePropagationBackground))) + } + } + } + + // pgBackRest connects to a PostgreSQL instance that is not in recovery to + // initiate a backup. Similar to "writable" but not exactly. + clusterWritable := false + for _, instance := range instances.forCluster { + writable, known := instance.IsWritable() + if writable && known { + clusterWritable = true + break + } + } + + // nothing to reconcile if there is no postgres or if a manual backup has not been + // requested + // + // TODO (andrewlecuyer): Since reconciliation doesn't currently occur when a leader is elected, + // the operator may not get another chance to create the backup if a writable instance is not + // detected, and it then returns without requeuing. To ensure this doesn't occur and that the + // operator always has a chance to reconcile when an instance becomes writable, we should watch + // Pods in the cluster for leader election events, and trigger reconciles accordingly. + if !clusterWritable || manualAnnotation == "" || + postgresCluster.Spec.Backups.PGBackRest.Manual == nil { + return nil + } + + // if there is an existing status, see if a new backup id has been provided, and if so reset + // the status and proceed with reconciling a new backup + if manualStatus == nil || manualStatus.ID != manualAnnotation { + manualStatus = &v1beta1.PGBackRestJobStatus{ + ID: manualAnnotation, + } + // Remove an existing manual backup condition if present. It will be + // created again as needed based on the newly reconciled backup Job. 
+ meta.RemoveStatusCondition(&postgresCluster.Status.Conditions, + ConditionManualBackupSuccessful) + + postgresCluster.Status.PGBackRest.ManualBackup = manualStatus + } + + // if the status shows the Job is no longer in progress, then simply exit (which means a Job + // that has reached a "completed" or "failed" status is no longer reconciled) + if manualStatus != nil && manualStatus.Finished { + return nil + } + + // determine if the dedicated repository host is ready using the repo host ready + // condition, and return if not + repoCondition := meta.FindStatusCondition(postgresCluster.Status.Conditions, ConditionRepoHostReady) + if repoCondition == nil || repoCondition.Status != metav1.ConditionTrue { + return nil + } + + // Determine if the replica create backup is complete and return if not. This allows for proper + // orchestration of backup Jobs since only one backup can be run at a time. + backupCondition := meta.FindStatusCondition(postgresCluster.Status.Conditions, + ConditionReplicaCreate) + if backupCondition == nil || backupCondition.Status != metav1.ConditionTrue { + return nil + } + + // Verify that status exists for the repo configured for the manual backup, and that a stanza + // has been created, before proceeding. If either conditions are not true, then simply return + // without requeuing and record and event (subsequent events, e.g. successful stanza creation, + // writing of the proper repo status, adding a missing repo, etc. will trigger the reconciles + // needed to try again). + var statusFound, stanzaCreated bool + repoName := postgresCluster.Spec.Backups.PGBackRest.Manual.RepoName + for _, repo := range postgresCluster.Status.PGBackRest.Repos { + if repo.Name == repoName { + statusFound = true + stanzaCreated = repo.StanzaCreated + } + } + if !statusFound { + r.Recorder.Eventf(postgresCluster, corev1.EventTypeWarning, "InvalidBackupRepo", + "Unable to find status for %q as configured for a manual backup. Please ensure "+ + "this repo is defined in the spec.", repoName) + return nil + } + if !stanzaCreated { + r.Recorder.Eventf(postgresCluster, corev1.EventTypeWarning, "StanzaNotCreated", + "Stanza not created for %q as specified for a manual backup", repoName) + return nil + } + + var repo v1beta1.PGBackRestRepo + for i := range postgresCluster.Spec.Backups.PGBackRest.Repos { + if postgresCluster.Spec.Backups.PGBackRest.Repos[i].Name == repoName { + repo = postgresCluster.Spec.Backups.PGBackRest.Repos[i] + } + } + if repo.Name == "" { + return errors.Errorf("repo %q is not defined for this cluster", repoName) + } + + // Users should specify the repo for the command using the "manual.repoName" field in the spec, + // and not using the "--repo" option in the "manual.options" field. Therefore, record a + // warning event and return if a "--repo" option is found. Reconciliation will then be + // reattempted when "--repo" is removed from "manual.options" and the spec is updated. + // Since '--repo' can be set with or without an equals ('=') sign, we check for both + // usage patterns. 
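+	// For example (illustrative): options such as "--repo=1" or "--repo 1" in manual.options
+	// are rejected below with a warning event; the repository should instead be selected via
+	// spec.backups.pgbackrest.manual.repoName (e.g. "repo1").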
+	backupOpts := postgresCluster.Spec.Backups.PGBackRest.Manual.Options
+	for _, opt := range backupOpts {
+		if strings.Contains(opt, "--repo=") || strings.Contains(opt, "--repo ") {
+			r.Recorder.Eventf(postgresCluster, corev1.EventTypeWarning, "InvalidManualBackup",
+				"Option '--repo' is not allowed for %q: please use the 'repoName' field instead.",
+				repoName)
+			return nil
+		}
+	}
+
+	// create the backup Job
+	backupJob := &batchv1.Job{}
+	backupJob.ObjectMeta = naming.PGBackRestBackupJob(postgresCluster)
+	if currentBackupJob != nil {
+		backupJob.ObjectMeta.Name = currentBackupJob.ObjectMeta.Name
+	}
+
+	var labels, annotations map[string]string
+	labels = naming.Merge(postgresCluster.Spec.Metadata.GetLabelsOrNil(),
+		postgresCluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(),
+		naming.PGBackRestBackupJobLabels(postgresCluster.GetName(), repoName,
+			naming.BackupManual))
+	annotations = naming.Merge(postgresCluster.Spec.Metadata.GetAnnotationsOrNil(),
+		postgresCluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil(),
+		map[string]string{
+			naming.PGBackRestBackup: manualAnnotation,
+		})
+	backupJob.ObjectMeta.Labels = labels
+	backupJob.ObjectMeta.Annotations = annotations
+
+	spec := generateBackupJobSpecIntent(ctx, postgresCluster, repo,
+		serviceAccount.GetName(), labels, annotations, backupOpts...)
+
+	backupJob.Spec = *spec
+
+	// set gvk and ownership refs
+	backupJob.SetGroupVersionKind(batchv1.SchemeGroupVersion.WithKind("Job"))
+	if err := controllerutil.SetControllerReference(postgresCluster, backupJob,
+		r.Client.Scheme()); err != nil {
+		return errors.WithStack(err)
+	}
+
+	// server-side apply the backup Job intent
+	if err := r.apply(ctx, backupJob); err != nil {
+		return errors.WithStack(err)
+	}
+
+	return nil
+}
+
+// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={create,patch,delete}
+
+// reconcileReplicaCreateBackup is responsible for reconciling a full pgBackRest backup for the
+// cluster as required to create replicas
+func (r *Reconciler) reconcileReplicaCreateBackup(ctx context.Context,
+	postgresCluster *v1beta1.PostgresCluster, instances *observedInstances,
+	replicaCreateBackupJobs []*batchv1.Job,
+	serviceAccount *corev1.ServiceAccount, configHash string,
+	replicaCreateRepo v1beta1.PGBackRestRepo) error {
+
+	var replicaCreateRepoStatus *v1beta1.RepoStatus
+	for i, repo := range postgresCluster.Status.PGBackRest.Repos {
+		if repo.Name == replicaCreateRepo.Name {
+			replicaCreateRepoStatus = &postgresCluster.Status.PGBackRest.Repos[i]
+			break
+		}
+	}
+
+	// ensure condition is set before returning as needed by subsequent reconcile functions
+	defer func() {
+		replicaCreate := metav1.Condition{
+			ObservedGeneration: postgresCluster.GetGeneration(),
+			Type:               ConditionReplicaCreate,
+		}
+		if replicaCreateRepoStatus == nil {
+			replicaCreate.Status = metav1.ConditionUnknown
+			replicaCreate.Reason = "RepoStatusMissing"
+			replicaCreate.Message = "Status is missing for the replica create repo"
+		} else if replicaCreateRepoStatus.ReplicaCreateBackupComplete {
+			replicaCreate.Status = metav1.ConditionTrue
+			replicaCreate.Reason = "RepoBackupComplete"
+			replicaCreate.Message = "pgBackRest replica creation is now possible"
+		} else {
+			replicaCreate.Status = metav1.ConditionFalse
+			replicaCreate.Reason = "RepoBackupNotComplete"
+			replicaCreate.Message = "pgBackRest replica creation is not currently " +
+				"possible"
+		}
+		meta.SetStatusCondition(&postgresCluster.Status.Conditions, replicaCreate)
+	}()
+
+	// pgBackRest connects to a PostgreSQL instance
that is not in recovery to + // initiate a backup. Similar to "writable" but not exactly. + clusterWritable := false + for _, instance := range instances.forCluster { + writable, known := instance.IsWritable() + if writable && known { + clusterWritable = true + break + } + } + + // return early when there is no postgres, no repo, or the backup is already complete. + // + // TODO (andrewlecuyer): Since reconciliation doesn't currently occur when a leader is elected, + // the operator may not get another chance to create the backup if a writable instance is not + // detected, and it then returns without requeuing. To ensure this doesn't occur and that the + // operator always has a chance to reconcile when an instance becomes writable, we should watch + // Pods in the cluster for leader election events, and trigger reconciles accordingly. + if !clusterWritable || replicaCreateRepoStatus == nil || replicaCreateRepoStatus.ReplicaCreateBackupComplete { + return nil + } + + // determine if the replica create repo is ready using the "PGBackRestReplicaRepoReady" condition + var replicaRepoReady bool + condition := meta.FindStatusCondition(postgresCluster.Status.Conditions, ConditionReplicaRepoReady) + if condition != nil { + replicaRepoReady = (condition.Status == metav1.ConditionTrue) + } + + // determine if the dedicated repository host is ready using the repo host ready status + var dedicatedRepoReady bool + condition = meta.FindStatusCondition(postgresCluster.Status.Conditions, ConditionRepoHostReady) + if condition != nil { + dedicatedRepoReady = (condition.Status == metav1.ConditionTrue) + } + + // grab the current job if one exists, and perform any required Job cleanup or update the + // PostgresCluster status as required + var job *batchv1.Job + if len(replicaCreateBackupJobs) > 0 { + job = replicaCreateBackupJobs[0] + + failed := jobFailed(job) + completed := jobCompleted(job) + + // determine if the replica creation repo has changed + replicaCreateRepoChanged := true + if replicaCreateRepo.Name == job.GetLabels()[naming.LabelPGBackRestRepo] { + replicaCreateRepoChanged = false + } + + // Delete an existing Job (whether running or not) under the following conditions: + // - The job has failed. The Job will be deleted and recreated to try again. + // - The replica creation repo has changed since the Job was created. Delete and recreate + // with the Job with the proper repo configured. + // - The "config hash" annotation has changed, indicating a configuration change has been + // made in the spec (specifically a change to the config for an external repo). Delete + // and recreate the Job with proper hash per the current config. 
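+	// Deleting the Job and returning early (rather than patching it) lets the next
+	// reconcile create a fresh Job, since most of an existing Job's spec, including
+	// its Pod template, is immutable once created.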
+ if failed || replicaCreateRepoChanged || + (job.GetAnnotations()[naming.PGBackRestConfigHash] != configHash) { + if err := r.Client.Delete(ctx, job, + client.PropagationPolicy(metav1.DeletePropagationBackground)); err != nil { + return errors.WithStack(err) + } + return nil + } + + // if the Job completed then update status and return + if completed { + replicaCreateRepoStatus.ReplicaCreateBackupComplete = true + return nil + } + } + + // return if no job has been created and the replica repo or the dedicated + // repo host is not ready + if job == nil && (!dedicatedRepoReady || !replicaRepoReady) { + return nil + } + + // create the backup Job, and populate ObjectMeta based on whether or not a Job already exists + backupJob := &batchv1.Job{} + backupJob.ObjectMeta = naming.PGBackRestBackupJob(postgresCluster) + if job != nil { + backupJob.ObjectMeta.Name = job.ObjectMeta.Name + } + + var labels, annotations map[string]string + labels = naming.Merge(postgresCluster.Spec.Metadata.GetLabelsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestBackupJobLabels(postgresCluster.GetName(), + postgresCluster.Spec.Backups.PGBackRest.Repos[0].Name, naming.BackupReplicaCreate)) + annotations = naming.Merge(postgresCluster.Spec.Metadata.GetAnnotationsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil(), + map[string]string{ + naming.PGBackRestConfigHash: configHash, + }) + backupJob.ObjectMeta.Labels = labels + backupJob.ObjectMeta.Annotations = annotations + + spec := generateBackupJobSpecIntent(ctx, postgresCluster, replicaCreateRepo, + serviceAccount.GetName(), labels, annotations) + + backupJob.Spec = *spec + + // set gvk and ownership refs + backupJob.SetGroupVersionKind(batchv1.SchemeGroupVersion.WithKind("Job")) + if err := controllerutil.SetControllerReference(postgresCluster, backupJob, + r.Client.Scheme()); err != nil { + return errors.WithStack(err) + } + + if err := r.apply(ctx, backupJob); err != nil { + return errors.WithStack(err) + } + + return nil +} + +// reconcileRepos is responsible for reconciling any pgBackRest repositories configured +// for the cluster +func (r *Reconciler) reconcileRepos(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, extConfigHashes map[string]string, + repoResources *RepoResources) (v1beta1.PGBackRestRepo, error) { + + log := logging.FromContext(ctx).WithValues("reconcileResource", "repoVolume") + + errors := []error{} + errMsg := "reconciling repository volume" + repoVols := []*corev1.PersistentVolumeClaim{} + var replicaCreateRepo v1beta1.PGBackRestRepo + for i, repo := range postgresCluster.Spec.Backups.PGBackRest.Repos { + // the repo at index 0 is the replica creation repo + if i == 0 { + replicaCreateRepo = postgresCluster.Spec.Backups.PGBackRest.Repos[i] + } + // we only care about reconciling repo volumes, so ignore everything else + if repo.Volume == nil { + continue + } + repo, err := r.applyRepoVolumeIntent(ctx, postgresCluster, repo.Volume.VolumeClaimSpec, + repo.Name, repoResources) + if err != nil { + log.Error(err, errMsg) + errors = append(errors, err) + continue + } + if repo != nil { + repoVols = append(repoVols, repo) + } + } + + postgresCluster.Status.PGBackRest.Repos = + getRepoVolumeStatus(postgresCluster.Status.PGBackRest.Repos, repoVols, extConfigHashes, + replicaCreateRepo.Name) + + return replicaCreateRepo, utilerrors.NewAggregate(errors) +} + +// +kubebuilder:rbac:groups="",resources="pods",verbs={get,list} +// 
+kubebuilder:rbac:groups="",resources="pods/exec",verbs={create} + +// reconcileStanzaCreate is responsible for ensuring stanzas are properly created for the +// pgBackRest repositories configured for a PostgresCluster. If the bool returned from this +// function is false, this indicates that a pgBackRest config hash mismatch was identified that +// prevented the "pgbackrest stanza-create" command from running (with a config has mismatch +// indicating that pgBackRest configuration as stored in the pgBackRest ConfigMap has not yet +// propagated to the Pod). +func (r *Reconciler) reconcileStanzaCreate(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, + instances *observedInstances, configHash string) (bool, error) { + + // ensure conditions are set before returning as needed by subsequent reconcile functions + defer func() { + var replicaCreateRepoStatus *v1beta1.RepoStatus + if len(postgresCluster.Spec.Backups.PGBackRest.Repos) == 0 { + return + } + replicaCreateRepoName := postgresCluster.Spec.Backups.PGBackRest.Repos[0].Name + for i, repo := range postgresCluster.Status.PGBackRest.Repos { + if repo.Name == replicaCreateRepoName { + replicaCreateRepoStatus = &postgresCluster.Status.PGBackRest.Repos[i] + break + } + } + + replicaCreateRepoReady := metav1.Condition{ + ObservedGeneration: postgresCluster.GetGeneration(), + Type: ConditionReplicaRepoReady, + } + if replicaCreateRepoStatus == nil { + replicaCreateRepoReady.Status = metav1.ConditionUnknown + replicaCreateRepoReady.Reason = "RepoStatusMissing" + replicaCreateRepoReady.Message = "Status is missing for the replica creation repo" + } else if replicaCreateRepoStatus.StanzaCreated { + replicaCreateRepoReady.Status = metav1.ConditionTrue + replicaCreateRepoReady.Reason = "StanzaCreated" + replicaCreateRepoReady.Message = "pgBackRest replica create repo is ready for " + + "backups" + } else { + replicaCreateRepoReady.Status = metav1.ConditionFalse + replicaCreateRepoReady.Reason = "StanzaNotCreated" + replicaCreateRepoReady.Message = "pgBackRest replica create repo is not ready " + + "for backups" + } + meta.SetStatusCondition(&postgresCluster.Status.Conditions, replicaCreateRepoReady) + }() + + // determine if the cluster has been initialized. pgBackRest compares the + // local PostgreSQL data directory to information it sees in a PostgreSQL + // instance that is not in recovery. Similar to "writable" but not exactly. + // + // also, capture the name of the writable instance, since that instance (i.e. + // the primary) is where the stanza create command will always be run. This + // is possible as of the following change in pgBackRest v2.33: + // https://github.com/pgbackrest/pgbackrest/pull/1326. + clusterWritable := false + var writableInstanceName string + for _, instance := range instances.forCluster { + writable, known := instance.IsWritable() + if writable && known { + clusterWritable = true + writableInstanceName = instance.Name + "-0" + break + } + } + + stanzasCreated := true + for _, repoStatus := range postgresCluster.Status.PGBackRest.Repos { + if !repoStatus.StanzaCreated { + stanzasCreated = false + break + } + } + + // returns if the cluster is not yet writable, or if it has been initialized and + // all stanzas have already been created successfully + // + // TODO (andrewlecuyer): Since reconciliation doesn't currently occur when a leader is elected, + // the operator may not get another chance to create the stanza if a writable instance is not + // detected, and it then returns without requeuing. 
To ensure this doesn't occur and that the + // operator always has a chance to reconcile when an instance becomes writable, we should watch + // Pods in the cluster for leader election events, and trigger reconciles accordingly. + if !clusterWritable || stanzasCreated { + return false, nil + } + + // create a pgBackRest executor and attempt stanza creation + exec := func(ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, + command ...string) error { + return r.PodExec(ctx, postgresCluster.GetNamespace(), writableInstanceName, + naming.ContainerDatabase, stdin, stdout, stderr, command...) + } + + // Always attempt to create pgBackRest stanza first + configHashMismatch, err := pgbackrest.Executor(exec).StanzaCreateOrUpgrade(ctx, configHash, postgresCluster) + if err != nil { + // record and log any errors resulting from running the stanza-create command + r.Recorder.Event(postgresCluster, corev1.EventTypeWarning, EventUnableToCreateStanzas, + err.Error()) + + return false, errors.WithStack(err) + } + // Don't record event or return an error if configHashMismatch is true, since this just means + // configuration changes in ConfigMaps/Secrets have not yet propagated to the container. + // Therefore, just log an an info message and return an error to requeue and try again. + if configHashMismatch { + + return true, nil + } + + // record an event indicating successful stanza creation + r.Recorder.Event(postgresCluster, corev1.EventTypeNormal, EventStanzasCreated, + "pgBackRest stanza creation completed successfully") + + // if no errors then stanza(s) created successfully + for i := range postgresCluster.Status.PGBackRest.Repos { + postgresCluster.Status.PGBackRest.Repos[i].StanzaCreated = true + } + + return false, nil +} + +// getRepoHostStatus is responsible for returning the pgBackRest status for the +// provided pgBackRest repository host +func getRepoHostStatus(repoHost *appsv1.StatefulSet) *v1beta1.RepoHostStatus { + + repoHostStatus := &v1beta1.RepoHostStatus{} + + repoHostStatus.TypeMeta = repoHost.TypeMeta + + if repoHost.Status.ReadyReplicas > 0 { + repoHostStatus.Ready = true + } else { + repoHostStatus.Ready = false + } + + return repoHostStatus +} + +// getRepoVolumeStatus is responsible for creating an array of repo statuses based on the +// existing/current status for any repos in the cluster, the repository volumes +// (i.e. PVCs) reconciled for the cluster, and the hashes calculated for the configuration for any +// external repositories defined for the cluster. +func getRepoVolumeStatus(repoStatus []v1beta1.RepoStatus, repoVolumes []*corev1.PersistentVolumeClaim, + configHashes map[string]string, replicaCreateRepoName string) []v1beta1.RepoStatus { + + // the new repository status that will be generated and returned + updatedRepoStatus := []v1beta1.RepoStatus{} + + // Update the repo status based on the repo volumes (PVCs) that were reconciled. This includes + // updating the status for any existing repository volumes, and adding status for any new + // repository volumes. 
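+	// Illustrative only: a reconciled volume-backed repo produces a status entry
+	// resembling the following (values are examples, not real output):
+	//
+	//   v1beta1.RepoStatus{Name: "repo1", Bound: true, VolumeName: "pvc-0123", StanzaCreated: false}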
+ for _, rv := range repoVolumes { + newRepoVolStatus := true + repoName := rv.Labels[naming.LabelPGBackRestRepo] + for _, rs := range repoStatus { + // treat as new status if contains properties of a cloud (s3, gcr or azure) repo + if rs.Name == repoName && rs.RepoOptionsHash == "" { + newRepoVolStatus = false + + // if we find a status with ReplicaCreateBackupComplete set to "true" but the repo name + // for that status does not match the current replica create repo name, then reset + // ReplicaCreateBackupComplete and StanzaCreate back to false + if (rs.ReplicaCreateBackupComplete && (rs.Name != replicaCreateRepoName)) || + rs.RepoOptionsHash != "" { + rs.ReplicaCreateBackupComplete = false + rs.RepoOptionsHash = "" + } + + // update binding info if needed + if rs.Bound != (rv.Status.Phase == corev1.ClaimBound) { + rs.Bound = (rv.Status.Phase == corev1.ClaimBound) + } + + // if a different volume is detected, reset the stanza and replica create backup status + // so that both are run again. + if rs.VolumeName != "" && rs.VolumeName != rv.Spec.VolumeName { + rs.StanzaCreated = false + rs.ReplicaCreateBackupComplete = false + } + rs.VolumeName = rv.Spec.VolumeName + + updatedRepoStatus = append(updatedRepoStatus, rs) + break + } + } + if newRepoVolStatus { + updatedRepoStatus = append(updatedRepoStatus, v1beta1.RepoStatus{ + Bound: (rv.Status.Phase == corev1.ClaimBound), + Name: repoName, + VolumeName: rv.Spec.VolumeName, + }) + } + } + + // Update the repo status based on the configuration hashes for any external repositories + // configured for the cluster (e.g. Azure, GCS or S3 repositories). This includes + // updating the status for any existing external repositories, and adding status for any new + // external repositories. + for repoName, hash := range configHashes { + newExtRepoStatus := true + for _, rs := range repoStatus { + // treat as new status if contains properties of a "volume" repo + if rs.Name == repoName && !rs.Bound && rs.VolumeName == "" { + newExtRepoStatus = false + + // if we find a status with ReplicaCreateBackupComplete set to "true" but the repo name + // for that status does not match the current replica create repo name, then reset + // ReplicaCreateBackupComplete back to false + if rs.ReplicaCreateBackupComplete && (rs.Name != replicaCreateRepoName) { + rs.ReplicaCreateBackupComplete = false + } + + // Update the hash if needed. 
Setting StanzaCreated to "false" will force another + // run of the pgBackRest stanza-create command, while also setting + // ReplicaCreateBackupComplete to false (this will result in a new replica creation + // backup if this is the replica creation repo) + if rs.RepoOptionsHash != hash { + rs.RepoOptionsHash = hash + rs.StanzaCreated = false + rs.ReplicaCreateBackupComplete = false + } + + updatedRepoStatus = append(updatedRepoStatus, rs) + break + } + } + if newExtRepoStatus { + updatedRepoStatus = append(updatedRepoStatus, v1beta1.RepoStatus{ + Name: repoName, + RepoOptionsHash: hash, + }) + } + } + + // sort to ensure repo status always displays in a consistent order according to repo name + sort.Slice(updatedRepoStatus, func(i, j int) bool { + return updatedRepoStatus[i].Name < updatedRepoStatus[j].Name + }) + + return updatedRepoStatus +} + +// reconcileScheduledBackups is responsible for reconciling pgBackRest backup +// schedules configured in the cluster definition +func (r *Reconciler) reconcileScheduledBackups( + ctx context.Context, cluster *v1beta1.PostgresCluster, sa *corev1.ServiceAccount, + cronjobs []*batchv1.CronJob, +) bool { + log := logging.FromContext(ctx).WithValues("reconcileResource", "repoCronJob") + // requeue if there is an error during creation + var requeue bool + + for _, repo := range cluster.Spec.Backups.PGBackRest.Repos { + // if the repo level backup schedules block has not been created, + // there are no schedules defined + if repo.BackupSchedules != nil { + // next if the repo level schedule is not nil, create the CronJob. + if repo.BackupSchedules.Full != nil { + if err := r.reconcilePGBackRestCronJob(ctx, cluster, repo, + full, repo.BackupSchedules.Full, sa, cronjobs); err != nil { + log.Error(err, "unable to reconcile Full backup for "+repo.Name) + requeue = true + } + } + if repo.BackupSchedules.Differential != nil { + if err := r.reconcilePGBackRestCronJob(ctx, cluster, repo, + differential, repo.BackupSchedules.Differential, sa, cronjobs); err != nil { + log.Error(err, "unable to reconcile Differential backup for "+repo.Name) + requeue = true + } + } + if repo.BackupSchedules.Incremental != nil { + if err := r.reconcilePGBackRestCronJob(ctx, cluster, repo, + incremental, repo.BackupSchedules.Incremental, sa, cronjobs); err != nil { + log.Error(err, "unable to reconcile Incremental backup for "+repo.Name) + requeue = true + } + } + } + } + return requeue +} + +// +kubebuilder:rbac:groups="batch",resources="cronjobs",verbs={create,patch} + +// reconcilePGBackRestCronJob creates the CronJob for the given repo, pgBackRest +// backup type and schedule +func (r *Reconciler) reconcilePGBackRestCronJob( + ctx context.Context, cluster *v1beta1.PostgresCluster, repo v1beta1.PGBackRestRepo, + backupType string, schedule *string, serviceAccount *corev1.ServiceAccount, + cronjobs []*batchv1.CronJob, +) error { + + log := logging.FromContext(ctx).WithValues("reconcileResource", "repoCronJob") + + annotations := naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil()) + labels := naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestCronJobLabels(cluster.Name, repo.Name, backupType)) + objectmeta := naming.PGBackRestCronJob(cluster, backupType, repo.Name) + + // Look for an existing CronJob by the associated Labels. If one exists, + // update the ObjectMeta accordingly. 
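+	// Reusing the existing CronJob's name here keeps the server-side apply below from
+	// creating a second CronJob whenever the labels match but the generated name differs.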
+ for _, cronjob := range cronjobs { + // ignore CronJobs that are terminating + if cronjob.GetDeletionTimestamp() != nil { + continue + } + + if cronjob.GetLabels()[naming.LabelCluster] == cluster.Name && + cronjob.GetLabels()[naming.LabelPGBackRestCronJob] == backupType && + cronjob.GetLabels()[naming.LabelPGBackRestRepo] == repo.Name { + objectmeta = metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: cronjob.Name, + } + } + } + + objectmeta.Labels = labels + objectmeta.Annotations = annotations + + // if the cluster isn't bootstrapped, return + if !patroni.ClusterBootstrapped(cluster) { + return nil + } + + // Determine if the replica create backup is complete and return if not. This allows for proper + // orchestration of backup Jobs since only one backup can be run at a time. + condition := meta.FindStatusCondition(cluster.Status.Conditions, + ConditionReplicaCreate) + if condition == nil || condition.Status != metav1.ConditionTrue { + return nil + } + + // Verify that status exists for the repo configured for the scheduled backup, and that a stanza + // has been created, before proceeding. If either conditions are not true, then simply return + // without requeuing and record and event (subsequent events, e.g. successful stanza creation, + // writing of the proper repo status, adding a missing reop, etc. will trigger the reconciles + // needed to try again). + var statusFound, stanzaCreated bool + for _, repoStatus := range cluster.Status.PGBackRest.Repos { + if repoStatus.Name == repo.Name { + statusFound = true + stanzaCreated = repoStatus.StanzaCreated + } + } + if !statusFound { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "InvalidBackupRepo", + "Unable to find status for %q as configured for a scheduled backup. Please ensure "+ + "this repo is defined in the spec.", repo.Name) + return nil + } + if !stanzaCreated { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "StanzaNotCreated", + "Stanza not created for %q as specified for a scheduled backup", repo.Name) + return nil + } + + // set backup type (i.e. "full", "diff", "incr") + backupOpts := []string{"--type=" + backupType} + + jobSpec := generateBackupJobSpecIntent(ctx, cluster, repo, + serviceAccount.GetName(), labels, annotations, backupOpts...) + + // Suspend cronjobs when shutdown or read-only. Any jobs that have already + // started will continue. + // - https://docs.k8s.io/reference/kubernetes-api/workload-resources/cron-job-v1beta1/#CronJobSpec + suspend := (cluster.Spec.Shutdown != nil && *cluster.Spec.Shutdown) || + (cluster.Spec.Standby != nil && cluster.Spec.Standby.Enabled) + + pgBackRestCronJob := &batchv1.CronJob{ + ObjectMeta: objectmeta, + Spec: batchv1.CronJobSpec{ + Schedule: *schedule, + Suspend: &suspend, + ConcurrencyPolicy: batchv1.ForbidConcurrent, + JobTemplate: batchv1.JobTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: annotations, + Labels: labels, + }, + Spec: *jobSpec, + }, + }, + } + + // Set the image pull secrets, if any exist. 
+ // This is set here rather than using the service account due to the lack + // of propagation to existing pods when the CRD is updated: + // https://github.com/kubernetes/kubernetes/issues/88456 + pgBackRestCronJob.Spec.JobTemplate.Spec.Template.Spec.ImagePullSecrets = + cluster.Spec.ImagePullSecrets + + // set metadata + pgBackRestCronJob.SetGroupVersionKind(batchv1.SchemeGroupVersion.WithKind("CronJob")) + err := errors.WithStack(r.setControllerReference(cluster, pgBackRestCronJob)) + + if err == nil { + err = r.apply(ctx, pgBackRestCronJob) + } + if err != nil { + // record and log any errors resulting from trying to create the pgBackRest backup CronJob + r.Recorder.Event(cluster, corev1.EventTypeWarning, EventUnableToCreatePGBackRestCronJob, + err.Error()) + log.Error(err, "error when attempting to create pgBackRest CronJob") + } + return err +} + +// BackupsEnabled checks the state of the backups (i.e., if backups are in the spec, +// if a repo-host StatefulSet exists, if the annotation permitting backup deletion exists) +// and determines whether reconciliation is allowed. +// Reconciliation of backup-related Kubernetes objects is paused if +// - a user created a cluster with backups; +// - the cluster is updated to remove backups; +// - the annotation authorizing that removal is missing. +// +// This function also returns whether the spec has a defined backups or not. +func (r *Reconciler) BackupsEnabled( + ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, +) ( + backupsSpecFound bool, + backupsReconciliationAllowed bool, + err error, +) { + specFound, stsNotFound, annotationFound, err := r.ObserveBackupUniverse(ctx, postgresCluster) + + switch { + case err != nil: + case specFound: + backupsSpecFound = true + backupsReconciliationAllowed = true + case annotationFound || stsNotFound: + backupsReconciliationAllowed = true + case !annotationFound && !stsNotFound: + // Destroying backups is a two key operation: + // 1. You must remove the backups section of the spec. + // 2. You must apply an annotation to the cluster. + // The existence of a StatefulSet without the backups spec is + // evidence of key 1 being turned without key 2 being turned + // -- block reconciliation until the annotation is added. + backupsReconciliationAllowed = false + default: + backupsReconciliationAllowed = false + } + return backupsSpecFound, backupsReconciliationAllowed, err +} + +// ObserveBackupUniverse returns +// - whether the spec has backups defined; +// - whether the repo-host statefulset exists; +// - whether the cluster has the annotation authorizing backup removal. +func (r *Reconciler) ObserveBackupUniverse(ctx context.Context, + postgresCluster *v1beta1.PostgresCluster, +) ( + backupsSpecFound bool, + repoHostStatefulSetNotFound bool, + backupsRemovalAnnotationFound bool, + err error, +) { + + // Does the cluster have a blank Backups section + backupsSpecFound = !reflect.DeepEqual(postgresCluster.Spec.Backups, v1beta1.Backups{PGBackRest: v1beta1.PGBackRestArchive{}}) + + // Does the repo-host StatefulSet exist? 
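+	// The dedicated repository host StatefulSet is looked up by the
+	// "<clusterName>-repo-host" naming convention used when it is created.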
+ name := fmt.Sprintf("%s-%s", postgresCluster.GetName(), "repo-host") + existing := &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: postgresCluster.Namespace, + Name: name, + }, + } + err = errors.WithStack( + r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing)) + repoHostStatefulSetNotFound = apierrors.IsNotFound(err) + + // If we have an error that is not related to a missing repo-host StatefulSet, + // we return an error and expect the calling function to correctly stop processing. + if err != nil && !repoHostStatefulSetNotFound { + return true, false, false, err + } + + backupsRemovalAnnotationFound = authorizeBackupRemovalAnnotationPresent(postgresCluster) + + // If we have reached this point, the err is either nil or an IsNotFound error + // which we do not care about; hence, pass nil rather than the err + return backupsSpecFound, repoHostStatefulSetNotFound, backupsRemovalAnnotationFound, nil +} + +func authorizeBackupRemovalAnnotationPresent(postgresCluster *v1beta1.PostgresCluster) bool { + annotations := postgresCluster.GetAnnotations() + for annotation := range annotations { + if annotation == naming.AuthorizeBackupRemovalAnnotation { + return annotations[naming.AuthorizeBackupRemovalAnnotation] == "true" + } + } + return false +} diff --git a/internal/controller/postgrescluster/pgbackrest_test.go b/internal/controller/postgrescluster/pgbackrest_test.go new file mode 100644 index 0000000000..8e34dabb5e --- /dev/null +++ b/internal/controller/postgrescluster/pgbackrest_test.go @@ -0,0 +1,3902 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "errors" + "fmt" + "io" + "os" + "strconv" + "strings" + "testing" + "time" + + "go.opentelemetry.io/otel" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + rbacv1 "k8s.io/api/rbac/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/selection" + "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/rand" + "k8s.io/apimachinery/pkg/util/wait" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pgbackrest" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +var testCronSchedule string = "*/15 * * * *" + +func fakePostgresCluster(clusterName, namespace, clusterUID string, + includeDedicatedRepo bool) *v1beta1.PostgresCluster { + postgresCluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName, + Namespace: namespace, + UID: types.UID(clusterUID), + }, + Spec: v1beta1.PostgresClusterSpec{ + Port: initialize.Int32(5432), + Shutdown: initialize.Bool(false), 
+ PostgresVersion: 13, + ImagePullSecrets: []corev1.LocalObjectReference{{ + Name: "myImagePullSecret"}, + }, + Image: "example.com/crunchy-postgres-ha:test", + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + Name: "instance1", + DataVolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + }}, + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Image: "example.com/crunchy-pgbackrest:test", + Jobs: &v1beta1.BackupJobs{ + PriorityClassName: initialize.String("some-priority-class"), + }, + Global: map[string]string{"repo2-test": "config", + "repo3-test": "config", "repo4-test": "config"}, + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + S3: &v1beta1.RepoS3{ + Bucket: "bucket", + Endpoint: "endpoint", + Region: "region", + }, + }, { + Name: "repo2", + Azure: &v1beta1.RepoAzure{ + Container: "container", + }, + }, { + Name: "repo3", + GCS: &v1beta1.RepoGCS{ + Bucket: "bucket", + }, + }, { + Name: "repo4", + S3: &v1beta1.RepoS3{ + Bucket: "bucket", + Endpoint: "endpoint", + Region: "region", + }, + }}, + }, + }, + }, + } + + if includeDedicatedRepo { + postgresCluster.Spec.Backups.PGBackRest.Repos[0] = v1beta1.PGBackRestRepo{ + Name: "repo1", + Volume: &v1beta1.RepoPVC{ + VolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: map[corev1.ResourceName]resource.Quantity{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + }, + } + postgresCluster.Spec.Backups.PGBackRest.RepoHost = &v1beta1.PGBackRestRepoHost{ + PriorityClassName: initialize.String("some-priority-class"), + Resources: corev1.ResourceRequirements{}, + Affinity: &corev1.Affinity{}, + Tolerations: []corev1.Toleration{ + {Key: "woot"}, + }, + TopologySpreadConstraints: []corev1.TopologySpreadConstraint{ + { + MaxSkew: int32(1), + TopologyKey: "fakekey", + WhenUnsatisfiable: corev1.ScheduleAnyway, + LabelSelector: &metav1.LabelSelector{ + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: naming.LabelCluster, Operator: "In", Values: []string{"somename"}}, + {Key: naming.LabelData, Operator: "Exists"}, + }, + }, + }, + }, + } + } + // always add schedule info to the first repo + postgresCluster.Spec.Backups.PGBackRest.Repos[0].BackupSchedules = &v1beta1.PGBackRestBackupSchedules{ + Full: &testCronSchedule, + Differential: &testCronSchedule, + Incremental: &testCronSchedule, + } + + return postgresCluster +} + +func fakeObservedCronJobs() []*batchv1.CronJob { + return []*batchv1.CronJob{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "fake-cronjob", + }}} +} + +func TestReconcilePGBackRest(t *testing.T) { + // Garbage collector cleans up test resources before the test completes + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("USE_EXISTING_CLUSTER: Test fails due to garbage collection") + } + + cfg, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 2) + + r := &Reconciler{} + ctx, cancel := setupManager(t, cfg, func(mgr manager.Manager) { + r = &Reconciler{ + Client: mgr.GetClient(), + Recorder: mgr.GetEventRecorderFor(ControllerName), + Tracer: otel.Tracer(ControllerName), + Owner: ControllerName, + } + }) + t.Cleanup(func() { teardownManager(cancel, t) }) + + t.Run("run reconcile with backups defined", 
func(t *testing.T) { + clusterName := "hippocluster" + clusterUID := "hippouid" + + ns := setupNamespace(t, tClient) + // create a PostgresCluster to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + + // create a service account to test with + serviceAccount, err := r.reconcilePGBackRestRBAC(ctx, postgresCluster) + assert.NilError(t, err) + assert.Assert(t, serviceAccount != nil) + + // create the 'observed' instances and set the leader + instances := &observedInstances{ + forCluster: []*Instance{{Name: "instance1", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{naming.LabelRole: naming.RolePatroniLeader}, + }, + Spec: corev1.PodSpec{}, + }}, + }, {Name: "instance2"}, {Name: "instance3"}}, + } + + // set status + postgresCluster.Status = v1beta1.PostgresClusterStatus{ + Patroni: v1beta1.PatroniStatus{SystemIdentifier: "12345abcde"}, + PGBackRest: &v1beta1.PGBackRestStatus{ + RepoHost: &v1beta1.RepoHostStatus{Ready: true}, + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + } + + // set conditions + clusterConditions := map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + } + for condition, status := range clusterConditions { + meta.SetStatusCondition(&postgresCluster.Status.Conditions, metav1.Condition{ + Type: condition, Reason: "testing", Status: status}) + } + + rootCA, err := pki.NewRootCertificateAuthority() + assert.NilError(t, err) + + result, err := r.reconcilePGBackRest(ctx, postgresCluster, instances, rootCA, true) + if err != nil || result != (reconcile.Result{}) { + t.Errorf("unable to reconcile pgBackRest: %v", err) + } + + // repo is the first defined repo + repo := postgresCluster.Spec.Backups.PGBackRest.Repos[0] + + // test that the repo was created properly + t.Run("verify pgbackrest dedicated repo StatefulSet", func(t *testing.T) { + + // get the pgBackRest repo sts using the labels we expect it to have + dedicatedRepos := &appsv1.StatefulSetList{} + if err := tClient.List(ctx, dedicatedRepos, client.InNamespace(ns.Name), + client.MatchingLabels{ + naming.LabelCluster: clusterName, + naming.LabelPGBackRest: "", + naming.LabelPGBackRestDedicated: "", + }); err != nil { + t.Fatal(err) + } + + repo := appsv1.StatefulSet{} + // verify that we found a repo sts as expected + if len(dedicatedRepos.Items) == 0 { + t.Fatal("Did not find a dedicated repo sts") + } else if len(dedicatedRepos.Items) > 1 { + t.Fatal("Too many dedicated repo sts's found") + } else { + repo = dedicatedRepos.Items[0] + } + + // verify proper number of replicas + if *repo.Spec.Replicas != 1 { + t.Errorf("%v replicas found for dedicated repo sts, expected %v", + repo.Spec.Replicas, 1) + } + + // verify proper ownership + var foundOwnershipRef bool + for _, r := range repo.GetOwnerReferences() { + if r.Kind == "PostgresCluster" && r.Name == clusterName && + r.UID == types.UID(clusterUID) { + + foundOwnershipRef = true + break + } + } + + if !foundOwnershipRef { + t.Errorf("did not find expected ownership references") + } + + // verify proper matching labels + expectedLabels := map[string]string{ + naming.LabelCluster: clusterName, + naming.LabelPGBackRest: "", + naming.LabelPGBackRestDedicated: "", + } + expectedLabelsSelector, err := metav1.LabelSelectorAsSelector( + metav1.SetAsLabelSelector(expectedLabels)) + if err != nil { + t.Error(err) + } + if !expectedLabelsSelector.Matches(labels.Set(repo.GetLabels())) { + 
t.Errorf("dedicated repo host is missing an expected label: found=%v, expected=%v", + repo.GetLabels(), expectedLabels) + } + + template := repo.Spec.Template.DeepCopy() + + // Containers and Volumes should be populated. + assert.Assert(t, len(template.Spec.Containers) != 0) + assert.Assert(t, len(template.Spec.InitContainers) != 0) + assert.Assert(t, len(template.Spec.Volumes) != 0) + + // Ignore Containers and Volumes in the comparison below. + template.Spec.Containers = nil + template.Spec.InitContainers = nil + template.Spec.Volumes = nil + + // TODO(tjmoore4): Add additional tests to test appending existing + // topology spread constraints and spec.disableDefaultPodScheduling being + // set to true (as done in instance StatefulSet tests). + assert.Assert(t, cmp.MarshalMatches(template.Spec, ` +affinity: {} +automountServiceAccountToken: false +containers: null +dnsPolicy: ClusterFirst +enableServiceLinks: false +imagePullSecrets: +- name: myImagePullSecret +priorityClassName: some-priority-class +restartPolicy: Always +schedulerName: default-scheduler +securityContext: + fsGroup: 26 + fsGroupChangePolicy: OnRootMismatch +shareProcessNamespace: true +terminationGracePeriodSeconds: 30 +tolerations: +- key: woot +topologySpreadConstraints: +- labelSelector: + matchExpressions: + - key: postgres-operator.crunchydata.com/cluster + operator: In + values: + - somename + - key: postgres-operator.crunchydata.com/data + operator: Exists + maxSkew: 1 + topologyKey: fakekey + whenUnsatisfiable: ScheduleAnyway +- labelSelector: + matchExpressions: + - key: postgres-operator.crunchydata.com/data + operator: In + values: + - postgres + - pgbackrest + matchLabels: + postgres-operator.crunchydata.com/cluster: hippocluster + maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: ScheduleAnyway +- labelSelector: + matchExpressions: + - key: postgres-operator.crunchydata.com/data + operator: In + values: + - postgres + - pgbackrest + matchLabels: + postgres-operator.crunchydata.com/cluster: hippocluster + maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway + `)) + + // verify that the repohost container exists and contains the proper env vars + var repoHostContExists bool + for _, c := range repo.Spec.Template.Spec.Containers { + if c.Name == naming.PGBackRestRepoContainerName { + repoHostContExists = true + } + } + // now verify the proper env within the container + if !repoHostContExists { + t.Errorf("dedicated repo host is missing a container with name %s", + naming.PGBackRestRepoContainerName) + } + + repoHostStatus := postgresCluster.Status.PGBackRest.RepoHost + if repoHostStatus != nil { + if repoHostStatus.APIVersion != "apps/v1" || repoHostStatus.Kind != "StatefulSet" { + t.Errorf("invalid version/kind for dedicated repo host status") + } + } else { + t.Errorf("dedicated repo host status is missing") + } + + var foundConditionRepoHostsReady bool + for _, c := range postgresCluster.Status.Conditions { + if c.Type == "PGBackRestRepoHostReady" { + foundConditionRepoHostsReady = true + break + } + } + if !foundConditionRepoHostsReady { + t.Errorf("status condition PGBackRestRepoHostsReady is missing") + } + + assert.Check(t, wait.PollUntilContextTimeout(ctx, time.Second/2, Scale(time.Second*2), false, + func(ctx context.Context) (bool, error) { + events := &corev1.EventList{} + err := tClient.List(ctx, events, &client.MatchingFields{ + "involvedObject.kind": "PostgresCluster", + "involvedObject.name": clusterName, + "involvedObject.namespace": 
ns.Name, + "involvedObject.uid": clusterUID, + "reason": "RepoHostCreated", + }) + return len(events.Items) == 1, err + })) + }) + + t.Run("verify pgbackrest repo volumes", func(t *testing.T) { + + // get the pgBackRest repo sts using the labels we expect it to have + repoVols := &corev1.PersistentVolumeClaimList{} + if err := tClient.List(ctx, repoVols, client.InNamespace(ns.Name), + client.MatchingLabels{ + naming.LabelCluster: clusterName, + naming.LabelPGBackRest: "", + naming.LabelPGBackRestRepoVolume: "", + }); err != nil { + t.Fatal(err) + } + assert.Assert(t, len(repoVols.Items) > 0) + + for _, r := range postgresCluster.Spec.Backups.PGBackRest.Repos { + if r.Volume == nil { + continue + } + var foundRepoVol bool + for _, v := range repoVols.Items { + if v.GetName() == + naming.PGBackRestRepoVolume(postgresCluster, r.Name).Name { + foundRepoVol = true + break + } + } + assert.Assert(t, foundRepoVol) + } + }) + + t.Run("verify pgbackrest configuration", func(t *testing.T) { + + config := &corev1.ConfigMap{} + if err := tClient.Get(ctx, types.NamespacedName{ + Name: naming.PGBackRestConfig(postgresCluster).Name, + Namespace: postgresCluster.GetNamespace(), + }, config); err != nil { + assert.NilError(t, err) + } + assert.Assert(t, len(config.Data) > 0) + + var instanceConfFound, dedicatedRepoConfFound bool + for k, v := range config.Data { + if v != "" { + if k == pgbackrest.CMInstanceKey { + instanceConfFound = true + } else if k == pgbackrest.CMRepoKey { + dedicatedRepoConfFound = true + } + } + } + assert.Check(t, instanceConfFound) + assert.Check(t, dedicatedRepoConfFound) + }) + + t.Run("verify pgbackrest schedule cronjob", func(t *testing.T) { + + // set status + postgresCluster.Status = v1beta1.PostgresClusterStatus{ + Patroni: v1beta1.PatroniStatus{SystemIdentifier: "12345abcde"}, + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + } + + // set conditions + clusterConditions := map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + } + + for condition, status := range clusterConditions { + meta.SetStatusCondition(&postgresCluster.Status.Conditions, metav1.Condition{ + Type: condition, Reason: "testing", Status: status}) + } + + requeue := r.reconcileScheduledBackups(ctx, postgresCluster, serviceAccount, fakeObservedCronJobs()) + assert.Assert(t, !requeue) + + returnedCronJob := &batchv1.CronJob{} + if err := tClient.Get(ctx, types.NamespacedName{ + Name: postgresCluster.Name + "-repo1-full", + Namespace: postgresCluster.GetNamespace(), + }, returnedCronJob); err != nil { + assert.NilError(t, err) + } + + // check returned cronjob matches set spec + assert.Equal(t, returnedCronJob.Name, "hippocluster-repo1-full") + assert.Equal(t, returnedCronJob.Spec.Schedule, testCronSchedule) + assert.Equal(t, returnedCronJob.Spec.ConcurrencyPolicy, batchv1.ForbidConcurrent) + assert.Equal(t, returnedCronJob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Name, + "pgbackrest") + assert.Assert(t, returnedCronJob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].SecurityContext != &corev1.SecurityContext{}) + + }) + + t.Run("verify pgbackrest schedule found", func(t *testing.T) { + + assert.Assert(t, backupScheduleFound(repo, "full")) + + testrepo := v1beta1.PGBackRestRepo{ + Name: "repo1", + BackupSchedules: &v1beta1.PGBackRestBackupSchedules{ + Full: &testCronSchedule, + Differential: &testCronSchedule, + Incremental: &testCronSchedule, + }} + + 
assert.Assert(t, backupScheduleFound(testrepo, "full")) + assert.Assert(t, backupScheduleFound(testrepo, "diff")) + assert.Assert(t, backupScheduleFound(testrepo, "incr")) + + }) + + t.Run("verify pgbackrest schedule not found", func(t *testing.T) { + + assert.Assert(t, !backupScheduleFound(repo, "notabackuptype")) + + noscheduletestrepo := v1beta1.PGBackRestRepo{Name: "repo1"} + assert.Assert(t, !backupScheduleFound(noscheduletestrepo, "full")) + + }) + + t.Run("pgbackrest schedule suspended status", func(t *testing.T) { + + returnedCronJob := &batchv1.CronJob{} + if err := tClient.Get(ctx, types.NamespacedName{ + Name: postgresCluster.Name + "-repo1-full", + Namespace: postgresCluster.GetNamespace(), + }, returnedCronJob); err != nil { + assert.NilError(t, err) + } + + t.Run("pgbackrest schedule suspended false", func(t *testing.T) { + assert.Assert(t, !*returnedCronJob.Spec.Suspend) + }) + + t.Run("shutdown", func(t *testing.T) { + *postgresCluster.Spec.Shutdown = true + postgresCluster.Spec.Standby = nil + + requeue := r.reconcileScheduledBackups(ctx, + postgresCluster, serviceAccount, fakeObservedCronJobs()) + assert.Assert(t, !requeue) + + assert.NilError(t, tClient.Get(ctx, types.NamespacedName{ + Name: postgresCluster.Name + "-repo1-full", + Namespace: postgresCluster.GetNamespace(), + }, returnedCronJob)) + + assert.Assert(t, *returnedCronJob.Spec.Suspend) + }) + + t.Run("standby", func(t *testing.T) { + *postgresCluster.Spec.Shutdown = false + postgresCluster.Spec.Standby = &v1beta1.PostgresStandbySpec{ + Enabled: true, + } + + requeue := r.reconcileScheduledBackups(ctx, + postgresCluster, serviceAccount, fakeObservedCronJobs()) + assert.Assert(t, !requeue) + + assert.NilError(t, tClient.Get(ctx, types.NamespacedName{ + Name: postgresCluster.Name + "-repo1-full", + Namespace: postgresCluster.GetNamespace(), + }, returnedCronJob)) + + assert.Assert(t, *returnedCronJob.Spec.Suspend) + }) + }) + }) + + t.Run("run reconcile with backups not defined", func(t *testing.T) { + clusterName := "hippocluster2" + clusterUID := "hippouid2" + + ns := setupNamespace(t, tClient) + // create a PostgresCluster without backups to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + postgresCluster.Spec.Backups = v1beta1.Backups{} + + // create the 'observed' instances and set the leader + instances := &observedInstances{ + forCluster: []*Instance{{Name: "instance1", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{naming.LabelRole: naming.RolePatroniLeader}, + }, + Spec: corev1.PodSpec{}, + }}, + }, {Name: "instance2"}, {Name: "instance3"}}, + } + + rootCA, err := pki.NewRootCertificateAuthority() + assert.NilError(t, err) + + result, err := r.reconcilePGBackRest(ctx, postgresCluster, instances, rootCA, false) + if err != nil { + t.Errorf("unable to reconcile pgBackRest: %v", err) + } + assert.Equal(t, result, reconcile.Result{}) + + t.Run("verify pgbackrest dedicated repo StatefulSet", func(t *testing.T) { + + // Verify the sts doesn't exist + dedicatedRepos := &appsv1.StatefulSetList{} + if err := tClient.List(ctx, dedicatedRepos, client.InNamespace(ns.Name), + client.MatchingLabels{ + naming.LabelCluster: clusterName, + naming.LabelPGBackRest: "", + naming.LabelPGBackRestDedicated: "", + }); err != nil { + t.Fatal(err) + } + + assert.Equal(t, len(dedicatedRepos.Items), 0) + }) + + t.Run("verify pgbackrest repo volumes", func(t *testing.T) { + + // get the pgBackRest repo sts using the labels we expect it to have + 
repoVols := &corev1.PersistentVolumeClaimList{} + if err := tClient.List(ctx, repoVols, client.InNamespace(ns.Name), + client.MatchingLabels{ + naming.LabelCluster: clusterName, + naming.LabelPGBackRest: "", + naming.LabelPGBackRestRepoVolume: "", + }); err != nil { + t.Fatal(err) + } + + assert.Equal(t, len(repoVols.Items), 0) + }) + + t.Run("verify pgbackrest configuration", func(t *testing.T) { + + config := &corev1.ConfigMap{} + err := tClient.Get(ctx, types.NamespacedName{ + Name: naming.PGBackRestConfig(postgresCluster).Name, + Namespace: postgresCluster.GetNamespace(), + }, config) + assert.Equal(t, apierrors.IsNotFound(err), true) + }) + }) +} + +func TestReconcilePGBackRestRBAC(t *testing.T) { + // Garbage collector cleans up test resources before the test completes + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("USE_EXISTING_CLUSTER: Test fails due to garbage collection") + } + + ctx := context.Background() + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + r := &Reconciler{Client: tClient, Owner: client.FieldOwner(t.Name())} + + clusterName := "hippocluster" + clusterUID := "hippouid" + + ns := setupNamespace(t, tClient) + + // create a PostgresCluster to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + postgresCluster.Status.PGBackRest = &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: false}}, + } + + serviceAccount, err := r.reconcilePGBackRestRBAC(ctx, postgresCluster) + assert.NilError(t, err) + assert.Assert(t, serviceAccount != nil) + + // first verify the service account has been created + sa := &corev1.ServiceAccount{} + err = tClient.Get(ctx, types.NamespacedName{ + Name: naming.PGBackRestRBAC(postgresCluster).Name, + Namespace: postgresCluster.GetNamespace(), + }, sa) + assert.NilError(t, err) + + role := &rbacv1.Role{} + err = tClient.Get(ctx, types.NamespacedName{ + Name: naming.PGBackRestRBAC(postgresCluster).Name, + Namespace: postgresCluster.GetNamespace(), + }, role) + assert.NilError(t, err) + assert.Assert(t, len(role.Rules) > 0) + + roleBinding := &rbacv1.RoleBinding{} + err = tClient.Get(ctx, types.NamespacedName{ + Name: naming.PGBackRestRBAC(postgresCluster).Name, + Namespace: postgresCluster.GetNamespace(), + }, roleBinding) + assert.NilError(t, err) + assert.Assert(t, roleBinding.RoleRef.Name == role.GetName()) + + var foundSubject bool + for _, subject := range roleBinding.Subjects { + if subject.Name == sa.GetName() { + foundSubject = true + } + } + assert.Assert(t, foundSubject) +} + +func TestReconcileStanzaCreate(t *testing.T) { + cfg, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + r := &Reconciler{} + ctx, cancel := setupManager(t, cfg, func(mgr manager.Manager) { + r = &Reconciler{ + Client: mgr.GetClient(), + Recorder: mgr.GetEventRecorderFor(ControllerName), + Tracer: otel.Tracer(ControllerName), + Owner: ControllerName, + } + }) + t.Cleanup(func() { teardownManager(cancel, t) }) + + clusterName := "hippocluster" + clusterUID := "hippouid" + + ns := setupNamespace(t, tClient) + + // create a PostgresCluster to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + postgresCluster.Status.PGBackRest = &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: false}}, + } + + instances := newObservedInstances(postgresCluster, nil, []corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: map[string]string{"status": 
`"role":"master"`}, + Labels: map[string]string{ + naming.LabelCluster: postgresCluster.GetName(), + naming.LabelInstance: "", + naming.LabelRole: naming.RolePatroniLeader, + }, + }, + }}) + + stanzaCreateFail := func(ctx context.Context, namespace, pod, container string, stdin io.Reader, + stdout, stderr io.Writer, command ...string) error { + return errors.New("fake stanza create failed") + } + + stanzaCreateSuccess := func(ctx context.Context, namespace, pod, container string, stdin io.Reader, + stdout, stderr io.Writer, command ...string) error { + return nil + } + + // now verify a stanza create success + r.PodExec = stanzaCreateSuccess + meta.SetStatusCondition(&postgresCluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: postgresCluster.GetGeneration(), + Type: ConditionRepoHostReady, + Status: metav1.ConditionTrue, + Reason: "RepoHostReady", + Message: "pgBackRest dedicated repository host is ready", + }) + + configHashMismatch, err := r.reconcileStanzaCreate(ctx, postgresCluster, instances, "abcde12345") + assert.NilError(t, err) + assert.Assert(t, !configHashMismatch) + + assert.NilError(t, wait.PollUntilContextTimeout(ctx, time.Second/2, Scale(time.Second*2), false, + func(ctx context.Context) (bool, error) { + events := &corev1.EventList{} + err := tClient.List(ctx, events, &client.MatchingFields{ + "involvedObject.kind": "PostgresCluster", + "involvedObject.name": clusterName, + "involvedObject.namespace": ns.Name, + "involvedObject.uid": clusterUID, + "reason": "StanzasCreated", + }) + return len(events.Items) == 1, err + })) + + // status should indicate stanzas were created + for _, r := range postgresCluster.Status.PGBackRest.Repos { + assert.Assert(t, r.StanzaCreated) + } + + // now verify failure event + postgresCluster = fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + postgresCluster.Status.PGBackRest = &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: false}}, + } + r.PodExec = stanzaCreateFail + meta.SetStatusCondition(&postgresCluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: postgresCluster.GetGeneration(), + Type: ConditionRepoHostReady, + Status: metav1.ConditionTrue, + Reason: "RepoHostReady", + Message: "pgBackRest dedicated repository host is ready", + }) + postgresCluster.Status.Patroni = v1beta1.PatroniStatus{ + SystemIdentifier: "6952526174828511264", + } + + configHashMismatch, err = r.reconcileStanzaCreate(ctx, postgresCluster, instances, "abcde12345") + assert.Error(t, err, "fake stanza create failed: ") + assert.Assert(t, !configHashMismatch) + + assert.NilError(t, wait.PollUntilContextTimeout(ctx, time.Second/2, Scale(time.Second*2), false, + func(ctx context.Context) (bool, error) { + events := &corev1.EventList{} + err := tClient.List(ctx, events, &client.MatchingFields{ + "involvedObject.kind": "PostgresCluster", + "involvedObject.name": clusterName, + "involvedObject.namespace": ns.Name, + "involvedObject.uid": clusterUID, + "reason": "UnableToCreateStanzas", + }) + return len(events.Items) == 1, err + })) + + // status should indicate stanza were not created + for _, r := range postgresCluster.Status.PGBackRest.Repos { + assert.Assert(t, !r.StanzaCreated) + } +} + +func TestReconcileReplicaCreateBackup(t *testing.T) { + // Garbage collector cleans up test resources before the test completes + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("USE_EXISTING_CLUSTER: Test fails due to garbage collection") + } + + ctx := context.Background() 
+ _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{Client: tClient, Owner: client.FieldOwner(t.Name())} + + clusterName := "hippocluster" + clusterUID := "hippouid" + + ns := setupNamespace(t, tClient) + + // create a PostgresCluster to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + // set status for the "replica create" repo, e.g. the repo ad index 0 + postgresCluster.Status.PGBackRest = &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: false}}, + } + instances := newObservedInstances(postgresCluster, nil, []corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: map[string]string{"status": `"role":"master"`}, + Labels: map[string]string{ + naming.LabelCluster: postgresCluster.GetName(), + naming.LabelInstance: "", + naming.LabelRole: naming.RolePatroniLeader, + }, + }, + }}) + + meta.SetStatusCondition(&postgresCluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: postgresCluster.GetGeneration(), + Type: ConditionRepoHostReady, + Status: metav1.ConditionTrue, + Reason: "RepoHostReady", + Message: "pgBackRest dedicated repository host is ready", + }) + meta.SetStatusCondition(&postgresCluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: postgresCluster.GetGeneration(), + Type: ConditionReplicaRepoReady, + Status: metav1.ConditionTrue, + Reason: "StanzaCreated", + Message: "pgBackRest replica create repo is ready for backups", + }) + postgresCluster.Status.Patroni = v1beta1.PatroniStatus{ + SystemIdentifier: "6952526174828511264", + } + + replicaCreateRepo := postgresCluster.Spec.Backups.PGBackRest.Repos[0] + configHash := "abcde12345" + + sa := &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{Name: "hippo-sa"}, + } + + err := r.reconcileReplicaCreateBackup(ctx, postgresCluster, instances, + []*batchv1.Job{}, sa, configHash, replicaCreateRepo) + assert.NilError(t, err) + + // now find the expected job + jobs := &batchv1.JobList{} + err = tClient.List(ctx, jobs, &client.ListOptions{ + Namespace: postgresCluster.Namespace, + LabelSelector: naming.PGBackRestBackupJobSelector(clusterName, replicaCreateRepo.Name, + naming.BackupReplicaCreate), + }) + assert.NilError(t, err) + assert.Equal(t, len(jobs.Items), 1, "expected 1 job") + backupJob := jobs.Items[0] + + var foundOwnershipRef bool + // verify ownership refs + for _, ref := range backupJob.ObjectMeta.GetOwnerReferences() { + if ref.Name == clusterName { + foundOwnershipRef = true + break + } + } + assert.Assert(t, foundOwnershipRef) + + var foundHashAnnotation bool + // verify annotations + for k, v := range backupJob.GetAnnotations() { + if k == naming.PGBackRestConfigHash && v == configHash { + foundHashAnnotation = true + } + } + assert.Assert(t, foundHashAnnotation) + + // verify container & env vars + assert.Assert(t, len(backupJob.Spec.Template.Spec.Containers) == 1) + assert.Assert(t, + backupJob.Spec.Template.Spec.Containers[0].Name == naming.PGBackRestRepoContainerName) + container := backupJob.Spec.Template.Spec.Containers[0] + for _, env := range container.Env { + switch env.Name { + case "COMMAND": + assert.Assert(t, env.Value == "backup") + case "COMMAND_OPTS": + assert.Assert(t, env.Value == "--stanza=db --repo=1") + case "COMPARE_HASH": + assert.Assert(t, env.Value == "true") + case "CONTAINER": + assert.Assert(t, env.Value == naming.PGBackRestRepoContainerName) + case "NAMESPACE": + assert.Assert(t, env.Value == ns.Name) + case "SELECTOR": + assert.Assert(t, 
env.Value == "postgres-operator.crunchydata.com/cluster=hippocluster,"+ + "postgres-operator.crunchydata.com/pgbackrest=,"+ + "postgres-operator.crunchydata.com/pgbackrest-dedicated=") + } + } + // verify mounted configuration is present + assert.Assert(t, len(container.VolumeMounts) == 1) + + // verify volume for configuration is present + assert.Assert(t, len(backupJob.Spec.Template.Spec.Volumes) == 1) + + // verify the image pull secret + assert.Assert(t, backupJob.Spec.Template.Spec.ImagePullSecrets != nil) + assert.Equal(t, backupJob.Spec.Template.Spec.ImagePullSecrets[0].Name, + "myImagePullSecret") + + // verify the priority class + assert.Equal(t, backupJob.Spec.Template.Spec.PriorityClassName, "some-priority-class") + + // now set the job to complete + backupJob.Status.Conditions = append(backupJob.Status.Conditions, + batchv1.JobCondition{Type: batchv1.JobComplete, Status: corev1.ConditionTrue}) + + // call reconcile function again + err = r.reconcileReplicaCreateBackup(ctx, postgresCluster, instances, + []*batchv1.Job{&backupJob}, sa, configHash, replicaCreateRepo) + assert.NilError(t, err) + + // verify the proper conditions have been set + var foundCompletedCondition bool + condition := meta.FindStatusCondition(postgresCluster.Status.Conditions, ConditionReplicaCreate) + if condition != nil && (condition.Status == metav1.ConditionTrue) { + foundCompletedCondition = true + } + assert.Assert(t, foundCompletedCondition) + + // verify the status has been updated properly + var replicaCreateRepoStatus *v1beta1.RepoStatus + for i, repo := range postgresCluster.Status.PGBackRest.Repos { + if repo.Name == replicaCreateRepo.Name { + replicaCreateRepoStatus = &postgresCluster.Status.PGBackRest.Repos[i] + break + } + } + if assert.Check(t, replicaCreateRepoStatus != nil) { + assert.Assert(t, replicaCreateRepoStatus.ReplicaCreateBackupComplete) + } +} + +func TestReconcileManualBackup(t *testing.T) { + cfg, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 2) + + r := &Reconciler{} + _, cancel := setupManager(t, cfg, func(mgr manager.Manager) { + r = &Reconciler{ + Client: mgr.GetClient(), + Recorder: mgr.GetEventRecorderFor(ControllerName), + Tracer: otel.Tracer(ControllerName), + Owner: ControllerName, + } + }) + t.Cleanup(func() { teardownManager(cancel, t) }) + + ns := setupNamespace(t, tClient) + defaultBackupId := "default-backup-id" + backupId := metav1.Now().OpenAPISchemaFormat() + + fakeJob := func(clusterName, repoName string) *batchv1.Job { + return &batchv1.Job{ + ObjectMeta: metav1.ObjectMeta{ + Name: "manual-backup-" + rand.String(4), + Namespace: ns.GetName(), + Annotations: map[string]string{naming.PGBackRestBackup: defaultBackupId}, + Labels: naming.PGBackRestBackupJobLabels(clusterName, repoName, + naming.BackupManual), + }, + } + } + + sa := &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{Name: "hippo-sa"}, + } + + instances := &observedInstances{ + forCluster: []*Instance{{ + Name: "instance1", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{naming.LabelRole: naming.RolePatroniLeader}, + }, + }}, + }}, + } + + testCases := []struct { + // a description of the test + testDesc string + // whether or not the test only applies to configs with dedicated repo hosts + dedicatedOnly bool + // whether or not the primary instance should be read-only + standby bool + // whether or not to mock a current job in the env before reconciling (this job is not + // actually created, but rather just passed into the reconcile function 
under test) + createCurrentJob bool + // conditions to apply to the job if created (these are always set to "true") + jobConditions []batchv1.JobConditionType + // conditions to apply to the mock postgrescluster + clusterConditions map[string]metav1.ConditionStatus + // the status to apply to the mock postgrescluster + status *v1beta1.PostgresClusterStatus + // the ID used to populate the "backup" annotation for the test (can be empty) + backupId string + // the manual backup field to define in the postgrescluster spec for the test + manual *v1beta1.PGBackRestManualBackup + // whether or not the test should expect a Job to be reconciled + expectReconcile bool + // whether or not the test should expect a current job in the env to be deleted + expectCurrentJobDeletion bool + // the reason associated with the expected event for the test (can be empty if + // no event is expected) + expectedEventReason string + }{{ + testDesc: "read-only cluster should not reconcile", + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + standby: true, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: false, + }, { + testDesc: "no conditions should not reconcile", + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{}, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: false, + }, { + testDesc: "no repo host ready condition should not reconcile", + dedicatedOnly: true, + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: false, + }, { + testDesc: "no replica create condition should not reconcile", + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: false, + }, { + testDesc: "false repo host ready condition should not reconcile", + dedicatedOnly: true, + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionFalse, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: 
false, + }, { + testDesc: "false replica create condition should not reconcile", + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionFalse, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: false, + }, { + testDesc: "no manual backup defined should not reconcile", + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: nil, + expectCurrentJobDeletion: false, + expectReconcile: false, + }, { + testDesc: "manual backup already complete should not reconcile", + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + ManualBackup: &v1beta1.PGBackRestJobStatus{ + ID: backupId, Finished: true}, + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: nil, + expectCurrentJobDeletion: false, + expectReconcile: false, + }, { + testDesc: "empty backup annotation should not reconcile", + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: "", + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: false, + }, { + testDesc: "missing repo status should not reconcile", + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: false, + expectedEventReason: "InvalidBackupRepo", + }, { + testDesc: "reconcile job when no current job exists", + createCurrentJob: false, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: true, + }, { + testDesc: "reconcile job when current job exists for id and is in progress", + createCurrentJob: true, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + 
ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: defaultBackupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: true, + }, { + testDesc: "reconcile new job when in-progress job exists for another id", + createCurrentJob: true, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: false, + expectReconcile: true, + }, { + testDesc: "delete current job since job is complete and new backup id", + createCurrentJob: true, + jobConditions: []batchv1.JobConditionType{batchv1.JobComplete}, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: true, + expectReconcile: false, + }, { + testDesc: "delete current job since job is failed and new backup id", + createCurrentJob: true, + jobConditions: []batchv1.JobConditionType{batchv1.JobFailed}, + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + backupId: backupId, + manual: &v1beta1.PGBackRestManualBackup{RepoName: "repo1"}, + expectCurrentJobDeletion: true, + expectReconcile: false, + }} + + for _, dedicated := range []bool{true, false} { + for i, tc := range testCases { + var clusterName string + if !dedicated { + tc.testDesc = "no repo " + tc.testDesc + clusterName = "manual-backup-no-repo-" + strconv.Itoa(i) + } else { + clusterName = "manual-backup-" + strconv.Itoa(i) + } + t.Run(tc.testDesc, func(t *testing.T) { + + if tc.dedicatedOnly && !dedicated { + t.Skip() + } + + ctx := context.Background() + + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), "", dedicated) + postgresCluster.Spec.Backups.PGBackRest.Manual = tc.manual + postgresCluster.Annotations = map[string]string{naming.PGBackRestBackup: tc.backupId} + assert.NilError(t, tClient.Create(ctx, postgresCluster)) + + postgresCluster.Status = *tc.status + for condition, status := range tc.clusterConditions { + meta.SetStatusCondition(&postgresCluster.Status.Conditions, metav1.Condition{ + Type: condition, Reason: "testing", Status: status}) + } + assert.NilError(t, tClient.Status().Update(ctx, postgresCluster)) + + currentJobs := []*batchv1.Job{} + if tc.createCurrentJob { + job := fakeJob(postgresCluster.GetName(), tc.manual.RepoName) + job.Status.Conditions = []batchv1.JobCondition{} + for _, c := range tc.jobConditions { + job.Status.Conditions = append(job.Status.Conditions, + batchv1.JobCondition{Type: c, Status: corev1.ConditionTrue}) + } + 
currentJobs = append(currentJobs, job) + } + + if tc.standby { + instances.forCluster[0].Pods[0].Annotations = map[string]string{} + } else { + instances.forCluster[0].Pods[0].Annotations = map[string]string{ + "status": `"role":"master"`, + } + } + + err := r.reconcileManualBackup(ctx, postgresCluster, currentJobs, sa, instances) + + if tc.expectReconcile { + + // verify expected behavior when a reconcile is expected + + assert.NilError(t, err) + + jobs := &batchv1.JobList{} + err := tClient.List(ctx, jobs, &client.ListOptions{ + Namespace: postgresCluster.Namespace, + LabelSelector: naming.PGBackRestBackupJobSelector(clusterName, + tc.manual.RepoName, naming.BackupManual), + }) + assert.NilError(t, err) + assert.Assert(t, len(jobs.Items) == 1) + + var foundOwnershipRef bool + for _, r := range jobs.Items[0].GetOwnerReferences() { + if r.Kind == "PostgresCluster" && r.Name == clusterName && + r.UID == postgresCluster.GetUID() { + foundOwnershipRef = true + break + } + } + assert.Assert(t, foundOwnershipRef) + + // verify image pull secret + assert.Assert(t, len(jobs.Items[0].Spec.Template.Spec.ImagePullSecrets) > 0) + assert.Equal(t, jobs.Items[0].Spec.Template.Spec.ImagePullSecrets[0].Name, "myImagePullSecret") + + // verify the priority class + assert.Equal(t, jobs.Items[0].Spec.Template.Spec.PriorityClassName, "some-priority-class") + + // verify status is populated with the proper ID + assert.Assert(t, postgresCluster.Status.PGBackRest.ManualBackup != nil) + assert.Assert(t, postgresCluster.Status.PGBackRest.ManualBackup.ID != "") + + return + } else { + + // verify expected results when a reconcile is not expected + + // if a deletion is expected, then an error is expected. otherwise an error is + // not expected. + if tc.expectCurrentJobDeletion { + assert.Assert(t, apierrors.IsNotFound(err)) + assert.ErrorContains(t, err, + fmt.Sprintf(`"%s" not found`, currentJobs[0].GetName())) + } else { + assert.NilError(t, err) + } + + jobs := &batchv1.JobList{} + // just use a pgbackrest selector to check for the existence of any job since + // we might not have a repo name for tests without a manual backup defined + err := tClient.List(ctx, jobs, &client.ListOptions{ + Namespace: postgresCluster.Namespace, + LabelSelector: naming.PGBackRestSelector(clusterName), + }) + assert.NilError(t, err) + assert.Assert(t, len(jobs.Items) == 0) + + // if an event is expected, then check for it + if tc.expectedEventReason != "" { + assert.NilError(t, wait.PollUntilContextTimeout(ctx, time.Second/2, Scale(time.Second*2), false, + func(ctx context.Context) (bool, error) { + events := &corev1.EventList{} + err := tClient.List(ctx, events, &client.MatchingFields{ + "involvedObject.kind": "PostgresCluster", + "involvedObject.name": clusterName, + "involvedObject.namespace": ns.GetName(), + "involvedObject.uid": string(postgresCluster.GetUID()), + "reason": tc.expectedEventReason, + }) + return len(events.Items) == 1, err + })) + } + return + } + }) + } + } +} + +func TestGetPGBackRestResources(t *testing.T) { + // Garbage collector cleans up test resources before the test completes + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("USE_EXISTING_CLUSTER: Test fails due to garbage collection") + } + + ctx := context.Background() + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{Client: tClient, Owner: client.FieldOwner(t.Name())} + + clusterName := "hippocluster" + clusterUID := "hippouid" + namespace := setupNamespace(t, tClient).Name + + type
testResult struct { + jobCount, hostCount, pvcCount int + } + + testCases := []struct { + desc string + createResources []client.Object + cluster *v1beta1.PostgresCluster + result testResult + }{{ + desc: "repo still defined keep job", + createResources: []client.Object{ + &batchv1.Job{ + ObjectMeta: metav1.ObjectMeta{ + Name: "keep-job", + Namespace: namespace, + Labels: naming.PGBackRestBackupJobLabels(clusterName, "repo1", + naming.BackupReplicaCreate), + }, + Spec: batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "test", Image: "test"}}, + RestartPolicy: corev1.RestartPolicyNever, + }, + }, + }, + }, + }, + cluster: &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName, + Namespace: namespace, + UID: types.UID(clusterUID), + }, + Spec: v1beta1.PostgresClusterSpec{ + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Repos: []v1beta1.PGBackRestRepo{{Name: "repo1"}}, + }, + }, + }, + }, + result: testResult{ + jobCount: 1, pvcCount: 0, hostCount: 0, + }, + }, { + desc: "repo no longer exists delete job", + createResources: []client.Object{ + &batchv1.Job{ + ObjectMeta: metav1.ObjectMeta{ + Name: "delete-job", + Namespace: namespace, + Labels: naming.PGBackRestBackupJobLabels(clusterName, "repo1", + naming.BackupReplicaCreate), + }, + Spec: batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "test", Image: "test"}}, + RestartPolicy: corev1.RestartPolicyNever, + }, + }, + }, + }, + }, + cluster: &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName, + Namespace: namespace, + UID: types.UID(clusterUID), + }, + Spec: v1beta1.PostgresClusterSpec{ + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Repos: []v1beta1.PGBackRestRepo{{Name: "repo4"}}, + }, + }, + }, + }, + result: testResult{ + jobCount: 0, pvcCount: 0, hostCount: 0, + }, + }, { + desc: "repo still defined keep pvc", + createResources: []client.Object{ + &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "keep-pvc", + Namespace: namespace, + Labels: naming.PGBackRestRepoVolumeLabels(clusterName, "repo1"), + }, + Spec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + }, + }, + cluster: &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName, + Namespace: namespace, + UID: types.UID(clusterUID), + }, + Spec: v1beta1.PostgresClusterSpec{ + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + Volume: &v1beta1.RepoPVC{}, + }}, + }, + }, + }, + }, + result: testResult{ + jobCount: 0, pvcCount: 1, hostCount: 0, + }, + }, { + desc: "repo no longer exists delete pvc", + createResources: []client.Object{ + &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "delete-pvc", + Namespace: namespace, + Labels: naming.PGBackRestRepoVolumeLabels(clusterName, "repo1"), + }, + Spec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + }, + }, + cluster: &v1beta1.PostgresCluster{ + ObjectMeta: 
metav1.ObjectMeta{ + Name: clusterName, + Namespace: namespace, + UID: types.UID(clusterUID), + }, + Spec: v1beta1.PostgresClusterSpec{ + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo4", + Volume: &v1beta1.RepoPVC{}, + }}, + }, + }, + }, + }, + result: testResult{ + jobCount: 0, pvcCount: 0, hostCount: 0, + }, + }, { + desc: "dedicated repo host defined keep dedicated sts", + createResources: []client.Object{ + &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "keep-dedicated", + Namespace: namespace, + Labels: naming.PGBackRestDedicatedLabels(clusterName), + }, + Spec: appsv1.StatefulSetSpec{ + Selector: metav1.SetAsLabelSelector( + naming.PGBackRestDedicatedLabels(clusterName)), + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: naming.PGBackRestDedicatedLabels(clusterName), + }, + Spec: corev1.PodSpec{}, + }, + }, + }, + }, + cluster: &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName, + Namespace: namespace, + UID: types.UID(clusterUID), + }, + Spec: v1beta1.PostgresClusterSpec{ + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Repos: []v1beta1.PGBackRestRepo{{Volume: &v1beta1.RepoPVC{}}}, + }, + }, + }, + }, + result: testResult{ + jobCount: 0, pvcCount: 0, hostCount: 1, + }, + }, { + desc: "no dedicated repo host defined, dedicated sts not deleted", + createResources: []client.Object{ + &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "keep-dedicated-two", + Namespace: namespace, + Labels: naming.PGBackRestDedicatedLabels(clusterName), + }, + Spec: appsv1.StatefulSetSpec{ + Selector: metav1.SetAsLabelSelector( + naming.PGBackRestDedicatedLabels(clusterName)), + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{ + Labels: naming.PGBackRestDedicatedLabels(clusterName), + }, + Spec: corev1.PodSpec{}, + }, + }, + }, + }, + cluster: &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName, + Namespace: namespace, + UID: types.UID(clusterUID), + }, + Spec: v1beta1.PostgresClusterSpec{ + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{}, + }, + }, + }, + result: testResult{ + // Host count is 2 due to previous repo host sts not being deleted. 
+ jobCount: 0, pvcCount: 0, hostCount: 2, + }, + }} + + for _, tc := range testCases { + t.Run(tc.desc, func(t *testing.T) { + for _, resource := range tc.createResources { + + err := controllerutil.SetControllerReference(tc.cluster, resource, + tClient.Scheme()) + assert.NilError(t, err) + assert.NilError(t, tClient.Create(ctx, resource)) + + resources, err := r.getPGBackRestResources(ctx, tc.cluster, true) + assert.NilError(t, err) + + assert.Assert(t, tc.result.jobCount == len(resources.replicaCreateBackupJobs)) + assert.Assert(t, tc.result.hostCount == len(resources.hosts)) + assert.Assert(t, tc.result.pvcCount == len(resources.pvcs)) + } + }) + } +} + +func TestReconcilePostgresClusterDataSource(t *testing.T) { + cfg, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 4) + + r := &Reconciler{} + ctx, cancel := setupManager(t, cfg, func(mgr manager.Manager) { + r = &Reconciler{ + Client: tClient, + Recorder: mgr.GetEventRecorderFor(ControllerName), + Tracer: otel.Tracer(ControllerName), + Owner: ControllerName, + } + }) + t.Cleanup(func() { teardownManager(cancel, t) }) + + namespace := setupNamespace(t, tClient).Name + rootCA, err := pki.NewRootCertificateAuthority() + assert.NilError(t, err) + + type testResult struct { + configCount, jobCount, pvcCount int + invalidSourceRepo, invalidSourceCluster, invalidOptions bool + expectedClusterCondition *metav1.Condition + } + + for _, dedicated := range []bool{true, false} { + testCases := []struct { + desc string + dataSource *v1beta1.DataSource + clusterBootstrapped bool + sourceClusterName string + sourceClusterRepos []v1beta1.PGBackRestRepo + result testResult + }{{ + desc: "initial reconcile", + dataSource: &v1beta1.DataSource{PostgresCluster: &v1beta1.PostgresClusterDataSource{ + ClusterName: "init-source", RepoName: "repo1", + }}, + clusterBootstrapped: false, + sourceClusterName: "init-source", + sourceClusterRepos: []v1beta1.PGBackRestRepo{{Name: "repo1"}}, + result: testResult{ + configCount: 1, jobCount: 1, pvcCount: 1, + invalidSourceRepo: false, invalidSourceCluster: false, invalidOptions: false, + expectedClusterCondition: nil, + }, + }, { + desc: "invalid source cluster", + dataSource: &v1beta1.DataSource{PostgresCluster: &v1beta1.PostgresClusterDataSource{ + ClusterName: "the-wrong-source", RepoName: "repo1", + }}, + clusterBootstrapped: false, + sourceClusterName: "the-right-source", + sourceClusterRepos: []v1beta1.PGBackRestRepo{{Name: "repo1"}}, + result: testResult{ + configCount: 0, jobCount: 0, pvcCount: 0, + invalidSourceRepo: false, invalidSourceCluster: true, invalidOptions: false, + expectedClusterCondition: nil, + }, + }, { + desc: "invalid source repo", + dataSource: &v1beta1.DataSource{PostgresCluster: &v1beta1.PostgresClusterDataSource{ + ClusterName: "invalid-repo", RepoName: "repo2", + }}, + clusterBootstrapped: false, + sourceClusterName: "invalid-repo", + sourceClusterRepos: []v1beta1.PGBackRestRepo{{Name: "repo1"}}, + result: testResult{ + configCount: 1, jobCount: 0, pvcCount: 0, + invalidSourceRepo: true, invalidSourceCluster: false, invalidOptions: false, + expectedClusterCondition: nil, + }, + }, { + desc: "invalid option: --repo=", + dataSource: &v1beta1.DataSource{PostgresCluster: &v1beta1.PostgresClusterDataSource{ + ClusterName: "invalid-repo-option-equals", RepoName: "repo1", + Options: []string{"--repo="}, + }}, + clusterBootstrapped: false, + sourceClusterName: "invalid-repo-option-equals", + sourceClusterRepos: []v1beta1.PGBackRestRepo{{Name: "repo1"}}, + result: testResult{ + 
configCount: 1, jobCount: 0, pvcCount: 1, + invalidSourceRepo: false, invalidSourceCluster: false, invalidOptions: true, + expectedClusterCondition: nil, + }, + }, { + desc: "invalid option: --repo ", + dataSource: &v1beta1.DataSource{PostgresCluster: &v1beta1.PostgresClusterDataSource{ + ClusterName: "invalid-repo-option-space", RepoName: "repo1", + Options: []string{"--repo "}, + }}, + clusterBootstrapped: false, + sourceClusterName: "invalid-repo-option-space", + sourceClusterRepos: []v1beta1.PGBackRestRepo{{Name: "repo1"}}, + result: testResult{ + configCount: 1, jobCount: 0, pvcCount: 1, + invalidSourceRepo: false, invalidSourceCluster: false, invalidOptions: true, + expectedClusterCondition: nil, + }, + }, { + desc: "invalid option: stanza", + dataSource: &v1beta1.DataSource{PostgresCluster: &v1beta1.PostgresClusterDataSource{ + ClusterName: "invalid-stanza-option", RepoName: "repo1", + Options: []string{"--stanza"}, + }}, + clusterBootstrapped: false, + sourceClusterName: "invalid-stanza-option", + sourceClusterRepos: []v1beta1.PGBackRestRepo{{Name: "repo1"}}, + result: testResult{ + configCount: 1, jobCount: 0, pvcCount: 1, + invalidSourceRepo: false, invalidSourceCluster: false, invalidOptions: true, + expectedClusterCondition: nil, + }, + }, { + desc: "invalid option: pg1-path", + dataSource: &v1beta1.DataSource{PostgresCluster: &v1beta1.PostgresClusterDataSource{ + ClusterName: "invalid-pgpath-option", RepoName: "repo1", + Options: []string{"--pg1-path"}, + }}, + clusterBootstrapped: false, + sourceClusterName: "invalid-pgpath-option", + sourceClusterRepos: []v1beta1.PGBackRestRepo{{Name: "repo1"}}, + result: testResult{ + configCount: 1, jobCount: 0, pvcCount: 1, + invalidSourceRepo: false, invalidSourceCluster: false, invalidOptions: true, + expectedClusterCondition: nil, + }, + }, { + desc: "cluster bootstrapped init condition missing", + dataSource: &v1beta1.DataSource{PostgresCluster: &v1beta1.PostgresClusterDataSource{ + ClusterName: "bootstrapped-init-missing", RepoName: "repo1", + }}, + clusterBootstrapped: true, + sourceClusterName: "init-cond-missing", + sourceClusterRepos: []v1beta1.PGBackRestRepo{{Name: "repo1"}}, + result: testResult{ + configCount: 0, jobCount: 0, pvcCount: 0, + invalidSourceRepo: false, invalidSourceCluster: false, invalidOptions: false, + expectedClusterCondition: &metav1.Condition{ + Type: ConditionPostgresDataInitialized, + Status: metav1.ConditionTrue, + Reason: "ClusterAlreadyBootstrapped", + Message: "The cluster is already bootstrapped", + }, + }, + }, { + desc: "data source config change deletes job", + dataSource: &v1beta1.DataSource{PostgresCluster: &v1beta1.PostgresClusterDataSource{ + ClusterName: "invalid-hash", RepoName: "repo1", + }}, + clusterBootstrapped: true, + sourceClusterName: "invalid-hash", + sourceClusterRepos: []v1beta1.PGBackRestRepo{{Name: "repo1"}}, + result: testResult{ + configCount: 0, jobCount: 0, pvcCount: 0, + invalidSourceRepo: false, invalidSourceCluster: false, invalidOptions: false, + expectedClusterCondition: nil, + }, + }} + + for i, tc := range testCases { + if !dedicated { + tc.desc += "-no-repo" + } + t.Run(tc.desc, func(t *testing.T) { + + clusterName := "hippocluster-" + strconv.Itoa(i) + if !dedicated { + clusterName = clusterName + "-no-repo" + } + clusterUID := "hippouid" + strconv.Itoa(i) + + cluster := fakePostgresCluster(clusterName, namespace, clusterUID, dedicated) + cluster.Spec.DataSource = tc.dataSource + assert.NilError(t, tClient.Create(ctx, cluster)) + if tc.clusterBootstrapped { + 
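+ // a populated Patroni system identifier marks the cluster as already bootstrapped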
cluster.Status.Patroni = v1beta1.PatroniStatus{ + SystemIdentifier: "123456789", + } + } + cluster.Status.StartupInstance = "testinstance" + cluster.Status.StartupInstanceSet = "instance1" + assert.NilError(t, tClient.Status().Update(ctx, cluster)) + if !dedicated { + tc.sourceClusterName = tc.sourceClusterName + "-no-repo" + } + sourceCluster := fakePostgresCluster(tc.sourceClusterName, namespace, + "source"+clusterUID, dedicated) + sourceCluster.Spec.Backups.PGBackRest.Repos = tc.sourceClusterRepos + assert.NilError(t, tClient.Create(ctx, sourceCluster)) + + sourceClusterConfig := &corev1.ConfigMap{ + ObjectMeta: naming.PGBackRestConfig(sourceCluster), + Data: map[string]string{ + "pgbackrest_instance.conf": "source-stuff", + }, + } + assert.NilError(t, tClient.Create(ctx, sourceClusterConfig)) + + sourceClusterPrimary := &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Name: "primary-" + tc.sourceClusterName, + Namespace: namespace, + Labels: map[string]string{ + naming.LabelCluster: tc.sourceClusterName, + naming.LabelInstanceSet: "test", + naming.LabelInstance: "test-abcd", + naming.LabelRole: naming.RolePatroniLeader, + }, + }, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{ + Name: "test", + Image: "test", + Command: []string{"test"}, + }}, + }, + } + assert.NilError(t, tClient.Create(ctx, sourceClusterPrimary)) + + var pgclusterDataSource *v1beta1.PostgresClusterDataSource + if tc.dataSource != nil { + pgclusterDataSource = tc.dataSource.PostgresCluster + } + err := r.reconcilePostgresClusterDataSource(ctx, cluster, pgclusterDataSource, + "testhash", nil, rootCA, true) + assert.NilError(t, err) + + restoreConfig := &corev1.ConfigMap{} + err = tClient.Get(ctx, + naming.AsObjectKey(naming.PGBackRestConfig(cluster)), restoreConfig) + + if tc.result.configCount == 0 { + assert.Assert(t, apierrors.IsNotFound(err), "expected NotFound, got %#v", err) + } else { + assert.NilError(t, err) + assert.DeepEqual(t, restoreConfig.Data, sourceClusterConfig.Data) + } + + restoreJobs := &batchv1.JobList{} + assert.NilError(t, tClient.List(ctx, restoreJobs, &client.ListOptions{ + LabelSelector: naming.PGBackRestRestoreJobSelector(clusterName), + Namespace: cluster.Namespace, + })) + assert.Assert(t, tc.result.jobCount == len(restoreJobs.Items)) + if len(restoreJobs.Items) == 1 { + assert.Assert(t, restoreJobs.Items[0].Labels[naming.LabelStartupInstance] != "") + assert.Assert(t, restoreJobs.Items[0].Annotations[naming.PGBackRestConfigHash] != "") + } + + dataPVCs := &corev1.PersistentVolumeClaimList{} + selector, err := naming.AsSelector(naming.Cluster(cluster.Name)) + assert.NilError(t, err) + dataRoleReq, err := labels.NewRequirement(naming.LabelRole, selection.Equals, + []string{naming.RolePostgresData}) + assert.NilError(t, err) + selector.Add(*dataRoleReq) + assert.NilError(t, tClient.List(ctx, dataPVCs, &client.ListOptions{ + LabelSelector: selector, + Namespace: cluster.Namespace, + })) + + assert.Assert(t, tc.result.pvcCount == len(dataPVCs.Items)) + + if tc.result.expectedClusterCondition != nil { + condition := meta.FindStatusCondition(cluster.Status.Conditions, + tc.result.expectedClusterCondition.Type) + if assert.Check(t, condition != nil) { + assert.Equal(t, tc.result.expectedClusterCondition.Status, condition.Status) + assert.Equal(t, tc.result.expectedClusterCondition.Reason, condition.Reason) + assert.Equal(t, tc.result.expectedClusterCondition.Message, condition.Message) + } + } + + if tc.result.invalidSourceCluster || tc.result.invalidSourceRepo || + 
tc.result.invalidOptions { + assert.Check(t, wait.PollUntilContextTimeout(ctx, time.Second/2, Scale(time.Second*2), false, + func(ctx context.Context) (bool, error) { + events := &corev1.EventList{} + err := tClient.List(ctx, events, &client.MatchingFields{ + "involvedObject.kind": "PostgresCluster", + "involvedObject.name": clusterName, + "involvedObject.namespace": namespace, + "reason": "InvalidDataSource", + }) + return len(events.Items) == 1, err + })) + } + }) + } + } +} + +func TestReconcileCloudBasedDataSource(t *testing.T) { + cfg, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 4) + + r := &Reconciler{} + ctx, cancel := setupManager(t, cfg, func(mgr manager.Manager) { + r = &Reconciler{ + Client: tClient, + Recorder: mgr.GetEventRecorderFor(ControllerName), + Tracer: otel.Tracer(ControllerName), + Owner: ControllerName, + } + }) + t.Cleanup(func() { teardownManager(cancel, t) }) + + namespace := setupNamespace(t, tClient).Name + + type testResult struct { + configCount, jobCount, pvcCount int + conf string + expectedClusterCondition *metav1.Condition + } + + for _, dedicated := range []bool{true, false} { + testCases := []struct { + desc string + dataSource *v1beta1.DataSource + clusterBootstrapped bool + result testResult + }{{ + desc: "initial reconcile", + dataSource: &v1beta1.DataSource{PGBackRest: &v1beta1.PGBackRestDataSource{ + Stanza: "db", + Repo: v1beta1.PGBackRestRepo{ + Name: "repo1", + }, + }}, + clusterBootstrapped: false, + result: testResult{ + configCount: 1, jobCount: 1, pvcCount: 1, + expectedClusterCondition: nil, + conf: "|\n # Generated by postgres-operator. DO NOT EDIT.\n # Your changes will not be saved.\n\n [global]\n archive-async = y\n log-path = /pgdata/pgbackrest/log\n repo1-path = /pgbackrest/repo1\n spool-path = /pgdata/pgbackrest-spool\n\n [db]\n pg1-path = /pgdata/pg13\n pg1-port = 5432\n pg1-socket-path = /tmp/postgres\n", + }, + }, { + desc: "global/configuration set", + dataSource: &v1beta1.DataSource{PGBackRest: &v1beta1.PGBackRestDataSource{ + Stanza: "db", + Repo: v1beta1.PGBackRestRepo{ + Name: "repo1", + }, + Global: map[string]string{ + "repo1-path": "elephant", + }, + }}, + clusterBootstrapped: false, + result: testResult{ + configCount: 1, jobCount: 1, pvcCount: 1, + expectedClusterCondition: nil, + conf: "|\n # Generated by postgres-operator. DO NOT EDIT.\n # Your changes will not be saved.\n\n [global]\n archive-async = y\n log-path = /pgdata/pgbackrest/log\n repo1-path = elephant\n spool-path = /pgdata/pgbackrest-spool\n\n [db]\n pg1-path = /pgdata/pg13\n pg1-port = 5432\n pg1-socket-path = /tmp/postgres\n", + }, + }, { + desc: "invalid option: stanza", + dataSource: &v1beta1.DataSource{PGBackRest: &v1beta1.PGBackRestDataSource{ + Stanza: "db", + Repo: v1beta1.PGBackRestRepo{ + Name: "repo1", + }, + Options: []string{"--stanza"}, + }}, + clusterBootstrapped: false, + result: testResult{ + configCount: 1, jobCount: 0, pvcCount: 1, + expectedClusterCondition: nil, + conf: "|\n # Generated by postgres-operator. 
DO NOT EDIT.\n # Your changes will not be saved.\n\n [global]\n archive-async = y\n log-path = /pgdata/pgbackrest/log\n repo1-path = /pgbackrest/repo1\n spool-path = /pgdata/pgbackrest-spool\n\n [db]\n pg1-path = /pgdata/pg13\n pg1-port = 5432\n pg1-socket-path = /tmp/postgres\n", + }, + }, { + desc: "cluster bootstrapped init condition missing", + dataSource: &v1beta1.DataSource{PGBackRest: &v1beta1.PGBackRestDataSource{ + Stanza: "db", + Repo: v1beta1.PGBackRestRepo{ + Name: "repo1", + }, + }}, + clusterBootstrapped: true, + result: testResult{ + configCount: 0, jobCount: 0, pvcCount: 0, + expectedClusterCondition: &metav1.Condition{ + Type: ConditionPostgresDataInitialized, + Status: metav1.ConditionTrue, + Reason: "ClusterAlreadyBootstrapped", + Message: "The cluster is already bootstrapped", + }, + conf: "|\n # Generated by postgres-operator. DO NOT EDIT.\n # Your changes will not be saved.\n\n [global]\n archive-async = y\n log-path = /pgdata/pgbackrest/log\n repo1-path = /pgbackrest/repo1\n spool-path = /pgdata/pgbackrest-spool\n\n [db]\n pg1-path = /pgdata/pg13\n pg1-port = 5432\n pg1-socket-path = /tmp/postgres\n", + }, + }} + + for i, tc := range testCases { + t.Run(tc.desc, func(t *testing.T) { + + clusterName := "hippocluster-" + strconv.Itoa(i) + if !dedicated { + clusterName = clusterName + "-no-repo" + } + clusterUID := "hippouid" + strconv.Itoa(i) + + cluster := fakePostgresCluster(clusterName, namespace, clusterUID, dedicated) + cluster.Spec.DataSource = tc.dataSource + assert.NilError(t, tClient.Create(ctx, cluster)) + if tc.clusterBootstrapped { + cluster.Status.Patroni = v1beta1.PatroniStatus{ + SystemIdentifier: "123456789", + } + } + cluster.Status.StartupInstance = "testinstance" + cluster.Status.StartupInstanceSet = "instance1" + assert.NilError(t, tClient.Status().Update(ctx, cluster)) + + var pgclusterDataSource *v1beta1.PGBackRestDataSource + if tc.dataSource != nil { + pgclusterDataSource = tc.dataSource.PGBackRest + } + err := r.reconcileCloudBasedDataSource(ctx, + cluster, + pgclusterDataSource, + "testhash", + nil, + ) + assert.NilError(t, err) + + restoreConfig := &corev1.ConfigMap{} + err = tClient.Get(ctx, + naming.AsObjectKey(naming.PGBackRestConfig(cluster)), restoreConfig) + + if tc.result.configCount == 0 { + assert.Assert(t, apierrors.IsNotFound(err), "expected NotFound, got %#v", err) + } else { + assert.NilError(t, err) + assert.Assert(t, cmp.MarshalMatches(restoreConfig.Data["pgbackrest_instance.conf"], tc.result.conf)) + } + + restoreJobs := &batchv1.JobList{} + assert.NilError(t, tClient.List(ctx, restoreJobs, &client.ListOptions{ + LabelSelector: naming.PGBackRestRestoreJobSelector(clusterName), + Namespace: cluster.Namespace, + })) + assert.Assert(t, tc.result.jobCount == len(restoreJobs.Items)) + if len(restoreJobs.Items) == 1 { + assert.Assert(t, restoreJobs.Items[0].Labels[naming.LabelStartupInstance] != "") + assert.Assert(t, restoreJobs.Items[0].Annotations[naming.PGBackRestConfigHash] != "") + } + + dataPVCs := &corev1.PersistentVolumeClaimList{} + selector, err := naming.AsSelector(naming.Cluster(cluster.Name)) + assert.NilError(t, err) + dataRoleReq, err := labels.NewRequirement(naming.LabelRole, selection.Equals, + []string{naming.RolePostgresData}) + assert.NilError(t, err) + selector.Add(*dataRoleReq) + assert.NilError(t, tClient.List(ctx, dataPVCs, &client.ListOptions{ + LabelSelector: selector, + Namespace: cluster.Namespace, + })) + + assert.Assert(t, tc.result.pvcCount == len(dataPVCs.Items)) + + if 
tc.result.expectedClusterCondition != nil { + condition := meta.FindStatusCondition(cluster.Status.Conditions, + tc.result.expectedClusterCondition.Type) + if assert.Check(t, condition != nil) { + assert.Equal(t, tc.result.expectedClusterCondition.Status, condition.Status) + assert.Equal(t, tc.result.expectedClusterCondition.Reason, condition.Reason) + assert.Equal(t, tc.result.expectedClusterCondition.Message, condition.Message) + } + } + }) + } + } +} + +func TestCopyConfigurationResources(t *testing.T) { + _, tClient := setupKubernetes(t) + ctx := context.Background() + require.ParallelCapacity(t, 2) + + r := &Reconciler{Client: tClient, Owner: client.FieldOwner(t.Name())} + + ns1 := setupNamespace(t, tClient) + ns2 := setupNamespace(t, tClient) + + secret := func(testNum string) *corev1.Secret { + return &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "source-secret" + testNum, + Namespace: ns1.Name, + }, + } + } + + configMap := func(testNum string) *corev1.ConfigMap { + return &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "source-configmap" + testNum, + Namespace: ns1.Name, + }, + } + } + + clusterUID := "hippouid" + + sourceCluster := func(testNum string) *v1beta1.PostgresCluster { + return &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "source-cluster" + testNum, + Namespace: ns1.Name, + UID: types.UID(clusterUID), + }, + Spec: v1beta1.PostgresClusterSpec{ + PostgresVersion: 13, + Image: "example.com/crunchy-postgres-ha:test", + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + Name: "instance1", + DataVolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + }}, + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Configuration: []corev1.VolumeProjection{{ + Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "source-secret" + testNum, + }, + }}, { + ConfigMap: &corev1.ConfigMapProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "source-configmap" + testNum, + }, + }}, + }, + Image: "example.com/crunchy-pgbackrest:test", + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + }}, + }, + }, + }, + } + } + + cluster := func(testNum, scName, scNamespace string) *v1beta1.PostgresCluster { + return &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "new-cluster" + testNum, + Namespace: ns2.Name, + UID: types.UID(clusterUID), + }, + Spec: v1beta1.PostgresClusterSpec{ + PostgresVersion: 13, + Image: "example.com/crunchy-postgres-ha:test", + DataSource: &v1beta1.DataSource{ + PostgresCluster: &v1beta1.PostgresClusterDataSource{ + ClusterName: scName, + ClusterNamespace: scNamespace, + RepoName: "repo1", + }, + }, + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + Name: "instance1", + DataVolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + }}, + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Image: "example.com/crunchy-pgbackrest:test", + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + }}, + }, + }, + }, + } + } + + checkSecret := func(secretName, nsName string) error { + secretCopy := 
&corev1.Secret{} + err := tClient.Get(ctx, types.NamespacedName{ + Name: secretName, + Namespace: nsName, + }, secretCopy) + return err + } + + checkConfigMap := func(configMapName, nsName string) error { + configMapCopy := &corev1.ConfigMap{} + err := tClient.Get(ctx, types.NamespacedName{ + Name: configMapName, + Namespace: nsName, + }, configMapCopy) + return err + } + + t.Run("No Secret or ConfigMap", func(t *testing.T) { + sc := sourceCluster("0") + + assert.Check(t, apierrors.IsNotFound( + r.copyConfigurationResources(ctx, cluster("0", sc.Name, sc.Namespace), sc))) + }) + t.Run("Only Secret", func(t *testing.T) { + secret := secret("1") + if err := tClient.Create(ctx, secret); err != nil { + t.Fatal(err) + } + assert.NilError(t, checkSecret(secret.Name, ns1.Name)) + + sc := sourceCluster("1") + + assert.Check(t, apierrors.IsNotFound( + r.copyConfigurationResources(ctx, cluster("1", sc.Name, sc.Namespace), sc))) + }) + t.Run("Only ConfigMap", func(t *testing.T) { + configMap := configMap("2") + if err := tClient.Create(ctx, configMap); err != nil { + t.Fatal(err) + } + assert.NilError(t, checkConfigMap(configMap.Name, ns1.Name)) + + sc := sourceCluster("2") + + assert.Check(t, apierrors.IsNotFound( + r.copyConfigurationResources(ctx, cluster("2", sc.Name, sc.Namespace), sc))) + }) + t.Run("Secret and ConfigMap, neither optional", func(t *testing.T) { + secret := secret("3") + if err := tClient.Create(ctx, secret); err != nil { + t.Fatal(err) + } + assert.NilError(t, checkSecret(secret.Name, ns1.Name)) + + configMap := configMap("3") + if err := tClient.Create(ctx, configMap); err != nil { + t.Fatal(err) + } + assert.NilError(t, checkConfigMap(configMap.Name, ns1.Name)) + + sc := sourceCluster("3") + nc := cluster("3", sc.Name, sc.Namespace) + if err := tClient.Create(ctx, nc); err != nil { + t.Fatal(err) + } + + assert.NilError(t, r.copyConfigurationResources(ctx, nc, sc)) + + assert.NilError(t, checkSecret(secret.Name+"-restorecopy-0", ns2.Name)) + assert.NilError(t, checkConfigMap(configMap.Name+"-restorecopy-1", ns2.Name)) + }) + t.Run("Secret and ConfigMap configured, Secret missing but optional", func(t *testing.T) { + secret := secret("4") + configMap := configMap("4") + if err := tClient.Create(ctx, configMap); err != nil { + t.Fatal(err) + } + assert.NilError(t, checkConfigMap(configMap.Name, ns1.Name)) + + sc := sourceCluster("4") + sc.Spec.Backups.PGBackRest.Configuration[0].Secret.Optional = initialize.Bool(true) + + nc := cluster("4", sc.Name, sc.Namespace) + if err := tClient.Create(ctx, nc); err != nil { + t.Fatal(err) + } + + assert.NilError(t, r.copyConfigurationResources(ctx, nc, sc)) + + assert.Check(t, apierrors.IsNotFound(checkSecret(secret.Name+"-restorecopy-0", ns2.Name))) + assert.NilError(t, checkConfigMap(configMap.Name+"-restorecopy-1", ns2.Name)) + }) + t.Run("Secret and ConfigMap configured, ConfigMap missing but optional", func(t *testing.T) { + secret := secret("5") + configMap := configMap("5") + if err := tClient.Create(ctx, secret); err != nil { + t.Fatal(err) + } + assert.NilError(t, checkSecret(secret.Name, ns1.Name)) + + sc := sourceCluster("5") + sc.Spec.Backups.PGBackRest.Configuration[1].ConfigMap.Optional = initialize.Bool(true) + + nc := cluster("5", sc.Name, sc.Namespace) + if err := tClient.Create(ctx, nc); err != nil { + t.Fatal(err) + } + + assert.NilError(t, r.copyConfigurationResources(ctx, nc, sc)) + + assert.NilError(t, checkSecret(secret.Name+"-restorecopy-0", ns2.Name)) + assert.Check(t, 
apierrors.IsNotFound(checkConfigMap(configMap.Name+"-restorecopy-1", ns2.Name))) + }) + t.Run("Secret and ConfigMap configured, both optional", func(t *testing.T) { + secret := secret("6") + configMap := configMap("6") + sc := sourceCluster("6") + sc.Spec.Backups.PGBackRest.Configuration[0].Secret.Optional = initialize.Bool(true) + sc.Spec.Backups.PGBackRest.Configuration[1].ConfigMap.Optional = initialize.Bool(true) + + nc := cluster("6", sc.Name, sc.Namespace) + if err := tClient.Create(ctx, nc); err != nil { + t.Fatal(err) + } + + assert.NilError(t, r.copyConfigurationResources(ctx, nc, sc)) + + assert.Assert(t, apierrors.IsNotFound(checkSecret(secret.Name+"-restorecopy-0", ns2.Name))) + assert.Assert(t, apierrors.IsNotFound(checkConfigMap(configMap.Name+"-restorecopy-1", ns2.Name))) + }) +} + +func TestGenerateBackupJobIntent(t *testing.T) { + ctx := context.Background() + t.Run("empty", func(t *testing.T) { + spec := generateBackupJobSpecIntent(ctx, + &v1beta1.PostgresCluster{}, v1beta1.PGBackRestRepo{}, + "", + nil, nil, + ) + assert.Assert(t, cmp.MarshalMatches(spec.Template.Spec, ` +containers: +- command: + - /opt/crunchy/bin/pgbackrest + env: + - name: COMMAND + value: backup + - name: COMMAND_OPTS + value: --stanza=db --repo= + - name: COMPARE_HASH + value: "true" + - name: CONTAINER + value: pgbackrest + - name: NAMESPACE + - name: SELECTOR + value: postgres-operator.crunchydata.com/cluster=,postgres-operator.crunchydata.com/pgbackrest=,postgres-operator.crunchydata.com/pgbackrest-dedicated= + name: pgbackrest + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true +enableServiceLinks: false +restartPolicy: Never +securityContext: + fsGroupChangePolicy: OnRootMismatch +volumes: +- name: pgbackrest-config + projected: + sources: + - configMap: + items: + - key: pgbackrest_repo.conf + path: pgbackrest_repo.conf + - key: config-hash + path: config-hash + - key: pgbackrest-server.conf + path: ~postgres-operator_server.conf + name: -pgbackrest-config + - secret: + items: + - key: pgbackrest.ca-roots + path: ~postgres-operator/tls-ca.crt + - key: pgbackrest-client.crt + path: ~postgres-operator/client-tls.crt + - key: pgbackrest-client.key + mode: 384 + path: ~postgres-operator/client-tls.key + name: -pgbackrest + `)) + }) + + t.Run("ImagePullPolicy", func(t *testing.T) { + cluster := &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + ImagePullPolicy: corev1.PullAlways, + }, + } + job := generateBackupJobSpecIntent(ctx, + cluster, v1beta1.PGBackRestRepo{}, + "", + nil, nil, + ) + assert.Equal(t, job.Template.Spec.Containers[0].ImagePullPolicy, corev1.PullAlways) + }) + + t.Run("Resources", func(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + + t.Run("Resources not defined in jobs", func(t *testing.T) { + cluster.Spec.Backups = v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{}, + } + job := generateBackupJobSpecIntent(ctx, + cluster, v1beta1.PGBackRestRepo{}, + "", + nil, nil, + ) + assert.DeepEqual(t, job.Template.Spec.Containers[0].Resources, + corev1.ResourceRequirements{}) + }) + + t.Run("Resources defined", func(t *testing.T) { + cluster.Spec.Backups.PGBackRest.Jobs = &v1beta1.BackupJobs{ + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: 
resource.MustParse("1m"), + }, + }, + } + job := generateBackupJobSpecIntent(ctx, + cluster, v1beta1.PGBackRestRepo{}, + "", + nil, nil, + ) + assert.DeepEqual(t, job.Template.Spec.Containers[0].Resources, + corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("1m"), + }}, + ) + }) + }) + + t.Run("Affinity", func(t *testing.T) { + affinity := &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{{ + MatchExpressions: []corev1.NodeSelectorRequirement{{ + Key: "key", + Operator: "Exist", + }}, + }}, + }, + }, + } + + cluster := &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Jobs: &v1beta1.BackupJobs{ + Affinity: affinity, + }, + }, + }, + }, + } + job := generateBackupJobSpecIntent(ctx, + cluster, v1beta1.PGBackRestRepo{}, + "", + nil, nil, + ) + assert.Equal(t, job.Template.Spec.Affinity, affinity) + }) + + t.Run("PriorityClassName", func(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + cluster.Spec.Backups.PGBackRest.Jobs = &v1beta1.BackupJobs{ + PriorityClassName: initialize.String("some-priority-class"), + } + job := generateBackupJobSpecIntent(ctx, + cluster, v1beta1.PGBackRestRepo{}, + "", + nil, nil, + ) + assert.Equal(t, job.Template.Spec.PriorityClassName, "some-priority-class") + }) + + t.Run("Tolerations", func(t *testing.T) { + tolerations := []corev1.Toleration{{ + Key: "key", + Operator: "Exist", + }} + + cluster := &v1beta1.PostgresCluster{} + cluster.Spec.Backups.PGBackRest.Jobs = &v1beta1.BackupJobs{ + Tolerations: tolerations, + } + job := generateBackupJobSpecIntent(ctx, + cluster, v1beta1.PGBackRestRepo{}, + "", + nil, nil, + ) + assert.DeepEqual(t, job.Template.Spec.Tolerations, tolerations) + }) + + t.Run("TTLSecondsAfterFinished", func(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + + t.Run("Undefined", func(t *testing.T) { + cluster.Spec.Backups.PGBackRest.Jobs = nil + + spec := generateBackupJobSpecIntent(ctx, + cluster, v1beta1.PGBackRestRepo{}, "", nil, nil, + ) + assert.Assert(t, spec.TTLSecondsAfterFinished == nil) + + cluster.Spec.Backups.PGBackRest.Jobs = &v1beta1.BackupJobs{} + + spec = generateBackupJobSpecIntent(ctx, + cluster, v1beta1.PGBackRestRepo{}, "", nil, nil, + ) + assert.Assert(t, spec.TTLSecondsAfterFinished == nil) + }) + + t.Run("Zero", func(t *testing.T) { + cluster.Spec.Backups.PGBackRest.Jobs = &v1beta1.BackupJobs{ + TTLSecondsAfterFinished: initialize.Int32(0), + } + + spec := generateBackupJobSpecIntent(ctx, + cluster, v1beta1.PGBackRestRepo{}, "", nil, nil, + ) + if assert.Check(t, spec.TTLSecondsAfterFinished != nil) { + assert.Equal(t, *spec.TTLSecondsAfterFinished, int32(0)) + } + }) + + t.Run("Positive", func(t *testing.T) { + cluster.Spec.Backups.PGBackRest.Jobs = &v1beta1.BackupJobs{ + TTLSecondsAfterFinished: initialize.Int32(100), + } + + spec := generateBackupJobSpecIntent(ctx, + cluster, v1beta1.PGBackRestRepo{}, "", nil, nil, + ) + if assert.Check(t, spec.TTLSecondsAfterFinished != nil) { + assert.Equal(t, *spec.TTLSecondsAfterFinished, int32(100)) + } + }) + }) +} + +func TestGenerateRepoHostIntent(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ctx := context.Background() + r := Reconciler{Client: cc} + + t.Run("empty", func(t *testing.T) { + _, err := r.generateRepoHostIntent(ctx, &v1beta1.PostgresCluster{}, "", 
&RepoResources{}, + &observedInstances{}) + assert.NilError(t, err) + }) + + cluster := &v1beta1.PostgresCluster{} + sts, err := r.generateRepoHostIntent(ctx, cluster, "", &RepoResources{}, &observedInstances{}) + assert.NilError(t, err) + + t.Run("ServiceAccount", func(t *testing.T) { + assert.Equal(t, sts.Spec.Template.Spec.ServiceAccountName, "") + if assert.Check(t, sts.Spec.Template.Spec.AutomountServiceAccountToken != nil) { + assert.Equal(t, *sts.Spec.Template.Spec.AutomountServiceAccountToken, false) + } + }) + + t.Run("Replicas", func(t *testing.T) { + assert.Equal(t, *sts.Spec.Replicas, int32(1)) + }) + + t.Run("PG instances observed, do not shutdown repo host", func(t *testing.T) { + cluster := &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + Shutdown: initialize.Bool(true), + }, + } + observed := &observedInstances{forCluster: []*Instance{{Pods: []*corev1.Pod{{}}}}} + sts, err := r.generateRepoHostIntent(ctx, cluster, "", &RepoResources{}, observed) + assert.NilError(t, err) + assert.Equal(t, *sts.Spec.Replicas, int32(1)) + }) + + t.Run("No PG instances observed, shutdown repo host", func(t *testing.T) { + cluster := &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + Shutdown: initialize.Bool(true), + }, + } + observed := &observedInstances{forCluster: []*Instance{{}}} + sts, err := r.generateRepoHostIntent(ctx, cluster, "", &RepoResources{}, observed) + assert.NilError(t, err) + assert.Equal(t, *sts.Spec.Replicas, int32(0)) + }) +} + +func TestGenerateRestoreJobIntent(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + r := Reconciler{ + Client: cc, + } + + t.Run("empty", func(t *testing.T) { + err := r.generateRestoreJobIntent(&v1beta1.PostgresCluster{}, "", "", + []string{}, []corev1.VolumeMount{}, []corev1.Volume{}, + &v1beta1.PostgresClusterDataSource{}, &batchv1.Job{}) + assert.NilError(t, err) + }) + + configHash := "hash" + instanceName := "name" + cmd := []string{"cmd", "blah"} + volumeMounts := []corev1.VolumeMount{{ + Name: "mount", + }} + volumes := []corev1.Volume{{ + Name: "volume", + }} + dataSource := &v1beta1.PostgresClusterDataSource{ + // ClusterName/Namespace, Repo, and Options are tested in + // TestReconcilePostgresClusterDataSource + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + Affinity: &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{{ + MatchExpressions: []corev1.NodeSelectorRequirement{{ + Key: "key", + Operator: "Exist", + }}, + }}, + }, + }, + }, + Tolerations: []corev1.Toleration{{ + Key: "key", + Operator: "Exist", + }}, + PriorityClassName: initialize.String("some-priority-class"), + } + cluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test", + }, + Spec: v1beta1.PostgresClusterSpec{ + Metadata: &v1beta1.Metadata{ + Labels: map[string]string{"Global": "test"}, + Annotations: map[string]string{"Global": "test"}, + }, + Backups: v1beta1.Backups{PGBackRest: v1beta1.PGBackRestArchive{ + Metadata: &v1beta1.Metadata{ + Labels: map[string]string{"Backrest": "test"}, + Annotations: map[string]string{"Backrest": "test"}, + }, + }}, + Image: "image", + ImagePullSecrets: []corev1.LocalObjectReference{{Name: "Secret"}}, + ImagePullPolicy: corev1.PullAlways, + }, + } + + for _, openshift := range []bool{true, false} { + cluster.Spec.OpenShift = 
initialize.Bool(openshift) + + job := &batchv1.Job{} + err := r.generateRestoreJobIntent(cluster, configHash, instanceName, + cmd, volumeMounts, volumes, dataSource, job) + assert.NilError(t, err, job) + + t.Run(fmt.Sprintf("openshift-%v", openshift), func(t *testing.T) { + t.Run("ObjectMeta", func(t *testing.T) { + t.Run("Name", func(t *testing.T) { + assert.Equal(t, job.ObjectMeta.Name, + naming.PGBackRestRestoreJob(cluster).Name) + }) + t.Run("Namespace", func(t *testing.T) { + assert.Equal(t, job.ObjectMeta.Namespace, + naming.PGBackRestRestoreJob(cluster).Namespace) + }) + t.Run("Annotations", func(t *testing.T) { + // configHash is defined as an annotation on the job + annotations := labels.Set(job.GetAnnotations()) + assert.Assert(t, annotations.Has("Global")) + assert.Assert(t, annotations.Has("Backrest")) + assert.Equal(t, annotations.Get(naming.PGBackRestConfigHash), configHash) + }) + t.Run("Labels", func(t *testing.T) { + // instanceName is defined as a label on the job + label := labels.Set(job.GetLabels()) + assert.Equal(t, label.Get("Global"), "test") + assert.Equal(t, label.Get("Backrest"), "test") + assert.Equal(t, label.Get(naming.LabelStartupInstance), instanceName) + }) + }) + t.Run("Spec", func(t *testing.T) { + t.Run("Template", func(t *testing.T) { + t.Run("ObjectMeta", func(t *testing.T) { + t.Run("Annotations", func(t *testing.T) { + annotations := labels.Set(job.Spec.Template.GetAnnotations()) + assert.Assert(t, annotations.Has("Global")) + assert.Assert(t, annotations.Has("Backrest")) + assert.Equal(t, annotations.Get(naming.PGBackRestConfigHash), configHash) + }) + t.Run("Labels", func(t *testing.T) { + label := labels.Set(job.Spec.Template.GetLabels()) + assert.Equal(t, label.Get("Global"), "test") + assert.Equal(t, label.Get("Backrest"), "test") + assert.Equal(t, label.Get(naming.LabelStartupInstance), instanceName) + }) + }) + t.Run("Spec", func(t *testing.T) { + t.Run("Containers", func(t *testing.T) { + assert.Assert(t, len(job.Spec.Template.Spec.Containers) == 1) + t.Run("Command", func(t *testing.T) { + assert.DeepEqual(t, job.Spec.Template.Spec.Containers[0].Command, + []string{"cmd", "blah"}) + }) + t.Run("Image", func(t *testing.T) { + assert.Equal(t, job.Spec.Template.Spec.Containers[0].Image, + "image") + assert.Equal(t, job.Spec.Template.Spec.Containers[0].ImagePullPolicy, + corev1.PullAlways) + }) + t.Run("Name", func(t *testing.T) { + assert.Equal(t, job.Spec.Template.Spec.Containers[0].Name, + naming.PGBackRestRestoreContainerName) + }) + t.Run("VolumeMount", func(t *testing.T) { + assert.DeepEqual(t, job.Spec.Template.Spec.Containers[0].VolumeMounts, + []corev1.VolumeMount{{ + Name: "mount", + }}) + }) + t.Run("Env", func(t *testing.T) { + assert.DeepEqual(t, job.Spec.Template.Spec.Containers[0].Env, + []corev1.EnvVar{{Name: "PGHOST", Value: "/tmp"}}) + }) + t.Run("SecurityContext", func(t *testing.T) { + assert.DeepEqual(t, job.Spec.Template.Spec.Containers[0].SecurityContext, + initialize.RestrictedSecurityContext()) + }) + t.Run("Resources", func(t *testing.T) { + assert.DeepEqual(t, job.Spec.Template.Spec.Containers[0].Resources, + dataSource.Resources) + }) + }) + t.Run("RestartPolicy", func(t *testing.T) { + assert.Equal(t, job.Spec.Template.Spec.RestartPolicy, + corev1.RestartPolicyNever) + }) + t.Run("Volumes", func(t *testing.T) { + assert.DeepEqual(t, job.Spec.Template.Spec.Volumes, + []corev1.Volume{{ + Name: "volume", + }}) + }) + t.Run("Affinity", func(t *testing.T) { + assert.DeepEqual(t, job.Spec.Template.Spec.Affinity, + 
dataSource.Affinity) + }) + t.Run("Tolerations", func(t *testing.T) { + assert.DeepEqual(t, job.Spec.Template.Spec.Tolerations, + dataSource.Tolerations) + }) + t.Run("Pod Priority Class", func(t *testing.T) { + assert.DeepEqual(t, job.Spec.Template.Spec.PriorityClassName, + "some-priority-class") + }) + t.Run("ImagePullSecret", func(t *testing.T) { + assert.DeepEqual(t, job.Spec.Template.Spec.ImagePullSecrets, + []corev1.LocalObjectReference{{ + Name: "Secret", + }}) + }) + t.Run("PodSecurityContext", func(t *testing.T) { + assert.Assert(t, job.Spec.Template.Spec.SecurityContext != nil) + }) + t.Run("EnableServiceLinks", func(t *testing.T) { + if assert.Check(t, job.Spec.Template.Spec.EnableServiceLinks != nil) { + assert.Equal(t, *job.Spec.Template.Spec.EnableServiceLinks, false) + } + }) + t.Run("ServiceAccount", func(t *testing.T) { + assert.Equal(t, job.Spec.Template.Spec.ServiceAccountName, "test-instance") + if assert.Check(t, job.Spec.Template.Spec.AutomountServiceAccountToken != nil) { + assert.Equal(t, *job.Spec.Template.Spec.AutomountServiceAccountToken, false) + } + }) + }) + }) + }) + }) + } +} + +func TestObserveRestoreEnv(t *testing.T) { + ctx := context.Background() + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{Client: tClient, Owner: client.FieldOwner(t.Name())} + namespace := setupNamespace(t, tClient).Name + + generateJob := func(clusterName string, completed, failed *bool) *batchv1.Job { + + cluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName, + Namespace: namespace, + }, + } + meta := naming.PGBackRestRestoreJob(cluster) + labels := naming.PGBackRestRestoreJobLabels(cluster.Name) + meta.Labels = labels + meta.Annotations = map[string]string{naming.PGBackRestConfigHash: "testhash"} + + restoreJob := &batchv1.Job{ + ObjectMeta: meta, + Spec: batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + ObjectMeta: meta, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{ + Image: "test", + Name: naming.PGBackRestRestoreContainerName, + }}, + RestartPolicy: corev1.RestartPolicyNever, + }, + }, + }, + } + + if completed != nil { + if *completed { + restoreJob.Status.Conditions = append(restoreJob.Status.Conditions, batchv1.JobCondition{ + Type: batchv1.JobComplete, + Status: corev1.ConditionTrue, + Reason: "test", + Message: "test", + }) + } else { + restoreJob.Status.Conditions = append(restoreJob.Status.Conditions, batchv1.JobCondition{ + Type: batchv1.JobComplete, + Status: corev1.ConditionFalse, + Reason: "test", + Message: "test", + }) + } + } else if failed != nil { + if *failed { + restoreJob.Status.Conditions = append(restoreJob.Status.Conditions, batchv1.JobCondition{ + Type: batchv1.JobFailed, + Status: corev1.ConditionTrue, + Reason: "test", + Message: "test", + }) + } else { + restoreJob.Status.Conditions = append(restoreJob.Status.Conditions, batchv1.JobCondition{ + Type: batchv1.JobFailed, + Status: corev1.ConditionFalse, + Reason: "test", + Message: "test", + }) + } + } + + return restoreJob + } + + type testResult struct { + foundRestoreJob bool + endpointCount int + expectedClusterCondition *metav1.Condition + } + + for _, dedicated := range []bool{true, false} { + testCases := []struct { + desc string + createResources func(t *testing.T, cluster *v1beta1.PostgresCluster) + result testResult + }{{ + desc: "restore job and all patroni endpoints exist", + createResources: func(t *testing.T, cluster *v1beta1.PostgresCluster) { + fakeLeaderEP := &corev1.Endpoints{} + 
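// Create stand-ins for the three Patroni-managed Endpoints (leader,
// distributed configuration, and failover trigger) so that observeRestoreEnv
// finds DCS artifacts alongside the restore Job.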
fakeLeaderEP.ObjectMeta = naming.PatroniLeaderEndpoints(cluster) + fakeLeaderEP.ObjectMeta.Namespace = namespace + assert.NilError(t, r.Client.Create(ctx, fakeLeaderEP)) + fakeDCSEP := &corev1.Endpoints{} + fakeDCSEP.ObjectMeta = naming.PatroniDistributedConfiguration(cluster) + fakeDCSEP.ObjectMeta.Namespace = namespace + assert.NilError(t, r.Client.Create(ctx, fakeDCSEP)) + fakeFailoverEP := &corev1.Endpoints{} + fakeFailoverEP.ObjectMeta = naming.PatroniTrigger(cluster) + fakeFailoverEP.ObjectMeta.Namespace = namespace + assert.NilError(t, r.Client.Create(ctx, fakeFailoverEP)) + + job := generateJob(cluster.Name, initialize.Bool(false), initialize.Bool(false)) + assert.NilError(t, r.Client.Create(ctx, job)) + }, + result: testResult{ + foundRestoreJob: true, + endpointCount: 3, + expectedClusterCondition: nil, + }, + }, { + desc: "patroni endpoints only exist", + createResources: func(t *testing.T, cluster *v1beta1.PostgresCluster) { + fakeLeaderEP := &corev1.Endpoints{} + fakeLeaderEP.ObjectMeta = naming.PatroniLeaderEndpoints(cluster) + fakeLeaderEP.ObjectMeta.Namespace = namespace + assert.NilError(t, r.Client.Create(ctx, fakeLeaderEP)) + fakeDCSEP := &corev1.Endpoints{} + fakeDCSEP.ObjectMeta = naming.PatroniDistributedConfiguration(cluster) + fakeDCSEP.ObjectMeta.Namespace = namespace + assert.NilError(t, r.Client.Create(ctx, fakeDCSEP)) + fakeFailoverEP := &corev1.Endpoints{} + fakeFailoverEP.ObjectMeta = naming.PatroniTrigger(cluster) + fakeFailoverEP.ObjectMeta.Namespace = namespace + assert.NilError(t, r.Client.Create(ctx, fakeFailoverEP)) + }, + result: testResult{ + foundRestoreJob: false, + endpointCount: 3, + expectedClusterCondition: nil, + }, + }, { + desc: "restore job only exists", + createResources: func(t *testing.T, cluster *v1beta1.PostgresCluster) { + job := generateJob(cluster.Name, initialize.Bool(false), initialize.Bool(false)) + assert.NilError(t, r.Client.Create(ctx, job)) + }, + result: testResult{ + foundRestoreJob: true, + endpointCount: 0, + expectedClusterCondition: nil, + }, + }, { + desc: "restore job completed data init condition true", + createResources: func(t *testing.T, cluster *v1beta1.PostgresCluster) { + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("requires mocking of Job conditions") + } + job := generateJob(cluster.Name, initialize.Bool(true), nil) + assert.NilError(t, r.Client.Create(ctx, job.DeepCopy())) + assert.NilError(t, r.Client.Status().Update(ctx, job)) + }, + result: testResult{ + foundRestoreJob: true, + endpointCount: 0, + expectedClusterCondition: &metav1.Condition{ + Type: ConditionPostgresDataInitialized, + Status: metav1.ConditionTrue, + Reason: "PGBackRestRestoreComplete", + Message: "pgBackRest restore completed successfully", + }, + }, + }, { + desc: "restore job failed data init condition false", + createResources: func(t *testing.T, cluster *v1beta1.PostgresCluster) { + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("requires mocking of Job conditions") + } + job := generateJob(cluster.Name, nil, initialize.Bool(true)) + assert.NilError(t, r.Client.Create(ctx, job.DeepCopy())) + assert.NilError(t, r.Client.Status().Update(ctx, job)) + }, + result: testResult{ + foundRestoreJob: true, + endpointCount: 0, + expectedClusterCondition: &metav1.Condition{ + Type: ConditionPostgresDataInitialized, + Status: metav1.ConditionFalse, + Reason: "PGBackRestRestoreFailed", + Message: "pgBackRest restore failed", + }, + }, + }} + + for i, tc := range testCases { + t.Run(tc.desc, 
func(t *testing.T) { + + clusterName := "observe-restore-env" + strconv.Itoa(i) + if !dedicated { + clusterName = clusterName + "-no-repo" + } + clusterUID := clusterName + cluster := fakePostgresCluster(clusterName, namespace, clusterUID, dedicated) + tc.createResources(t, cluster) + + endpoints, job, err := r.observeRestoreEnv(ctx, cluster) + assert.NilError(t, err) + + assert.Assert(t, tc.result.foundRestoreJob == (job != nil)) + assert.Assert(t, tc.result.endpointCount == len(endpoints)) + + if tc.result.expectedClusterCondition != nil { + condition := meta.FindStatusCondition(cluster.Status.Conditions, + tc.result.expectedClusterCondition.Type) + if assert.Check(t, condition != nil) { + assert.Equal(t, tc.result.expectedClusterCondition.Status, condition.Status) + assert.Equal(t, tc.result.expectedClusterCondition.Reason, condition.Reason) + assert.Equal(t, tc.result.expectedClusterCondition.Message, condition.Message) + } + } + }) + } + } +} + +func TestPrepareForRestore(t *testing.T) { + ctx := context.Background() + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{Client: tClient, Owner: client.FieldOwner(t.Name())} + namespace := setupNamespace(t, tClient).Name + + generateJob := func(clusterName string) *batchv1.Job { + + cluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName, + Namespace: namespace, + }, + } + meta := naming.PGBackRestRestoreJob(cluster) + labels := naming.PGBackRestRestoreJobLabels(cluster.Name) + meta.Labels = labels + meta.Annotations = map[string]string{naming.PGBackRestConfigHash: "testhash"} + + restoreJob := &batchv1.Job{ + ObjectMeta: meta, + Spec: batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + ObjectMeta: meta, + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{ + Image: "test", + Name: naming.PGBackRestRestoreContainerName, + }}, + RestartPolicy: corev1.RestartPolicyNever, + }, + }, + }, + } + + return restoreJob + } + + type testResult struct { + restoreJobExists bool + endpointCount int + expectedClusterCondition *metav1.Condition + } + const primaryInstanceName = "primary-instance" + const primaryInstanceSetName = "primary-instance-set" + + for _, dedicated := range []bool{true, false} { + testCases := []struct { + desc string + createResources func(t *testing.T, cluster *v1beta1.PostgresCluster) (*batchv1.Job, []corev1.Endpoints) + fakeObserved *observedInstances + result testResult + }{{ + desc: "remove restore jobs", + createResources: func(t *testing.T, + cluster *v1beta1.PostgresCluster) (*batchv1.Job, []corev1.Endpoints) { + job := generateJob(cluster.Name) + assert.NilError(t, r.Client.Create(ctx, job)) + return job, nil + }, + result: testResult{ + restoreJobExists: false, + endpointCount: 0, + expectedClusterCondition: &metav1.Condition{ + Type: ConditionPGBackRestRestoreProgressing, + Status: metav1.ConditionTrue, + Reason: "RestoreInPlaceRequested", + Message: "Preparing cluster to restore in-place: removing restore job", + }, + }, + }, { + desc: "remove patroni endpoints", + createResources: func(t *testing.T, + cluster *v1beta1.PostgresCluster) (*batchv1.Job, []corev1.Endpoints) { + fakeLeaderEP := corev1.Endpoints{} + fakeLeaderEP.ObjectMeta = naming.PatroniLeaderEndpoints(cluster) + fakeLeaderEP.ObjectMeta.Namespace = namespace + assert.NilError(t, r.Client.Create(ctx, &fakeLeaderEP)) + fakeDCSEP := corev1.Endpoints{} + fakeDCSEP.ObjectMeta = naming.PatroniDistributedConfiguration(cluster) + fakeDCSEP.ObjectMeta.Namespace = namespace + 
assert.NilError(t, r.Client.Create(ctx, &fakeDCSEP)) + fakeFailoverEP := corev1.Endpoints{} + fakeFailoverEP.ObjectMeta = naming.PatroniTrigger(cluster) + fakeFailoverEP.ObjectMeta.Namespace = namespace + assert.NilError(t, r.Client.Create(ctx, &fakeFailoverEP)) + return nil, []corev1.Endpoints{fakeLeaderEP, fakeDCSEP, fakeFailoverEP} + }, + result: testResult{ + restoreJobExists: false, + endpointCount: 0, + expectedClusterCondition: &metav1.Condition{ + Type: ConditionPGBackRestRestoreProgressing, + Status: metav1.ConditionTrue, + Reason: "RestoreInPlaceRequested", + Message: "Preparing cluster to restore in-place: removing DCS", + }, + }, + }, { + desc: "cluster fully prepared", + createResources: func(t *testing.T, + cluster *v1beta1.PostgresCluster) (*batchv1.Job, []corev1.Endpoints) { + return nil, []corev1.Endpoints{} + }, + result: testResult{ + restoreJobExists: false, + endpointCount: 0, + expectedClusterCondition: &metav1.Condition{ + Type: ConditionPGBackRestRestoreProgressing, + Status: metav1.ConditionTrue, + Reason: ReasonReadyForRestore, + Message: "Restoring cluster in-place", + }, + }, + }, { + desc: "primary as startup instance", + fakeObserved: &observedInstances{forCluster: []*Instance{{ + Name: primaryInstanceName, + Spec: &v1beta1.PostgresInstanceSetSpec{Name: primaryInstanceSetName}, + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{naming.LabelRole: naming.RolePatroniLeader}, + }, + }}}, + }}, + createResources: func(t *testing.T, + cluster *v1beta1.PostgresCluster) (*batchv1.Job, []corev1.Endpoints) { + return nil, []corev1.Endpoints{} + }, + result: testResult{ + restoreJobExists: false, + endpointCount: 0, + expectedClusterCondition: &metav1.Condition{ + Type: ConditionPGBackRestRestoreProgressing, + Status: metav1.ConditionTrue, + Reason: ReasonReadyForRestore, + Message: "Restoring cluster in-place", + }, + }, + }} + + for i, tc := range testCases { + name := tc.desc + if !dedicated { + name = tc.desc + "-no-repo" + } + t.Run(name, func(t *testing.T) { + + clusterName := "prepare-for-restore-" + strconv.Itoa(i) + if !dedicated { + clusterName = clusterName + "-no-repo" + } + clusterUID := clusterName + cluster := fakePostgresCluster(clusterName, namespace, clusterUID, dedicated) + cluster.Status.Patroni = v1beta1.PatroniStatus{SystemIdentifier: "abcde12345"} + cluster.Status.Proxy.PGBouncer.PostgreSQLRevision = "abcde12345" + cluster.Status.Monitoring.ExporterConfiguration = "abcde12345" + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + ObservedGeneration: cluster.GetGeneration(), + Type: ConditionPostgresDataInitialized, + Status: metav1.ConditionTrue, + Reason: "PGBackRestRestoreComplete", + Message: "pgBackRest restore completed successfully", + }) + + job, endpoints := tc.createResources(t, cluster) + restoreID := "test-restore-id" + + fakeObserved := &observedInstances{forCluster: []*Instance{}} + if tc.fakeObserved != nil { + fakeObserved = tc.fakeObserved + } + assert.NilError(t, r.prepareForRestore(ctx, cluster, fakeObserved, endpoints, + job, restoreID)) + + var primaryInstance *Instance + for i, instance := range fakeObserved.forCluster { + isPrimary, _ := instance.IsPrimary() + if isPrimary { + primaryInstance = fakeObserved.forCluster[i] + } + } + + if primaryInstance != nil { + assert.Assert(t, cluster.Status.StartupInstance == primaryInstanceName) + } else { + assert.Equal(t, cluster.Status.StartupInstance, + naming.GenerateStartupInstance(cluster, &cluster.Spec.InstanceSets[0]).Name) + } 
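// Check which Patroni Endpoints still exist after prepareForRestore. A
// NotFound error is tolerated because removing these objects is the expected
// outcome while preparing an in-place restore; any other error fails the test.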
+ + leaderEP, dcsEP, failoverEP := corev1.Endpoints{}, corev1.Endpoints{}, corev1.Endpoints{} + currentEndpoints := []corev1.Endpoints{} + if err := r.Client.Get(ctx, naming.AsObjectKey(naming.PatroniLeaderEndpoints(cluster)), + &leaderEP); err != nil { + assert.NilError(t, client.IgnoreNotFound(err)) + } else { + currentEndpoints = append(currentEndpoints, leaderEP) + } + if err := r.Client.Get(ctx, naming.AsObjectKey(naming.PatroniDistributedConfiguration(cluster)), + &dcsEP); err != nil { + assert.NilError(t, client.IgnoreNotFound(err)) + } else { + currentEndpoints = append(currentEndpoints, dcsEP) + } + if err := r.Client.Get(ctx, naming.AsObjectKey(naming.PatroniTrigger(cluster)), + &failoverEP); err != nil { + assert.NilError(t, client.IgnoreNotFound(err)) + } else { + currentEndpoints = append(currentEndpoints, failoverEP) + } + + restoreJobs := &batchv1.JobList{} + assert.NilError(t, r.Client.List(ctx, restoreJobs, &client.ListOptions{ + Namespace: cluster.Namespace, + LabelSelector: naming.PGBackRestRestoreJobSelector(cluster.GetName()), + })) + + assert.Assert(t, tc.result.endpointCount == len(currentEndpoints)) + assert.Assert(t, tc.result.restoreJobExists == (len(restoreJobs.Items) == 1)) + + if tc.result.expectedClusterCondition != nil { + condition := meta.FindStatusCondition(cluster.Status.Conditions, + tc.result.expectedClusterCondition.Type) + if assert.Check(t, condition != nil) { + assert.Equal(t, tc.result.expectedClusterCondition.Status, condition.Status) + assert.Equal(t, tc.result.expectedClusterCondition.Reason, condition.Reason) + assert.Equal(t, tc.result.expectedClusterCondition.Message, condition.Message) + } + if tc.result.expectedClusterCondition.Reason == ReasonReadyForRestore { + assert.Assert(t, cluster.Status.Patroni.SystemIdentifier == "") + assert.Assert(t, cluster.Status.Proxy.PGBouncer.PostgreSQLRevision == "") + assert.Assert(t, cluster.Status.Monitoring.ExporterConfiguration == "") + assert.Assert(t, meta.FindStatusCondition(cluster.Status.Conditions, + ConditionPostgresDataInitialized) == nil) + } + } + }) + } + } +} + +func TestReconcileScheduledBackups(t *testing.T) { + cfg, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 2) + + r := &Reconciler{} + _, cancel := setupManager(t, cfg, func(mgr manager.Manager) { + r = &Reconciler{ + Client: mgr.GetClient(), + Recorder: mgr.GetEventRecorderFor(ControllerName), + Tracer: otel.Tracer(ControllerName), + Owner: ControllerName, + } + }) + t.Cleanup(func() { teardownManager(cancel, t) }) + + ns := setupNamespace(t, tClient) + sa := &corev1.ServiceAccount{ + ObjectMeta: metav1.ObjectMeta{Name: "hippo-sa"}, + } + + testCases := []struct { + // a description of the test + testDesc string + // whether or not the test only applies to configs with dedicated repo hosts + dedicatedOnly bool + // conditions to apply to the mock postgrescluster + clusterConditions map[string]metav1.ConditionStatus + // the status to apply to the mock postgrescluster + status *v1beta1.PostgresClusterStatus + // whether or not the test should expect a Job to be reconciled + expectReconcile bool + // whether or not the test should expect a Job to be requeued + expectRequeue bool + // the reason associated with the expected event for the test (can be empty if + // no event is expected) + expectedEventReason string + // the observed instances + instances *observedInstances + // CronJobs exist + cronJobs bool + }{ + { + testDesc: "should reconcile, no requeue", + clusterConditions: map[string]metav1.ConditionStatus{ + 
ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + Patroni: v1beta1.PatroniStatus{SystemIdentifier: "12345abcde"}, + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + expectReconcile: true, + expectRequeue: false, + }, { + testDesc: "should reconcile, no requeue, existing cronjob", + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + Patroni: v1beta1.PatroniStatus{SystemIdentifier: "12345abcde"}, + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + expectReconcile: true, + expectRequeue: false, + cronJobs: true, + }, { + testDesc: "cluster not bootstrapped, should not reconcile", + status: &v1beta1.PostgresClusterStatus{ + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + expectReconcile: false, + expectRequeue: false, + }, { + testDesc: "no repo host ready condition, should not reconcile", + dedicatedOnly: true, + status: &v1beta1.PostgresClusterStatus{ + Patroni: v1beta1.PatroniStatus{SystemIdentifier: "12345abcde"}, + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + expectReconcile: false, + expectRequeue: false, + }, { + testDesc: "no replica create condition, should not reconcile", + status: &v1beta1.PostgresClusterStatus{ + Patroni: v1beta1.PatroniStatus{SystemIdentifier: "12345abcde"}, + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + expectReconcile: false, + expectRequeue: false, + }, { + testDesc: "false repo host ready condition, should not reconcile", + dedicatedOnly: true, + status: &v1beta1.PostgresClusterStatus{ + Patroni: v1beta1.PatroniStatus{SystemIdentifier: "12345abcde"}, + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + expectReconcile: false, + expectRequeue: false, + }, { + testDesc: "false replica create condition, should not reconcile", + status: &v1beta1.PostgresClusterStatus{ + Patroni: v1beta1.PatroniStatus{SystemIdentifier: "12345abcde"}, + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{Name: "repo1", StanzaCreated: true}}}, + }, + expectReconcile: false, + expectRequeue: false, + }, { + testDesc: "missing repo status, should not reconcile", + clusterConditions: map[string]metav1.ConditionStatus{ + ConditionRepoHostReady: metav1.ConditionTrue, + ConditionReplicaCreate: metav1.ConditionTrue, + }, + status: &v1beta1.PostgresClusterStatus{ + Patroni: v1beta1.PatroniStatus{SystemIdentifier: "12345abcde"}, + PGBackRest: &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{}}, + }, + expectReconcile: false, + expectRequeue: false, + expectedEventReason: "InvalidBackupRepo", + }} + + for _, dedicated := range []bool{true, false} { + for i, tc := range testCases { + + var clusterName string + if !dedicated { + tc.testDesc = "no repo " + tc.testDesc + clusterName = "scheduled-backup-no-repo-" + strconv.Itoa(i) + } else { + clusterName = "scheduled-backup-" + strconv.Itoa(i) + } + + t.Run(tc.testDesc, func(t *testing.T) { + + if tc.dedicatedOnly && !dedicated { + t.Skip() + } + + ctx := context.Background() + + 
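// Create the cluster, then overwrite its status and conditions with the
// values for this case; reconcileScheduledBackups is driven by the cluster
// status, the ServiceAccount, and any existing CronJobs passed to it.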
postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), "", dedicated) + assert.NilError(t, tClient.Create(ctx, postgresCluster)) + postgresCluster.Status = *tc.status + for condition, status := range tc.clusterConditions { + meta.SetStatusCondition(&postgresCluster.Status.Conditions, metav1.Condition{ + Type: condition, Reason: "testing", Status: status}) + } + assert.NilError(t, tClient.Status().Update(ctx, postgresCluster)) + + var requeue bool + if tc.cronJobs { + existingCronJobs := []*batchv1.CronJob{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "existingcronjob-repo1-full", + Labels: map[string]string{ + naming.LabelCluster: clusterName, + naming.LabelPGBackRestCronJob: "full", + naming.LabelPGBackRestRepo: "repo1", + }}, + }, { + ObjectMeta: metav1.ObjectMeta{ + Name: "existingcronjob-repo1-incr", + Labels: map[string]string{ + naming.LabelCluster: clusterName, + naming.LabelPGBackRestCronJob: "incr", + naming.LabelPGBackRestRepo: "repo1", + }}, + }, { + ObjectMeta: metav1.ObjectMeta{ + Name: "existingcronjob-repo1-diff", + Labels: map[string]string{ + naming.LabelCluster: clusterName, + naming.LabelPGBackRestCronJob: "diff", + naming.LabelPGBackRestRepo: "repo1", + }}, + }, + } + requeue = r.reconcileScheduledBackups(ctx, postgresCluster, sa, existingCronJobs) + } else { + requeue = r.reconcileScheduledBackups(ctx, postgresCluster, sa, fakeObservedCronJobs()) + } + if !tc.expectReconcile && !tc.expectRequeue { + // expect no reconcile, no requeue + assert.Assert(t, !requeue) + + // if an event is expected, the check for it + if tc.expectedEventReason != "" { + assert.NilError(t, wait.PollUntilContextTimeout(ctx, time.Second/2, Scale(time.Second*2), false, + func(ctx context.Context) (bool, error) { + events := &corev1.EventList{} + err := tClient.List(ctx, events, &client.MatchingFields{ + "involvedObject.kind": "PostgresCluster", + "involvedObject.name": clusterName, + "involvedObject.namespace": ns.GetName(), + "involvedObject.uid": string(postgresCluster.GetUID()), + "reason": tc.expectedEventReason, + }) + return len(events.Items) == 1, err + })) + } + } else if !tc.expectReconcile && tc.expectRequeue { + // expect requeue, no reconcile + assert.Assert(t, requeue) + return + } else { + // expect reconcile, no requeue + assert.Assert(t, !requeue) + + // check for all three defined backup types + backupTypes := []string{"full", "diff", "incr"} + + for _, backupType := range backupTypes { + + var cronJobName string + if tc.cronJobs { + cronJobName = "existingcronjob-repo1-" + backupType + } else { + cronJobName = postgresCluster.Name + "-repo1-" + backupType + } + + returnedCronJob := &batchv1.CronJob{} + if err := tClient.Get(ctx, types.NamespacedName{ + Name: cronJobName, + Namespace: postgresCluster.GetNamespace(), + }, returnedCronJob); err != nil { + assert.NilError(t, err) + } + + // check returned cronjob matches set spec + assert.Equal(t, returnedCronJob.Name, cronJobName) + assert.Equal(t, returnedCronJob.Spec.Schedule, testCronSchedule) + assert.Equal(t, returnedCronJob.Spec.ConcurrencyPolicy, batchv1.ForbidConcurrent) + assert.Equal(t, returnedCronJob.Spec.JobTemplate.Spec.Template.Spec.PriorityClassName, "some-priority-class") + assert.Equal(t, returnedCronJob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].Name, + "pgbackrest") + assert.Assert(t, returnedCronJob.Spec.JobTemplate.Spec.Template.Spec.Containers[0].SecurityContext != &corev1.SecurityContext{}) + + // verify the image pull secret + if 
returnedCronJob.Spec.JobTemplate.Spec.Template.Spec.ImagePullSecrets == nil { + t.Error("image pull secret is missing tolerations") + } + + if returnedCronJob.Spec.JobTemplate.Spec.Template.Spec.ImagePullSecrets != nil { + if returnedCronJob.Spec.JobTemplate.Spec.Template.Spec.ImagePullSecrets[0].Name != + "myImagePullSecret" { + t.Error("image pull secret name is not set correctly") + } + } + } + return + } + }) + } + } +} + +func TestSetScheduledJobStatus(t *testing.T) { + ctx := context.Background() + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + r := &Reconciler{Client: tClient, Owner: client.FieldOwner(t.Name())} + + clusterName := "hippocluster" + clusterUID := "hippouid" + + ns := setupNamespace(t, tClient) + + t.Run("set scheduled backup status", func(t *testing.T) { + // create a PostgresCluster to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + + testJob := &batchv1.Job{ + TypeMeta: metav1.TypeMeta{ + Kind: "Job", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "TestJob", + Labels: map[string]string{"postgres-operator.crunchydata.com/pgbackrest-cronjob": "full"}, + }, + Status: batchv1.JobStatus{ + Active: 1, + Succeeded: 2, + Failed: 3, + }, + } + + // convert the runtime.Object to an unstructured object + unstructuredObj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(testJob) + assert.NilError(t, err) + unstructuredJob := &unstructured.Unstructured{ + Object: unstructuredObj, + } + + // add it to an unstructured list + uList := &unstructured.UnstructuredList{} + uList.Items = append(uList.Items, *unstructuredJob) + + // set the status + r.setScheduledJobStatus(ctx, postgresCluster, uList.Items) + + assert.Assert(t, len(postgresCluster.Status.PGBackRest.ScheduledBackups) > 0) + assert.Equal(t, postgresCluster.Status.PGBackRest.ScheduledBackups[0].Active, int32(1)) + assert.Equal(t, postgresCluster.Status.PGBackRest.ScheduledBackups[0].Succeeded, int32(2)) + assert.Equal(t, postgresCluster.Status.PGBackRest.ScheduledBackups[0].Failed, int32(3)) + }) + + t.Run("fail to set scheduled backup status due to missing label", func(t *testing.T) { + // create a PostgresCluster to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + + testJob := &batchv1.Job{ + TypeMeta: metav1.TypeMeta{ + Kind: "Job", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "TestJob", + }, + Status: batchv1.JobStatus{ + Active: 1, + Succeeded: 2, + Failed: 3, + }, + } + + // convert the runtime.Object to an unstructured object + unstructuredObj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(testJob) + assert.NilError(t, err) + unstructuredJob := &unstructured.Unstructured{ + Object: unstructuredObj, + } + + // add it to an unstructured list + uList := &unstructured.UnstructuredList{} + uList.Items = append(uList.Items, *unstructuredJob) + + // set the status + r.setScheduledJobStatus(ctx, postgresCluster, uList.Items) + assert.Assert(t, len(postgresCluster.Status.PGBackRest.ScheduledBackups) == 0) + }) +} + +func TestBackupsEnabled(t *testing.T) { + // Garbage collector cleans up test resources before the test completes + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("USE_EXISTING_CLUSTER: Test fails due to garbage collection") + } + + cfg, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 2) + + r := &Reconciler{} + ctx, cancel := setupManager(t, cfg, func(mgr manager.Manager) { + r = &Reconciler{ + Client: mgr.GetClient(), + Recorder: 
mgr.GetEventRecorderFor(ControllerName), + Tracer: otel.Tracer(ControllerName), + Owner: ControllerName, + } + }) + t.Cleanup(func() { teardownManager(cancel, t) }) + + t.Run("Cluster with backups, no sts can be reconciled", func(t *testing.T) { + clusterName := "hippocluster1" + clusterUID := "hippouid1" + + ns := setupNamespace(t, tClient) + + // create a PostgresCluster to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + + backupsSpecFound, backupsReconciliationAllowed, err := r.BackupsEnabled(ctx, postgresCluster) + + assert.NilError(t, err) + assert.Assert(t, backupsSpecFound) + assert.Assert(t, backupsReconciliationAllowed) + }) + + t.Run("Cluster with backups, sts can be reconciled", func(t *testing.T) { + clusterName := "hippocluster2" + clusterUID := "hippouid2" + + ns := setupNamespace(t, tClient) + + // create a PostgresCluster to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + + // create the 'observed' instances and set the leader + instances := &observedInstances{ + forCluster: []*Instance{{Name: "instance1", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{naming.LabelRole: naming.RolePatroniLeader}, + }, + Spec: corev1.PodSpec{}, + }}, + }, {Name: "instance2"}, {Name: "instance3"}}, + } + + rootCA, err := pki.NewRootCertificateAuthority() + assert.NilError(t, err) + + _, err = r.reconcilePGBackRest(ctx, postgresCluster, instances, rootCA, true) + assert.NilError(t, err) + + backupsSpecFound, backupsReconciliationAllowed, err := r.BackupsEnabled(ctx, postgresCluster) + + assert.NilError(t, err) + assert.Assert(t, backupsSpecFound) + assert.Assert(t, backupsReconciliationAllowed) + }) + + t.Run("Cluster with no backups, no sts can reconcile", func(t *testing.T) { + // create a PostgresCluster to test with + clusterName := "hippocluster3" + clusterUID := "hippouid3" + + ns := setupNamespace(t, tClient) + + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + postgresCluster.Spec.Backups = v1beta1.Backups{} + + backupsSpecFound, backupsReconciliationAllowed, err := r.BackupsEnabled(ctx, postgresCluster) + + assert.NilError(t, err) + assert.Assert(t, !backupsSpecFound) + assert.Assert(t, backupsReconciliationAllowed) + }) + + t.Run("Cluster with no backups, sts cannot be reconciled", func(t *testing.T) { + clusterName := "hippocluster4" + clusterUID := "hippouid4" + + ns := setupNamespace(t, tClient) + + // create a PostgresCluster to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + + // create the 'observed' instances and set the leader + instances := &observedInstances{ + forCluster: []*Instance{{Name: "instance1", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{naming.LabelRole: naming.RolePatroniLeader}, + }, + Spec: corev1.PodSpec{}, + }}, + }, {Name: "instance2"}, {Name: "instance3"}}, + } + + rootCA, err := pki.NewRootCertificateAuthority() + assert.NilError(t, err) + + _, err = r.reconcilePGBackRest(ctx, postgresCluster, instances, rootCA, true) + assert.NilError(t, err) + + postgresCluster.Spec.Backups = v1beta1.Backups{} + + backupsSpecFound, backupsReconciliationAllowed, err := r.BackupsEnabled(ctx, postgresCluster) + + assert.NilError(t, err) + assert.Assert(t, !backupsSpecFound) + assert.Assert(t, !backupsReconciliationAllowed) + }) + + t.Run("Cluster with no backups, sts, annotation can be reconciled", func(t 
*testing.T) { + clusterName := "hippocluster5" + clusterUID := "hippouid5" + + ns := setupNamespace(t, tClient) + + // create a PostgresCluster to test with + postgresCluster := fakePostgresCluster(clusterName, ns.GetName(), clusterUID, true) + + // create the 'observed' instances and set the leader + instances := &observedInstances{ + forCluster: []*Instance{{Name: "instance1", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{naming.LabelRole: naming.RolePatroniLeader}, + }, + Spec: corev1.PodSpec{}, + }}, + }, {Name: "instance2"}, {Name: "instance3"}}, + } + + rootCA, err := pki.NewRootCertificateAuthority() + assert.NilError(t, err) + + _, err = r.reconcilePGBackRest(ctx, postgresCluster, instances, rootCA, true) + assert.NilError(t, err) + + postgresCluster.Spec.Backups = v1beta1.Backups{} + annotations := map[string]string{ + naming.AuthorizeBackupRemovalAnnotation: "true", + } + postgresCluster.Annotations = annotations + + backupsSpecFound, backupsReconciliationAllowed, err := r.BackupsEnabled(ctx, postgresCluster) + + assert.NilError(t, err) + assert.Assert(t, !backupsSpecFound) + assert.Assert(t, backupsReconciliationAllowed) + }) +} diff --git a/internal/controller/postgrescluster/pgbouncer.go b/internal/controller/postgrescluster/pgbouncer.go new file mode 100644 index 0000000000..76207fac02 --- /dev/null +++ b/internal/controller/postgrescluster/pgbouncer.go @@ -0,0 +1,567 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "io" + + "github.com/pkg/errors" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + policyv1 "k8s.io/api/policy/v1" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pgbouncer" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// reconcilePGBouncer writes the objects necessary to run a PgBouncer Pod. +func (r *Reconciler) reconcilePGBouncer( + ctx context.Context, cluster *v1beta1.PostgresCluster, instances *observedInstances, + primaryCertificate *corev1.SecretProjection, + root *pki.RootCertificateAuthority, +) error { + var ( + configmap *corev1.ConfigMap + secret *corev1.Secret + ) + + service, err := r.reconcilePGBouncerService(ctx, cluster) + if err == nil { + configmap, err = r.reconcilePGBouncerConfigMap(ctx, cluster) + } + if err == nil { + secret, err = r.reconcilePGBouncerSecret(ctx, cluster, root, service) + } + if err == nil { + err = r.reconcilePGBouncerDeployment(ctx, cluster, primaryCertificate, configmap, secret) + } + if err == nil { + err = r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster) + } + if err == nil { + err = r.reconcilePGBouncerInPostgreSQL(ctx, cluster, instances, secret) + } + return err +} + +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={get} +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={create,delete,patch} + +// reconcilePGBouncerConfigMap writes the ConfigMap for a PgBouncer Pod. 
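// When the proxy spec does not include PgBouncer, any existing ConfigMap
// owned by the cluster is deleted; otherwise the ConfigMap content is
// generated by the pgbouncer package and applied to the cluster.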
+func (r *Reconciler) reconcilePGBouncerConfigMap( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*corev1.ConfigMap, error) { + configmap := &corev1.ConfigMap{ObjectMeta: naming.ClusterPGBouncer(cluster)} + configmap.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + + if cluster.Spec.Proxy == nil || cluster.Spec.Proxy.PGBouncer == nil { + // PgBouncer is disabled; delete the ConfigMap if it exists. Check the + // client cache first using Get. + key := client.ObjectKeyFromObject(configmap) + err := errors.WithStack(r.Client.Get(ctx, key, configmap)) + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, configmap)) + } + return nil, client.IgnoreNotFound(err) + } + + err := errors.WithStack(r.setControllerReference(cluster, configmap)) + + configmap.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetAnnotationsOrNil()) + configmap.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGBouncer, + }) + + if err == nil { + pgbouncer.ConfigMap(cluster, configmap) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, configmap)) + } + + return configmap, err +} + +// +kubebuilder:rbac:groups="",resources="pods",verbs={get,list} + +// reconcilePGBouncerInPostgreSQL writes the user and other objects needed by +// PgBouncer inside of PostgreSQL. +func (r *Reconciler) reconcilePGBouncerInPostgreSQL( + ctx context.Context, cluster *v1beta1.PostgresCluster, instances *observedInstances, + clusterSecret *corev1.Secret, +) error { + var pod *corev1.Pod + + // Find the PostgreSQL instance that can execute SQL that writes to every + // database. When there is none, return early. + + for _, instance := range instances.forCluster { + writable, known := instance.IsWritable() + if writable && known && len(instance.Pods) > 0 { + pod = instance.Pods[0] + break + } + } + if pod == nil { + return nil + } + + // PostgreSQL is available for writes. Prepare to either add or remove + // PgBouncer objects. + + action := func(ctx context.Context, exec postgres.Executor) error { + return errors.WithStack(pgbouncer.EnableInPostgreSQL(ctx, exec, clusterSecret)) + } + if cluster.Spec.Proxy == nil || cluster.Spec.Proxy.PGBouncer == nil { + // PgBouncer is disabled. + action = func(ctx context.Context, exec postgres.Executor) error { + return errors.WithStack(pgbouncer.DisableInPostgreSQL(ctx, exec)) + } + } + + // First, calculate a hash of the SQL that should be executed in PostgreSQL. + + revision, err := safeHash32(func(hasher io.Writer) error { + // Discard log messages from the pgbouncer package about executing SQL. + // Nothing is being "executed" yet. + return action(logging.NewContext(ctx, logging.Discard()), func( + _ context.Context, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + _, err := io.Copy(hasher, stdin) + if err == nil { + _, err = fmt.Fprint(hasher, command) + } + return err + }) + }) + if err != nil { + return err + } + + if revision == cluster.Status.Proxy.PGBouncer.PostgreSQLRevision { + // The necessary SQL has already been applied; there's nothing more to do. + + // TODO(cbandy): Give the user a way to trigger execution regardless. + // The value of an annotation could influence the hash, for example. + return nil + } + + // Apply the necessary SQL and record its hash in cluster.Status. 
Include + // the hash in any log messages. + + if err == nil { + ctx := logging.NewContext(ctx, logging.FromContext(ctx).WithValues("revision", revision)) + err = action(ctx, func(ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string) error { + return r.PodExec(ctx, pod.Namespace, pod.Name, naming.ContainerDatabase, stdin, stdout, stderr, command...) + }) + } + if err == nil { + cluster.Status.Proxy.PGBouncer.PostgreSQLRevision = revision + } + + return err +} + +// +kubebuilder:rbac:groups="",resources="secrets",verbs={get} +// +kubebuilder:rbac:groups="",resources="secrets",verbs={create,delete,patch} + +// reconcilePGBouncerSecret writes the Secret for a PgBouncer Pod. +func (r *Reconciler) reconcilePGBouncerSecret( + ctx context.Context, cluster *v1beta1.PostgresCluster, + root *pki.RootCertificateAuthority, service *corev1.Service, +) (*corev1.Secret, error) { + existing := &corev1.Secret{ObjectMeta: naming.ClusterPGBouncer(cluster)} + err := errors.WithStack( + r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing)) + if client.IgnoreNotFound(err) != nil { + return nil, err + } + + if cluster.Spec.Proxy == nil || cluster.Spec.Proxy.PGBouncer == nil { + // PgBouncer is disabled; delete the Secret if it exists. + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, existing)) + } + return nil, client.IgnoreNotFound(err) + } + + err = client.IgnoreNotFound(err) + + intent := &corev1.Secret{ObjectMeta: naming.ClusterPGBouncer(cluster)} + intent.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + intent.Type = corev1.SecretTypeOpaque + + if err == nil { + err = errors.WithStack(r.setControllerReference(cluster, intent)) + } + + intent.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetAnnotationsOrNil()) + intent.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGBouncer, + }) + + if err == nil { + err = pgbouncer.Secret(ctx, cluster, root, existing, service, intent) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, intent)) + } + + return intent, err +} + +// generatePGBouncerService returns a v1.Service that exposes PgBouncer pods. +// The ServiceType comes from the cluster proxy spec. 
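// As a rough illustration, a proxy spec along these lines (an illustrative
// sketch, not taken verbatim from the CRD) produces a NodePort Service that
// targets the named "pgbouncer" container port:
//
//   proxy:
//     pgBouncer:
//       port: 5432
//       service:
//         type: NodePort
//         nodePort: 32007
//
// Setting nodePort while the type remains ClusterIP is rejected below with a
// MisconfiguredClusterIP warning event.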
+func (r *Reconciler) generatePGBouncerService( + cluster *v1beta1.PostgresCluster) (*corev1.Service, bool, error, +) { + service := &corev1.Service{ObjectMeta: naming.ClusterPGBouncer(cluster)} + service.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Service")) + + if cluster.Spec.Proxy == nil || cluster.Spec.Proxy.PGBouncer == nil { + return service, false, nil + } + + service.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetAnnotationsOrNil()) + service.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetLabelsOrNil()) + + if spec := cluster.Spec.Proxy.PGBouncer.Service; spec != nil { + service.Annotations = naming.Merge(service.Annotations, + spec.Metadata.GetAnnotationsOrNil()) + service.Labels = naming.Merge(service.Labels, + spec.Metadata.GetLabelsOrNil()) + } + + // add our labels last so they aren't overwritten + service.Labels = naming.Merge(service.Labels, + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGBouncer, + }) + + // Allocate an IP address and/or node port and let Kubernetes manage the + // Endpoints by selecting Pods with the PgBouncer role. + // - https://docs.k8s.io/concepts/services-networking/service/#defining-a-service + service.Spec.Selector = map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGBouncer, + } + + // The TargetPort must be the name (not the number) of the PgBouncer + // ContainerPort. This name allows the port number to differ between Pods, + // which can happen during a rolling update. + servicePort := corev1.ServicePort{ + Name: naming.PortPGBouncer, + Port: *cluster.Spec.Proxy.PGBouncer.Port, + Protocol: corev1.ProtocolTCP, + TargetPort: intstr.FromString(naming.PortPGBouncer), + } + + if spec := cluster.Spec.Proxy.PGBouncer.Service; spec == nil { + service.Spec.Type = corev1.ServiceTypeClusterIP + } else { + service.Spec.Type = corev1.ServiceType(spec.Type) + if spec.NodePort != nil { + if service.Spec.Type == corev1.ServiceTypeClusterIP { + // The NodePort can only be set when the Service type is NodePort or + // LoadBalancer. However, due to a known issue prior to Kubernetes + // 1.20, we clear these errors during our apply. To preserve the + // appropriate behavior, we log an Event and return an error. + // TODO(tjmoore4): Once Validation Rules are available, this check + // and event could potentially be removed in favor of that validation + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "MisconfiguredClusterIP", + "NodePort cannot be set with type ClusterIP on Service %q", service.Name) + return nil, true, fmt.Errorf("NodePort cannot be set with type ClusterIP on Service %q", service.Name) + } + servicePort.NodePort = *spec.NodePort + } + service.Spec.ExternalTrafficPolicy = initialize.FromPointer(spec.ExternalTrafficPolicy) + service.Spec.InternalTrafficPolicy = spec.InternalTrafficPolicy + } + service.Spec.Ports = []corev1.ServicePort{servicePort} + + err := errors.WithStack(r.setControllerReference(cluster, service)) + + return service, true, err +} + +// +kubebuilder:rbac:groups="",resources="services",verbs={get} +// +kubebuilder:rbac:groups="",resources="services",verbs={create,delete,patch} + +// reconcilePGBouncerService writes the Service that resolves to PgBouncer. 
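// When PgBouncer is not specified, the generated object serves only as the
// key for looking up and deleting any Service previously created for the
// cluster; otherwise the generated Service is applied.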
+func (r *Reconciler) reconcilePGBouncerService( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*corev1.Service, error) { + service, specified, err := r.generatePGBouncerService(cluster) + + if err == nil && !specified { + // PgBouncer is disabled; delete the Service if it exists. Check the client + // cache first using Get. + key := client.ObjectKeyFromObject(service) + err := errors.WithStack(r.Client.Get(ctx, key, service)) + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, service)) + } + return nil, client.IgnoreNotFound(err) + } + + if err == nil { + err = errors.WithStack(r.apply(ctx, service)) + } + return service, err +} + +// generatePGBouncerDeployment returns an appsv1.Deployment that runs PgBouncer pods. +func (r *Reconciler) generatePGBouncerDeployment( + ctx context.Context, cluster *v1beta1.PostgresCluster, + primaryCertificate *corev1.SecretProjection, + configmap *corev1.ConfigMap, secret *corev1.Secret, +) (*appsv1.Deployment, bool, error) { + deploy := &appsv1.Deployment{ObjectMeta: naming.ClusterPGBouncer(cluster)} + deploy.SetGroupVersionKind(appsv1.SchemeGroupVersion.WithKind("Deployment")) + + if cluster.Spec.Proxy == nil || cluster.Spec.Proxy.PGBouncer == nil { + return deploy, false, nil + } + + deploy.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetAnnotationsOrNil()) + deploy.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGBouncer, + }) + deploy.Spec.Selector = &metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGBouncer, + }, + } + deploy.Spec.Template.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetAnnotationsOrNil()) + deploy.Spec.Template.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGBouncer, + }) + + // if the shutdown flag is set, set pgBouncer replicas to 0 + if cluster.Spec.Shutdown != nil && *cluster.Spec.Shutdown { + deploy.Spec.Replicas = initialize.Int32(0) + } else { + deploy.Spec.Replicas = cluster.Spec.Proxy.PGBouncer.Replicas + } + + // Don't clutter the namespace with extra ReplicaSets. + deploy.Spec.RevisionHistoryLimit = initialize.Int32(0) + + // Ensure that the number of Ready pods is never less than the specified + // Replicas by starting new pods while old pods are still running. + // - https://docs.k8s.io/concepts/workloads/controllers/deployment/#rolling-update-deployment + deploy.Spec.Strategy.Type = appsv1.RollingUpdateDeploymentStrategyType + deploy.Spec.Strategy.RollingUpdate = &appsv1.RollingUpdateDeployment{ + MaxUnavailable: initialize.Pointer(intstr.FromInt32(0)), + } + + // Use scheduling constraints from the cluster spec. 
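// Affinity, tolerations, priority class, and topology spread constraints come
// straight from spec.proxy.pgBouncer; default topology spread constraints are
// appended afterwards unless default pod scheduling is disabled.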
+ deploy.Spec.Template.Spec.Affinity = cluster.Spec.Proxy.PGBouncer.Affinity + deploy.Spec.Template.Spec.Tolerations = cluster.Spec.Proxy.PGBouncer.Tolerations + deploy.Spec.Template.Spec.PriorityClassName = + initialize.FromPointer(cluster.Spec.Proxy.PGBouncer.PriorityClassName) + deploy.Spec.Template.Spec.TopologySpreadConstraints = + cluster.Spec.Proxy.PGBouncer.TopologySpreadConstraints + + // if default pod scheduling is not explicitly disabled, add the default + // pod topology spread constraints + if !initialize.FromPointer(cluster.Spec.DisableDefaultPodScheduling) { + deploy.Spec.Template.Spec.TopologySpreadConstraints = append( + deploy.Spec.Template.Spec.TopologySpreadConstraints, + defaultTopologySpreadConstraints(*deploy.Spec.Selector)...) + } + + // Restart containers any time they stop, die, are killed, etc. + // - https://docs.k8s.io/concepts/workloads/pods/pod-lifecycle/#restart-policy + deploy.Spec.Template.Spec.RestartPolicy = corev1.RestartPolicyAlways + + // ShareProcessNamespace makes Kubernetes' pause process PID 1 and lets + // containers see each other's processes. + // - https://docs.k8s.io/tasks/configure-pod-container/share-process-namespace/ + deploy.Spec.Template.Spec.ShareProcessNamespace = initialize.Bool(true) + + // There's no need for individual DNS names of PgBouncer pods. + deploy.Spec.Template.Spec.Subdomain = "" + + // PgBouncer does not make any Kubernetes API calls. Use the default + // ServiceAccount and do not mount its credentials. + deploy.Spec.Template.Spec.AutomountServiceAccountToken = initialize.Bool(false) + + // Do not add environment variables describing services in this namespace. + deploy.Spec.Template.Spec.EnableServiceLinks = initialize.Bool(false) + + deploy.Spec.Template.Spec.SecurityContext = initialize.PodSecurityContext() + + // set the image pull secrets, if any exist + deploy.Spec.Template.Spec.ImagePullSecrets = cluster.Spec.ImagePullSecrets + + err := errors.WithStack(r.setControllerReference(cluster, deploy)) + + if err == nil { + pgbouncer.Pod(ctx, cluster, configmap, primaryCertificate, secret, &deploy.Spec.Template.Spec) + } + + return deploy, true, err +} + +// +kubebuilder:rbac:groups="apps",resources="deployments",verbs={get} +// +kubebuilder:rbac:groups="apps",resources="deployments",verbs={create,delete,patch} + +// reconcilePGBouncerDeployment writes the Deployment that runs PgBouncer. +func (r *Reconciler) reconcilePGBouncerDeployment( + ctx context.Context, cluster *v1beta1.PostgresCluster, + primaryCertificate *corev1.SecretProjection, + configmap *corev1.ConfigMap, secret *corev1.Secret, +) error { + deploy, specified, err := r.generatePGBouncerDeployment( + ctx, cluster, primaryCertificate, configmap, secret) + + // Set observations whether the deployment exists or not. + defer func() { + cluster.Status.Proxy.PGBouncer.Replicas = deploy.Status.Replicas + cluster.Status.Proxy.PGBouncer.ReadyReplicas = deploy.Status.ReadyReplicas + + // NOTE(cbandy): This should be somewhere else when there is more than + // one proxy implementation. 
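// Mirror the Deployment's Available condition onto the cluster as the
// ProxyAvailable condition; when the Deployment reports no such condition,
// the cluster condition is removed rather than left stale.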
+ + var available *appsv1.DeploymentCondition + for i := range deploy.Status.Conditions { + if deploy.Status.Conditions[i].Type == appsv1.DeploymentAvailable { + available = &deploy.Status.Conditions[i] + } + } + + if available == nil { + meta.RemoveStatusCondition(&cluster.Status.Conditions, v1beta1.ProxyAvailable) + } else { + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + Type: v1beta1.ProxyAvailable, + Status: metav1.ConditionStatus(available.Status), + Reason: available.Reason, + Message: available.Message, + + LastTransitionTime: available.LastTransitionTime, + ObservedGeneration: cluster.Generation, + }) + } + }() + + if err == nil && !specified { + // PgBouncer is disabled; delete the Deployment if it exists. Check the + // client cache first using Get. + key := client.ObjectKeyFromObject(deploy) + err := errors.WithStack(r.Client.Get(ctx, key, deploy)) + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, deploy)) + } + return client.IgnoreNotFound(err) + } + + if err == nil { + err = errors.WithStack(r.apply(ctx, deploy)) + } + return err +} + +// +kubebuilder:rbac:groups="policy",resources="poddisruptionbudgets",verbs={create,patch,get,delete} + +// reconcilePGBouncerPodDisruptionBudget creates a PDB for the PGBouncer deployment. +// A PDB will be created when minAvailable is determined to be greater than 0 and +// a PGBouncer proxy is defined in the spec. MinAvailable can be defined in the spec +// or a default value will be set based on the number of replicas defined for PGBouncer. +func (r *Reconciler) reconcilePGBouncerPodDisruptionBudget( + ctx context.Context, + cluster *v1beta1.PostgresCluster, +) error { + deleteExistingPDB := func(cluster *v1beta1.PostgresCluster) error { + existing := &policyv1.PodDisruptionBudget{ObjectMeta: naming.ClusterPGBouncer(cluster)} + err := errors.WithStack(r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing)) + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, existing)) + } + return client.IgnoreNotFound(err) + } + + if cluster.Spec.Proxy == nil || cluster.Spec.Proxy.PGBouncer == nil { + return deleteExistingPDB(cluster) + } + + if cluster.Spec.Proxy.PGBouncer.Replicas == nil { + // Replicas should always have a value because of defaults in the spec + return errors.New("Replicas should be defined") + } + minAvailable := getMinAvailable(cluster.Spec.Proxy.PGBouncer.MinAvailable, + *cluster.Spec.Proxy.PGBouncer.Replicas) + + // If 'minAvailable' is set to '0', we will not reconcile the PDB. If one + // already exists, we will remove it. 
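// minAvailable accepts either an integer or a percentage of replicas. A rough
// sketch of the scaling performed below (round-up semantics; values
// illustrative):
//
//   // replicas = 3
//   scaled, _ := intstr.GetScaledValueFromIntOrPercent(
//       initialize.Pointer(intstr.FromString("50%")), 3, true) // scaled == 2
//   scaled, _ = intstr.GetScaledValueFromIntOrPercent(
//       initialize.Pointer(intstr.FromInt32(0)), 3, true) // scaled == 0
//
// A scaled value of zero or less means no disruption budget is wanted, so any
// existing PDB is deleted instead.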
+ scaled, err := intstr.GetScaledValueFromIntOrPercent(minAvailable, + int(*cluster.Spec.Proxy.PGBouncer.Replicas), true) + if err == nil && scaled <= 0 { + return deleteExistingPDB(cluster) + } + + meta := naming.ClusterPGBouncer(cluster) + meta.Labels = naming.Merge(cluster.Spec.Metadata.GetLabelsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePGBouncer, + }) + meta.Annotations = naming.Merge(cluster.Spec.Metadata.GetAnnotationsOrNil(), + cluster.Spec.Proxy.PGBouncer.Metadata.GetAnnotationsOrNil()) + + selector := naming.ClusterPGBouncerSelector(cluster) + pdb := &policyv1.PodDisruptionBudget{} + if err == nil { + pdb, err = r.generatePodDisruptionBudget(cluster, meta, minAvailable, selector) + } + + if err == nil { + err = errors.WithStack(r.apply(ctx, pdb)) + } + return err +} diff --git a/internal/controller/postgrescluster/pgbouncer_test.go b/internal/controller/postgrescluster/pgbouncer_test.go new file mode 100644 index 0000000000..9bbced5247 --- /dev/null +++ b/internal/controller/postgrescluster/pgbouncer_test.go @@ -0,0 +1,634 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "strconv" + "testing" + + "github.com/pkg/errors" + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + policyv1 "k8s.io/api/policy/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestGeneratePGBouncerService(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &Reconciler{ + Client: cc, + Recorder: new(record.FakeRecorder), + } + + cluster := &v1beta1.PostgresCluster{} + cluster.Namespace = "ns5" + cluster.Name = "pg7" + + t.Run("Unspecified", func(t *testing.T) { + for _, spec := range []*v1beta1.PostgresProxySpec{ + nil, new(v1beta1.PostgresProxySpec), + } { + cluster := cluster.DeepCopy() + cluster.Spec.Proxy = spec + + service, specified, err := reconciler.generatePGBouncerService(cluster) + assert.NilError(t, err) + assert.Assert(t, !specified) + + assert.Assert(t, cmp.MarshalMatches(service.ObjectMeta, ` +creationTimestamp: null +name: pg7-pgbouncer +namespace: ns5 + `)) + } + }) + + cluster.Spec.Proxy = &v1beta1.PostgresProxySpec{ + PGBouncer: &v1beta1.PGBouncerPodSpec{ + Port: initialize.Int32(9651), + }, + } + + alwaysExpect := func(t testing.TB, service *corev1.Service) { + assert.Assert(t, cmp.MarshalMatches(service.TypeMeta, ` +apiVersion: v1 +kind: Service + `)) + assert.Assert(t, cmp.MarshalMatches(service.ObjectMeta, ` +creationTimestamp: null +labels: + postgres-operator.crunchydata.com/cluster: pg7 + postgres-operator.crunchydata.com/role: pgbouncer +name: pg7-pgbouncer +namespace: ns5 +ownerReferences: +- apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: pg7 + uid: "" + `)) + + // Always gets a ClusterIP (never None). 
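+		// The generated spec leaves ClusterIP empty so the API server will
+		// assign one; a headless Service would set it to "None" explicitly.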
+ assert.Equal(t, service.Spec.ClusterIP, "") + assert.DeepEqual(t, service.Spec.Selector, map[string]string{ + "postgres-operator.crunchydata.com/cluster": "pg7", + "postgres-operator.crunchydata.com/role": "pgbouncer", + }) + } + + t.Run("AnnotationsLabels", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{"a": "v1"}, + Labels: map[string]string{"b": "v2"}, + } + + service, specified, err := reconciler.generatePGBouncerService(cluster) + assert.NilError(t, err) + assert.Assert(t, specified) + + // Annotations present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Annotations, map[string]string{ + "a": "v1", + }) + + // Labels present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Labels, map[string]string{ + "b": "v2", + "postgres-operator.crunchydata.com/cluster": "pg7", + "postgres-operator.crunchydata.com/role": "pgbouncer", + }) + + // Labels not in the selector. + assert.DeepEqual(t, service.Spec.Selector, map[string]string{ + "postgres-operator.crunchydata.com/cluster": "pg7", + "postgres-operator.crunchydata.com/role": "pgbouncer", + }) + + // Add metadata to individual service + cluster.Spec.Proxy.PGBouncer.Service = &v1beta1.ServiceSpec{ + Metadata: &v1beta1.Metadata{ + Annotations: map[string]string{"c": "v3"}, + Labels: map[string]string{"d": "v4", + "postgres-operator.crunchydata.com/cluster": "wrongName"}, + }, + } + + service, specified, err = reconciler.generatePGBouncerService(cluster) + assert.NilError(t, err) + assert.Assert(t, specified) + + // Annotations present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Annotations, map[string]string{ + "a": "v1", + "c": "v3", + }) + + // Labels present in the metadata. + assert.DeepEqual(t, service.ObjectMeta.Labels, map[string]string{ + "b": "v2", + "d": "v4", + "postgres-operator.crunchydata.com/cluster": "pg7", + "postgres-operator.crunchydata.com/role": "pgbouncer", + }) + + // Labels not in the selector. + assert.DeepEqual(t, service.Spec.Selector, map[string]string{ + "postgres-operator.crunchydata.com/cluster": "pg7", + "postgres-operator.crunchydata.com/role": "pgbouncer", + }) + }) + + t.Run("NoServiceSpec", func(t *testing.T) { + service, specified, err := reconciler.generatePGBouncerService(cluster) + assert.NilError(t, err) + assert.Assert(t, specified) + alwaysExpect(t, service) + // Defaults to ClusterIP. 
+ assert.Equal(t, service.Spec.Type, corev1.ServiceTypeClusterIP) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: pgbouncer + port: 9651 + protocol: TCP + targetPort: pgbouncer + `)) + }) + + types := []struct { + Type string + Expect func(testing.TB, *corev1.Service) + }{ + {Type: "ClusterIP", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeClusterIP) + }}, + {Type: "NodePort", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeNodePort) + }}, + {Type: "LoadBalancer", Expect: func(t testing.TB, service *corev1.Service) { + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeLoadBalancer) + }}, + } + + for _, test := range types { + t.Run(test.Type, func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Proxy.PGBouncer.Service = &v1beta1.ServiceSpec{Type: test.Type} + + service, specified, err := reconciler.generatePGBouncerService(cluster) + assert.NilError(t, err) + assert.Assert(t, specified) + alwaysExpect(t, service) + test.Expect(t, service) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: pgbouncer + port: 9651 + protocol: TCP + targetPort: pgbouncer + `)) + }) + } + + typesAndPort := []struct { + Description string + Type string + NodePort *int32 + Expect func(testing.TB, *corev1.Service, error) + }{ + {Description: "ClusterIP with Port 32000", Type: "ClusterIP", + NodePort: initialize.Int32(32000), Expect: func(t testing.TB, service *corev1.Service, err error) { + assert.ErrorContains(t, err, "NodePort cannot be set with type ClusterIP on Service \"pg7-pgbouncer\"") + assert.Assert(t, service == nil) + }}, + {Description: "NodePort with Port 32001", Type: "NodePort", + NodePort: initialize.Int32(32001), Expect: func(t testing.TB, service *corev1.Service, err error) { + assert.NilError(t, err) + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeNodePort) + alwaysExpect(t, service) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: pgbouncer + nodePort: 32001 + port: 9651 + protocol: TCP + targetPort: pgbouncer +`)) + }}, + {Description: "LoadBalancer with Port 32002", Type: "LoadBalancer", + NodePort: initialize.Int32(32002), Expect: func(t testing.TB, service *corev1.Service, err error) { + assert.NilError(t, err) + assert.Equal(t, service.Spec.Type, corev1.ServiceTypeLoadBalancer) + alwaysExpect(t, service) + assert.Assert(t, cmp.MarshalMatches(service.Spec.Ports, ` +- name: pgbouncer + nodePort: 32002 + port: 9651 + protocol: TCP + targetPort: pgbouncer +`)) + }}, + } + + for _, test := range typesAndPort { + t.Run(test.Type, func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Proxy.PGBouncer.Service = &v1beta1.ServiceSpec{Type: test.Type, NodePort: test.NodePort} + + service, specified, err := reconciler.generatePGBouncerService(cluster) + test.Expect(t, service, err) + // whether or not an error is encountered, 'specified' is true because + // the service *should* exist + assert.Assert(t, specified) + }) + } +} + +func TestReconcilePGBouncerService(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + reconciler := &Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + + cluster := testCluster() + cluster.Namespace = setupNamespace(t, cc).Name + assert.NilError(t, cc.Create(ctx, cluster)) + + t.Run("Unspecified", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Proxy = nil + + service, 
err := reconciler.reconcilePGBouncerService(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, service == nil) + }) + + cluster.Spec.Proxy = &v1beta1.PostgresProxySpec{ + PGBouncer: &v1beta1.PGBouncerPodSpec{ + Port: initialize.Int32(19041), + }, + } + + t.Run("NoServiceSpec", func(t *testing.T) { + service, err := reconciler.reconcilePGBouncerService(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, service != nil) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, service)) }) + + assert.Assert(t, service.Spec.ClusterIP != "", + "expected to be assigned a ClusterIP") + }) + + serviceTypes := []string{"ClusterIP", "NodePort", "LoadBalancer"} + + // Confirm that each ServiceType can be reconciled. + for _, serviceType := range serviceTypes { + t.Run(serviceType, func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Proxy.PGBouncer.Service = &v1beta1.ServiceSpec{Type: serviceType} + + service, err := reconciler.reconcilePGBouncerService(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, service != nil) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, service)) }) + + assert.Assert(t, service.Spec.ClusterIP != "", + "expected to be assigned a ClusterIP") + }) + } + + // CRD validation looks only at the new/incoming value of fields. Confirm + // that each ServiceType can change to any other ServiceType. Forbidding + // certain transitions requires a validating webhook. + serviceTypeChangeClusterCounter := 0 + for _, beforeType := range serviceTypes { + for _, changeType := range serviceTypes { + t.Run(beforeType+"To"+changeType, func(t *testing.T) { + // Creating fresh clusters for these tests + clusterNamespace := cluster.Namespace + cluster := testCluster() + cluster.Namespace = clusterNamespace + + // Note (dsessler): Adding a number to each cluster name to make cluster/service + // names unique to work around an intermittent race condition where a service + // from a prior test has not been deleted yet when the next test runs, causing + // the test to fail due to non-matching IP addresses. + cluster.Name += "-" + strconv.Itoa(serviceTypeChangeClusterCounter) + assert.NilError(t, cc.Create(ctx, cluster)) + + cluster.Spec.Proxy = &v1beta1.PostgresProxySpec{ + PGBouncer: &v1beta1.PGBouncerPodSpec{ + Port: initialize.Int32(19041), + }, + } + cluster.Spec.Proxy.PGBouncer.Service = &v1beta1.ServiceSpec{Type: beforeType} + + before, err := reconciler.reconcilePGBouncerService(ctx, cluster) + assert.NilError(t, err) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, before)) }) + + cluster.Spec.Proxy.PGBouncer.Service.Type = changeType + + after, err := reconciler.reconcilePGBouncerService(ctx, cluster) + + // LoadBalancers are provisioned by a separate controller that + // updates the Service soon after creation. The API may return + // a conflict error when we race to update it, even though we + // don't send a resourceVersion in our payload. Retry. 
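+				// One retry is enough for this test; a persistent conflict
+				// would still fail the NilError assertion below.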
+ if apierrors.IsConflict(err) { + t.Log("conflict:", err) + after, err = reconciler.reconcilePGBouncerService(ctx, cluster) + } + + assert.NilError(t, err, "\n%#v", errors.Unwrap(err)) + assert.Equal(t, after.Spec.ClusterIP, before.Spec.ClusterIP, + "expected to keep the same ClusterIP") + serviceTypeChangeClusterCounter++ + }) + } + } +} + +func TestGeneratePGBouncerDeployment(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ctx := context.Background() + reconciler := &Reconciler{Client: cc} + + cluster := &v1beta1.PostgresCluster{} + cluster.Namespace = "ns3" + cluster.Name = "test-cluster" + + t.Run("Unspecified", func(t *testing.T) { + for _, spec := range []*v1beta1.PostgresProxySpec{ + nil, new(v1beta1.PostgresProxySpec), + } { + cluster := cluster.DeepCopy() + cluster.Spec.Proxy = spec + + deploy, specified, err := reconciler.generatePGBouncerDeployment(ctx, cluster, nil, nil, nil) + assert.NilError(t, err) + assert.Assert(t, !specified) + + assert.Assert(t, cmp.MarshalMatches(deploy.ObjectMeta, ` +creationTimestamp: null +name: test-cluster-pgbouncer +namespace: ns3 + `)) + } + }) + + cluster.Spec.Proxy = &v1beta1.PostgresProxySpec{ + PGBouncer: &v1beta1.PGBouncerPodSpec{}, + } + cluster.Default() + + configmap := &corev1.ConfigMap{} + configmap.Name = "some-cm2" + + secret := &corev1.Secret{} + secret.Name = "some-secret3" + + primary := &corev1.SecretProjection{} + + t.Run("AnnotationsLabels", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{"a": "v1"}, + Labels: map[string]string{"b": "v2"}, + } + + deploy, specified, err := reconciler.generatePGBouncerDeployment( + ctx, cluster, primary, configmap, secret) + assert.NilError(t, err) + assert.Assert(t, specified) + + // Annotations present in the metadata. + assert.DeepEqual(t, deploy.ObjectMeta.Annotations, map[string]string{ + "a": "v1", + }) + + // Labels present in the metadata. + assert.DeepEqual(t, deploy.ObjectMeta.Labels, map[string]string{ + "b": "v2", + "postgres-operator.crunchydata.com/cluster": "test-cluster", + "postgres-operator.crunchydata.com/role": "pgbouncer", + }) + + // Labels not in the pod selector. + assert.DeepEqual(t, deploy.Spec.Selector, + &metav1.LabelSelector{ + MatchLabels: map[string]string{ + "postgres-operator.crunchydata.com/cluster": "test-cluster", + "postgres-operator.crunchydata.com/role": "pgbouncer", + }, + }) + + // Annotations present in the pod template. + assert.DeepEqual(t, deploy.Spec.Template.Annotations, map[string]string{ + "a": "v1", + }) + + // Labels present in the pod template. + assert.DeepEqual(t, deploy.Spec.Template.Labels, map[string]string{ + "b": "v2", + "postgres-operator.crunchydata.com/cluster": "test-cluster", + "postgres-operator.crunchydata.com/role": "pgbouncer", + }) + }) + + t.Run("PodSpec", func(t *testing.T) { + deploy, specified, err := reconciler.generatePGBouncerDeployment( + ctx, cluster, primary, configmap, secret) + assert.NilError(t, err) + assert.Assert(t, specified) + + // Containers and Volumes should be populated. + assert.Assert(t, len(deploy.Spec.Template.Spec.Containers) != 0) + assert.Assert(t, len(deploy.Spec.Template.Spec.Volumes) != 0) + + // Ignore Containers and Volumes in the comparison below. 
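+		// Their presence is already covered by the length assertions above.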
+ deploy.Spec.Template.Spec.Containers = nil + deploy.Spec.Template.Spec.Volumes = nil + + // TODO(tjmoore4): Add additional tests to test appending existing + // topology spread constraints and spec.disableDefaultPodScheduling being + // set to true (as done in instance StatefulSet tests). + + assert.Assert(t, cmp.MarshalMatches(deploy.Spec.Template.Spec, ` +automountServiceAccountToken: false +containers: null +enableServiceLinks: false +restartPolicy: Always +securityContext: + fsGroupChangePolicy: OnRootMismatch +shareProcessNamespace: true +topologySpreadConstraints: +- labelSelector: + matchLabels: + postgres-operator.crunchydata.com/cluster: test-cluster + postgres-operator.crunchydata.com/role: pgbouncer + maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: ScheduleAnyway +- labelSelector: + matchLabels: + postgres-operator.crunchydata.com/cluster: test-cluster + postgres-operator.crunchydata.com/role: pgbouncer + maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway + `)) + + t.Run("DisableDefaultPodScheduling", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.DisableDefaultPodScheduling = initialize.Bool(true) + + deploy, specified, err := reconciler.generatePGBouncerDeployment( + ctx, cluster, primary, configmap, secret) + assert.NilError(t, err) + assert.Assert(t, specified) + + assert.Assert(t, deploy.Spec.Template.Spec.TopologySpreadConstraints == nil) + }) + }) +} + +func TestReconcilePGBouncerDisruptionBudget(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } + + foundPDB := func( + cluster *v1beta1.PostgresCluster, + ) bool { + got := &policyv1.PodDisruptionBudget{} + err := r.Client.Get(ctx, + naming.AsObjectKey(naming.ClusterPGBouncer(cluster)), + got) + return !apierrors.IsNotFound(err) + } + + ns := setupNamespace(t, cc) + + t.Run("empty", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.Spec.Proxy = nil + + assert.NilError(t, r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster)) + }) + + t.Run("no replicas in spec", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.Spec.Proxy.PGBouncer.Replicas = nil + assert.Error(t, r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster), + "Replicas should be defined") + }) + + t.Run("not created", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.Spec.Proxy.PGBouncer.Replicas = initialize.Int32(1) + cluster.Spec.Proxy.PGBouncer.MinAvailable = initialize.Pointer(intstr.FromInt32(0)) + assert.NilError(t, r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster)) + assert.Assert(t, !foundPDB(cluster)) + }) + + t.Run("int created", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.Spec.Proxy.PGBouncer.Replicas = initialize.Int32(1) + cluster.Spec.Proxy.PGBouncer.MinAvailable = initialize.Pointer(intstr.FromInt32(1)) + + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + assert.NilError(t, r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster)) + assert.Assert(t, foundPDB(cluster)) + + t.Run("deleted", func(t *testing.T) { + cluster.Spec.Proxy.PGBouncer.MinAvailable = initialize.Pointer(intstr.FromInt32(0)) + err := r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster) + if apierrors.IsConflict(err) { + // When 
running in an existing environment another controller will sometimes update + // the object. This leads to an error where the ResourceVersion of the object does + // not match what we expect. When we run into this conflict, try to reconcile the + // object again. + err = r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster) + } + assert.NilError(t, err, errors.Unwrap(err)) + assert.Assert(t, !foundPDB(cluster)) + }) + }) + + t.Run("str created", func(t *testing.T) { + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.Spec.Proxy.PGBouncer.Replicas = initialize.Int32(1) + cluster.Spec.Proxy.PGBouncer.MinAvailable = initialize.Pointer(intstr.FromString("50%")) + + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + assert.NilError(t, r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster)) + assert.Assert(t, foundPDB(cluster)) + + t.Run("deleted", func(t *testing.T) { + cluster.Spec.Proxy.PGBouncer.MinAvailable = initialize.Pointer(intstr.FromString("0%")) + err := r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster) + if apierrors.IsConflict(err) { + // When running in an existing environment another controller will sometimes update + // the object. This leads to an error where the ResourceVersion of the object does + // not match what we expect. When we run into this conflict, try to reconcile the + // object again. + err = r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster) + } + assert.NilError(t, err, errors.Unwrap(err)) + assert.Assert(t, !foundPDB(cluster)) + }) + + t.Run("delete with 00%", func(t *testing.T) { + cluster.Spec.Proxy.PGBouncer.MinAvailable = initialize.Pointer(intstr.FromString("50%")) + + assert.NilError(t, r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster)) + assert.Assert(t, foundPDB(cluster)) + + t.Run("deleted", func(t *testing.T) { + cluster.Spec.Proxy.PGBouncer.MinAvailable = initialize.Pointer(intstr.FromString("00%")) + err := r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster) + if apierrors.IsConflict(err) { + // When running in an existing environment another controller will sometimes update + // the object. This leads to an error where the ResourceVersion of the object does + // not match what we expect. When we run into this conflict, try to reconcile the + // object again. + err = r.reconcilePGBouncerPodDisruptionBudget(ctx, cluster) + } + assert.NilError(t, err, errors.Unwrap(err)) + assert.Assert(t, !foundPDB(cluster)) + }) + }) + }) +} diff --git a/internal/controller/postgrescluster/pgmonitor.go b/internal/controller/postgrescluster/pgmonitor.go new file mode 100644 index 0000000000..e1b5186cb4 --- /dev/null +++ b/internal/controller/postgrescluster/pgmonitor.go @@ -0,0 +1,494 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package postgrescluster
+
+import (
+	"context"
+	"fmt"
+	"io"
+	"os"
+	"strings"
+
+	"github.com/pkg/errors"
+	corev1 "k8s.io/api/core/v1"
+	"sigs.k8s.io/controller-runtime/pkg/client"
+
+	"github.com/crunchydata/postgres-operator/internal/config"
+	"github.com/crunchydata/postgres-operator/internal/feature"
+	"github.com/crunchydata/postgres-operator/internal/initialize"
+	"github.com/crunchydata/postgres-operator/internal/logging"
+	"github.com/crunchydata/postgres-operator/internal/naming"
+	"github.com/crunchydata/postgres-operator/internal/pgmonitor"
+	"github.com/crunchydata/postgres-operator/internal/postgres"
+	pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password"
+	"github.com/crunchydata/postgres-operator/internal/util"
+	"github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1"
+)
+
+// If pgMonitor is enabled, the pgMonitor sidecar(s) have been added to the
+// instance pod. reconcilePGMonitor will update the database to
+// create the necessary objects for the tool to run.
+func (r *Reconciler) reconcilePGMonitor(ctx context.Context,
+	cluster *v1beta1.PostgresCluster, instances *observedInstances,
+	monitoringSecret *corev1.Secret) error {
+
+	err := r.reconcilePGMonitorExporter(ctx, cluster, instances, monitoringSecret)
+
+	return err
+}
+
+// reconcilePGMonitorExporter performs the setup of the postgres_exporter sidecar
+// - PodExec is used to run the SQL in the primary database
+// Status.Monitoring.ExporterConfiguration is used to determine when the
+// pgMonitor postgres_exporter configuration should be added/changed to
+// limit how often PodExec is used
+// - TODO (jmckulk): kube perms comment?
+func (r *Reconciler) reconcilePGMonitorExporter(ctx context.Context,
+	cluster *v1beta1.PostgresCluster, instances *observedInstances,
+	monitoringSecret *corev1.Secret) error {
+
+	var (
+		writableInstance *Instance
+		writablePod      *corev1.Pod
+		setup            string
+		pgImageSHA       string
+	)
+
+	// Find the PostgreSQL instance that can execute SQL that writes to every
+	// database. When there is none, return early.
+	writablePod, writableInstance = instances.writablePod(naming.ContainerDatabase)
+	if writableInstance == nil || writablePod == nil {
+		return nil
+	}
+
+	// For the writableInstance found above
+	// 1) get and save the imageID of the `database` container, and
+	// 2) exit early if we can't get the ImageID of this container.
+	// We use this ImageID and the setup.sql file in the hash we make to see if the operator needs to rerun
+	// the `EnableExporterInPostgreSQL` function; that way we are always running
+	// that function against an updated and running pod.
+	if pgmonitor.ExporterEnabled(cluster) {
+		sql, err := os.ReadFile(fmt.Sprintf("%s/pg%d/setup.sql", pgmonitor.GetQueriesConfigDir(ctx), cluster.Spec.PostgresVersion))
+		if err != nil {
+			return err
+		}
+
+		// TODO: Revisit how pgbackrest_info.sh is used with pgMonitor.
+		// pgMonitor queries expect a path to a script that runs pgBackRest
+		// info and provides json output. In the queries yaml for pgBackRest
+		// the default path is `/usr/bin/pgbackrest-info.sh`. We update
+		// the path to point to the script in our database image.
+ setup = strings.ReplaceAll(string(sql), "/usr/bin/pgbackrest-info.sh", + "/opt/crunchy/bin/postgres/pgbackrest_info.sh") + + for _, containerStatus := range writablePod.Status.ContainerStatuses { + if containerStatus.Name == naming.ContainerDatabase { + pgImageSHA = containerStatus.ImageID + } + } + + // Could not get container imageID + if pgImageSHA == "" { + return nil + } + } + + // PostgreSQL is available for writes. Prepare to either add or remove + // pgMonitor objects. + + action := func(ctx context.Context, exec postgres.Executor) error { + return pgmonitor.EnableExporterInPostgreSQL(ctx, exec, monitoringSecret, pgmonitor.ExporterDB, setup) + } + + if !pgmonitor.ExporterEnabled(cluster) { + action = func(ctx context.Context, exec postgres.Executor) error { + return pgmonitor.DisableExporterInPostgreSQL(ctx, exec) + } + } + + revision, err := safeHash32(func(hasher io.Writer) error { + // Discard log message from pgmonitor package about executing SQL. + // Nothing is being "executed" yet. + return action(logging.NewContext(ctx, logging.Discard()), func( + _ context.Context, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + _, err := io.Copy(hasher, stdin) + if err == nil { + // Use command and image tag in hash to execute hash on image update + _, err = fmt.Fprint(hasher, command, pgImageSHA, setup) + } + return err + }) + }) + + if err != nil { + return err + } + + if revision != cluster.Status.Monitoring.ExporterConfiguration { + // The configuration is out of date and needs to be updated. + // Include the revision hash in any log messages. + ctx := logging.NewContext(ctx, logging.FromContext(ctx).WithValues("revision", revision)) + + // Apply the necessary SQL and record its hash in cluster.Status + if err == nil { + err = action(ctx, func(ctx context.Context, stdin io.Reader, + stdout, stderr io.Writer, command ...string) error { + return r.PodExec(ctx, writablePod.Namespace, writablePod.Name, naming.ContainerDatabase, stdin, stdout, stderr, command...) + }) + } + if err == nil { + cluster.Status.Monitoring.ExporterConfiguration = revision + } + } + + return err +} + +// reconcileMonitoringSecret reconciles the secret containing authentication +// for monitoring tools +func (r *Reconciler) reconcileMonitoringSecret( + ctx context.Context, + cluster *v1beta1.PostgresCluster) (*corev1.Secret, error) { + + existing := &corev1.Secret{ObjectMeta: naming.MonitoringUserSecret(cluster)} + err := errors.WithStack( + r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing)) + if client.IgnoreNotFound(err) != nil { + return nil, err + } + + if !pgmonitor.ExporterEnabled(cluster) { + // TODO: Checking if the exporter is enabled to determine when monitoring + // secret should be created. If more tools are added to the monitoring + // suite, they could need the secret when the exporter is not enabled. + // This check may need to be updated. + // Exporter is disabled; delete monitoring secret if it exists. 
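+		// At this point err is nil only when the Get above found an existing
+		// secret; otherwise it is a NotFound error that is ignored on return.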
+ if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, existing)) + } + return nil, client.IgnoreNotFound(err) + } + + intent := &corev1.Secret{ObjectMeta: naming.MonitoringUserSecret(cluster)} + intent.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + + intent.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + ) + intent.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RoleMonitoring, + }) + + intent.Data = make(map[string][]byte) + + // Copy existing password and verifier into the intent + if existing.Data != nil { + intent.Data["password"] = existing.Data["password"] + intent.Data["verifier"] = existing.Data["verifier"] + } + + // When password is unset, generate a new one + if len(intent.Data["password"]) == 0 { + password, err := util.GenerateASCIIPassword(util.DefaultGeneratedPasswordLength) + if err != nil { + return nil, err + } + intent.Data["password"] = []byte(password) + // We generated a new password, unset the verifier so that it is regenerated + intent.Data["verifier"] = nil + } + + // When a password has been generated or the verifier is empty, + // generate a verifier based on the current password. + // NOTE(cbandy): We don't have a function to compare a plaintext + // password to a SCRAM verifier. + if len(intent.Data["verifier"]) == 0 { + verifier, err := pgpassword.NewSCRAMPassword(string(intent.Data["password"])).Build() + if err != nil { + return nil, errors.WithStack(err) + } + intent.Data["verifier"] = []byte(verifier) + } + + err = errors.WithStack(r.setControllerReference(cluster, intent)) + if err == nil { + err = errors.WithStack(r.apply(ctx, intent)) + } + if err == nil { + return intent, nil + } + + return nil, err +} + +// addPGMonitorToInstancePodSpec performs the necessary setup to add +// pgMonitor resources on a PodTemplateSpec +func addPGMonitorToInstancePodSpec( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + template *corev1.PodTemplateSpec, + exporterQueriesConfig, exporterWebConfig *corev1.ConfigMap) error { + + err := addPGMonitorExporterToInstancePodSpec(ctx, cluster, template, exporterQueriesConfig, exporterWebConfig) + + return err +} + +// addPGMonitorExporterToInstancePodSpec performs the necessary setup to +// add pgMonitor exporter resources to a PodTemplateSpec +// TODO (jmckulk): refactor to pass around monitoring secret; Without the secret +// the exporter container cannot be created; Testing relies on ensuring the +// monitoring secret is available +func addPGMonitorExporterToInstancePodSpec( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + template *corev1.PodTemplateSpec, + exporterQueriesConfig, exporterWebConfig *corev1.ConfigMap) error { + + if !pgmonitor.ExporterEnabled(cluster) { + return nil + } + + certSecret := cluster.Spec.Monitoring.PGMonitor.Exporter.CustomTLSSecret + withBuiltInCollectors := + !strings.EqualFold(cluster.Annotations[naming.PostgresExporterCollectorsAnnotation], "None") + + var cmd []string + // PG 17 does not include some of the columns found in stat_bgwriter with older PGs. + // Selectively turn off the collector for stat_bgwriter in PG 17, unless the user + // requests all collectors to be turned off. 
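+	// The cases below are ordered so the PG 17 checks take precedence; the
+	// presence of a custom TLS secret only decides whether the web-config
+	// flag is also included.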
+ switch { + case cluster.Spec.PostgresVersion == 17 && withBuiltInCollectors && certSecret == nil: + cmd = pgmonitor.ExporterStartCommand(withBuiltInCollectors, + pgmonitor.ExporterDeactivateStatBGWriterFlag) + case cluster.Spec.PostgresVersion == 17 && withBuiltInCollectors && certSecret != nil: + cmd = pgmonitor.ExporterStartCommand(withBuiltInCollectors, + pgmonitor.ExporterWebConfigFileFlag, + pgmonitor.ExporterDeactivateStatBGWriterFlag) + // If you're turning off all built-in collectors, we don't care which + // version of PG you're using. + case certSecret != nil: + cmd = pgmonitor.ExporterStartCommand(withBuiltInCollectors, + pgmonitor.ExporterWebConfigFileFlag) + default: + cmd = pgmonitor.ExporterStartCommand(withBuiltInCollectors) + } + + securityContext := initialize.RestrictedSecurityContext() + exporterContainer := corev1.Container{ + Name: naming.ContainerPGMonitorExporter, + Image: config.PGExporterContainerImage(cluster), + ImagePullPolicy: cluster.Spec.ImagePullPolicy, + Resources: cluster.Spec.Monitoring.PGMonitor.Exporter.Resources, + Command: cmd, + Env: []corev1.EnvVar{ + {Name: "DATA_SOURCE_URI", Value: fmt.Sprintf("%s:%d/%s", pgmonitor.ExporterHost, *cluster.Spec.Port, pgmonitor.ExporterDB)}, + {Name: "DATA_SOURCE_USER", Value: pgmonitor.MonitoringUser}, + {Name: "DATA_SOURCE_PASS_FILE", Value: "/opt/crunchy/password"}, + }, + SecurityContext: securityContext, + // ContainerPort is needed to support proper target discovery by Prometheus for pgMonitor + // integration + Ports: []corev1.ContainerPort{{ + ContainerPort: pgmonitor.ExporterPort, + Name: naming.PortExporter, + Protocol: corev1.ProtocolTCP, + }}, + VolumeMounts: []corev1.VolumeMount{{ + Name: "exporter-config", + // this is the path for both custom and default queries files + MountPath: "/conf", + }, { + Name: "monitoring-secret", + MountPath: "/opt/crunchy/", + }}, + } + + passwordVolume := corev1.Volume{ + Name: "monitoring-secret", + VolumeSource: corev1.VolumeSource{ + Secret: &corev1.SecretVolumeSource{ + SecretName: naming.MonitoringUserSecret(cluster).Name, + }, + }, + } + + // add custom exporter config volume + configVolume := corev1.Volume{ + Name: "exporter-config", + VolumeSource: corev1.VolumeSource{ + Projected: &corev1.ProjectedVolumeSource{ + Sources: cluster.Spec.Monitoring.PGMonitor.Exporter.Configuration, + }, + }, + } + template.Spec.Volumes = append(template.Spec.Volumes, configVolume, passwordVolume) + + // The original "custom queries" ability allowed users to provide a file with custom queries; + // however, it would turn off the default queries. The new "custom queries" ability allows + // users to append custom queries to the default queries. This new behavior is feature gated. + // Therefore, we only want to add the default queries ConfigMap as a source for the + // "exporter-config" volume if the AppendCustomQueries feature gate is turned on OR if the + // user has not provided any custom configuration. 
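+	// Note: configVolume was already copied into template.Spec.Volumes above, but its Projected
+	// field is a pointer, so appending another source to it below is still reflected in the
+	// pod template.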
+ if feature.Enabled(ctx, feature.AppendCustomQueries) || + cluster.Spec.Monitoring.PGMonitor.Exporter.Configuration == nil { + + defaultConfigVolumeProjection := corev1.VolumeProjection{ + ConfigMap: &corev1.ConfigMapProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: exporterQueriesConfig.Name, + }, + }, + } + configVolume.VolumeSource.Projected.Sources = append(configVolume.VolumeSource.Projected.Sources, + defaultConfigVolumeProjection) + } + + if certSecret != nil { + // TODO (jmckulk): params for paths and such + certVolume := corev1.Volume{Name: "exporter-certs"} + certVolume.Projected = &corev1.ProjectedVolumeSource{ + Sources: append([]corev1.VolumeProjection{}, + corev1.VolumeProjection{ + Secret: certSecret, + }, + ), + } + + webConfigVolume := corev1.Volume{Name: "web-config"} + webConfigVolume.ConfigMap = &corev1.ConfigMapVolumeSource{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: exporterWebConfig.Name, + }, + } + template.Spec.Volumes = append(template.Spec.Volumes, certVolume, webConfigVolume) + + mounts := []corev1.VolumeMount{{ + Name: "exporter-certs", + MountPath: "/certs", + }, { + Name: "web-config", + MountPath: "/web-config", + }} + + exporterContainer.VolumeMounts = append(exporterContainer.VolumeMounts, mounts...) + } + + template.Spec.Containers = append(template.Spec.Containers, exporterContainer) + + // add the proper label to support Pod discovery by Prometheus per pgMonitor configuration + initialize.Labels(template) + template.Labels[naming.LabelPGMonitorDiscovery] = "true" + + return nil +} + +// reconcileExporterWebConfig reconciles the configmap containing the webconfig for exporter tls +func (r *Reconciler) reconcileExporterWebConfig(ctx context.Context, + cluster *v1beta1.PostgresCluster) (*corev1.ConfigMap, error) { + + existing := &corev1.ConfigMap{ObjectMeta: naming.ExporterWebConfigMap(cluster)} + err := errors.WithStack(r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing)) + if client.IgnoreNotFound(err) != nil { + return nil, err + } + + if !pgmonitor.ExporterEnabled(cluster) || cluster.Spec.Monitoring.PGMonitor.Exporter.CustomTLSSecret == nil { + // We could still have a NotFound error here so check the err. + // If no error that means the configmap is found and needs to be deleted + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, existing)) + } + return nil, client.IgnoreNotFound(err) + } + + intent := &corev1.ConfigMap{ + ObjectMeta: naming.ExporterWebConfigMap(cluster), + Data: map[string]string{ + "web-config.yml": ` +# Generated by postgres-operator. DO NOT EDIT. +# Your changes will not be saved. + + +# A certificate and a key file are needed to enable TLS. 
+tls_server_config: + cert_file: /certs/tls.crt + key_file: /certs/tls.key`, + }, + } + + intent.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + ) + intent.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RoleMonitoring, + }) + + intent.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + + err = errors.WithStack(r.setControllerReference(cluster, intent)) + if err == nil { + err = errors.WithStack(r.apply(ctx, intent)) + } + if err == nil { + return intent, nil + } + + return nil, err +} + +// reconcileExporterQueriesConfig reconciles the configmap containing the default queries for exporter +func (r *Reconciler) reconcileExporterQueriesConfig(ctx context.Context, + cluster *v1beta1.PostgresCluster) (*corev1.ConfigMap, error) { + + existing := &corev1.ConfigMap{ObjectMeta: naming.ExporterQueriesConfigMap(cluster)} + err := errors.WithStack(r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing)) + if client.IgnoreNotFound(err) != nil { + return nil, err + } + + if !pgmonitor.ExporterEnabled(cluster) { + // We could still have a NotFound error here so check the err. + // If no error that means the configmap is found and needs to be deleted + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, existing)) + } + return nil, client.IgnoreNotFound(err) + } + + intent := &corev1.ConfigMap{ + ObjectMeta: naming.ExporterQueriesConfigMap(cluster), + Data: map[string]string{"defaultQueries.yml": pgmonitor.GenerateDefaultExporterQueries(ctx, cluster)}, + } + + intent.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + ) + intent.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RoleMonitoring, + }) + + intent.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + + err = errors.WithStack(r.setControllerReference(cluster, intent)) + if err == nil { + err = errors.WithStack(r.apply(ctx, intent)) + } + if err == nil { + return intent, nil + } + + return nil, err +} diff --git a/internal/controller/postgrescluster/pgmonitor_test.go b/internal/controller/postgrescluster/pgmonitor_test.go new file mode 100644 index 0000000000..8d8c8281d0 --- /dev/null +++ b/internal/controller/postgrescluster/pgmonitor_test.go @@ -0,0 +1,839 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "bytes" + "context" + "io" + "os" + "strings" + "testing" + + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func testExporterCollectorsAnnotation(t *testing.T, ctx context.Context, cluster *v1beta1.PostgresCluster, queriesConfig, webConfig *corev1.ConfigMap) { + t.Helper() + + t.Run("ExporterCollectorsAnnotation", func(t *testing.T) { + t.Run("UnexpectedValue", func(t *testing.T) { + template := new(corev1.PodTemplateSpec) + cluster := cluster.DeepCopy() + cluster.SetAnnotations(map[string]string{ + naming.PostgresExporterCollectorsAnnotation: "wrong-value", + }) + + assert.NilError(t, addPGMonitorExporterToInstancePodSpec(ctx, cluster, template, queriesConfig, webConfig)) + + assert.Equal(t, len(template.Spec.Containers), 1) + container := template.Spec.Containers[0] + + command := strings.Join(container.Command, "\n") + assert.Assert(t, cmp.Contains(command, "postgres_exporter")) + assert.Assert(t, !strings.Contains(command, "collector")) + }) + + t.Run("ExpectedValueNone", func(t *testing.T) { + template := new(corev1.PodTemplateSpec) + cluster := cluster.DeepCopy() + cluster.SetAnnotations(map[string]string{ + naming.PostgresExporterCollectorsAnnotation: "None", + }) + + assert.NilError(t, addPGMonitorExporterToInstancePodSpec(ctx, cluster, template, queriesConfig, webConfig)) + + assert.Equal(t, len(template.Spec.Containers), 1) + container := template.Spec.Containers[0] + + command := strings.Join(container.Command, "\n") + assert.Assert(t, cmp.Contains(command, "postgres_exporter")) + assert.Assert(t, cmp.Contains(command, "--[no-]collector")) + + t.Run("LowercaseToo", func(t *testing.T) { + template := new(corev1.PodTemplateSpec) + cluster.SetAnnotations(map[string]string{ + naming.PostgresExporterCollectorsAnnotation: "none", + }) + + assert.NilError(t, addPGMonitorExporterToInstancePodSpec(ctx, cluster, template, queriesConfig, webConfig)) + assert.Assert(t, cmp.Contains(strings.Join(template.Spec.Containers[0].Command, "\n"), "--[no-]collector")) + }) + }) + }) +} + +func TestAddPGMonitorExporterToInstancePodSpec(t *testing.T) { + t.Parallel() + + ctx := context.Background() + image := "test/image:tag" + + cluster := &v1beta1.PostgresCluster{} + cluster.Name = "pg1" + cluster.Spec.Port = initialize.Int32(5432) + cluster.Spec.ImagePullPolicy = corev1.PullAlways + + resources := corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("100m"), + }, + } + + exporterQueriesConfig := new(corev1.ConfigMap) + exporterQueriesConfig.Name = "query-conf" + + t.Run("ExporterDisabled", func(t *testing.T) { + template := &corev1.PodTemplateSpec{} + assert.NilError(t, addPGMonitorExporterToInstancePodSpec(ctx, cluster, template, nil, nil)) + assert.DeepEqual(t, template, &corev1.PodTemplateSpec{}) + }) + + t.Run("ExporterEnabled", func(t 
*testing.T) { + cluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: image, + Resources: resources, + }, + }, + } + template := &corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{ + Name: naming.ContainerDatabase, + }}, + }, + } + + assert.NilError(t, addPGMonitorExporterToInstancePodSpec(ctx, cluster, template, exporterQueriesConfig, nil)) + + assert.Equal(t, len(template.Spec.Containers), 2) + container := template.Spec.Containers[1] + + command := strings.Join(container.Command, "\n") + assert.Assert(t, cmp.Contains(command, "postgres_exporter")) + assert.Assert(t, cmp.Contains(command, "--extend.query-path")) + assert.Assert(t, cmp.Contains(command, "--web.listen-address")) + + // Exclude command from the following comparison. + container.Command = nil + assert.Assert(t, cmp.MarshalMatches(container, ` +env: +- name: DATA_SOURCE_URI + value: localhost:5432/postgres +- name: DATA_SOURCE_USER + value: ccp_monitoring +- name: DATA_SOURCE_PASS_FILE + value: /opt/crunchy/password +image: test/image:tag +imagePullPolicy: Always +name: exporter +ports: +- containerPort: 9187 + name: exporter + protocol: TCP +resources: + requests: + cpu: 100m +securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault +volumeMounts: +- mountPath: /conf + name: exporter-config +- mountPath: /opt/crunchy/ + name: monitoring-secret + `)) + + assert.Assert(t, cmp.MarshalMatches(template.Spec.Volumes, ` +- name: exporter-config + projected: + sources: + - configMap: + name: query-conf +- name: monitoring-secret + secret: + secretName: pg1-monitoring + `)) + + testExporterCollectorsAnnotation(t, ctx, cluster, exporterQueriesConfig, nil) + }) + + t.Run("CustomConfigAppendCustomQueriesOff", func(t *testing.T) { + cluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: image, + Resources: resources, + Configuration: []corev1.VolumeProjection{{ConfigMap: &corev1.ConfigMapProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "exporter-custom-config-test", + }, + }}, + }, + }, + }, + } + template := &corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{ + Name: naming.ContainerDatabase, + }}, + }, + } + + assert.NilError(t, addPGMonitorExporterToInstancePodSpec(ctx, cluster, template, exporterQueriesConfig, nil)) + + assert.Equal(t, len(template.Spec.Containers), 2) + container := template.Spec.Containers[1] + + assert.Assert(t, len(template.Spec.Volumes) > 0) + assert.Assert(t, cmp.MarshalMatches(template.Spec.Volumes[0], ` +name: exporter-config +projected: + sources: + - configMap: + name: exporter-custom-config-test + `)) + + assert.Assert(t, len(container.VolumeMounts) > 0) + assert.Assert(t, cmp.MarshalMatches(container.VolumeMounts[0], ` +mountPath: /conf +name: exporter-config + `)) + }) + + t.Run("CustomConfigAppendCustomQueriesOn", func(t *testing.T) { + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.AppendCustomQueries: true, + })) + ctx := feature.NewContext(ctx, gate) + + cluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: image, + Resources: resources, + Configuration: []corev1.VolumeProjection{{ConfigMap: 
&corev1.ConfigMapProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "exporter-custom-config-test", + }, + }}, + }, + }, + }, + } + template := &corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{ + Name: naming.ContainerDatabase, + }}, + }, + } + + assert.NilError(t, addPGMonitorExporterToInstancePodSpec(ctx, cluster, template, exporterQueriesConfig, nil)) + + assert.Equal(t, len(template.Spec.Containers), 2) + container := template.Spec.Containers[1] + + assert.Assert(t, len(template.Spec.Volumes) > 0) + assert.Assert(t, cmp.MarshalMatches(template.Spec.Volumes[0], ` +name: exporter-config +projected: + sources: + - configMap: + name: exporter-custom-config-test + - configMap: + name: query-conf + `)) + + assert.Assert(t, len(container.VolumeMounts) > 0) + assert.Assert(t, cmp.MarshalMatches(container.VolumeMounts[0], ` +mountPath: /conf +name: exporter-config + `)) + }) + + t.Run("CustomTLS", func(t *testing.T) { + cluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + CustomTLSSecret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "custom-exporter-certs", + }, + }, + }, + }, + } + template := &corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + Containers: []corev1.Container{{ + Name: naming.ContainerDatabase, + }}, + }, + } + + testConfigMap := new(corev1.ConfigMap) + testConfigMap.Name = "test-web-conf" + + assert.NilError(t, addPGMonitorExporterToInstancePodSpec(ctx, cluster, template, exporterQueriesConfig, testConfigMap)) + + assert.Equal(t, len(template.Spec.Containers), 2) + container := template.Spec.Containers[1] + + assert.Assert(t, len(template.Spec.Volumes) > 2, "Expected the original two volumes") + assert.Assert(t, cmp.MarshalMatches(template.Spec.Volumes[2:], ` +- name: exporter-certs + projected: + sources: + - secret: + name: custom-exporter-certs +- configMap: + name: test-web-conf + name: web-config + `)) + + assert.Assert(t, len(container.VolumeMounts) > 2, "Expected the original two mounts") + assert.Assert(t, cmp.MarshalMatches(container.VolumeMounts[2:], ` +- mountPath: /certs + name: exporter-certs +- mountPath: /web-config + name: web-config + `)) + + command := strings.Join(container.Command, "\n") + assert.Assert(t, cmp.Contains(command, "postgres_exporter")) + assert.Assert(t, cmp.Contains(command, "--web.config.file")) + + testExporterCollectorsAnnotation(t, ctx, cluster, exporterQueriesConfig, testConfigMap) + }) +} + +// TestReconcilePGMonitorExporterSetupErrors tests how reconcilePGMonitorExporter +// reacts when the kubernetes resources are in different states (e.g., checks +// what happens when the database pod is terminating) +func TestReconcilePGMonitorExporterSetupErrors(t *testing.T) { + if os.Getenv("QUERIES_CONFIG_DIR") == "" { + t.Skip("QUERIES_CONFIG_DIR must be set") + } + + for _, test := range []struct { + name string + podExecCalled bool + status v1beta1.MonitoringStatus + monitoring *v1beta1.MonitoringSpec + instances []*Instance + secret *corev1.Secret + }{{ + name: "Terminating", + podExecCalled: false, + instances: []*Instance{ + { + Name: "daisy", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-pod", + Annotations: map[string]string{"status": `{"role":"master"}`}, + DeletionTimestamp: &metav1.Time{}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + }, + }, { + name: "NotWritable", + podExecCalled: false, + instances: []*Instance{ + { + Name: "daisy", + 
Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-pod", + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + }, + }, { + name: "NotRunning", + podExecCalled: false, + instances: []*Instance{ + { + Name: "daisy", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-pod", + Annotations: map[string]string{"status": `{"role":"master"}`}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + }, + }, { + name: "ExporterNotRunning", + podExecCalled: false, + monitoring: &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: "image", + }, + }, + }, + instances: []*Instance{ + { + Name: "daisy", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-pod", + Annotations: map[string]string{"status": `{"role":"master"}`}, + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + State: corev1.ContainerState{Running: &corev1.ContainerStateRunning{}}, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + }, + }, { + name: "ExporterImageIDNotFound", + podExecCalled: false, + monitoring: &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: "image", + }, + }, + }, + instances: []*Instance{ + { + Name: "daisy", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-pod", + Annotations: map[string]string{"status": `{"role":"master"}`}, + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + State: corev1.ContainerState{Running: &corev1.ContainerStateRunning{}}, + }, { + Name: naming.ContainerPGMonitorExporter, + State: corev1.ContainerState{Running: &corev1.ContainerStateRunning{}}, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + }, + }, { + name: "NoError", + podExecCalled: true, + monitoring: &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: "image", + }, + }, + }, + instances: []*Instance{ + { + Name: "daisy", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-pod", + Annotations: map[string]string{"status": `{"role":"master"}`}, + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + State: corev1.ContainerState{Running: &corev1.ContainerStateRunning{}}, + ImageID: "image@sha123", + }, { + Name: naming.ContainerPGMonitorExporter, + State: corev1.ContainerState{Running: &corev1.ContainerStateRunning{}}, + ImageID: "image@sha123", + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + }, + secret: &corev1.Secret{ + Data: map[string][]byte{ + "verifier": []byte("blah"), + }, + }, + }} { + t.Run(test.name, func(t *testing.T) { + ctx := context.Background() + var called bool + reconciler := &Reconciler{ + PodExec: func(ctx context.Context, namespace, pod, container string, stdin io.Reader, + stdout, stderr io.Writer, command ...string) error { + called = true + return nil + }, + } + + cluster := &v1beta1.PostgresCluster{} + cluster.Spec.PostgresVersion = 15 + cluster.Spec.Monitoring = test.monitoring + cluster.Status.Monitoring.ExporterConfiguration = test.status.ExporterConfiguration + observed := &observedInstances{forCluster: test.instances} + + assert.NilError(t, reconciler.reconcilePGMonitorExporter(ctx, + cluster, observed, test.secret)) + assert.Equal(t, called, test.podExecCalled) + }) + } +} + +func TestReconcilePGMonitorExporter(t *testing.T) { + ctx := 
context.Background() + var called bool + reconciler := &Reconciler{ + PodExec: func(ctx context.Context, namespace, pod, container string, stdin io.Reader, + stdout, stderr io.Writer, command ...string) error { + called = true + return nil + }, + } + + t.Run("UninstallWhenSecretNil", func(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + cluster.Status.Monitoring.ExporterConfiguration = "installed" + instances := []*Instance{ + { + Name: "one-daisy", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "one-daisy-pod", + Annotations: map[string]string{"status": `{"role":"master"}`}, + }, + Status: corev1.PodStatus{ + Phase: corev1.PodRunning, + ContainerStatuses: []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + ImageID: "dont-care", + State: corev1.ContainerState{ + Running: &corev1.ContainerStateRunning{}, + }, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + } + observed := &observedInstances{forCluster: instances} + + called = false + assert.NilError(t, reconciler.reconcilePGMonitorExporter(ctx, + cluster, observed, nil)) + assert.Assert(t, called, "PodExec was not called.") + assert.Assert(t, cluster.Status.Monitoring.ExporterConfiguration != "", "ExporterConfiguration was empty.") + }) +} + +// TestReconcilePGMonitorExporterStatus checks that the exporter status is updated +// when it should be. Because the status updated when we update the setup sql from +// pgmonitor (by using podExec), we check if podExec is called when a change is needed. +func TestReconcilePGMonitorExporterStatus(t *testing.T) { + if os.Getenv("QUERIES_CONFIG_DIR") == "" { + t.Skip("QUERIES_CONFIG_DIR must be set") + } + + for _, test := range []struct { + name string + exporterEnabled bool + podExecCalled bool + status v1beta1.MonitoringStatus + statusChangedAfterReconcile bool + }{{ + name: "Disabled", + podExecCalled: true, + statusChangedAfterReconcile: true, + }, { + name: "Disabled Uninstall", + podExecCalled: true, + status: v1beta1.MonitoringStatus{ExporterConfiguration: "installed"}, + statusChangedAfterReconcile: true, + }, { + name: "Enabled", + exporterEnabled: true, + podExecCalled: true, + statusChangedAfterReconcile: true, + }, { + name: "Enabled Update", + exporterEnabled: true, + podExecCalled: true, + status: v1beta1.MonitoringStatus{ExporterConfiguration: "installed"}, + statusChangedAfterReconcile: true, + }, { + name: "Enabled NoUpdate", + exporterEnabled: true, + podExecCalled: false, + // Status was generated manually for this test case + // TODO (jmckulk): add code to generate status + status: v1beta1.MonitoringStatus{ExporterConfiguration: "6d874c58df"}, + statusChangedAfterReconcile: false, + }} { + t.Run(test.name, func(t *testing.T) { + ctx := context.Background() + var ( + called bool + secret *corev1.Secret + ) + + // Create reconciler with mock PodExec function + reconciler := &Reconciler{ + PodExec: func(ctx context.Context, namespace, pod, container string, stdin io.Reader, + stdout, stderr io.Writer, command ...string) error { + called = true + return nil + }, + } + + // Create the test cluster spec with the exporter status set + cluster := &v1beta1.PostgresCluster{} + cluster.Spec.PostgresVersion = 15 + cluster.Status.Monitoring.ExporterConfiguration = test.status.ExporterConfiguration + + // Mock up an instances that will be defined in the cluster. 
The instances should + // have all necessary fields that will be needed to reconcile the exporter + instances := []*Instance{ + { + Name: "daisy", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Name: "daisy-pod", + Annotations: map[string]string{"status": `{"role":"master"}`}, + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + State: corev1.ContainerState{Running: &corev1.ContainerStateRunning{}}, + ImageID: "image@sha123", + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + } + + if test.exporterEnabled { + // When testing with exporter enabled update the spec with exporter fields + cluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: "image", + }, + }, + } + + // Update mock instances to include the exporter container + instances[0].Pods[0].Status.ContainerStatuses = append( + instances[0].Pods[0].Status.ContainerStatuses, corev1.ContainerStatus{ + Name: naming.ContainerPGMonitorExporter, + State: corev1.ContainerState{Running: &corev1.ContainerStateRunning{}}, + ImageID: "image@sha123", + }) + + secret = &corev1.Secret{ + Data: map[string][]byte{ + "verifier": []byte("blah"), + }, + } + } + + // Mock up observed instances based on our mock instances + observed := &observedInstances{forCluster: instances} + + // Check that we can reconcile with the test resources + assert.NilError(t, reconciler.reconcilePGMonitorExporter(ctx, + cluster, observed, secret)) + // Check that the exporter status changes when it needs to + assert.Assert(t, test.statusChangedAfterReconcile == (cluster.Status.Monitoring.ExporterConfiguration != test.status.ExporterConfiguration), + "got %v", cluster.Status.Monitoring.ExporterConfiguration) + // Check that pod exec is called correctly + assert.Equal(t, called, test.podExecCalled) + }) + } +} + +// TestReconcileMonitoringSecret checks that the secret intent returned by reconcileMonitoringSecret +// is correct. If exporter is enabled, the return shouldn't be nil. If the exporter is disabled, the +// return should be nil. 
+func TestReconcileMonitoringSecret(t *testing.T) { + // TODO (jmckulk): debug test with existing cluster + // Seems to be an issue when running with other tests + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("Test failing with existing cluster") + } + + ctx := context.Background() + + // Kubernetes is required because reconcileMonitoringSecret + // (1) uses the client to get existing secrets + // (2) sets the controller reference on the new secret + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + + cluster := testCluster() + cluster.Default() + cluster.UID = types.UID("hippouid") + cluster.Namespace = setupNamespace(t, cc).Name + + // If the exporter is disabled then the secret should not exist + // Existing secrets should be removed + t.Run("ExporterDisabled", func(t *testing.T) { + t.Run("NotExisting", func(t *testing.T) { + secret, err := reconciler.reconcileMonitoringSecret(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, secret == nil, "Monitoring secret was not nil.") + }) + + t.Run("Existing", func(t *testing.T) { + cluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{Image: "image"}}} + existing, err := reconciler.reconcileMonitoringSecret(ctx, cluster) + assert.NilError(t, err, "error in test; existing secret not created") + assert.Assert(t, existing != nil, "error in test; existing secret not created") + + cluster.Spec.Monitoring = nil + actual, err := reconciler.reconcileMonitoringSecret(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, actual == nil, "Monitoring secret still exists after turning exporter off.") + }) + }) + + // If the exporter is enabled then a monitoring secret should exist + // It will need to be created or left in place with existing password + t.Run("ExporterEnabled", func(t *testing.T) { + var ( + existing, actual *corev1.Secret + err error + ) + + // Enable monitoring in the test cluster spec + cluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: "image", + }, + }, + } + + t.Run("NotExisting", func(t *testing.T) { + existing, err = reconciler.reconcileMonitoringSecret(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, existing != nil, "Monitoring secret does not exist.") + }) + + t.Run("Existing", func(t *testing.T) { + actual, err = reconciler.reconcileMonitoringSecret(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, bytes.Equal(actual.Data["password"], existing.Data["password"]), "Passwords do not match.") + }) + }) +} + +// TestReconcileExporterQueriesConfig checks that the ConfigMap intent returned by +// reconcileExporterQueriesConfig is correct. If exporter is enabled, the return +// shouldn't be nil. If the exporter is disabled, the return should be nil. 
+func TestReconcileExporterQueriesConfig(t *testing.T) { + ctx := context.Background() + + // Kubernetes is required because reconcileExporterQueriesConfig + // (1) uses the client to get existing ConfigMaps + // (2) sets the controller reference on the new ConfigMap + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &Reconciler{Client: cc, Owner: client.FieldOwner(t.Name())} + + cluster := testCluster() + cluster.Default() + cluster.UID = types.UID("hippouid") + cluster.Namespace = setupNamespace(t, cc).Name + + t.Run("ExporterDisabled", func(t *testing.T) { + t.Run("NotExisting", func(t *testing.T) { + queriesConfig, err := reconciler.reconcileExporterQueriesConfig(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, queriesConfig == nil, "Default queries ConfigMap is present.") + }) + + t.Run("Existing", func(t *testing.T) { + cluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{Image: "image"}}} + existing, err := reconciler.reconcileExporterQueriesConfig(ctx, cluster) + assert.NilError(t, err, "error in test; existing config not created") + assert.Assert(t, existing != nil, "error in test; existing config not created") + + cluster.Spec.Monitoring = nil + actual, err := reconciler.reconcileExporterQueriesConfig(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, actual == nil, "Default queries config still present after disabling exporter.") + }) + }) + + t.Run("ExporterEnabled", func(t *testing.T) { + var ( + existing, actual *corev1.ConfigMap + err error + ) + + // Enable monitoring in the test cluster spec + cluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: "image", + }, + }, + } + + t.Run("NotExisting", func(t *testing.T) { + existing, err = reconciler.reconcileExporterQueriesConfig(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, existing != nil, "Default queries config does not exist.") + }) + + t.Run("Existing", func(t *testing.T) { + actual, err = reconciler.reconcileExporterQueriesConfig(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, actual.Data["defaultQueries.yml"] == existing.Data["defaultQueries.yml"], "Data does not align.") + }) + }) +} diff --git a/internal/controller/postgrescluster/pki.go b/internal/controller/postgrescluster/pki.go new file mode 100644 index 0000000000..0314ad4406 --- /dev/null +++ b/internal/controller/postgrescluster/pki.go @@ -0,0 +1,253 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + + "github.com/pkg/errors" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // https://www.postgresql.org/docs/current/ssl-tcp.html + clusterCertFile = "tls.crt" + clusterKeyFile = "tls.key" + rootCertFile = "ca.crt" +) + +// +kubebuilder:rbac:groups="",resources="secrets",verbs={get} +// +kubebuilder:rbac:groups="",resources="secrets",verbs={create,patch} + +// reconcileRootCertificate ensures the root certificate, stored +// in the relevant secret, has been created and is not 'bad' due +// to being expired, formatted incorrectly, etc. 
+// If it is bad for some reason, a new root certificate is +// generated for use. +func (r *Reconciler) reconcileRootCertificate( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) ( + *pki.RootCertificateAuthority, error, +) { + const keyCertificate, keyPrivateKey = "root.crt", "root.key" + + existing := &corev1.Secret{} + existing.Namespace, existing.Name = cluster.Namespace, naming.RootCertSecret + err := errors.WithStack(client.IgnoreNotFound( + r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing))) + + root := &pki.RootCertificateAuthority{} + + if err == nil { + // Unmarshal and validate the stored root. These first errors can + // be ignored because they result in an invalid root which is then + // correctly regenerated. + _ = root.Certificate.UnmarshalText(existing.Data[keyCertificate]) + _ = root.PrivateKey.UnmarshalText(existing.Data[keyPrivateKey]) + + if !pki.RootIsValid(root) { + root, err = pki.NewRootCertificateAuthority() + err = errors.WithStack(err) + } + } + + intent := &corev1.Secret{} + intent.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + intent.Namespace, intent.Name = cluster.Namespace, naming.RootCertSecret + intent.Data = make(map[string][]byte) + intent.ObjectMeta.OwnerReferences = existing.ObjectMeta.OwnerReferences + + // A root secret is scoped to the namespace where postgrescluster(s) + // are deployed. For operator deployments with postgresclusters in more than + // one namespace, there will be one root per namespace. + // During reconciliation, the owner reference block of the root secret is + // updated to include the postgrescluster as an owner. + // However, unlike the leaf certificate, the postgrescluster will not be + // set as the controller. This allows for multiple owners to guide garbage + // collection, but avoids any errors related to setting multiple controllers. + // https://docs.k8s.io/concepts/workloads/controllers/garbage-collection/#owners-and-dependents + if err == nil { + err = errors.WithStack(r.setOwnerReference(cluster, intent)) + } + if err == nil { + intent.Data[keyCertificate], err = root.Certificate.MarshalText() + err = errors.WithStack(err) + } + if err == nil { + intent.Data[keyPrivateKey], err = root.PrivateKey.MarshalText() + err = errors.WithStack(err) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, intent)) + } + + return root, err +} + +// +kubebuilder:rbac:groups="",resources="secrets",verbs={get} +// +kubebuilder:rbac:groups="",resources="secrets",verbs={create,patch} + +// reconcileClusterCertificate first checks if a custom certificate +// secret is configured. If so, that secret projection is returned. +// Otherwise, a secret containing a generated leaf certificate, stored in +// the relevant secret, has been created and is not 'bad' due to being +// expired, formatted incorrectly, etc. If it is bad for any reason, a new +// leaf certificate is generated using the current root certificate. +// In either case, the relevant secret is expected to contain three files: +// tls.crt, tls.key and ca.crt which are the TLS certificate, private key +// and CA certificate, respectively. 
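As an aside on the three keys above: they follow the standard kubernetes.io/tls layout plus the issuing CA, so anything that mounts the projected secret can build a TLS configuration straight from those files. The following sketch of such a consumer is illustrative only and not part of this change; the /pgconf/tls mount path is hypothetical.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
)

func main() {
	// Client certificate and key generated (or provided) for the cluster.
	cert, err := tls.LoadX509KeyPair("/pgconf/tls/tls.crt", "/pgconf/tls/tls.key")
	if err != nil {
		log.Fatal(err)
	}

	// Trust the cluster CA from ca.crt when verifying the server.
	caPEM, err := os.ReadFile("/pgconf/tls/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("no certificates found in ca.crt")
	}

	cfg := &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool}
	_ = cfg // hand this to a Postgres driver or HTTP client as needed
}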
+func (r *Reconciler) reconcileClusterCertificate( + ctx context.Context, root *pki.RootCertificateAuthority, + cluster *v1beta1.PostgresCluster, primaryService *corev1.Service, + replicaService *corev1.Service, +) ( + *corev1.SecretProjection, error, +) { + // if a custom postgrescluster secret is provided, just return it + if cluster.Spec.CustomTLSSecret != nil { + return cluster.Spec.CustomTLSSecret, nil + } + + const keyCertificate, keyPrivateKey, rootCA = "tls.crt", "tls.key", "ca.crt" + + existing := &corev1.Secret{ObjectMeta: naming.PostgresTLSSecret(cluster)} + err := errors.WithStack(client.IgnoreNotFound( + r.Client.Get(ctx, client.ObjectKeyFromObject(existing), existing))) + + leaf := &pki.LeafCertificate{} + dnsNames := append(naming.ServiceDNSNames(ctx, primaryService), naming.ServiceDNSNames(ctx, replicaService)...) + dnsFQDN := dnsNames[0] + + if err == nil { + // Unmarshal and validate the stored leaf. These first errors can + // be ignored because they result in an invalid leaf which is then + // correctly regenerated. + _ = leaf.Certificate.UnmarshalText(existing.Data[keyCertificate]) + _ = leaf.PrivateKey.UnmarshalText(existing.Data[keyPrivateKey]) + + leaf, err = root.RegenerateLeafWhenNecessary(leaf, dnsFQDN, dnsNames) + err = errors.WithStack(err) + } + + intent := &corev1.Secret{ObjectMeta: naming.PostgresTLSSecret(cluster)} + intent.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + intent.Data = make(map[string][]byte) + intent.ObjectMeta.OwnerReferences = existing.ObjectMeta.OwnerReferences + + intent.Annotations = naming.Merge(cluster.Spec.Metadata.GetAnnotationsOrNil()) + intent.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelClusterCertificate: "postgres-tls", + }) + + if err == nil { + err = errors.WithStack(r.setControllerReference(cluster, intent)) + } + + if err == nil { + intent.Data[keyCertificate], err = leaf.Certificate.MarshalText() + err = errors.WithStack(err) + } + if err == nil { + intent.Data[keyPrivateKey], err = leaf.PrivateKey.MarshalText() + err = errors.WithStack(err) + } + if err == nil { + intent.Data[rootCA], err = root.Certificate.MarshalText() + err = errors.WithStack(err) + } + + // TODO(tjmoore4): The generated postgrescluster secret is only created + // when a custom secret is not specified. However, if the secret is + // initially created and a custom secret is later used, the generated + // secret is currently left in place. + if err == nil { + err = errors.WithStack(r.apply(ctx, intent)) + } + + return clusterCertSecretProjection(intent), err +} + +// +kubebuilder:rbac:groups="",resources="secrets",verbs={get} +// +kubebuilder:rbac:groups="",resources="secrets",verbs={create,patch} + +// instanceCertificate populates intent with the DNS leaf certificate and +// returns it. It also ensures the leaf certificate, stored in the relevant +// secret, has been created and is not 'bad' due to being expired, formatted +// incorrectly, etc. In addition, a check is made to ensure the leaf cert's +// authority key ID matches the corresponding root cert's subject +// key ID (i.e. the root cert is the 'parent' of the leaf cert). 
+// If it is bad for any reason, a new leaf certificate is generated +// using the current root certificate +func (*Reconciler) instanceCertificate( + ctx context.Context, instance *appsv1.StatefulSet, + existing, intent *corev1.Secret, root *pki.RootCertificateAuthority, +) ( + *pki.LeafCertificate, error, +) { + var err error + const keyCertificate, keyPrivateKey = "dns.crt", "dns.key" + + leaf := &pki.LeafCertificate{} + + // RFC 2818 states that the certificate DNS names must be used to verify + // HTTPS identity. + dnsNames := naming.InstancePodDNSNames(ctx, instance) + dnsFQDN := dnsNames[0] + + if err == nil { + // Unmarshal and validate the stored leaf. These first errors can + // be ignored because they result in an invalid leaf which is then + // correctly regenerated. + _ = leaf.Certificate.UnmarshalText(existing.Data[keyCertificate]) + _ = leaf.PrivateKey.UnmarshalText(existing.Data[keyPrivateKey]) + + leaf, err = root.RegenerateLeafWhenNecessary(leaf, dnsFQDN, dnsNames) + err = errors.WithStack(err) + } + + if err == nil { + intent.Data[keyCertificate], err = leaf.Certificate.MarshalText() + err = errors.WithStack(err) + } + if err == nil { + intent.Data[keyPrivateKey], err = leaf.PrivateKey.MarshalText() + err = errors.WithStack(err) + } + + return leaf, err +} + +// clusterCertSecretProjection returns a secret projection of the postgrescluster's +// CA, key, and certificate to include in the instance configuration volume. +func clusterCertSecretProjection(certificate *corev1.Secret) *corev1.SecretProjection { + return &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: certificate.Name, + }, + Items: []corev1.KeyToPath{ + { + Key: clusterCertFile, + Path: clusterCertFile, + }, + { + Key: clusterKeyFile, + Path: clusterKeyFile, + }, + { + Key: rootCertFile, + Path: rootCertFile, + }, + }, + } +} diff --git a/internal/controller/postgrescluster/pki_test.go b/internal/controller/postgrescluster/pki_test.go new file mode 100644 index 0000000000..c2fe7af82a --- /dev/null +++ b/internal/controller/postgrescluster/pki_test.go @@ -0,0 +1,401 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "os" + "reflect" + "strings" + "testing" + + "github.com/pkg/errors" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// TestReconcileCerts tests the proper reconciliation of the root ca certificate +// secret, leaf certificate secrets and the updates that occur when updates are +// made to the cluster certificates generally. For the removal of ownership +// references and deletion of the root CA cert secret, a separate Kuttl test is +// used due to the need for proper garbage collection. 
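For reference, the parent/child check described for instanceCertificate above (the leaf's authority key ID matching the root's subject key ID, with the root as the signer) can be expressed with the standard library alone. The sketch below assumes both certificates are already parsed *x509.Certificate values; the operator's own pki package wraps this differently.

package pkisketch

import (
	"bytes"
	"crypto/x509"
)

// isChildOf reports whether leaf was issued by root: the authority key ID of
// the leaf must match the subject key ID of the root, and the leaf's
// signature must verify against the root's public key.
func isChildOf(leaf, root *x509.Certificate) bool {
	return bytes.Equal(leaf.AuthorityKeyId, root.SubjectKeyId) &&
		leaf.CheckSignatureFrom(root) == nil
}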
+func TestReconcileCerts(t *testing.T) { + // Garbage collector cleans up test resources before the test completes + if strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + t.Skip("USE_EXISTING_CLUSTER: Test fails due to garbage collection") + } + + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 2) + ctx := context.Background() + namespace := setupNamespace(t, tClient).Name + + r := &Reconciler{ + Client: tClient, + Owner: ControllerName, + } + + // set up cluster1 + clusterName1 := "hippocluster1" + + // set up test cluster1 + cluster1 := testCluster() + cluster1.Name = clusterName1 + cluster1.Namespace = namespace + if err := tClient.Create(ctx, cluster1); err != nil { + t.Error(err) + } + + // set up test cluster2 + cluster2Name := "hippocluster2" + + cluster2 := testCluster() + cluster2.Name = cluster2Name + cluster2.Namespace = namespace + if err := tClient.Create(ctx, cluster2); err != nil { + t.Error(err) + } + + primaryService := new(corev1.Service) + primaryService.Namespace = namespace + primaryService.Name = "the-primary" + + replicaService := new(corev1.Service) + replicaService.Namespace = namespace + replicaService.Name = "the-replicas" + + t.Run("check root certificate reconciliation", func(t *testing.T) { + + initialRoot, err := r.reconcileRootCertificate(ctx, cluster1) + assert.NilError(t, err) + + rootSecret := &corev1.Secret{} + rootSecret.Namespace, rootSecret.Name = namespace, naming.RootCertSecret + rootSecret.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + + t.Run("check root CA secret first owner reference", func(t *testing.T) { + + err := tClient.Get(ctx, client.ObjectKeyFromObject(rootSecret), rootSecret) + assert.NilError(t, err) + + assert.Check(t, len(rootSecret.ObjectMeta.OwnerReferences) == 1, "first owner reference not set") + + expectedOR := metav1.OwnerReference{ + APIVersion: "postgres-operator.crunchydata.com/v1beta1", + Kind: "PostgresCluster", + Name: "hippocluster1", + UID: cluster1.UID, + } + + if len(rootSecret.ObjectMeta.OwnerReferences) > 0 { + assert.Equal(t, rootSecret.ObjectMeta.OwnerReferences[0], expectedOR) + } + }) + + t.Run("check root CA secret second owner reference", func(t *testing.T) { + + _, err := r.reconcileRootCertificate(ctx, cluster2) + assert.NilError(t, err) + + err = tClient.Get(ctx, client.ObjectKeyFromObject(rootSecret), rootSecret) + assert.NilError(t, err) + + clist := &v1beta1.PostgresClusterList{} + assert.NilError(t, tClient.List(ctx, clist)) + + assert.Check(t, len(rootSecret.ObjectMeta.OwnerReferences) == 2, "second owner reference not set") + + expectedOR := metav1.OwnerReference{ + APIVersion: "postgres-operator.crunchydata.com/v1beta1", + Kind: "PostgresCluster", + Name: "hippocluster2", + UID: cluster2.UID, + } + + if len(rootSecret.ObjectMeta.OwnerReferences) > 1 { + assert.Equal(t, rootSecret.ObjectMeta.OwnerReferences[1], expectedOR) + } + }) + + t.Run("root certificate is returned correctly", func(t *testing.T) { + + fromSecret, err := getCertFromSecret(ctx, tClient, naming.RootCertSecret, namespace, "root.crt") + assert.NilError(t, err) + + // assert returned certificate matches the one created earlier + assert.DeepEqual(t, *fromSecret, initialRoot.Certificate) + }) + + t.Run("root certificate changes", func(t *testing.T) { + // force the generation of a new root cert + // create an empty secret and apply the change + emptyRootSecret := &corev1.Secret{} + emptyRootSecret.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + 
emptyRootSecret.Namespace, emptyRootSecret.Name = namespace, naming.RootCertSecret + emptyRootSecret.Data = make(map[string][]byte) + err = errors.WithStack(r.apply(ctx, emptyRootSecret)) + assert.NilError(t, err) + + // reconcile the root cert secret, creating a new root cert + returnedRoot, err := r.reconcileRootCertificate(ctx, cluster1) + assert.NilError(t, err) + + fromSecret, err := getCertFromSecret(ctx, tClient, naming.RootCertSecret, namespace, "root.crt") + assert.NilError(t, err) + + // check that the cert from the secret does not equal the initial certificate + assert.Assert(t, !fromSecret.Equal(initialRoot.Certificate)) + + // check that the returned cert matches the cert from the secret + assert.DeepEqual(t, *fromSecret, returnedRoot.Certificate) + }) + + }) + + t.Run("check leaf certificate reconciliation", func(t *testing.T) { + + initialRoot, err := r.reconcileRootCertificate(ctx, cluster1) + assert.NilError(t, err) + + // instance with minimal required fields + instance := &appsv1.StatefulSet{ + TypeMeta: metav1.TypeMeta{ + APIVersion: appsv1.SchemeGroupVersion.String(), + Kind: "StatefulSet", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: clusterName1, + Namespace: namespace, + }, + Spec: appsv1.StatefulSetSpec{ + ServiceName: clusterName1, + }, + } + + t.Run("check leaf certificate in secret", func(t *testing.T) { + existing := &corev1.Secret{Data: make(map[string][]byte)} + intent := &corev1.Secret{Data: make(map[string][]byte)} + + initialLeafCert, err := r.instanceCertificate(ctx, instance, existing, intent, initialRoot) + assert.NilError(t, err) + + fromSecret := &pki.LeafCertificate{} + assert.NilError(t, fromSecret.Certificate.UnmarshalText(intent.Data["dns.crt"])) + assert.NilError(t, fromSecret.PrivateKey.UnmarshalText(intent.Data["dns.key"])) + + assert.DeepEqual(t, fromSecret, initialLeafCert) + }) + + t.Run("check that the leaf certs update when root changes", func(t *testing.T) { + + // force the generation of a new root cert + // create an empty secret and apply the change + emptyRootSecret := &corev1.Secret{} + emptyRootSecret.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + emptyRootSecret.Namespace, emptyRootSecret.Name = namespace, naming.RootCertSecret + emptyRootSecret.Data = make(map[string][]byte) + err = errors.WithStack(r.apply(ctx, emptyRootSecret)) + + // reconcile the root cert secret + newRootCert, err := r.reconcileRootCertificate(ctx, cluster1) + assert.NilError(t, err) + + existing := &corev1.Secret{Data: make(map[string][]byte)} + intent := &corev1.Secret{Data: make(map[string][]byte)} + + initialLeaf, err := r.instanceCertificate(ctx, instance, existing, intent, initialRoot) + assert.NilError(t, err) + + // reconcile the certificate + newLeaf, err := r.instanceCertificate(ctx, instance, existing, intent, newRootCert) + assert.NilError(t, err) + + // assert old leaf cert does not match the newly reconciled one + assert.Assert(t, !initialLeaf.Certificate.Equal(newLeaf.Certificate)) + + // 'reconcile' the certificate when the secret does not change. 
The returned leaf certificate should not change + newLeaf2, err := r.instanceCertificate(ctx, instance, intent, intent, newRootCert) + assert.NilError(t, err) + + // check that the leaf cert did not change after another reconciliation + assert.DeepEqual(t, newLeaf2, newLeaf) + + }) + + }) + + t.Run("check cluster certificate secret reconciliation", func(t *testing.T) { + // example auto-generated secret projection + testSecretProjection := &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: fmt.Sprintf(naming.ClusterCertSecret, cluster1.Name), + }, + Items: []corev1.KeyToPath{ + { + Key: clusterCertFile, + Path: clusterCertFile, + }, + { + Key: clusterKeyFile, + Path: clusterKeyFile, + }, + { + Key: rootCertFile, + Path: rootCertFile, + }, + }, + } + + // example custom secret projection + customSecretProjection := &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "customsecret", + }, + Items: []corev1.KeyToPath{ + { + Key: clusterCertFile, + Path: clusterCertFile, + }, + { + Key: clusterKeyFile, + Path: clusterKeyFile, + }, + { + Key: rootCertFile, + Path: rootCertFile, + }, + }, + } + + cluster2.Spec.CustomTLSSecret = customSecretProjection + + initialRoot, err := r.reconcileRootCertificate(ctx, cluster1) + assert.NilError(t, err) + + t.Run("check standard secret projection", func(t *testing.T) { + secretCertProj, err := r.reconcileClusterCertificate(ctx, initialRoot, cluster1, primaryService, replicaService) + assert.NilError(t, err) + + assert.DeepEqual(t, testSecretProjection, secretCertProj) + }) + + t.Run("check custom secret projection", func(t *testing.T) { + customSecretCertProj, err := r.reconcileClusterCertificate(ctx, initialRoot, cluster2, primaryService, replicaService) + assert.NilError(t, err) + + assert.DeepEqual(t, customSecretProjection, customSecretCertProj) + }) + + t.Run("check switch to a custom secret projection", func(t *testing.T) { + // simulate a new custom secret + testSecret := &corev1.Secret{} + testSecret.Namespace, testSecret.Name = namespace, "newcustomsecret" + // simulate cluster spec update + cluster2.Spec.CustomTLSSecret.LocalObjectReference.Name = "newcustomsecret" + + // get the expected secret projection + testSecretProjection := clusterCertSecretProjection(testSecret) + + // reconcile the secret project using the normal process + customSecretCertProj, err := r.reconcileClusterCertificate(ctx, initialRoot, cluster2, primaryService, replicaService) + assert.NilError(t, err) + + // results should be the same + assert.DeepEqual(t, testSecretProjection, customSecretCertProj) + }) + + t.Run("check cluster certificate secret", func(t *testing.T) { + // get the cluster cert secret + initialClusterCertSecret := &corev1.Secret{} + err := tClient.Get(ctx, types.NamespacedName{ + Name: fmt.Sprintf(naming.ClusterCertSecret, cluster1.Name), + Namespace: namespace, + }, initialClusterCertSecret) + assert.NilError(t, err) + + // force the generation of a new root cert + // create an empty secret and apply the change + emptyRootSecret := &corev1.Secret{} + emptyRootSecret.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + emptyRootSecret.Namespace, emptyRootSecret.Name = namespace, naming.RootCertSecret + emptyRootSecret.Data = make(map[string][]byte) + err = errors.WithStack(r.apply(ctx, emptyRootSecret)) + assert.NilError(t, err) + + // reconcile the root cert secret, creating a new root cert + returnedRoot, err := r.reconcileRootCertificate(ctx, cluster1) + 
assert.NilError(t, err) + + // pass in the new root, which should result in a new cluster cert + _, err = r.reconcileClusterCertificate(ctx, returnedRoot, cluster1, primaryService, replicaService) + assert.NilError(t, err) + + // get the new cluster cert secret + newClusterCertSecret := &corev1.Secret{} + err = tClient.Get(ctx, types.NamespacedName{ + Name: fmt.Sprintf(naming.ClusterCertSecret, cluster1.Name), + Namespace: namespace, + }, newClusterCertSecret) + assert.NilError(t, err) + + assert.Assert(t, !reflect.DeepEqual(initialClusterCertSecret, newClusterCertSecret)) + + leaf := &pki.LeafCertificate{} + assert.NilError(t, leaf.Certificate.UnmarshalText(newClusterCertSecret.Data["tls.crt"])) + assert.NilError(t, leaf.PrivateKey.UnmarshalText(newClusterCertSecret.Data["tls.key"])) + + assert.Assert(t, + strings.HasPrefix(leaf.Certificate.CommonName(), "the-primary."+namespace+".svc."), + "got %q", leaf.Certificate.CommonName()) + + if dnsNames := leaf.Certificate.DNSNames(); assert.Check(t, len(dnsNames) > 1) { + assert.DeepEqual(t, dnsNames[1:4], []string{ + "the-primary." + namespace + ".svc", + "the-primary." + namespace, + "the-primary", + }) + assert.DeepEqual(t, dnsNames[5:8], []string{ + "the-replicas." + namespace + ".svc", + "the-replicas." + namespace, + "the-replicas", + }) + } + }) + }) +} + +// getCertFromSecret returns a parsed certificate from the named secret +func getCertFromSecret( + ctx context.Context, tClient client.Client, name, namespace, dataKey string, +) (*pki.Certificate, error) { + // get cert secret + secret := &corev1.Secret{} + if err := tClient.Get(ctx, types.NamespacedName{ + Name: name, + Namespace: namespace, + }, secret); err != nil { + return nil, err + } + + // get the cert from the secret + secretCRT, ok := secret.Data[dataKey] + if !ok { + return nil, errors.New(fmt.Sprintf("could not retrieve %s", dataKey)) + } + + // parse the cert from binary encoded data + fromSecret := &pki.Certificate{} + return fromSecret, fromSecret.UnmarshalText(secretCRT) +} diff --git a/internal/controller/postgrescluster/pod_disruption_budget.go b/internal/controller/postgrescluster/pod_disruption_budget.go new file mode 100644 index 0000000000..4bff4a9743 --- /dev/null +++ b/internal/controller/postgrescluster/pod_disruption_budget.go @@ -0,0 +1,68 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +// Note: The behavior for an empty selector differs between the +// policy/v1beta1 and policy/v1 APIs for PodDisruptionBudgets. For +// policy/v1beta1 an empty selector matches zero pods, while for +// policy/v1 an empty selector matches every pod in the namespace. 
+// https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget +import ( + "github.com/pkg/errors" + policyv1 "k8s.io/api/policy/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// generatePodDisruptionBudget takes parameters required to fill out a PDB and +// returns the PDB +func (r *Reconciler) generatePodDisruptionBudget( + cluster *v1beta1.PostgresCluster, + meta metav1.ObjectMeta, + minAvailable *intstr.IntOrString, + selector metav1.LabelSelector, +) (*policyv1.PodDisruptionBudget, error) { + pdb := &policyv1.PodDisruptionBudget{ + ObjectMeta: meta, + Spec: policyv1.PodDisruptionBudgetSpec{ + MinAvailable: minAvailable, + Selector: &selector, + }, + } + pdb.SetGroupVersionKind(policyv1.SchemeGroupVersion.WithKind("PodDisruptionBudget")) + err := errors.WithStack(r.setControllerReference(cluster, pdb)) + return pdb, err +} + +// getMinAvailable contains logic to either parse a user provided IntOrString +// value or determine a default minimum available based on replicas. In both +// cases it returns the minAvailable as an int32 that should be set on a +// PodDisruptionBudget +func getMinAvailable( + minAvailable *intstr.IntOrString, + replicas int32, +) *intstr.IntOrString { + // TODO: Webhook Validation for minAvailable in the spec + // - MinAvailable should be less than replicas + // - MinAvailable as a string value should be a percentage string <= 100% + if minAvailable != nil { + return minAvailable + } + + // If the user does not provide 'minAvailable', we will set a default + // based on the number of replicas. + var expect int32 + + // We default to '1' if they have more than one replica defined. + if replicas > 1 { + expect = 1 + } + + // If more than one replica is not defined, we will default to '0' + return initialize.Pointer(intstr.FromInt32(expect)) +} diff --git a/internal/controller/postgrescluster/pod_disruption_budget_test.go b/internal/controller/postgrescluster/pod_disruption_budget_test.go new file mode 100644 index 0000000000..55e2bb63c6 --- /dev/null +++ b/internal/controller/postgrescluster/pod_disruption_budget_test.go @@ -0,0 +1,107 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "testing" + + "gotest.tools/v3/assert" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/util/intstr" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestGeneratePodDisruptionBudget(t *testing.T) { + _, cc := setupKubernetes(t) + r := &Reconciler{Client: cc} + require.ParallelCapacity(t, 0) + + var ( + minAvailable *intstr.IntOrString + selector metav1.LabelSelector + ) + + t.Run("empty", func(t *testing.T) { + // If empty values are passed into the function does it blow up + _, err := r.generatePodDisruptionBudget( + &v1beta1.PostgresCluster{}, + metav1.ObjectMeta{}, + minAvailable, + selector, + ) + assert.NilError(t, err) + }) + + t.Run("valid", func(t *testing.T) { + cluster := testCluster() + meta := metav1.ObjectMeta{ + Name: "test-pdb", + Namespace: "test-ns", + Labels: map[string]string{ + "label-key": "label-value", + }, + Annotations: map[string]string{ + "anno-key": "anno-value", + }, + } + minAvailable = initialize.Pointer(intstr.FromInt32(1)) + selector := metav1.LabelSelector{ + MatchLabels: map[string]string{ + "key": "value", + }, + } + pdb, err := r.generatePodDisruptionBudget( + cluster, + meta, + minAvailable, + selector, + ) + assert.NilError(t, err) + assert.Equal(t, pdb.Name, meta.Name) + assert.Equal(t, pdb.Namespace, meta.Namespace) + assert.Assert(t, labels.Set(pdb.Labels).Has("label-key")) + assert.Assert(t, labels.Set(pdb.Annotations).Has("anno-key")) + assert.Equal(t, pdb.Spec.MinAvailable, minAvailable) + assert.DeepEqual(t, pdb.Spec.Selector.MatchLabels, map[string]string{ + "key": "value", + }) + assert.Assert(t, metav1.IsControlledBy(pdb, cluster)) + }) +} + +func TestGetMinAvailable(t *testing.T) { + t.Run("minAvailable provided", func(t *testing.T) { + // minAvailable is defined so use that value + ma := initialize.Pointer(intstr.FromInt32(0)) + expect := getMinAvailable(ma, 1) + assert.Equal(t, *expect, intstr.FromInt(0)) + + ma = initialize.Pointer(intstr.FromInt32(1)) + expect = getMinAvailable(ma, 2) + assert.Equal(t, *expect, intstr.FromInt(1)) + + ma = initialize.Pointer(intstr.FromString("50%")) + expect = getMinAvailable(ma, 3) + assert.Equal(t, *expect, intstr.FromString("50%")) + + ma = initialize.Pointer(intstr.FromString("200%")) + expect = getMinAvailable(ma, 2147483647) + assert.Equal(t, *expect, intstr.FromString("200%")) + }) + + // When minAvailable is not defined we need to decide what value to use + t.Run("defaulting logic", func(t *testing.T) { + // When we have one replica minAvailable should be 0 + expect := getMinAvailable(nil, 1) + assert.Equal(t, *expect, intstr.FromInt(0)) + // When we have more than one replica minAvailable should be 1 + expect = getMinAvailable(nil, 2) + assert.Equal(t, *expect, intstr.FromInt(1)) + }) +} diff --git a/internal/controller/postgrescluster/postgres.go b/internal/controller/postgrescluster/postgres.go new file mode 100644 index 0000000000..312079d824 --- /dev/null +++ b/internal/controller/postgrescluster/postgres.go @@ -0,0 +1,995 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "bytes" + "context" + "fmt" + "io" + "net" + "net/url" + "regexp" + "sort" + "strings" + + "github.com/pkg/errors" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/sets" + "k8s.io/apimachinery/pkg/util/validation/field" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pgaudit" + "github.com/crunchydata/postgres-operator/internal/postgis" + "github.com/crunchydata/postgres-operator/internal/postgres" + pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password" + "github.com/crunchydata/postgres-operator/internal/util" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// generatePostgresUserSecret returns a Secret containing a password and +// connection details for the first database in spec. When existing is nil or +// lacks a password or verifier, a new password and verifier are generated. +func (r *Reconciler) generatePostgresUserSecret( + cluster *v1beta1.PostgresCluster, spec *v1beta1.PostgresUserSpec, existing *corev1.Secret, +) (*corev1.Secret, error) { + username := string(spec.Name) + intent := &corev1.Secret{ObjectMeta: naming.PostgresUserSecret(cluster, username)} + intent.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + initialize.Map(&intent.Data) + + // Populate the Secret with libpq keywords for connecting through + // the primary Service. + // - https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS + primary := naming.ClusterPrimaryService(cluster) + hostname := primary.Name + "." + primary.Namespace + ".svc" + port := fmt.Sprint(*cluster.Spec.Port) + + intent.Data["host"] = []byte(hostname) + intent.Data["port"] = []byte(port) + intent.Data["user"] = []byte(username) + + // Use the existing password and verifier. + if existing != nil { + intent.Data["password"] = existing.Data["password"] + intent.Data["verifier"] = existing.Data["verifier"] + } + + // When password is unset, generate a new one according to the specified policy. + if len(intent.Data["password"]) == 0 { + // NOTE: The tests around ASCII passwords are lacking. When changing + // this, make sure that ASCII is the default. + generate := util.GenerateASCIIPassword + if spec.Password != nil { + switch spec.Password.Type { + case v1beta1.PostgresPasswordTypeAlphaNumeric: + generate = util.GenerateAlphaNumericPassword + } + } + + password, err := generate(util.DefaultGeneratedPasswordLength) + if err != nil { + return nil, errors.WithStack(err) + } + intent.Data["password"] = []byte(password) + intent.Data["verifier"] = nil + } + + // When a password has been generated or the verifier is empty, + // generate a verifier based on the current password. + // NOTE(cbandy): We don't have a function to compare a plaintext + // password to a SCRAM verifier. 
+ if len(intent.Data["verifier"]) == 0 { + verifier, err := pgpassword.NewSCRAMPassword(string(intent.Data["password"])).Build() + if err != nil { + return nil, errors.WithStack(err) + } + intent.Data["verifier"] = []byte(verifier) + } + + // When a database has been specified, include it and a connection URI. + // - https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING + if len(spec.Databases) > 0 { + database := string(spec.Databases[0]) + + intent.Data["dbname"] = []byte(database) + intent.Data["uri"] = []byte((&url.URL{ + Scheme: "postgresql", + User: url.UserPassword(username, string(intent.Data["password"])), + Host: net.JoinHostPort(hostname, port), + Path: database, + }).String()) + + // The JDBC driver requires a different URI scheme and query component. + // - https://jdbc.postgresql.org/documentation/use/#connection-parameters + query := url.Values{} + query.Set("user", username) + query.Set("password", string(intent.Data["password"])) + intent.Data["jdbc-uri"] = []byte((&url.URL{ + Scheme: "jdbc:postgresql", + Host: net.JoinHostPort(hostname, port), + Path: database, + RawQuery: query.Encode(), + }).String()) + } + + // When PgBouncer is enabled, include values for connecting through it. + if cluster.Spec.Proxy != nil && cluster.Spec.Proxy.PGBouncer != nil { + pgBouncer := naming.ClusterPGBouncer(cluster) + hostname := pgBouncer.Name + "." + pgBouncer.Namespace + ".svc" + port := fmt.Sprint(*cluster.Spec.Proxy.PGBouncer.Port) + + intent.Data["pgbouncer-host"] = []byte(hostname) + intent.Data["pgbouncer-port"] = []byte(port) + + if len(spec.Databases) > 0 { + database := string(spec.Databases[0]) + + intent.Data["pgbouncer-uri"] = []byte((&url.URL{ + Scheme: "postgresql", + User: url.UserPassword(username, string(intent.Data["password"])), + Host: net.JoinHostPort(hostname, port), + Path: database, + }).String()) + + // The JDBC driver requires a different URI scheme and query component. + // Disable prepared statements to be compatible with PgBouncer's + // transaction pooling. + // - https://jdbc.postgresql.org/documentation/use/#connection-parameters + // - https://www.pgbouncer.org/faq.html#how-to-use-prepared-statements-with-transaction-pooling + query := url.Values{} + query.Set("user", username) + query.Set("password", string(intent.Data["password"])) + query.Set("prepareThreshold", "0") + intent.Data["pgbouncer-jdbc-uri"] = []byte((&url.URL{ + Scheme: "jdbc:postgresql", + Host: net.JoinHostPort(hostname, port), + Path: database, + RawQuery: query.Encode(), + }).String()) + } + } + + intent.Annotations = cluster.Spec.Metadata.GetAnnotationsOrNil() + intent.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RolePostgresUser, + naming.LabelPostgresUser: username, + }) + + err := errors.WithStack(r.setControllerReference(cluster, intent)) + + return intent, err +} + +// reconcilePostgresDatabases creates databases inside of PostgreSQL. +func (r *Reconciler) reconcilePostgresDatabases( + ctx context.Context, cluster *v1beta1.PostgresCluster, instances *observedInstances, +) error { + const container = naming.ContainerDatabase + var podExecutor postgres.Executor + + // Find the PostgreSQL instance that can execute SQL that writes system + // catalogs. When there is none, return early. 
+ pod, _ := instances.writablePod(container) + if pod == nil { + return nil + } + + ctx = logging.NewContext(ctx, logging.FromContext(ctx).WithValues("pod", pod.Name)) + podExecutor = func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return r.PodExec(ctx, pod.Namespace, pod.Name, container, stdin, stdout, stderr, command...) + } + + // Gather the list of database that should exist in PostgreSQL. + + databases := sets.Set[string]{} + if cluster.Spec.Users == nil { + // Users are unspecified; create one database matching the cluster name + // if it is also a valid database name. + // TODO(cbandy): Move this to a defaulting (mutating admission) webhook + // to leverage regular validation. + path := field.NewPath("spec", "users").Index(0).Child("databases").Index(0) + + // Database names cannot be too long. PostgresCluster.Name is a DNS + // subdomain, so use len() to count characters. + if n := len(cluster.Name); n > 63 { + r.Recorder.Event(cluster, corev1.EventTypeWarning, "InvalidDatabase", + field.Invalid(path, cluster.Name, + fmt.Sprintf("should be at most %d chars long", 63)).Error()) + } else { + databases.Insert(cluster.Name) + } + } else { + for _, user := range cluster.Spec.Users { + for _, database := range user.Databases { + databases.Insert(string(database)) + } + } + } + + var pgAuditOK, postgisInstallOK bool + create := func(ctx context.Context, exec postgres.Executor) error { + if pgAuditOK = pgaudit.EnableInPostgreSQL(ctx, exec) == nil; !pgAuditOK { + // pgAudit can only be enabled after its shared library is loaded, + // but early versions of PGO do not load it automatically. Assume + // that an error here is because the cluster started during one of + // those versions and has not been restarted. + r.Recorder.Event(cluster, corev1.EventTypeWarning, "pgAuditDisabled", + "Unable to install pgAudit") + } + + // Enabling PostGIS extensions is a one-way operation + // e.g., you can take a PostgresCluster and turn it into a PostGISCluster, + // but you cannot reverse the process, as that would potentially remove an extension + // that is being used by some database/tables + if cluster.Spec.PostGISVersion == "" { + postgisInstallOK = true + } else if postgisInstallOK = postgis.EnableInPostgreSQL(ctx, exec) == nil; !postgisInstallOK { + // TODO(benjaminjb): Investigate under what conditions postgis would fail install + r.Recorder.Event(cluster, corev1.EventTypeWarning, "PostGISDisabled", + "Unable to install PostGIS") + } + + return postgres.CreateDatabasesInPostgreSQL(ctx, exec, sets.List(databases)) + } + + // Calculate a hash of the SQL that should be executed in PostgreSQL. + revision, err := safeHash32(func(hasher io.Writer) error { + // Discard log messages about executing SQL. + return create(logging.NewContext(ctx, logging.Discard()), func( + _ context.Context, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + _, err := fmt.Fprint(hasher, command) + if err == nil && stdin != nil { + _, err = io.Copy(hasher, stdin) + } + return err + }) + }) + + if err == nil && revision == cluster.Status.DatabaseRevision { + // The necessary SQL has already been applied; there's nothing more to do. + + // TODO(cbandy): Give the user a way to trigger execution regardless. + // The value of an annotation could influence the hash, for example. + return nil + } + + // Apply the necessary SQL and record its hash in cluster.Status. Include + // the hash in any log messages. 
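// The hasher-backed executor passed to create() above writes the would-be
// command and its stdin into the hash instead of running anything, so the
// revision changes exactly when the generated SQL would change, and the early
// return above skips execution when nothing has changed.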
+ + if err == nil { + log := logging.FromContext(ctx).WithValues("revision", revision) + err = errors.WithStack(create(logging.NewContext(ctx, log), podExecutor)) + } + if err == nil && pgAuditOK && postgisInstallOK { + cluster.Status.DatabaseRevision = revision + } + + return err +} + +// reconcilePostgresUsers writes the objects necessary to manage users and their +// passwords in PostgreSQL. +func (r *Reconciler) reconcilePostgresUsers( + ctx context.Context, cluster *v1beta1.PostgresCluster, instances *observedInstances, +) error { + r.validatePostgresUsers(cluster) + + users, secrets, err := r.reconcilePostgresUserSecrets(ctx, cluster) + if err == nil { + err = r.reconcilePostgresUsersInPostgreSQL(ctx, cluster, instances, users, secrets) + } + if err == nil { + // Copy PostgreSQL users and passwords into pgAdmin. This is here because + // reconcilePostgresUserSecrets is building a (default) PostgresUserSpec + // that is not in the PostgresClusterSpec. The freshly generated Secrets + // are available here, too. + err = r.reconcilePGAdminUsers(ctx, cluster, users, secrets) + } + return err +} + +// validatePostgresUsers emits warnings when cluster.Spec.Users contains values +// that are no longer valid. NOTE(ratcheting) NOTE(validation) +// - https://docs.k8s.io/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-ratcheting +func (r *Reconciler) validatePostgresUsers(cluster *v1beta1.PostgresCluster) { + if len(cluster.Spec.Users) == 0 { + return + } + + path := field.NewPath("spec", "users") + reComments := regexp.MustCompile(`(?:--|/[*]|[*]/)`) + rePassword := regexp.MustCompile(`(?i:PASSWORD)`) + + for i := range cluster.Spec.Users { + errs := field.ErrorList{} + spec := cluster.Spec.Users[i] + + if reComments.MatchString(spec.Options) { + errs = append(errs, + field.Invalid(path.Index(i).Child("options"), spec.Options, + "cannot contain comments")) + } + if rePassword.MatchString(spec.Options) { + errs = append(errs, + field.Invalid(path.Index(i).Child("options"), spec.Options, + "cannot assign password")) + } + + if len(errs) > 0 { + r.Recorder.Event(cluster, corev1.EventTypeWarning, "InvalidUser", + errs.ToAggregate().Error()) + } + } +} + +// +kubebuilder:rbac:groups="",resources="secrets",verbs={list} +// +kubebuilder:rbac:groups="",resources="secrets",verbs={create,delete,patch} + +// reconcilePostgresUserSecrets writes Secrets for the PostgreSQL users +// specified in cluster and deletes existing Secrets that are not specified. +// It returns the user specifications it acted on (because defaults) and the +// Secrets it wrote. +func (r *Reconciler) reconcilePostgresUserSecrets( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) ( + []v1beta1.PostgresUserSpec, map[string]*corev1.Secret, error, +) { + // When users are unspecified, create one user matching the cluster name if + // it is also a valid user name. + // TODO(cbandy): Move this to a defaulting (mutating admission) webhook to + // leverage regular validation. + specUsers := cluster.Spec.Users + if specUsers == nil { + path := field.NewPath("spec", "users").Index(0).Child("name") + reUser := regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`) + allErrors := field.ErrorList{} + + // User names cannot be too long. PostgresCluster.Name is a DNS + // subdomain, so use len() to count characters. 
+ if n := len(cluster.Name); n > 63 { + allErrors = append(allErrors, + field.Invalid(path, cluster.Name, + fmt.Sprintf("should be at most %d chars long", 63))) + } + // See v1beta1.PostgresRoleSpec validation markers. + if !reUser.MatchString(cluster.Name) { + allErrors = append(allErrors, + field.Invalid(path, cluster.Name, + fmt.Sprintf("should match '%s'", reUser))) + } + + if len(allErrors) > 0 { + r.Recorder.Event(cluster, corev1.EventTypeWarning, "InvalidUser", + allErrors.ToAggregate().Error()) + } else { + identifier := v1beta1.PostgresIdentifier(cluster.Name) + specUsers = []v1beta1.PostgresUserSpec{{ + Name: identifier, + Databases: []v1beta1.PostgresIdentifier{identifier}, + }} + } + } + + // Index user specifications by PostgreSQL user name. + userSpecs := make(map[string]*v1beta1.PostgresUserSpec, len(specUsers)) + for i := range specUsers { + userSpecs[string(specUsers[i].Name)] = &specUsers[i] + } + + secrets := &corev1.SecretList{} + selector, err := naming.AsSelector(naming.ClusterPostgresUsers(cluster.Name)) + if err == nil { + err = errors.WithStack( + r.Client.List(ctx, secrets, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector}, + )) + } + + // Sorts the slice of secrets.Items based on secrets with identical labels + // If one secret has "pguser" in its name and the other does not, the + // one without "pguser" is moved to the front. + // If both secrets have "pguser" in their names or neither has "pguser", they + // are sorted by creation timestamp. + // If two secrets have the same creation timestamp, they are further sorted by name. + // The secret to be used by PGO is put at the end of the sorted slice. + sort.Slice(secrets.Items, func(i, j int) bool { + // Check if either secrets have "pguser" in their names + isIPgUser := strings.Contains(secrets.Items[i].Name, "pguser") + isJPgUser := strings.Contains(secrets.Items[j].Name, "pguser") + + // If one secret has "pguser" and the other does not, + // move the one without "pguser" to the front + if isIPgUser && !isJPgUser { + return false + } else if !isIPgUser && isJPgUser { + return true + } + + if secrets.Items[i].CreationTimestamp.Time.Equal(secrets.Items[j].CreationTimestamp.Time) { + // If the creation timestamps are equal, sort by name + return secrets.Items[i].Name < secrets.Items[j].Name + } + + // If both secrets have "pguser" or neither have "pguser", + // sort by creation timestamp + return secrets.Items[i].CreationTimestamp.Time.After(secrets.Items[j].CreationTimestamp.Time) + }) + + // Index secrets by PostgreSQL user name and delete any that are not in the + // cluster spec. Keep track of the deprecated default secret to migrate its + // contents when the current secret doesn't exist. + var ( + defaultSecret *corev1.Secret + defaultSecretName = naming.DeprecatedPostgresUserSecret(cluster).Name + defaultUserName string + userSecrets = make(map[string]*corev1.Secret, len(secrets.Items)) + ) + if err == nil { + for i := range secrets.Items { + secret := &secrets.Items[i] + secretUserName := secret.Labels[naming.LabelPostgresUser] + + if _, specified := userSpecs[secretUserName]; specified { + if secret.Name == defaultSecretName { + defaultSecret = secret + defaultUserName = secretUserName + } else { + userSecrets[secretUserName] = secret + } + } else if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, secret)) + } + } + } + + // Reconcile each PostgreSQL user in the cluster spec. 
+ for userName, user := range userSpecs { + secret := userSecrets[userName] + + if secret == nil && userName == defaultUserName { + // The current secret doesn't exist, so read from the deprecated + // default secret, if any. + secret = defaultSecret + } + + if err == nil { + userSecrets[userName], err = r.generatePostgresUserSecret(cluster, user, secret) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, userSecrets[userName])) + } + } + + return specUsers, userSecrets, err +} + +// reconcilePostgresUsersInPostgreSQL creates users inside of PostgreSQL and +// sets their options and database access as specified. +func (r *Reconciler) reconcilePostgresUsersInPostgreSQL( + ctx context.Context, cluster *v1beta1.PostgresCluster, instances *observedInstances, + specUsers []v1beta1.PostgresUserSpec, userSecrets map[string]*corev1.Secret, +) error { + const container = naming.ContainerDatabase + var podExecutor postgres.Executor + + // Find the PostgreSQL instance that can execute SQL that writes system + // catalogs. When there is none, return early. + + for _, instance := range instances.forCluster { + if terminating, known := instance.IsTerminating(); terminating || !known { + continue + } + if writable, known := instance.IsWritable(); !writable || !known { + continue + } + running, known := instance.IsRunning(container) + if running && known && len(instance.Pods) > 0 { + pod := instance.Pods[0] + ctx = logging.NewContext(ctx, logging.FromContext(ctx).WithValues("pod", pod.Name)) + + podExecutor = func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return r.PodExec(ctx, pod.Namespace, pod.Name, container, stdin, stdout, stderr, command...) + } + break + } + } + if podExecutor == nil { + return nil + } + + // Calculate a hash of the SQL that should be executed in PostgreSQL. + + verifiers := make(map[string]string, len(userSecrets)) + for userName := range userSecrets { + verifiers[userName] = string(userSecrets[userName].Data["verifier"]) + } + + write := func(ctx context.Context, exec postgres.Executor) error { + return postgres.WriteUsersInPostgreSQL(ctx, cluster, exec, specUsers, verifiers) + } + + revision, err := safeHash32(func(hasher io.Writer) error { + // Discard log messages about executing SQL. + return write(logging.NewContext(ctx, logging.Discard()), func( + _ context.Context, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + _, err := fmt.Fprint(hasher, command) + if err == nil && stdin != nil { + _, err = io.Copy(hasher, stdin) + } + return err + }) + }) + + if err == nil && revision == cluster.Status.UsersRevision { + // The necessary SQL has already been applied; there's nothing more to do. + + // TODO(cbandy): Give the user a way to trigger execution regardless. + // The value of an annotation could influence the hash, for example. + return nil + } + + // Apply the necessary SQL and record its hash in cluster.Status. Include + // the hash in any log messages. + + if err == nil { + log := logging.FromContext(ctx).WithValues("revision", revision) + err = errors.WithStack(write(logging.NewContext(ctx, log), podExecutor)) + } + if err == nil { + cluster.Status.UsersRevision = revision + } + + return err +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,patch} + +// reconcilePostgresDataVolume writes the PersistentVolumeClaim for instance's +// PostgreSQL data volume. 
+func (r *Reconciler) reconcilePostgresDataVolume( + ctx context.Context, cluster *v1beta1.PostgresCluster, + instanceSpec *v1beta1.PostgresInstanceSetSpec, instance *appsv1.StatefulSet, + clusterVolumes []corev1.PersistentVolumeClaim, sourceCluster *v1beta1.PostgresCluster, +) (*corev1.PersistentVolumeClaim, error) { + + labelMap := map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: instanceSpec.Name, + naming.LabelInstance: instance.Name, + naming.LabelRole: naming.RolePostgresData, + naming.LabelData: naming.DataPostgres, + } + + var pvc *corev1.PersistentVolumeClaim + existingPVCName, err := getPGPVCName(labelMap, clusterVolumes) + if err != nil { + return nil, errors.WithStack(err) + } + if existingPVCName != "" { + pvc = &corev1.PersistentVolumeClaim{ObjectMeta: metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: existingPVCName, + }} + } else { + pvc = &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstancePostgresDataVolume(instance)} + } + + pvc.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim")) + + err = errors.WithStack(r.setControllerReference(cluster, pvc)) + + pvc.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + instanceSpec.Metadata.GetAnnotationsOrNil()) + + pvc.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + instanceSpec.Metadata.GetLabelsOrNil(), + labelMap, + ) + + pvc.Spec = instanceSpec.DataVolumeClaimSpec + + // If a source cluster was provided and VolumeSnapshots are turned on in the source cluster and + // there is a VolumeSnapshot available for the source cluster that is ReadyToUse, use it as the + // source for the PVC. If there is an error when retrieving VolumeSnapshots, or no ReadyToUse + // snapshots were found, create a warning event, but continue creating PVC in the usual fashion. + if sourceCluster != nil && sourceCluster.Spec.Backups.Snapshots != nil && feature.Enabled(ctx, feature.VolumeSnapshots) { + snapshots, err := r.getSnapshotsForCluster(ctx, sourceCluster) + if err == nil { + snapshot := getLatestReadySnapshot(snapshots) + if snapshot != nil { + r.Recorder.Eventf(cluster, corev1.EventTypeNormal, "BootstrappingWithSnapshot", + "Snapshot found for %v; bootstrapping cluster with snapshot.", sourceCluster.Name) + pvc.Spec.DataSource = &corev1.TypedLocalObjectReference{ + APIGroup: initialize.String("snapshot.storage.k8s.io"), + Kind: snapshot.Kind, + Name: snapshot.Name, + } + } else { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "SnapshotNotFound", + "No ReadyToUse snapshots were found for %v; proceeding with typical restore process.", sourceCluster.Name) + } + } else { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "SnapshotNotFound", + "Could not get snapshots for %v, proceeding with typical restore process.", sourceCluster.Name) + } + } + + r.setVolumeSize(ctx, cluster, pvc, instanceSpec.Name) + + // Clear any set limit before applying PVC. This is needed to allow the limit + // value to change later. + pvc.Spec.Resources.Limits = nil + + if err == nil { + err = r.handlePersistentVolumeClaimError(cluster, + errors.WithStack(r.apply(ctx, pvc))) + } + + return pvc, err +} + +// setVolumeSize compares the potential sizes from the instance spec, status +// and limit and sets the appropriate current value. 
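+// A request above a nonzero limit is clamped to the limit and a warning event
+// is recorded. With the AutoGrowVolumes feature gate enabled, the request can
+// instead grow toward the desired size recorded in the cluster status, never
+// exceeding the limit.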
+func (r *Reconciler) setVolumeSize(ctx context.Context, cluster *v1beta1.PostgresCluster, + pvc *corev1.PersistentVolumeClaim, instanceSpecName string) { + log := logging.FromContext(ctx) + + // Store the limit for this instance set. This value will not change below. + volumeLimitFromSpec := pvc.Spec.Resources.Limits.Storage() + + // Capture the largest pgData volume size currently defined for a given instance set. + // This value will capture our desired update. + volumeRequestSize := pvc.Spec.Resources.Requests.Storage() + + // If the request value is greater than the set limit, use the limit and issue + // a warning event. A limit of 0 is ignorned. + if !volumeLimitFromSpec.IsZero() && + volumeRequestSize.Value() > volumeLimitFromSpec.Value() { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "VolumeRequestOverLimit", + "pgData volume request (%v) for %s/%s is greater than set limit (%v). Limit value will be used.", + volumeRequestSize, cluster.Name, instanceSpecName, volumeLimitFromSpec) + + pvc.Spec.Resources.Requests = corev1.ResourceList{ + corev1.ResourceStorage: *resource.NewQuantity(volumeLimitFromSpec.Value(), resource.BinarySI), + } + // Otherwise, if the limit is not set or the feature gate is not enabled, do not autogrow. + } else if !volumeLimitFromSpec.IsZero() && feature.Enabled(ctx, feature.AutoGrowVolumes) { + for i := range cluster.Status.InstanceSets { + if instanceSpecName == cluster.Status.InstanceSets[i].Name { + for _, dpv := range cluster.Status.InstanceSets[i].DesiredPGDataVolume { + if dpv != "" { + desiredRequest, err := resource.ParseQuantity(dpv) + if err == nil { + if desiredRequest.Value() > volumeRequestSize.Value() { + volumeRequestSize = &desiredRequest + } + } else { + log.Error(err, "Unable to parse volume request: "+dpv) + } + } + } + } + } + + // If the volume request size is greater than or equal to the limit and the + // limit is not zero, update the request size to the limit value. + // If the user manually requests a lower limit that is smaller than the current + // or requested volume size, it will be ignored in favor of the limit value. + if volumeRequestSize.Value() >= volumeLimitFromSpec.Value() { + + r.Recorder.Eventf(cluster, corev1.EventTypeNormal, "VolumeLimitReached", + "pgData volume(s) for %s/%s are at size limit (%v).", cluster.Name, + instanceSpecName, volumeLimitFromSpec) + + // If the volume size request is greater than the limit, issue an + // additional event warning. + if volumeRequestSize.Value() > volumeLimitFromSpec.Value() { + r.Recorder.Eventf(cluster, corev1.EventTypeWarning, "DesiredVolumeAboveLimit", + "The desired size (%v) for the %s/%s pgData volume(s) is greater than the size limit (%v).", + volumeRequestSize, cluster.Name, instanceSpecName, volumeLimitFromSpec) + } + + volumeRequestSize = volumeLimitFromSpec + } + pvc.Spec.Resources.Requests = corev1.ResourceList{ + corev1.ResourceStorage: *resource.NewQuantity(volumeRequestSize.Value(), resource.BinarySI), + } + } +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,patch} + +// reconcileTablespaceVolumes writes the PersistentVolumeClaims for instance's +// tablespace data volumes. 
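+// It returns early unless the TablespaceVolumes feature gate is enabled and
+// the instance set defines tablespace volumes; otherwise it creates or adopts
+// one claim per tablespace entry.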
+func (r *Reconciler) reconcileTablespaceVolumes( + ctx context.Context, cluster *v1beta1.PostgresCluster, + instanceSpec *v1beta1.PostgresInstanceSetSpec, instance *appsv1.StatefulSet, + clusterVolumes []corev1.PersistentVolumeClaim, +) (tablespaceVolumes []*corev1.PersistentVolumeClaim, err error) { + + if !feature.Enabled(ctx, feature.TablespaceVolumes) { + return + } + + if instanceSpec.TablespaceVolumes == nil { + return + } + + for _, vol := range instanceSpec.TablespaceVolumes { + labelMap := map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: instanceSpec.Name, + naming.LabelInstance: instance.Name, + naming.LabelRole: "tablespace", + naming.LabelData: vol.Name, + } + + var pvc *corev1.PersistentVolumeClaim + existingPVCName, err := getPGPVCName(labelMap, clusterVolumes) + if err != nil { + return nil, errors.WithStack(err) + } + if existingPVCName != "" { + pvc = &corev1.PersistentVolumeClaim{ObjectMeta: metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: existingPVCName, + }} + } else { + pvc = &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstanceTablespaceDataVolume(instance, vol.Name)} + } + + pvc.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim")) + + err = errors.WithStack(r.setControllerReference(cluster, pvc)) + + pvc.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + instanceSpec.Metadata.GetAnnotationsOrNil()) + + pvc.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + instanceSpec.Metadata.GetLabelsOrNil(), + labelMap, + ) + + pvc.Spec = vol.DataVolumeClaimSpec + + if err == nil { + err = r.handlePersistentVolumeClaimError(cluster, + errors.WithStack(r.apply(ctx, pvc))) + } + + if err != nil { + return nil, err + } + + tablespaceVolumes = append(tablespaceVolumes, pvc) + } + + return +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={get} +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,delete,patch} + +// reconcilePostgresWALVolume writes the PersistentVolumeClaim for instance's +// PostgreSQL WAL volume. +func (r *Reconciler) reconcilePostgresWALVolume( + ctx context.Context, cluster *v1beta1.PostgresCluster, + instanceSpec *v1beta1.PostgresInstanceSetSpec, instance *appsv1.StatefulSet, + observed *Instance, clusterVolumes []corev1.PersistentVolumeClaim, +) (*corev1.PersistentVolumeClaim, error) { + + labelMap := map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: instanceSpec.Name, + naming.LabelInstance: instance.Name, + naming.LabelRole: naming.RolePostgresWAL, + naming.LabelData: naming.DataPostgres, + } + + var pvc *corev1.PersistentVolumeClaim + existingPVCName, err := getPGPVCName(labelMap, clusterVolumes) + if err != nil { + return nil, errors.WithStack(err) + } + if existingPVCName != "" { + pvc = &corev1.PersistentVolumeClaim{ObjectMeta: metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: existingPVCName, + }} + } else { + pvc = &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstancePostgresWALVolume(instance)} + } + + pvc.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim")) + + if instanceSpec.WALVolumeClaimSpec == nil { + // No WAL volume is specified; delete the PVC safely if it exists. Check + // the client cache first using Get. 
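+ // The claim is removed only after `realpath "${PGDATA}/pg_wal"` inside the
+ // database container confirms the WAL files already live on their intended
+ // volume; otherwise the claim is returned unchanged.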
+ key := client.ObjectKeyFromObject(pvc) + err := errors.WithStack(r.Client.Get(ctx, key, pvc)) + if err != nil { + return nil, client.IgnoreNotFound(err) + } + + // The "StorageObjectInUseProtection" admission controller adds a + // finalizer to every PVC so that the "pvc-protection" controller can + // remove it safely. Return early when it is already scheduled for deletion. + // - https://docs.k8s.io/reference/access-authn-authz/admission-controllers/ + if pvc.DeletionTimestamp != nil { + return nil, nil + } + + // The WAL PVC exists and should be removed. Delete it only when WAL + // files are safely on their intended volume. The PVC will continue to + // exist until all Pods using it are also deleted. + // - https://docs.k8s.io/concepts/storage/persistent-volumes/#storage-object-in-use-protection + var walDirectory string + if observed != nil && len(observed.Pods) == 1 { + if running, known := observed.IsRunning(naming.ContainerDatabase); running && known { + // NOTE(cbandy): Despite the guard above, calling PodExec may still fail + // due to a missing or stopped container. + + // This assumes that $PGDATA matches the configured PostgreSQL "data_directory". + var stdout bytes.Buffer + err = errors.WithStack(r.PodExec( + ctx, observed.Pods[0].Namespace, observed.Pods[0].Name, naming.ContainerDatabase, + nil, &stdout, nil, "bash", "-ceu", "--", `exec realpath "${PGDATA}/pg_wal"`)) + + walDirectory = strings.TrimRight(stdout.String(), "\n") + } + } + if err == nil && walDirectory == postgres.WALDirectory(cluster, instanceSpec) { + return nil, errors.WithStack( + client.IgnoreNotFound(r.deleteControlled(ctx, cluster, pvc))) + } + + // The WAL PVC exists and might contain WAL files. There is no spec to + // reconcile toward so return early. + return pvc, err + } + + err = errors.WithStack(r.setControllerReference(cluster, pvc)) + + pvc.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + instanceSpec.Metadata.GetAnnotationsOrNil()) + + pvc.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + instanceSpec.Metadata.GetLabelsOrNil(), + labelMap, + ) + + pvc.Spec = *instanceSpec.WALVolumeClaimSpec + + if err == nil { + err = r.handlePersistentVolumeClaimError(cluster, + errors.WithStack(r.apply(ctx, pvc))) + } + + return pvc, err +} + +// reconcileDatabaseInitSQL runs custom SQL files in the database. When +// DatabaseInitSQL is defined, the function will find the primary pod and run +// SQL from the defined ConfigMap +func (r *Reconciler) reconcileDatabaseInitSQL(ctx context.Context, + cluster *v1beta1.PostgresCluster, instances *observedInstances) error { + log := logging.FromContext(ctx) + + // Spec is not defined, unset status and return + if cluster.Spec.DatabaseInitSQL == nil { + // If database init sql is not requested, we will always expect the + // status to be nil + cluster.Status.DatabaseInitSQL = nil + return nil + } + + // Spec is defined but status is already set, return + if cluster.Status.DatabaseInitSQL != nil { + return nil + } + + // Based on the previous checks, the user wants to run sql in the database. 
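+ // The steps below read the SQL from the referenced ConfigMap key, locate a
+ // pod with a writable database container, execute the SQL there, and record
+ // the ConfigMap name in the status so the same SQL is not run again.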
+ // Check the provided ConfigMap name and key to ensure the a string + // exists in the ConfigMap data + var ( + err error + data string + ) + + getDataFromConfigMap := func() (string, error) { + cm := &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: cluster.Spec.DatabaseInitSQL.Name, + Namespace: cluster.Namespace, + }, + } + err := r.Client.Get(ctx, client.ObjectKeyFromObject(cm), cm) + if err != nil { + return "", err + } + + key := cluster.Spec.DatabaseInitSQL.Key + if _, ok := cm.Data[key]; !ok { + err := errors.Errorf("ConfigMap did not contain expected key: %s", key) + return "", err + } + + return cm.Data[key], nil + } + + if data, err = getDataFromConfigMap(); err != nil { + log.Error(err, "Could not get data from ConfigMap", + "ConfigMap", cluster.Spec.DatabaseInitSQL.Name, + "Key", cluster.Spec.DatabaseInitSQL.Key) + return err + } + + // Now that we have the data provided by the user. We can check for a + // writable pod and get the podExecutor for the pod's database container + var podExecutor postgres.Executor + pod, _ := instances.writablePod(naming.ContainerDatabase) + if pod == nil { + log.V(1).Info("Could not find a pod with a writable database container.") + return nil + } + + podExecutor = func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return r.PodExec(ctx, pod.Namespace, pod.Name, naming.ContainerDatabase, stdin, stdout, stderr, command...) + } + + // A writable pod executor has been found and we have the sql provided by + // the user. Setup a write function to execute the sql using the podExecutor + write := func(ctx context.Context, exec postgres.Executor) error { + stdout, stderr, err := exec.Exec(ctx, strings.NewReader(data), map[string]string{}) + log.V(1).Info("applied init SQL", "stdout", stdout, "stderr", stderr) + return err + } + + // Update the logger to include fields from the user provided ResourceRef + log = log.WithValues( + "name", cluster.Spec.DatabaseInitSQL.Name, + "key", cluster.Spec.DatabaseInitSQL.Key, + ) + + // Write SQL to database using the podExecutor + err = errors.WithStack(write(logging.NewContext(ctx, log), podExecutor)) + + // If the podExec returns with exit code 0 the write is considered a + // success, keep track of the ConfigMap using a status. This helps to + // ensure SQL doesn't get run again. SQL can be run again if the + // status is lost and the DatabaseInitSQL field exists in the spec. + if err == nil { + status := cluster.Spec.DatabaseInitSQL.Name + cluster.Status.DatabaseInitSQL = &status + } + + return err +} diff --git a/internal/controller/postgrescluster/postgres_test.go b/internal/controller/postgrescluster/postgres_test.go new file mode 100644 index 0000000000..0780b0f577 --- /dev/null +++ b/internal/controller/postgrescluster/postgres_test.go @@ -0,0 +1,1233 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "io" + "testing" + + "github.com/go-logr/logr/funcr" + "github.com/google/go-cmp/cmp/cmpopts" + volumesnapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v8/apis/volumesnapshot/v1" + "github.com/pkg/errors" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/events" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestGeneratePostgresUserSecret(t *testing.T) { + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + reconciler := &Reconciler{Client: tClient} + + cluster := &v1beta1.PostgresCluster{} + cluster.Namespace = "ns1" + cluster.Name = "hippo2" + cluster.Spec.Port = initialize.Int32(9999) + + spec := &v1beta1.PostgresUserSpec{Name: "some-user-name"} + + t.Run("ObjectMeta", func(t *testing.T) { + secret, err := reconciler.generatePostgresUserSecret(cluster, spec, nil) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Equal(t, secret.Namespace, cluster.Namespace) + assert.Assert(t, metav1.IsControlledBy(secret, cluster)) + assert.DeepEqual(t, secret.Labels, map[string]string{ + "postgres-operator.crunchydata.com/cluster": "hippo2", + "postgres-operator.crunchydata.com/role": "pguser", + "postgres-operator.crunchydata.com/pguser": "some-user-name", + }) + } + }) + + t.Run("Primary", func(t *testing.T) { + secret, err := reconciler.generatePostgresUserSecret(cluster, spec, nil) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Equal(t, string(secret.Data["host"]), "hippo2-primary.ns1.svc") + assert.Equal(t, string(secret.Data["port"]), "9999") + assert.Equal(t, string(secret.Data["user"]), "some-user-name") + } + }) + + t.Run("Password", func(t *testing.T) { + // Generated when no existing Secret. + secret, err := reconciler.generatePostgresUserSecret(cluster, spec, nil) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Assert(t, len(secret.Data["password"]) > 16, "got %v", len(secret.Data["password"])) + assert.Assert(t, len(secret.Data["verifier"]) > 90, "got %v", len(secret.Data["verifier"])) + } + + // Generated when existing Secret is lacking. + secret, err = reconciler.generatePostgresUserSecret(cluster, spec, new(corev1.Secret)) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Assert(t, len(secret.Data["password"]) > 16, "got %v", len(secret.Data["password"])) + assert.Assert(t, len(secret.Data["verifier"]) > 90, "got %v", len(secret.Data["verifier"])) + } + + t.Run("Policy", func(t *testing.T) { + spec := spec.DeepCopy() + + // ASCII when unspecified. 
+ spec.Password = nil + secret, err = reconciler.generatePostgresUserSecret(cluster, spec, new(corev1.Secret)) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + // This assertion is lacking, but distinguishing between "alphanumeric" + // and "alphanumeric+symbols" is hard. If our generator changes to + // guarantee at least one symbol, we can check for symbols here. + assert.Assert(t, len(secret.Data["password"]) != 0) + } + + // AlphaNumeric when specified. + spec.Password = &v1beta1.PostgresPasswordSpec{ + Type: v1beta1.PostgresPasswordTypeAlphaNumeric, + } + + secret, err = reconciler.generatePostgresUserSecret(cluster, spec, new(corev1.Secret)) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Assert(t, cmp.Regexp(`^[A-Za-z0-9]+$`, string(secret.Data["password"]))) + } + }) + + // Verifier is generated when existing Secret contains only a password. + secret, err = reconciler.generatePostgresUserSecret(cluster, spec, &corev1.Secret{ + Data: map[string][]byte{ + "password": []byte(`asdf`), + }, + }) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Equal(t, string(secret.Data["password"]), "asdf") + assert.Assert(t, len(secret.Data["verifier"]) > 90, "got %v", len(secret.Data["verifier"])) + } + + // Copied when existing Secret is full. + secret, err = reconciler.generatePostgresUserSecret(cluster, spec, &corev1.Secret{ + Data: map[string][]byte{ + "password": []byte(`asdf`), + "verifier": []byte(`some$thing`), + }, + }) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Equal(t, string(secret.Data["password"]), "asdf") + assert.Equal(t, string(secret.Data["verifier"]), "some$thing") + } + }) + + t.Run("Database", func(t *testing.T) { + spec := *spec + + // Missing when none specified. + secret, err := reconciler.generatePostgresUserSecret(cluster, &spec, nil) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Assert(t, secret.Data["dbname"] == nil) + assert.Assert(t, secret.Data["uri"] == nil) + assert.Assert(t, secret.Data["jdbc-uri"] == nil) + } + + // Present when specified. + spec.Databases = []v1beta1.PostgresIdentifier{"db1"} + + secret, err = reconciler.generatePostgresUserSecret(cluster, &spec, nil) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Equal(t, string(secret.Data["dbname"]), "db1") + assert.Assert(t, cmp.Regexp( + `^postgresql://some-user-name:[^@]+@hippo2-primary.ns1.svc:9999/db1$`, + string(secret.Data["uri"]))) + assert.Assert(t, cmp.Regexp( + `^jdbc:postgresql://hippo2-primary.ns1.svc:9999/db1`+ + `[?]password=[^&]+&user=some-user-name$`, + string(secret.Data["jdbc-uri"]))) + } + + // Only the first in the list. 
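+ // The generated connection fields (dbname, uri, jdbc-uri) reference the
+ // first database only; the second entry should not appear in any of them.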
+ spec.Databases = []v1beta1.PostgresIdentifier{"first", "asdf"} + + secret, err = reconciler.generatePostgresUserSecret(cluster, &spec, nil) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Equal(t, string(secret.Data["dbname"]), "first") + assert.Assert(t, cmp.Regexp( + `^postgresql://some-user-name:[^@]+@hippo2-primary.ns1.svc:9999/first$`, + string(secret.Data["uri"]))) + assert.Assert(t, cmp.Regexp( + `^jdbc:postgresql://hippo2-primary.ns1.svc:9999/first[?].+$`, + string(secret.Data["jdbc-uri"]))) + + } + }) + + t.Run("PgBouncer", func(t *testing.T) { + assert.NilError(t, yaml.Unmarshal([]byte(`{ + proxy: { pgBouncer: { port: 10220 } }, + }`), &cluster.Spec)) + + secret, err := reconciler.generatePostgresUserSecret(cluster, spec, nil) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Equal(t, string(secret.Data["pgbouncer-host"]), "hippo2-pgbouncer.ns1.svc") + assert.Equal(t, string(secret.Data["pgbouncer-port"]), "10220") + assert.Assert(t, secret.Data["pgbouncer-uri"] == nil) + assert.Assert(t, secret.Data["pgbouncer-jdbc-uri"] == nil) + } + + // Includes a URI when possible. + spec := *spec + spec.Databases = []v1beta1.PostgresIdentifier{"yes", "no"} + + secret, err = reconciler.generatePostgresUserSecret(cluster, &spec, nil) + assert.NilError(t, err) + + if assert.Check(t, secret != nil) { + assert.Assert(t, cmp.Regexp( + `^postgresql://some-user-name:[^@]+@hippo2-pgbouncer.ns1.svc:10220/yes$`, + string(secret.Data["pgbouncer-uri"]))) + assert.Assert(t, cmp.Regexp( + `^jdbc:postgresql://hippo2-pgbouncer.ns1.svc:10220/yes`+ + `[?]password=[^&]+&prepareThreshold=0&user=some-user-name$`, + string(secret.Data["pgbouncer-jdbc-uri"]))) + } + }) +} + +func TestReconcilePostgresVolumes(t *testing.T) { + ctx := context.Background() + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + reconciler := &Reconciler{ + Client: tClient, + Owner: client.FieldOwner(t.Name()), + } + + t.Run("DataVolumeNoSourceCluster", func(t *testing.T) { + cluster := testCluster() + ns := setupNamespace(t, tClient) + cluster.Namespace = ns.Name + + assert.NilError(t, tClient.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, tClient.Delete(ctx, cluster)) }) + + spec := &v1beta1.PostgresInstanceSetSpec{} + assert.NilError(t, yaml.Unmarshal([]byte(`{ + name: "some-instance", + dataVolumeClaimSpec: { + accessModes: [ReadWriteOnce], + resources: { requests: { storage: 1Gi } }, + storageClassName: "storage-class-for-data", + }, + }`), spec)) + instance := &appsv1.StatefulSet{ObjectMeta: naming.GenerateInstance(cluster, spec)} + + pvc, err := reconciler.reconcilePostgresDataVolume(ctx, cluster, spec, instance, nil, nil) + assert.NilError(t, err) + + assert.Assert(t, metav1.IsControlledBy(pvc, cluster)) + + assert.Equal(t, pvc.Labels[naming.LabelCluster], cluster.Name) + assert.Equal(t, pvc.Labels[naming.LabelInstance], instance.Name) + assert.Equal(t, pvc.Labels[naming.LabelInstanceSet], spec.Name) + assert.Equal(t, pvc.Labels[naming.LabelRole], "pgdata") + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + requests: + storage: 1Gi +storageClassName: storage-class-for-data +volumeMode: Filesystem + `)) + }) + + t.Run("DataVolumeSourceClusterWithGoodSnapshot", func(t *testing.T) { + cluster := testCluster() + ns := setupNamespace(t, tClient) + cluster.Namespace = ns.Name + + assert.NilError(t, tClient.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, tClient.Delete(ctx, cluster)) }) + + spec 
:= &v1beta1.PostgresInstanceSetSpec{} + assert.NilError(t, yaml.Unmarshal([]byte(`{ + name: "some-instance", + dataVolumeClaimSpec: { + accessModes: [ReadWriteOnce], + resources: { requests: { storage: 1Gi } }, + storageClassName: "storage-class-for-data", + }, + }`), spec)) + instance := &appsv1.StatefulSet{ObjectMeta: naming.GenerateInstance(cluster, spec)} + + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler.Recorder = recorder + + // Turn on VolumeSnapshots feature gate + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.VolumeSnapshots: true, + })) + ctx := feature.NewContext(ctx, gate) + + // Create source cluster and enable snapshots + sourceCluster := testCluster() + sourceCluster.Namespace = ns.Name + sourceCluster.Name = "rhino" + sourceCluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: "some-class-name", + } + + // Create a snapshot + snapshot := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "some-snapshot", + Namespace: ns.Name, + Labels: map[string]string{ + naming.LabelCluster: "rhino", + }, + }, + } + snapshot.Spec.Source.PersistentVolumeClaimName = initialize.String("some-pvc-name") + snapshot.Spec.VolumeSnapshotClassName = initialize.String("some-class-name") + err := reconciler.apply(ctx, snapshot) + assert.NilError(t, err) + + // Get snapshot and update Status.ReadyToUse and CreationTime + err = reconciler.Client.Get(ctx, client.ObjectKeyFromObject(snapshot), snapshot) + assert.NilError(t, err) + + currentTime := metav1.Now() + snapshot.Status = &volumesnapshotv1.VolumeSnapshotStatus{ + ReadyToUse: initialize.Bool(true), + CreationTime: ¤tTime, + } + err = reconciler.Client.Status().Update(ctx, snapshot) + assert.NilError(t, err) + + // Reconcile volume + pvc, err := reconciler.reconcilePostgresDataVolume(ctx, cluster, spec, instance, nil, sourceCluster) + assert.NilError(t, err) + + assert.Assert(t, metav1.IsControlledBy(pvc, cluster)) + + assert.Equal(t, pvc.Labels[naming.LabelCluster], cluster.Name) + assert.Equal(t, pvc.Labels[naming.LabelInstance], instance.Name) + assert.Equal(t, pvc.Labels[naming.LabelInstanceSet], spec.Name) + assert.Equal(t, pvc.Labels[naming.LabelRole], "pgdata") + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +dataSource: + apiGroup: snapshot.storage.k8s.io + kind: VolumeSnapshot + name: some-snapshot +dataSourceRef: + apiGroup: snapshot.storage.k8s.io + kind: VolumeSnapshot + name: some-snapshot +resources: + requests: + storage: 1Gi +storageClassName: storage-class-for-data +volumeMode: Filesystem + `)) + assert.Equal(t, len(recorder.Events), 1) + assert.Equal(t, recorder.Events[0].Regarding.Name, cluster.Name) + assert.Equal(t, recorder.Events[0].Reason, "BootstrappingWithSnapshot") + assert.Equal(t, recorder.Events[0].Note, "Snapshot found for rhino; bootstrapping cluster with snapshot.") + }) + + t.Run("DataVolumeSourceClusterSnapshotsEnabledNoSnapshots", func(t *testing.T) { + cluster := testCluster() + ns := setupNamespace(t, tClient) + cluster.Namespace = ns.Name + + assert.NilError(t, tClient.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, tClient.Delete(ctx, cluster)) }) + + spec := &v1beta1.PostgresInstanceSetSpec{} + assert.NilError(t, yaml.Unmarshal([]byte(`{ + name: "some-instance", + dataVolumeClaimSpec: { + accessModes: [ReadWriteOnce], + resources: 
{ requests: { storage: 1Gi } }, + storageClassName: "storage-class-for-data", + }, + }`), spec)) + instance := &appsv1.StatefulSet{ObjectMeta: naming.GenerateInstance(cluster, spec)} + + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler.Recorder = recorder + + // Turn on VolumeSnapshots feature gate + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.VolumeSnapshots: true, + })) + ctx := feature.NewContext(ctx, gate) + + // Create source cluster and enable snapshots + sourceCluster := testCluster() + sourceCluster.Namespace = ns.Name + sourceCluster.Name = "rhino" + sourceCluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: "some-class-name", + } + + // Reconcile volume + pvc, err := reconciler.reconcilePostgresDataVolume(ctx, cluster, spec, instance, nil, sourceCluster) + assert.NilError(t, err) + + assert.Assert(t, metav1.IsControlledBy(pvc, cluster)) + + assert.Equal(t, pvc.Labels[naming.LabelCluster], cluster.Name) + assert.Equal(t, pvc.Labels[naming.LabelInstance], instance.Name) + assert.Equal(t, pvc.Labels[naming.LabelInstanceSet], spec.Name) + assert.Equal(t, pvc.Labels[naming.LabelRole], "pgdata") + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + requests: + storage: 1Gi +storageClassName: storage-class-for-data +volumeMode: Filesystem + `)) + assert.Equal(t, len(recorder.Events), 1) + assert.Equal(t, recorder.Events[0].Regarding.Name, cluster.Name) + assert.Equal(t, recorder.Events[0].Reason, "SnapshotNotFound") + assert.Equal(t, recorder.Events[0].Note, "No ReadyToUse snapshots were found for rhino; proceeding with typical restore process.") + }) + + t.Run("WALVolume", func(t *testing.T) { + cluster := testCluster() + ns := setupNamespace(t, tClient) + cluster.Namespace = ns.Name + + assert.NilError(t, tClient.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, tClient.Delete(ctx, cluster)) }) + + spec := &v1beta1.PostgresInstanceSetSpec{} + assert.NilError(t, yaml.Unmarshal([]byte(`{ + name: "some-instance", + dataVolumeClaimSpec: { + accessModes: [ReadWriteOnce], + resources: { requests: { storage: 1Gi } }, + storageClassName: "storage-class-for-data", + }, + }`), spec)) + instance := &appsv1.StatefulSet{ObjectMeta: naming.GenerateInstance(cluster, spec)} + + observed := &Instance{} + + t.Run("None", func(t *testing.T) { + pvc, err := reconciler.reconcilePostgresWALVolume(ctx, cluster, spec, instance, observed, nil) + assert.NilError(t, err) + assert.Assert(t, pvc == nil) + }) + + t.Run("Specified", func(t *testing.T) { + spec := spec.DeepCopy() + assert.NilError(t, yaml.Unmarshal([]byte(`{ + walVolumeClaimSpec: { + accessModes: [ReadWriteMany], + resources: { requests: { storage: 2Gi } }, + storageClassName: "storage-class-for-wal", + }, + }`), spec)) + + pvc, err := reconciler.reconcilePostgresWALVolume(ctx, cluster, spec, instance, observed, nil) + assert.NilError(t, err) + + assert.Assert(t, metav1.IsControlledBy(pvc, cluster)) + + assert.Equal(t, pvc.Labels[naming.LabelCluster], cluster.Name) + assert.Equal(t, pvc.Labels[naming.LabelInstance], instance.Name) + assert.Equal(t, pvc.Labels[naming.LabelInstanceSet], spec.Name) + assert.Equal(t, pvc.Labels[naming.LabelRole], "pgwal") + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteMany +resources: + requests: + storage: 2Gi +storageClassName: storage-class-for-wal +volumeMode: Filesystem + `)) + + t.Run("Removed", func(t *testing.T) { + spec := spec.DeepCopy() 
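+ // Clearing walVolumeClaimSpec asks the reconciler to retire the WAL claim,
+ // which it should do only after confirming the files are in the location
+ // intended by the specification.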
+ spec.WALVolumeClaimSpec = nil + + ignoreTypeMeta := cmpopts.IgnoreFields(corev1.PersistentVolumeClaim{}, "TypeMeta") + + t.Run("FilesAreNotSafe", func(t *testing.T) { + // No pods; expect no changes to the PVC. + returned, err := reconciler.reconcilePostgresWALVolume(ctx, cluster, spec, instance, observed, nil) + assert.NilError(t, err) + assert.DeepEqual(t, returned, pvc, ignoreTypeMeta) + + // Not running; expect no changes to the PVC. + observed.Pods = []*corev1.Pod{{}} + + returned, err = reconciler.reconcilePostgresWALVolume(ctx, cluster, spec, instance, observed, nil) + assert.NilError(t, err) + assert.DeepEqual(t, returned, pvc, ignoreTypeMeta) + + // Cannot find WAL files; expect no changes to the PVC. + observed.Pods[0].Namespace, observed.Pods[0].Name = "pod-ns", "pod-name" + observed.Pods[0].Status.ContainerStatuses = []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + }} + observed.Pods[0].Status.ContainerStatuses[0].State.Running = + new(corev1.ContainerStateRunning) + + expected := errors.New("flop") + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + _ io.Reader, _, _ io.Writer, command ...string, + ) error { + assert.Equal(t, namespace, "pod-ns") + assert.Equal(t, pod, "pod-name") + assert.Equal(t, container, "database") + assert.DeepEqual(t, command, + []string{"bash", "-ceu", "--", `exec realpath "${PGDATA}/pg_wal"`}) + return expected + } + + returned, err = reconciler.reconcilePostgresWALVolume(ctx, cluster, spec, instance, observed, nil) + assert.Equal(t, expected, errors.Unwrap(err), "expected pod exec") + assert.DeepEqual(t, returned, pvc, ignoreTypeMeta) + + // Files are in the wrong place; expect no changes to the PVC. + reconciler.PodExec = func( + ctx context.Context, _, _, _ string, _ io.Reader, stdout, _ io.Writer, _ ...string, + ) error { + assert.Assert(t, stdout != nil) + _, err := stdout.Write([]byte("some-place\n")) + assert.NilError(t, err) + return nil + } + + returned, err = reconciler.reconcilePostgresWALVolume(ctx, cluster, spec, instance, observed, nil) + assert.NilError(t, err) + assert.DeepEqual(t, returned, pvc, ignoreTypeMeta) + }) + + t.Run("FilesAreSafe", func(t *testing.T) { + // Files are seen in the directory intended by the specification. + observed.Pods = []*corev1.Pod{{}} + observed.Pods[0].Status.ContainerStatuses = []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + }} + observed.Pods[0].Status.ContainerStatuses[0].State.Running = + new(corev1.ContainerStateRunning) + + reconciler.PodExec = func( + ctx context.Context, _, _, _ string, _ io.Reader, stdout, _ io.Writer, _ ...string, + ) error { + assert.Assert(t, stdout != nil) + _, err := stdout.Write([]byte(postgres.WALDirectory(cluster, spec) + "\n")) + assert.NilError(t, err) + return nil + } + + returned, err := reconciler.reconcilePostgresWALVolume(ctx, cluster, spec, instance, observed, nil) + assert.NilError(t, err) + assert.Assert(t, returned == nil) + + key, fetched := client.ObjectKeyFromObject(pvc), &corev1.PersistentVolumeClaim{} + if err := tClient.Get(ctx, key, fetched); err == nil { + assert.Assert(t, fetched.DeletionTimestamp != nil, "expected deleted") + } else { + assert.Assert(t, apierrors.IsNotFound(err), "expected NotFound, got %v", err) + } + + // Pods will redeploy while the PVC is scheduled for deletion. 
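+ // With no observed pods, a follow-up reconcile still returns nil instead
+ // of recreating the claim.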
+ observed.Pods = nil + returned, err = reconciler.reconcilePostgresWALVolume(ctx, cluster, spec, instance, observed, nil) + assert.NilError(t, err) + assert.Assert(t, returned == nil) + }) + }) + }) + }) +} + +func TestSetVolumeSize(t *testing.T) { + t.Parallel() + + ctx := context.Background() + cluster := v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "elephant", + Namespace: "test-namespace", + }, + Spec: v1beta1.PostgresClusterSpec{ + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + Name: "some-instance", + Replicas: initialize.Int32(1), + }}, + }, + } + + instance := &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Name: "elephant-some-instance-wxyz-0", + Namespace: cluster.Namespace, + }} + + setupLogCapture := func(ctx context.Context) (context.Context, *[]string) { + calls := []string{} + testlog := funcr.NewJSON(func(object string) { + calls = append(calls, object) + }, funcr.Options{ + Verbosity: 1, + }) + return logging.NewContext(ctx, testlog), &calls + } + + // helper functions + instanceSetSpec := func(request, limit string) *v1beta1.PostgresInstanceSetSpec { + return &v1beta1.PostgresInstanceSetSpec{ + Name: "some-instance", + DataVolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + Resources: corev1.VolumeResourceRequirements{ + Requests: map[corev1.ResourceName]resource.Quantity{ + corev1.ResourceStorage: resource.MustParse(request), + }, + Limits: map[corev1.ResourceName]resource.Quantity{ + corev1.ResourceStorage: resource.MustParse(limit), + }}}} + } + + desiredStatus := func(request string) v1beta1.PostgresClusterStatus { + desiredMap := make(map[string]string) + desiredMap["elephant-some-instance-wxyz-0"] = request + return v1beta1.PostgresClusterStatus{ + InstanceSets: []v1beta1.PostgresInstanceSetStatus{{ + Name: "some-instance", + DesiredPGDataVolume: desiredMap, + }}} + } + + t.Run("RequestAboveLimit", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + pvc := &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstancePostgresDataVolume(instance)} + spec := instanceSetSpec("4Gi", "3Gi") + pvc.Spec = spec.DataVolumeClaimSpec + + reconciler.setVolumeSize(ctx, &cluster, pvc, spec.Name) + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + limits: + storage: 3Gi + requests: + storage: 3Gi +`)) + assert.Equal(t, len(*logs), 0) + assert.Equal(t, len(recorder.Events), 1) + assert.Equal(t, recorder.Events[0].Regarding.Name, cluster.Name) + assert.Equal(t, recorder.Events[0].Reason, "VolumeRequestOverLimit") + assert.Equal(t, recorder.Events[0].Note, "pgData volume request (4Gi) for elephant/some-instance is greater than set limit (3Gi). 
Limit value will be used.") + }) + + t.Run("NoFeatureGate", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + pvc := &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstancePostgresDataVolume(instance)} + spec := instanceSetSpec("1Gi", "3Gi") + + desiredMap := make(map[string]string) + desiredMap["elephant-some-instance-wxyz-0"] = "2Gi" + cluster.Status = v1beta1.PostgresClusterStatus{ + InstanceSets: []v1beta1.PostgresInstanceSetStatus{{ + Name: "some-instance", + DesiredPGDataVolume: desiredMap, + }}, + } + + pvc.Spec = spec.DataVolumeClaimSpec + + reconciler.setVolumeSize(ctx, &cluster, pvc, spec.Name) + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + limits: + storage: 3Gi + requests: + storage: 1Gi + `)) + + assert.Equal(t, len(recorder.Events), 0) + assert.Equal(t, len(*logs), 0) + + // clear status for other tests + cluster.Status = v1beta1.PostgresClusterStatus{} + }) + + t.Run("FeatureEnabled", func(t *testing.T) { + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.AutoGrowVolumes: true, + })) + ctx := feature.NewContext(ctx, gate) + + t.Run("StatusNoLimit", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + pvc := &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstancePostgresDataVolume(instance)} + spec := &v1beta1.PostgresInstanceSetSpec{ + Name: "some-instance", + DataVolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + Resources: corev1.VolumeResourceRequirements{ + Requests: map[corev1.ResourceName]resource.Quantity{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }}}} + cluster.Status = desiredStatus("2Gi") + pvc.Spec = spec.DataVolumeClaimSpec + + reconciler.setVolumeSize(ctx, &cluster, pvc, spec.Name) + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + requests: + storage: 1Gi +`)) + assert.Equal(t, len(recorder.Events), 0) + assert.Equal(t, len(*logs), 0) + + // clear status for other tests + cluster.Status = v1beta1.PostgresClusterStatus{} + }) + + t.Run("LimitNoStatus", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + pvc := &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstancePostgresDataVolume(instance)} + spec := instanceSetSpec("1Gi", "2Gi") + pvc.Spec = spec.DataVolumeClaimSpec + + reconciler.setVolumeSize(ctx, &cluster, pvc, spec.Name) + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + limits: + storage: 2Gi + requests: + storage: 1Gi +`)) + assert.Equal(t, len(recorder.Events), 0) + assert.Equal(t, len(*logs), 0) + }) + + t.Run("BadStatusWithLimit", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + pvc := &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstancePostgresDataVolume(instance)} + spec := instanceSetSpec("1Gi", "3Gi") + cluster.Status = desiredStatus("NotAValidValue") + pvc.Spec = spec.DataVolumeClaimSpec + + reconciler.setVolumeSize(ctx, &cluster, pvc, spec.Name) + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce 
+resources: + limits: + storage: 3Gi + requests: + storage: 1Gi +`)) + + assert.Equal(t, len(recorder.Events), 0) + assert.Equal(t, len(*logs), 1) + assert.Assert(t, cmp.Contains((*logs)[0], "Unable to parse volume request: NotAValidValue")) + }) + + t.Run("StatusWithLimit", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + pvc := &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstancePostgresDataVolume(instance)} + spec := instanceSetSpec("1Gi", "3Gi") + cluster.Status = desiredStatus("2Gi") + pvc.Spec = spec.DataVolumeClaimSpec + + reconciler.setVolumeSize(ctx, &cluster, pvc, spec.Name) + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + limits: + storage: 3Gi + requests: + storage: 2Gi +`)) + assert.Equal(t, len(recorder.Events), 0) + assert.Equal(t, len(*logs), 0) + }) + + t.Run("StatusWithLimitGrowToLimit", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + pvc := &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstancePostgresDataVolume(instance)} + spec := instanceSetSpec("1Gi", "2Gi") + cluster.Status = desiredStatus("2Gi") + pvc.Spec = spec.DataVolumeClaimSpec + + reconciler.setVolumeSize(ctx, &cluster, pvc, spec.Name) + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + limits: + storage: 2Gi + requests: + storage: 2Gi +`)) + + assert.Equal(t, len(*logs), 0) + assert.Equal(t, len(recorder.Events), 1) + assert.Equal(t, recorder.Events[0].Regarding.Name, cluster.Name) + assert.Equal(t, recorder.Events[0].Reason, "VolumeLimitReached") + assert.Equal(t, recorder.Events[0].Note, "pgData volume(s) for elephant/some-instance are at size limit (2Gi).") + }) + + t.Run("DesiredStatusOverLimit", func(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + ctx, logs := setupLogCapture(ctx) + + pvc := &corev1.PersistentVolumeClaim{ObjectMeta: naming.InstancePostgresDataVolume(instance)} + spec := instanceSetSpec("4Gi", "5Gi") + cluster.Status = desiredStatus("10Gi") + pvc.Spec = spec.DataVolumeClaimSpec + + reconciler.setVolumeSize(ctx, &cluster, pvc, spec.Name) + + assert.Assert(t, cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + limits: + storage: 5Gi + requests: + storage: 5Gi +`)) + + assert.Equal(t, len(*logs), 0) + assert.Equal(t, len(recorder.Events), 2) + var found1, found2 bool + for _, event := range recorder.Events { + if event.Reason == "VolumeLimitReached" { + found1 = true + assert.Equal(t, event.Regarding.Name, cluster.Name) + assert.Equal(t, event.Note, "pgData volume(s) for elephant/some-instance are at size limit (5Gi).") + } + if event.Reason == "DesiredVolumeAboveLimit" { + found2 = true + assert.Equal(t, event.Regarding.Name, cluster.Name) + assert.Equal(t, event.Note, + "The desired size (10Gi) for the elephant/some-instance pgData volume(s) is greater than the size limit (5Gi).") + } + } + assert.Assert(t, found1 && found2) + }) + + }) +} + +func TestReconcileDatabaseInitSQL(t *testing.T) { + ctx := context.Background() + var called bool + + // Test Environment Setup + _, client := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + r := &Reconciler{ + Client: client, + + // Overwrite the PodExec function with a check to ensure the exec + // call would have been made + 
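+ // The stub only records that an exec would have happened; no real pod is
+ // contacted.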
PodExec: func(ctx context.Context, namespace, pod, container string, stdin io.Reader, + stdout, stderr io.Writer, command ...string) error { + called = true + return nil + }, + } + + // Test Resources Setup + ns := setupNamespace(t, client) + + // Define a status to be set if sql has already been run + status := "set" + + // reconcileDatabaseInitSQL expects to find a pod that is running with a + // writable database container. Define this pod in an observed instance so + // we can simulate a podExec call into the database + instances := []*Instance{ + { + Name: "instance", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: ns.Name, + Name: "pod", + Annotations: map[string]string{ + "status": `{"role":"master"}`, + }, + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + State: corev1.ContainerState{ + Running: new(corev1.ContainerStateRunning), + }, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + } + observed := &observedInstances{forCluster: instances} + + // Create a ConfigMap containing SQL to be defined in the spec + path := "test-path" + cm := corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-cm", + Namespace: ns.Name, + }, + Data: map[string]string{ + path: "stuff", + }, + } + assert.NilError(t, client.Create(ctx, cm.DeepCopy())) + + // Define a fully configured cluster that would lead to SQL being run in + // the database. This test cluster will be modified as needed for testing + testCluster := testCluster() + testCluster.Namespace = ns.Name + testCluster.Spec.DatabaseInitSQL = &v1beta1.DatabaseInitSQL{ + Name: cm.Name, + Key: path, + } + + // Start Tests + t.Run("not defined", func(t *testing.T) { + // Custom SQL is not defined in the spec and status is unset + cluster := testCluster.DeepCopy() + cluster.Spec.DatabaseInitSQL = nil + + assert.NilError(t, r.reconcileDatabaseInitSQL(ctx, cluster, observed)) + assert.Assert(t, !called, "PodExec should not have been called") + assert.Assert(t, cluster.Status.DatabaseInitSQL == nil, "Status should not be set") + }) + t.Run("not defined with status", func(t *testing.T) { + // Custom SQL is not defined in the spec and status is set + cluster := testCluster.DeepCopy() + cluster.Spec.DatabaseInitSQL = nil + cluster.Status.DatabaseInitSQL = &status + + assert.NilError(t, r.reconcileDatabaseInitSQL(ctx, cluster, observed)) + assert.Assert(t, !called, "PodExec should not have been called") + assert.Assert(t, cluster.Status.DatabaseInitSQL == nil, "Status was set and should have been removed") + }) + t.Run("status set", func(t *testing.T) { + // Custom SQL is defined and status is set + cluster := testCluster.DeepCopy() + cluster.Status.DatabaseInitSQL = &status + + assert.NilError(t, r.reconcileDatabaseInitSQL(ctx, cluster, observed)) + assert.Assert(t, !called, "PodExec should not have been called") + assert.Equal(t, cluster.Status.DatabaseInitSQL, &status, "Status should not have changed") + }) + t.Run("No writable pod", func(t *testing.T) { + cluster := testCluster.DeepCopy() + + assert.NilError(t, r.reconcileDatabaseInitSQL(ctx, cluster, nil)) + assert.Assert(t, !called, "PodExec should not have been called") + assert.Assert(t, cluster.Status.DatabaseInitSQL == nil, "SQL couldn't be executed so status should be unset") + }) + t.Run("Fully Configured", func(t *testing.T) { + cluster := testCluster.DeepCopy() + + assert.NilError(t, r.reconcileDatabaseInitSQL(ctx, cluster, observed)) + assert.Assert(t, called, "PodExec should be called") 
+ assert.Equal(t, + *cluster.Status.DatabaseInitSQL, + cluster.Spec.DatabaseInitSQL.Name, + "Status should be set to the custom configmap name") + }) +} + +func TestReconcileDatabaseInitSQLConfigMap(t *testing.T) { + ctx := context.Background() + var called bool + + // Test Environment Setup + _, client := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + r := &Reconciler{ + Client: client, + + // Overwrite the PodExec function with a check to ensure the exec + // call would have been made + PodExec: func(ctx context.Context, namespace, pod, container string, stdin io.Reader, + stdout, stderr io.Writer, command ...string) error { + called = true + return nil + }, + } + + // Test Resources Setup + ns := setupNamespace(t, client) + + // reconcileDatabaseInitSQL expects to find a pod that is running with a writable + // database container. Define this pod in an observed instance so that + // we can simulate a podExec call into the database + instances := []*Instance{ + { + Name: "instance", + Pods: []*corev1.Pod{{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: ns.Name, + Name: "pod", + Annotations: map[string]string{ + "status": `{"role":"master"}`, + }, + }, + Status: corev1.PodStatus{ + ContainerStatuses: []corev1.ContainerStatus{{ + Name: naming.ContainerDatabase, + State: corev1.ContainerState{ + Running: new(corev1.ContainerStateRunning), + }, + }}, + }, + }}, + Runner: &appsv1.StatefulSet{}, + }, + } + observed := &observedInstances{forCluster: instances} + + // Define fully configured cluster that would lead to sql being run in the + // database. This cluster will be modified for testing + testCluster := testCluster() + testCluster.Namespace = ns.Name + testCluster.Spec.DatabaseInitSQL = new(v1beta1.DatabaseInitSQL) + + t.Run("not found", func(t *testing.T) { + cluster := testCluster.DeepCopy() + cluster.Spec.DatabaseInitSQL = &v1beta1.DatabaseInitSQL{ + Name: "not-found", + } + + err := r.reconcileDatabaseInitSQL(ctx, cluster, observed) + assert.Assert(t, apierrors.IsNotFound(err), err) + assert.Assert(t, !called) + }) + + t.Run("found no data", func(t *testing.T) { + cm := &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "found-no-data", + Namespace: ns.Name, + }, + } + assert.NilError(t, client.Create(ctx, cm)) + + cluster := testCluster.DeepCopy() + cluster.Spec.DatabaseInitSQL = &v1beta1.DatabaseInitSQL{ + Name: cm.Name, + Key: "bad-path", + } + + err := r.reconcileDatabaseInitSQL(ctx, cluster, observed) + assert.Equal(t, err.Error(), "ConfigMap did not contain expected key: bad-path") + assert.Assert(t, !called) + }) + + t.Run("found with data", func(t *testing.T) { + path := "test-path" + + cm := &corev1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: "found-with-data", + Namespace: ns.Name, + }, + Data: map[string]string{ + path: "string", + }, + } + assert.NilError(t, client.Create(ctx, cm)) + + cluster := testCluster.DeepCopy() + cluster.Spec.DatabaseInitSQL = &v1beta1.DatabaseInitSQL{ + Name: cm.Name, + Key: path, + } + + assert.NilError(t, r.reconcileDatabaseInitSQL(ctx, cluster, observed)) + assert.Assert(t, called) + }) +} + +func TestValidatePostgresUsers(t *testing.T) { + t.Parallel() + + t.Run("Empty", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + + cluster.Spec.Users = nil + reconciler.validatePostgresUsers(cluster) + assert.Equal(t, len(recorder.Events), 0) + + cluster.Spec.Users = []v1beta1.PostgresUserSpec{} + 
reconciler.validatePostgresUsers(cluster) + assert.Equal(t, len(recorder.Events), 0) + }) + + // See [internal/testing/validation.TestPostgresUserOptions] + + t.Run("NoComments", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + cluster.Name = "pg1" + cluster.Spec.Users = []v1beta1.PostgresUserSpec{ + {Name: "dashes", Options: "ANY -- comment"}, + {Name: "block-open", Options: "/* asdf"}, + {Name: "block-close", Options: " qw */ rt"}, + } + + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + + reconciler.validatePostgresUsers(cluster) + assert.Equal(t, len(recorder.Events), 3) + + for i, event := range recorder.Events { + assert.Equal(t, event.Regarding.Name, cluster.Name) + assert.Equal(t, event.Reason, "InvalidUser") + assert.Assert(t, cmp.Contains(event.Note, "cannot contain comments")) + assert.Assert(t, cmp.Contains(event.Note, + fmt.Sprintf("spec.users[%d].options", i))) + } + }) + + t.Run("NoPassword", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + cluster.Name = "pg5" + cluster.Spec.Users = []v1beta1.PostgresUserSpec{ + {Name: "uppercase", Options: "SUPERUSER PASSWORD ''"}, + {Name: "lowercase", Options: "password 'asdf'"}, + } + + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{Recorder: recorder} + + reconciler.validatePostgresUsers(cluster) + assert.Equal(t, len(recorder.Events), 2) + + for i, event := range recorder.Events { + assert.Equal(t, event.Regarding.Name, cluster.Name) + assert.Equal(t, event.Reason, "InvalidUser") + assert.Assert(t, cmp.Contains(event.Note, "cannot assign password")) + assert.Assert(t, cmp.Contains(event.Note, + fmt.Sprintf("spec.users[%d].options", i))) + } + }) + + t.Run("Valid", func(t *testing.T) { + cluster := v1beta1.NewPostgresCluster() + cluster.Spec.Users = []v1beta1.PostgresUserSpec{ + {Name: "normal", Options: "CREATEDB valid until '2006-01-02'"}, + {Name: "very-full", Options: "NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOLOGIN NOREPLICATION NOBYPASSRLS CONNECTION LIMIT 5"}, + } + + reconciler := &Reconciler{} + assert.Assert(t, reconciler.Recorder == nil, + "expected the following to not use a Recorder at all") + + reconciler.validatePostgresUsers(cluster) + }) +} diff --git a/internal/controller/postgrescluster/rbac.go b/internal/controller/postgrescluster/rbac.go new file mode 100644 index 0000000000..38dd808c44 --- /dev/null +++ b/internal/controller/postgrescluster/rbac.go @@ -0,0 +1,96 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + rbacv1 "k8s.io/api/rbac/v1" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/patroni" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// reconcileRBACResources creates Roles, RoleBindings, and ServiceAccounts for +// cluster. The returned instanceServiceAccount has all the authorization needed +// by an instance Pod. 
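+// It currently delegates to reconcileInstanceRBAC, which applies the
+// ServiceAccount, a Role built from patroni.Permissions, and the RoleBinding
+// that joins them.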
+func (r *Reconciler) reconcileRBACResources( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) ( + instanceServiceAccount *corev1.ServiceAccount, err error, +) { + return r.reconcileInstanceRBAC(ctx, cluster) +} + +// +kubebuilder:rbac:groups="",resources="serviceaccounts",verbs={create,patch} +// +kubebuilder:rbac:groups="rbac.authorization.k8s.io",resources="roles",verbs={create,patch} +// +kubebuilder:rbac:groups="rbac.authorization.k8s.io",resources="rolebindings",verbs={create,patch} + +// reconcileInstanceRBAC writes the Role, RoleBinding, and ServiceAccount for +// all instances of cluster. +func (r *Reconciler) reconcileInstanceRBAC( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) (*corev1.ServiceAccount, error) { + account := &corev1.ServiceAccount{ObjectMeta: naming.ClusterInstanceRBAC(cluster)} + account.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ServiceAccount")) + + binding := &rbacv1.RoleBinding{ObjectMeta: naming.ClusterInstanceRBAC(cluster)} + binding.SetGroupVersionKind(rbacv1.SchemeGroupVersion.WithKind("RoleBinding")) + + role := &rbacv1.Role{ObjectMeta: naming.ClusterInstanceRBAC(cluster)} + role.SetGroupVersionKind(rbacv1.SchemeGroupVersion.WithKind("Role")) + + err := errors.WithStack(r.setControllerReference(cluster, account)) + if err == nil { + err = errors.WithStack(r.setControllerReference(cluster, binding)) + } + if err == nil { + err = errors.WithStack(r.setControllerReference(cluster, role)) + } + + account.Annotations = naming.Merge(cluster.Spec.Metadata.GetAnnotationsOrNil()) + account.Labels = naming.Merge(cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + }) + binding.Annotations = naming.Merge(cluster.Spec.Metadata.GetAnnotationsOrNil()) + binding.Labels = naming.Merge(cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + }) + role.Annotations = naming.Merge(cluster.Spec.Metadata.GetAnnotationsOrNil()) + role.Labels = naming.Merge(cluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: cluster.Name, + }) + + account.AutomountServiceAccountToken = initialize.Bool(true) + binding.RoleRef = rbacv1.RoleRef{ + APIGroup: rbacv1.SchemeGroupVersion.Group, + Kind: role.Kind, + Name: role.Name, + } + binding.Subjects = []rbacv1.Subject{{ + Kind: account.Kind, + Name: account.Name, + }} + role.Rules = patroni.Permissions(cluster) + + if err == nil { + err = errors.WithStack(r.apply(ctx, account)) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, role)) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, binding)) + } + + return account, err +} diff --git a/internal/controller/postgrescluster/snapshots.go b/internal/controller/postgrescluster/snapshots.go new file mode 100644 index 0000000000..76ad195600 --- /dev/null +++ b/internal/controller/postgrescluster/snapshots.go @@ -0,0 +1,617 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "strings" + "time" + + "github.com/pkg/errors" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + + volumesnapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v8/apis/volumesnapshot/v1" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pgbackrest" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +//+kubebuilder:rbac:groups="snapshot.storage.k8s.io",resources="volumesnapshots",verbs={get,list,create,patch,delete} + +// The controller-runtime client sets up a cache that watches anything we "get" or "list". +//+kubebuilder:rbac:groups="snapshot.storage.k8s.io",resources="volumesnapshots",verbs={watch} + +// reconcileVolumeSnapshots creates and manages VolumeSnapshots if the proper VolumeSnapshot CRDs +// are installed and VolumeSnapshots are enabled for the PostgresCluster. A VolumeSnapshot of the +// primary instance's pgdata volume will be created whenever a backup is completed. The steps to +// create snapshots include the following sequence: +// 1. We find the latest completed backup job and check the timestamp. +// 2. If the timestamp is later than what's on the dedicated snapshot PVC, a restore job runs in +// the dedicated snapshot volume. +// 3. When the restore job completes, an annotation is updated on the PVC. If the restore job +// fails, we don't run it again. +// 4. When the PVC annotation is updated, we see if there's a volume snapshot with an earlier +// timestamp. +// 5. If there are no snapshots at all, we take a snapshot and put the backup job's completion +// timestamp on the snapshot annotation. +// 6. If an earlier snapshot is found, we take a new snapshot, annotate it and delete the old +// snapshot. +// 7. When the snapshot job completes, we delete the restore job. +func (r *Reconciler) reconcileVolumeSnapshots(ctx context.Context, + postgrescluster *v1beta1.PostgresCluster, pvc *corev1.PersistentVolumeClaim) error { + + // If VolumeSnapshots feature gate is disabled. Do nothing and return early. + if !feature.Enabled(ctx, feature.VolumeSnapshots) { + return nil + } + + // Check if the Kube cluster has VolumeSnapshots installed. If VolumeSnapshots + // are not installed, we need to return early. If user is attempting to use + // VolumeSnapshots, return an error, otherwise return nil. + volumeSnapshotKindExists, err := r.GroupVersionKindExists("snapshot.storage.k8s.io/v1", "VolumeSnapshot") + if err != nil { + return err + } + if !*volumeSnapshotKindExists { + if postgrescluster.Spec.Backups.Snapshots != nil { + return errors.New("VolumeSnapshots are not installed/enabled in this Kubernetes cluster; cannot create snapshot.") + } else { + return nil + } + } + + // If user is attempting to use snapshots and has tablespaces enabled, we + // need to create a warning event indicating that the two features are not + // currently compatible and return early. 
+ if postgrescluster.Spec.Backups.Snapshots != nil && + clusterUsingTablespaces(ctx, postgrescluster) { + r.Recorder.Event(postgrescluster, corev1.EventTypeWarning, "IncompatibleFeatures", + "VolumeSnapshots not currently compatible with TablespaceVolumes; cannot create snapshot.") + return nil + } + + // Get all snapshots for the cluster. + snapshots, err := r.getSnapshotsForCluster(ctx, postgrescluster) + if err != nil { + return err + } + + // If snapshots are disabled, delete any existing snapshots and return early. + if postgrescluster.Spec.Backups.Snapshots == nil { + return r.deleteSnapshots(ctx, postgrescluster, snapshots) + } + + // If we got here, then the snapshots are enabled (feature gate is enabled and the + // cluster has a Spec.Backups.Snapshots section defined). + + // Check snapshots for errors; if present, create an event. If there are + // multiple snapshots with errors, create event for the latest error and + // delete any older snapshots with error. + snapshotWithLatestError := getSnapshotWithLatestError(snapshots) + if snapshotWithLatestError != nil { + r.Recorder.Event(postgrescluster, corev1.EventTypeWarning, "VolumeSnapshotError", + *snapshotWithLatestError.Status.Error.Message) + for _, snapshot := range snapshots.Items { + if snapshot.Status != nil && snapshot.Status.Error != nil && + snapshot.Status.Error.Time.Before(snapshotWithLatestError.Status.Error.Time) { + err = r.deleteControlled(ctx, postgrescluster, &snapshot) + if err != nil { + return err + } + } + } + } + + // Get pvc backup job completion annotation. If it does not exist, there has not been + // a successful restore yet, so return early. + pvcUpdateTimeStamp, pvcAnnotationExists := pvc.GetAnnotations()[naming.PGBackRestBackupJobCompletion] + if !pvcAnnotationExists { + return err + } + + // Check to see if snapshot exists for the latest backup that has been restored into + // the dedicated pvc. + var snapshotForPvcUpdateIdx int + snapshotFoundForPvcUpdate := false + for idx, snapshot := range snapshots.Items { + if snapshot.GetAnnotations()[naming.PGBackRestBackupJobCompletion] == pvcUpdateTimeStamp { + snapshotForPvcUpdateIdx = idx + snapshotFoundForPvcUpdate = true + } + } + + // If a snapshot exists for the latest backup that has been restored into the dedicated pvc + // and the snapshot is Ready, delete all other snapshots. + if snapshotFoundForPvcUpdate && snapshots.Items[snapshotForPvcUpdateIdx].Status.ReadyToUse != nil && + *snapshots.Items[snapshotForPvcUpdateIdx].Status.ReadyToUse { + for idx, snapshot := range snapshots.Items { + if idx != snapshotForPvcUpdateIdx { + err = r.deleteControlled(ctx, postgrescluster, &snapshot) + if err != nil { + return err + } + } + } + } + + // If a snapshot for the latest backup/restore does not exist, create a snapshot. + if !snapshotFoundForPvcUpdate { + var snapshot *volumesnapshotv1.VolumeSnapshot + snapshot, err = r.generateSnapshotOfDedicatedSnapshotVolume(postgrescluster, pvc) + if err == nil { + err = errors.WithStack(r.apply(ctx, snapshot)) + } + } + + return err +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={get} +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,delete,patch} + +// reconcileDedicatedSnapshotVolume reconciles the PersistentVolumeClaim that holds a +// copy of the pgdata and is dedicated for clean snapshots of the database. It creates +// and manages the volume as well as the restore jobs that bring the volume data forward +// after a successful backup. 
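+//
+// In rough order (a summary of the logic below, not additional behavior):
+//  1. Return early when the VolumeSnapshots feature gate is disabled.
+//  2. When snapshots are disabled in the spec, delete any existing dedicated PVC.
+//  3. Otherwise create or update the PVC, then compare its backup-completion
+//     annotation against the most recently completed backup job.
+//  4. Run a restore job when the PVC is behind, annotate the PVC and delete the job
+//     after a successful restore, or emit a warning event when the restore fails.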
+func (r *Reconciler) reconcileDedicatedSnapshotVolume( + ctx context.Context, cluster *v1beta1.PostgresCluster, + clusterVolumes []corev1.PersistentVolumeClaim, +) (*corev1.PersistentVolumeClaim, error) { + + // If VolumeSnapshots feature gate is disabled, do nothing and return early. + if !feature.Enabled(ctx, feature.VolumeSnapshots) { + return nil, nil + } + + // Set appropriate labels for dedicated snapshot volume + labelMap := map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RoleSnapshot, + naming.LabelData: naming.DataPostgres, + } + + // If volume already exists, use existing name. Otherwise, generate a name. + var pvc *corev1.PersistentVolumeClaim + existingPVCName, err := getPGPVCName(labelMap, clusterVolumes) + if err != nil { + return nil, errors.WithStack(err) + } + if existingPVCName != "" { + pvc = &corev1.PersistentVolumeClaim{ObjectMeta: metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: existingPVCName, + }} + } else { + pvc = &corev1.PersistentVolumeClaim{ObjectMeta: naming.ClusterDedicatedSnapshotVolume(cluster)} + } + pvc.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim")) + + // If snapshots are disabled, delete the PVC if it exists and return early. + // Check the client cache first using Get. + if cluster.Spec.Backups.Snapshots == nil { + key := client.ObjectKeyFromObject(pvc) + err := errors.WithStack(r.Client.Get(ctx, key, pvc)) + if err == nil { + err = errors.WithStack(r.deleteControlled(ctx, cluster, pvc)) + } + return nil, client.IgnoreNotFound(err) + } + + // If we've got this far, snapshots are enabled so we should create/update/get + // the dedicated snapshot volume + pvc, err = r.createDedicatedSnapshotVolume(ctx, cluster, labelMap, pvc) + if err != nil { + return pvc, err + } + + // Determine if we need to run a restore job, based on the most recent backup + // and an annotation on the PVC. + + // Find the most recently completed backup job. + backupJob, err := r.getLatestCompleteBackupJob(ctx, cluster) + if err != nil { + return pvc, err + } + + // Return early if no complete backup job is found. + if backupJob == nil { + return pvc, nil + } + + // Return early if the pvc is annotated with a timestamp newer or equal to the latest backup job. + // If the annotation value cannot be parsed, we want to proceed with a restore. + pvcAnnotationTimestampString := pvc.GetAnnotations()[naming.PGBackRestBackupJobCompletion] + if pvcAnnotationTime, err := time.Parse(time.RFC3339, pvcAnnotationTimestampString); err == nil { + if backupJob.Status.CompletionTime.Compare(pvcAnnotationTime) <= 0 { + return pvc, nil + } + } + + // If we've made it here, the pvc has not been restored with latest backup. + // Find the dedicated snapshot volume restore job if it exists. Since we delete + // successful restores after we annotate the PVC and stop making restore jobs + // if a failed DSV restore job exists, there should only ever be one DSV restore + // job in existence at a time. + // TODO(snapshots): Should this function throw an error or something if multiple + // DSV restores somehow exist? + restoreJob, err := r.getDedicatedSnapshotVolumeRestoreJob(ctx, cluster) + if err != nil { + return pvc, err + } + + // If we don't find a restore job, we run one. + if restoreJob == nil { + err = r.dedicatedSnapshotVolumeRestore(ctx, cluster, pvc, backupJob) + return pvc, err + } + + // If we've made it here, we have found a restore job. 
If the restore job was + // successful, set/update the annotation on the PVC and delete the restore job. + if restoreJob.Status.Succeeded == 1 { + if pvc.GetAnnotations() == nil { + pvc.Annotations = map[string]string{} + } + pvc.Annotations[naming.PGBackRestBackupJobCompletion] = restoreJob.GetAnnotations()[naming.PGBackRestBackupJobCompletion] + annotations := fmt.Sprintf(`{"metadata":{"annotations":{"%s": "%s"}}}`, + naming.PGBackRestBackupJobCompletion, pvc.Annotations[naming.PGBackRestBackupJobCompletion]) + + patch := client.RawPatch(client.Merge.Type(), []byte(annotations)) + err = r.handlePersistentVolumeClaimError(cluster, + errors.WithStack(r.patch(ctx, pvc, patch))) + + if err != nil { + return pvc, err + } + + err = r.Client.Delete(ctx, restoreJob, client.PropagationPolicy(metav1.DeletePropagationBackground)) + return pvc, errors.WithStack(err) + } + + // If the restore job failed, create a warning event. + if restoreJob.Status.Failed == 1 { + r.Recorder.Event(cluster, corev1.EventTypeWarning, + "DedicatedSnapshotVolumeRestoreJobError", "restore job failed, check the logs") + return pvc, nil + } + + // If we made it here, the restore job is still running and we should do nothing. + return pvc, err +} + +// createDedicatedSnapshotVolume creates/updates/gets the dedicated snapshot volume. +// It expects that the volume name and GVK has already been set on the pvc that is passed in. +func (r *Reconciler) createDedicatedSnapshotVolume(ctx context.Context, + cluster *v1beta1.PostgresCluster, labelMap map[string]string, + pvc *corev1.PersistentVolumeClaim, +) (*corev1.PersistentVolumeClaim, error) { + var err error + + // An InstanceSet must be chosen to scale resources for the dedicated snapshot volume. + // TODO: We've chosen the first InstanceSet for the time being, but might want to consider + // making the choice configurable. + instanceSpec := cluster.Spec.InstanceSets[0] + + pvc.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil(), + instanceSpec.Metadata.GetAnnotationsOrNil()) + + pvc.Labels = naming.Merge( + cluster.Spec.Metadata.GetLabelsOrNil(), + instanceSpec.Metadata.GetLabelsOrNil(), + labelMap, + ) + + err = errors.WithStack(r.setControllerReference(cluster, pvc)) + if err != nil { + return pvc, err + } + + pvc.Spec = instanceSpec.DataVolumeClaimSpec + + // Set the snapshot volume to the same size as the pgdata volume. The size should scale with auto-grow. + r.setVolumeSize(ctx, cluster, pvc, instanceSpec.Name) + + // Clear any set limit before applying PVC. This is needed to allow the limit + // value to change later. + pvc.Spec.Resources.Limits = nil + + err = r.handlePersistentVolumeClaimError(cluster, + errors.WithStack(r.apply(ctx, pvc))) + if err != nil { + return pvc, err + } + + return pvc, err +} + +// dedicatedSnapshotVolumeRestore creates a Job that performs a restore into the dedicated +// snapshot volume. +// This function is very similar to reconcileRestoreJob, but specifically tailored to the +// dedicated snapshot volume. 
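+//
+// The Job is created with a BackoffLimit of zero so the restore is attempted exactly
+// once, and it carries the backup job's completion timestamp as an annotation so that
+// reconcileDedicatedSnapshotVolume can tell when the dedicated volume is up to date.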
+func (r *Reconciler) dedicatedSnapshotVolumeRestore(ctx context.Context, + cluster *v1beta1.PostgresCluster, dedicatedSnapshotVolume *corev1.PersistentVolumeClaim, + backupJob *batchv1.Job, +) error { + + pgdata := postgres.DataDirectory(cluster) + repoName := backupJob.GetLabels()[naming.LabelPGBackRestRepo] + + opts := []string{ + "--stanza=" + pgbackrest.DefaultStanzaName, + "--pg1-path=" + pgdata, + "--repo=" + regexRepoIndex.FindString(repoName), + "--delta", + } + + cmd := pgbackrest.DedicatedSnapshotVolumeRestoreCommand(pgdata, strings.Join(opts, " ")) + + // Create the volume resources required for the Postgres data directory. + dataVolumeMount := postgres.DataVolumeMount() + dataVolume := corev1.Volume{ + Name: dataVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: dedicatedSnapshotVolume.GetName(), + }, + }, + } + volumes := []corev1.Volume{dataVolume} + volumeMounts := []corev1.VolumeMount{dataVolumeMount} + + _, configHash, err := pgbackrest.CalculateConfigHashes(cluster) + if err != nil { + return err + } + + // A DataSource is required to avoid a nil pointer exception. + fakeDataSource := &v1beta1.PostgresClusterDataSource{RepoName: ""} + + restoreJob := &batchv1.Job{} + instanceName := cluster.Status.StartupInstance + + if err := r.generateRestoreJobIntent(cluster, configHash, instanceName, cmd, + volumeMounts, volumes, fakeDataSource, restoreJob); err != nil { + return errors.WithStack(err) + } + + // Attempt the restore exactly once. If the restore job fails, we prompt the user to investigate. + restoreJob.Spec.BackoffLimit = initialize.Int32(0) + restoreJob.Spec.Template.Spec.RestartPolicy = corev1.RestartPolicyNever + + // Add pgBackRest configs to template. + pgbackrest.AddConfigToRestorePod(cluster, cluster, &restoreJob.Spec.Template.Spec) + + // Add nss_wrapper init container and add nss_wrapper env vars to the pgbackrest restore container. + addNSSWrapper( + config.PGBackRestContainerImage(cluster), + cluster.Spec.ImagePullPolicy, + &restoreJob.Spec.Template) + + addTMPEmptyDir(&restoreJob.Spec.Template) + + restoreJob.Annotations[naming.PGBackRestBackupJobCompletion] = backupJob.Status.CompletionTime.Format(time.RFC3339) + return errors.WithStack(r.apply(ctx, restoreJob)) +} + +// generateSnapshotOfDedicatedSnapshotVolume will generate a VolumeSnapshot of +// the dedicated snapshot PersistentVolumeClaim and annotate it with the +// provided backup job's UID. +func (r *Reconciler) generateSnapshotOfDedicatedSnapshotVolume( + postgrescluster *v1beta1.PostgresCluster, + dedicatedSnapshotVolume *corev1.PersistentVolumeClaim, +) (*volumesnapshotv1.VolumeSnapshot, error) { + + snapshot, err := r.generateVolumeSnapshot(postgrescluster, *dedicatedSnapshotVolume, + postgrescluster.Spec.Backups.Snapshots.VolumeSnapshotClassName) + if err == nil { + if snapshot.Annotations == nil { + snapshot.Annotations = map[string]string{} + } + snapshot.Annotations[naming.PGBackRestBackupJobCompletion] = dedicatedSnapshotVolume.GetAnnotations()[naming.PGBackRestBackupJobCompletion] + } + + return snapshot, err +} + +// generateVolumeSnapshot generates a VolumeSnapshot that will use the supplied +// PersistentVolumeClaim and VolumeSnapshotClassName and will set the provided +// PostgresCluster as the owner. 
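+//
+// Typical usage (a sketch; generateSnapshotOfDedicatedSnapshotVolume above passes the
+// dedicated snapshot PVC, and the caller then applies the result):
+//
+//	snapshot, err := r.generateVolumeSnapshot(cluster, *pvc, className)
+//	if err == nil {
+//		err = r.apply(ctx, snapshot)
+//	}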
+func (r *Reconciler) generateVolumeSnapshot(postgrescluster *v1beta1.PostgresCluster, + pvc corev1.PersistentVolumeClaim, volumeSnapshotClassName string, +) (*volumesnapshotv1.VolumeSnapshot, error) { + + snapshot := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: naming.ClusterVolumeSnapshot(postgrescluster), + } + snapshot.Spec.Source.PersistentVolumeClaimName = &pvc.Name + snapshot.Spec.VolumeSnapshotClassName = &volumeSnapshotClassName + + snapshot.Annotations = postgrescluster.Spec.Metadata.GetAnnotationsOrNil() + snapshot.Labels = naming.Merge(postgrescluster.Spec.Metadata.GetLabelsOrNil(), + map[string]string{ + naming.LabelCluster: postgrescluster.Name, + }) + + err := errors.WithStack(r.setControllerReference(postgrescluster, snapshot)) + + return snapshot, err +} + +// getDedicatedSnapshotVolumeRestoreJob finds a dedicated snapshot volume (DSV) +// restore job if one exists. Since we delete successful restore jobs and stop +// creating new restore jobs when one fails, there should only ever be one DSV +// restore job present at a time. If a DSV restore cannot be found, we return nil. +func (r *Reconciler) getDedicatedSnapshotVolumeRestoreJob(ctx context.Context, + postgrescluster *v1beta1.PostgresCluster) (*batchv1.Job, error) { + + // Get all restore jobs for this cluster + jobs := &batchv1.JobList{} + selectJobs, err := naming.AsSelector(naming.ClusterRestoreJobs(postgrescluster.Name)) + if err == nil { + err = errors.WithStack( + r.Client.List(ctx, jobs, + client.InNamespace(postgrescluster.Namespace), + client.MatchingLabelsSelector{Selector: selectJobs}, + )) + } + if err != nil { + return nil, err + } + + // Get restore job that has PGBackRestBackupJobCompletion annotation + for _, job := range jobs.Items { + _, annotationExists := job.GetAnnotations()[naming.PGBackRestBackupJobCompletion] + if annotationExists { + return &job, nil + } + } + + return nil, nil +} + +// getLatestCompleteBackupJob finds the most recently completed +// backup job for a cluster +func (r *Reconciler) getLatestCompleteBackupJob(ctx context.Context, + postgrescluster *v1beta1.PostgresCluster) (*batchv1.Job, error) { + + // Get all backup jobs for this cluster + jobs := &batchv1.JobList{} + selectJobs, err := naming.AsSelector(naming.ClusterBackupJobs(postgrescluster.Name)) + if err == nil { + err = errors.WithStack( + r.Client.List(ctx, jobs, + client.InNamespace(postgrescluster.Namespace), + client.MatchingLabelsSelector{Selector: selectJobs}, + )) + } + if err != nil { + return nil, err + } + + zeroTime := metav1.NewTime(time.Time{}) + latestCompleteBackupJob := batchv1.Job{ + Status: batchv1.JobStatus{ + Succeeded: 1, + CompletionTime: &zeroTime, + }, + } + for _, job := range jobs.Items { + if job.Status.Succeeded > 0 && + latestCompleteBackupJob.Status.CompletionTime.Before(job.Status.CompletionTime) { + latestCompleteBackupJob = job + } + } + + if latestCompleteBackupJob.Status.CompletionTime.Equal(&zeroTime) { + return nil, nil + } + + return &latestCompleteBackupJob, nil +} + +// getSnapshotWithLatestError takes a VolumeSnapshotList and returns a pointer to the +// snapshot that has most recently had an error. If no snapshot errors exist +// then it returns nil. 
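+//
+// Recency is tracked with a zero-time sentinel, so a snapshot whose Error.Time equals
+// the zero value is never selected and nil is returned when no errored snapshots exist.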
+func getSnapshotWithLatestError(snapshots *volumesnapshotv1.VolumeSnapshotList) *volumesnapshotv1.VolumeSnapshot { + zeroTime := metav1.NewTime(time.Time{}) + snapshotWithLatestError := volumesnapshotv1.VolumeSnapshot{ + Status: &volumesnapshotv1.VolumeSnapshotStatus{ + Error: &volumesnapshotv1.VolumeSnapshotError{ + Time: &zeroTime, + }, + }, + } + for _, snapshot := range snapshots.Items { + if snapshot.Status != nil && snapshot.Status.Error != nil && + snapshotWithLatestError.Status.Error.Time.Before(snapshot.Status.Error.Time) { + snapshotWithLatestError = snapshot + } + } + + if snapshotWithLatestError.Status.Error.Time.Equal(&zeroTime) { + return nil + } + + return &snapshotWithLatestError +} + +// getSnapshotsForCluster gets all the VolumeSnapshots for a given postgrescluster. +func (r *Reconciler) getSnapshotsForCluster(ctx context.Context, cluster *v1beta1.PostgresCluster) ( + *volumesnapshotv1.VolumeSnapshotList, error) { + + selectSnapshots, err := naming.AsSelector(naming.Cluster(cluster.Name)) + if err != nil { + return nil, err + } + snapshots := &volumesnapshotv1.VolumeSnapshotList{} + err = errors.WithStack( + r.Client.List(ctx, snapshots, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectSnapshots}, + )) + + return snapshots, err +} + +// getLatestReadySnapshot takes a VolumeSnapshotList and returns the latest ready VolumeSnapshot. +func getLatestReadySnapshot(snapshots *volumesnapshotv1.VolumeSnapshotList) *volumesnapshotv1.VolumeSnapshot { + zeroTime := metav1.NewTime(time.Time{}) + latestReadySnapshot := volumesnapshotv1.VolumeSnapshot{ + Status: &volumesnapshotv1.VolumeSnapshotStatus{ + CreationTime: &zeroTime, + }, + } + for _, snapshot := range snapshots.Items { + if snapshot.Status != nil && snapshot.Status.ReadyToUse != nil && *snapshot.Status.ReadyToUse && + latestReadySnapshot.Status.CreationTime.Before(snapshot.Status.CreationTime) { + latestReadySnapshot = snapshot + } + } + + if latestReadySnapshot.Status.CreationTime.Equal(&zeroTime) { + return nil + } + + return &latestReadySnapshot +} + +// deleteSnapshots takes a postgrescluster and a snapshot list and deletes all snapshots +// in the list that are controlled by the provided postgrescluster. +func (r *Reconciler) deleteSnapshots(ctx context.Context, + postgrescluster *v1beta1.PostgresCluster, snapshots *volumesnapshotv1.VolumeSnapshotList) error { + + for i := range snapshots.Items { + err := errors.WithStack(client.IgnoreNotFound( + r.deleteControlled(ctx, postgrescluster, &snapshots.Items[i]))) + if err != nil { + return err + } + } + return nil +} + +// tablespaceVolumesInUse determines if the TablespaceVolumes feature is enabled and the given +// cluster has tablespace volumes in place. +func clusterUsingTablespaces(ctx context.Context, postgrescluster *v1beta1.PostgresCluster) bool { + for _, instanceSet := range postgrescluster.Spec.InstanceSets { + if len(instanceSet.TablespaceVolumes) > 0 { + return feature.Enabled(ctx, feature.TablespaceVolumes) + } + } + return false +} diff --git a/internal/controller/postgrescluster/snapshots_test.go b/internal/controller/postgrescluster/snapshots_test.go new file mode 100644 index 0000000000..4c3d987ecd --- /dev/null +++ b/internal/controller/postgrescluster/snapshots_test.go @@ -0,0 +1,1476 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "testing" + "time" + + "github.com/pkg/errors" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/discovery" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/events" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" + + volumesnapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v8/apis/volumesnapshot/v1" +) + +func TestReconcileVolumeSnapshots(t *testing.T) { + ctx := context.Background() + cfg, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + discoveryClient, err := discovery.NewDiscoveryClientForConfig(cfg) + assert.NilError(t, err) + + recorder := events.NewRecorder(t, runtime.Scheme) + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + DiscoveryClient: discoveryClient, + Recorder: recorder, + } + ns := setupNamespace(t, cc) + + // Enable snapshots feature gate + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.VolumeSnapshots: true, + })) + ctx = feature.NewContext(ctx, gate) + + t.Run("SnapshotsDisabledDeleteSnapshots", func(t *testing.T) { + // Create cluster (without snapshots spec) + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.ObjectMeta.UID = "the-uid-123" + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + // Create a snapshot + pvc := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "dedicated-snapshot-volume", + }, + } + volumeSnapshotClassName := "my-snapshotclass" + snapshot, err := r.generateVolumeSnapshot(cluster, *pvc, volumeSnapshotClassName) + assert.NilError(t, err) + err = errors.WithStack(r.apply(ctx, snapshot)) + assert.NilError(t, err) + + // Get all snapshots for this cluster and assert 1 exists + selectSnapshots, err := naming.AsSelector(naming.Cluster(cluster.Name)) + assert.NilError(t, err) + snapshots := &volumesnapshotv1.VolumeSnapshotList{} + err = errors.WithStack( + r.Client.List(ctx, snapshots, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectSnapshots}, + )) + assert.NilError(t, err) + assert.Equal(t, len(snapshots.Items), 1) + + // Reconcile snapshots + err = r.reconcileVolumeSnapshots(ctx, cluster, pvc) + assert.NilError(t, err) + + // Get all snapshots for this cluster and assert 0 exist + assert.NilError(t, err) + snapshots = &volumesnapshotv1.VolumeSnapshotList{} + err = errors.WithStack( + r.Client.List(ctx, snapshots, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectSnapshots}, + )) + assert.NilError(t, err) + assert.Equal(t, len(snapshots.Items), 0) + }) + + t.Run("SnapshotsEnabledTablespacesEnabled", func(t *testing.T) { + // Enable both tablespaces and 
snapshots feature gates + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.TablespaceVolumes: true, + feature.VolumeSnapshots: true, + })) + ctx := feature.NewContext(ctx, gate) + + // Create a cluster with snapshots and tablespaces enabled + volumeSnapshotClassName := "my-snapshotclass" + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: volumeSnapshotClassName, + } + cluster.Spec.InstanceSets[0].TablespaceVolumes = []v1beta1.TablespaceVolume{{ + Name: "volume-1", + }} + + // Create pvc for reconcile + pvc := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "dedicated-snapshot-volume", + }, + } + + // Reconcile + err = r.reconcileVolumeSnapshots(ctx, cluster, pvc) + assert.NilError(t, err) + + // Assert warning event was created and has expected attributes + if assert.Check(t, len(recorder.Events) > 0) { + assert.Equal(t, recorder.Events[0].Type, "Warning") + assert.Equal(t, recorder.Events[0].Regarding.Kind, "PostgresCluster") + assert.Equal(t, recorder.Events[0].Regarding.Name, "hippo") + assert.Equal(t, recorder.Events[0].Reason, "IncompatibleFeatures") + assert.Assert(t, cmp.Contains(recorder.Events[0].Note, "VolumeSnapshots not currently compatible with TablespaceVolumes")) + } + }) + + t.Run("SnapshotsEnabledNoPvcAnnotation", func(t *testing.T) { + // Create a volume snapshot class + volumeSnapshotClassName := "my-snapshotclass" + volumeSnapshotClass := &volumesnapshotv1.VolumeSnapshotClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: volumeSnapshotClassName, + }, + DeletionPolicy: "Delete", + } + assert.NilError(t, r.Client.Create(ctx, volumeSnapshotClass)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, volumeSnapshotClass)) }) + + // Create a cluster with snapshots enabled + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: volumeSnapshotClassName, + } + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + // Create pvc for reconcile + pvc := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "dedicated-snapshot-volume", + }, + } + + // Reconcile + err = r.reconcileVolumeSnapshots(ctx, cluster, pvc) + assert.NilError(t, err) + + // Assert no snapshots exist + selectSnapshots, err := naming.AsSelector(naming.Cluster(cluster.Name)) + assert.NilError(t, err) + snapshots := &volumesnapshotv1.VolumeSnapshotList{} + err = errors.WithStack( + r.Client.List(ctx, snapshots, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectSnapshots}, + )) + assert.NilError(t, err) + assert.Equal(t, len(snapshots.Items), 0) + }) + + t.Run("SnapshotsEnabledReadySnapshotsExist", func(t *testing.T) { + // Create a volume snapshot class + volumeSnapshotClassName := "my-snapshotclass" + volumeSnapshotClass := &volumesnapshotv1.VolumeSnapshotClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: volumeSnapshotClassName, + }, + DeletionPolicy: "Delete", + } + assert.NilError(t, r.Client.Create(ctx, volumeSnapshotClass)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, volumeSnapshotClass)) }) + + // Create a cluster with snapshots enabled + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.ObjectMeta.UID = "the-uid-123" + cluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: 
volumeSnapshotClassName, + } + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + // Create pvc with annotation + pvcName := initialize.String("dedicated-snapshot-volume") + pvc := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: *pvcName, + Annotations: map[string]string{ + naming.PGBackRestBackupJobCompletion: "backup-timestamp", + }, + }, + } + + // Create snapshot with annotation matching the pvc annotation + snapshot1 := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "first-snapshot", + Namespace: ns.Name, + Annotations: map[string]string{ + naming.PGBackRestBackupJobCompletion: "backup-timestamp", + }, + Labels: map[string]string{ + naming.LabelCluster: "hippo", + }, + }, + Spec: volumesnapshotv1.VolumeSnapshotSpec{ + Source: volumesnapshotv1.VolumeSnapshotSource{ + PersistentVolumeClaimName: pvcName, + }, + }, + } + err := errors.WithStack(r.setControllerReference(cluster, snapshot1)) + assert.NilError(t, err) + err = r.apply(ctx, snapshot1) + assert.NilError(t, err) + + // Update snapshot status + truePtr := initialize.Bool(true) + snapshot1.Status = &volumesnapshotv1.VolumeSnapshotStatus{ + ReadyToUse: truePtr, + } + err = r.Client.Status().Update(ctx, snapshot1) + assert.NilError(t, err) + + // Create second snapshot with different annotation value + snapshot2 := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "second-snapshot", + Namespace: ns.Name, + Annotations: map[string]string{ + naming.PGBackRestBackupJobCompletion: "older-backup-timestamp", + }, + Labels: map[string]string{ + naming.LabelCluster: "hippo", + }, + }, + Spec: volumesnapshotv1.VolumeSnapshotSpec{ + Source: volumesnapshotv1.VolumeSnapshotSource{ + PersistentVolumeClaimName: pvcName, + }, + }, + } + err = errors.WithStack(r.setControllerReference(cluster, snapshot2)) + assert.NilError(t, err) + err = r.apply(ctx, snapshot2) + assert.NilError(t, err) + + // Update second snapshot's status + snapshot2.Status = &volumesnapshotv1.VolumeSnapshotStatus{ + ReadyToUse: truePtr, + } + err = r.Client.Status().Update(ctx, snapshot2) + assert.NilError(t, err) + + // Reconcile + err = r.reconcileVolumeSnapshots(ctx, cluster, pvc) + assert.NilError(t, err) + + // Assert first snapshot exists and second snapshot was deleted + selectSnapshots, err := naming.AsSelector(naming.Cluster(cluster.Name)) + assert.NilError(t, err) + snapshots := &volumesnapshotv1.VolumeSnapshotList{} + err = errors.WithStack( + r.Client.List(ctx, snapshots, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectSnapshots}, + )) + assert.NilError(t, err) + assert.Equal(t, len(snapshots.Items), 1) + assert.Equal(t, snapshots.Items[0].Name, "first-snapshot") + + // Cleanup + err = r.deleteControlled(ctx, cluster, snapshot1) + assert.NilError(t, err) + }) + + t.Run("SnapshotsEnabledCreateSnapshot", func(t *testing.T) { + // Create a volume snapshot class + volumeSnapshotClassName := "my-snapshotclass" + volumeSnapshotClass := &volumesnapshotv1.VolumeSnapshotClass{ + ObjectMeta: metav1.ObjectMeta{ + Name: volumeSnapshotClassName, + }, + DeletionPolicy: "Delete", + } + assert.NilError(t, r.Client.Create(ctx, volumeSnapshotClass)) + 
t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, volumeSnapshotClass)) }) + + // Create a cluster with snapshots enabled + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.ObjectMeta.UID = "the-uid-123" + cluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: volumeSnapshotClassName, + } + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + // Create pvc with annotation + pvcName := initialize.String("dedicated-snapshot-volume") + pvc := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: *pvcName, + Annotations: map[string]string{ + naming.PGBackRestBackupJobCompletion: "another-backup-timestamp", + }, + }, + } + + // Reconcile + err = r.reconcileVolumeSnapshots(ctx, cluster, pvc) + assert.NilError(t, err) + + // Assert that a snapshot was created + selectSnapshots, err := naming.AsSelector(naming.Cluster(cluster.Name)) + assert.NilError(t, err) + snapshots := &volumesnapshotv1.VolumeSnapshotList{} + err = errors.WithStack( + r.Client.List(ctx, snapshots, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectSnapshots}, + )) + assert.NilError(t, err) + assert.Equal(t, len(snapshots.Items), 1) + assert.Equal(t, snapshots.Items[0].Annotations[naming.PGBackRestBackupJobCompletion], + "another-backup-timestamp") + }) +} + +func TestReconcileDedicatedSnapshotVolume(t *testing.T) { + ctx := context.Background() + cfg, cc := setupKubernetes(t) + discoveryClient, err := discovery.NewDiscoveryClientForConfig(cfg) + assert.NilError(t, err) + + recorder := events.NewRecorder(t, runtime.Scheme) + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + DiscoveryClient: discoveryClient, + Recorder: recorder, + } + + // Enable snapshots feature gate + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.VolumeSnapshots: true, + })) + ctx = feature.NewContext(ctx, gate) + + t.Run("SnapshotsDisabledDeletePvc", func(t *testing.T) { + // Create cluster without snapshots spec + ns := setupNamespace(t, cc) + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.ObjectMeta.UID = "the-uid-123" + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + // Create a dedicated snapshot volume + pvc := &corev1.PersistentVolumeClaim{ + TypeMeta: metav1.TypeMeta{ + Kind: "PersistentVolumeClaim", + APIVersion: corev1.SchemeGroupVersion.String(), + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "dedicated-snapshot-volume", + Namespace: ns.Name, + Labels: map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RoleSnapshot, + naming.LabelData: naming.DataPostgres, + }, + }, + Spec: testVolumeClaimSpec(), + } + err = errors.WithStack(r.setControllerReference(cluster, pvc)) + assert.NilError(t, err) + err = r.apply(ctx, pvc) + assert.NilError(t, err) + + // Assert that the pvc was created + selectPvcs, err := naming.AsSelector(naming.Cluster(cluster.Name)) + assert.NilError(t, err) + pvcs := &corev1.PersistentVolumeClaimList{} + err = errors.WithStack( + r.Client.List(ctx, pvcs, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectPvcs}, + )) + assert.NilError(t, err) + assert.Equal(t, len(pvcs.Items), 1) + + // Create volumes for reconcile + clusterVolumes := []corev1.PersistentVolumeClaim{*pvc} + + // Reconcile + returned, err := 
r.reconcileDedicatedSnapshotVolume(ctx, cluster, clusterVolumes) + assert.NilError(t, err) + assert.Check(t, returned == nil) + + // Assert that the pvc has been deleted or marked for deletion + key, fetched := client.ObjectKeyFromObject(pvc), &corev1.PersistentVolumeClaim{} + if err := r.Client.Get(ctx, key, fetched); err == nil { + assert.Assert(t, fetched.DeletionTimestamp != nil, "expected deleted") + } else { + assert.Assert(t, apierrors.IsNotFound(err), "expected NotFound, got %v", err) + } + }) + + t.Run("SnapshotsEnabledCreatePvcNoBackupNoRestore", func(t *testing.T) { + // Create cluster with snapshots enabled + ns := setupNamespace(t, cc) + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.ObjectMeta.UID = "the-uid-123" + cluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: "my-snapshotclass", + } + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + // Create volumes for reconcile + clusterVolumes := []corev1.PersistentVolumeClaim{} + + // Reconcile + pvc, err := r.reconcileDedicatedSnapshotVolume(ctx, cluster, clusterVolumes) + assert.NilError(t, err) + assert.Assert(t, pvc != nil) + + // Assert pvc was created + selectPvcs, err := naming.AsSelector(naming.Cluster(cluster.Name)) + assert.NilError(t, err) + pvcs := &corev1.PersistentVolumeClaimList{} + err = errors.WithStack( + r.Client.List(ctx, pvcs, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectPvcs}, + )) + assert.NilError(t, err) + assert.Equal(t, len(pvcs.Items), 1) + }) + + t.Run("SnapshotsEnabledBackupExistsCreateRestore", func(t *testing.T) { + // Create cluster with snapshots enabled + ns := setupNamespace(t, cc) + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.ObjectMeta.UID = "the-uid-123" + cluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: "my-snapshotclass", + } + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + // Create successful backup job + backupJob := testBackupJob(cluster) + err = errors.WithStack(r.setControllerReference(cluster, backupJob)) + assert.NilError(t, err) + err = r.apply(ctx, backupJob) + assert.NilError(t, err) + + currentTime := metav1.Now() + backupJob.Status = batchv1.JobStatus{ + Succeeded: 1, + CompletionTime: ¤tTime, + } + err = r.Client.Status().Update(ctx, backupJob) + assert.NilError(t, err) + + // Create instance set and volumes for reconcile + sts := &appsv1.StatefulSet{} + generateInstanceStatefulSetIntent(ctx, cluster, &cluster.Spec.InstanceSets[0], "pod-service", "service-account", sts, 1) + clusterVolumes := []corev1.PersistentVolumeClaim{} + + // Reconcile + pvc, err := r.reconcileDedicatedSnapshotVolume(ctx, cluster, clusterVolumes) + assert.NilError(t, err) + assert.Assert(t, pvc != nil) + + // Assert restore job with annotation was created + restoreJobs := &batchv1.JobList{} + selectJobs, err := naming.AsSelector(naming.ClusterRestoreJobs(cluster.Name)) + assert.NilError(t, err) + err = errors.WithStack( + r.Client.List(ctx, restoreJobs, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectJobs}, + )) + assert.NilError(t, err) + assert.Equal(t, len(restoreJobs.Items), 1) + assert.Assert(t, restoreJobs.Items[0].Annotations[naming.PGBackRestBackupJobCompletion] != "") + }) + + t.Run("SnapshotsEnabledSuccessfulRestoreExists", func(t *testing.T) { + // 
Create cluster with snapshots enabled + ns := setupNamespace(t, cc) + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.ObjectMeta.UID = "the-uid-123" + cluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: "my-snapshotclass", + } + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + // Create times for jobs + currentTime := metav1.Now() + earlierTime := metav1.NewTime(currentTime.AddDate(-1, 0, 0)) + + // Create successful backup job + backupJob := testBackupJob(cluster) + err = errors.WithStack(r.setControllerReference(cluster, backupJob)) + assert.NilError(t, err) + err = r.apply(ctx, backupJob) + assert.NilError(t, err) + + backupJob.Status = batchv1.JobStatus{ + Succeeded: 1, + CompletionTime: &earlierTime, + } + err = r.Client.Status().Update(ctx, backupJob) + assert.NilError(t, err) + + // Create successful restore job + restoreJob := testRestoreJob(cluster) + restoreJob.Annotations = map[string]string{ + naming.PGBackRestBackupJobCompletion: backupJob.Status.CompletionTime.Format(time.RFC3339), + } + err = errors.WithStack(r.setControllerReference(cluster, restoreJob)) + assert.NilError(t, err) + err = r.apply(ctx, restoreJob) + assert.NilError(t, err) + + restoreJob.Status = batchv1.JobStatus{ + Succeeded: 1, + CompletionTime: ¤tTime, + } + err = r.Client.Status().Update(ctx, restoreJob) + assert.NilError(t, err) + + // Create instance set and volumes for reconcile + sts := &appsv1.StatefulSet{} + generateInstanceStatefulSetIntent(ctx, cluster, &cluster.Spec.InstanceSets[0], "pod-service", "service-account", sts, 1) + clusterVolumes := []corev1.PersistentVolumeClaim{} + + // Reconcile + pvc, err := r.reconcileDedicatedSnapshotVolume(ctx, cluster, clusterVolumes) + assert.NilError(t, err) + assert.Assert(t, pvc != nil) + + // Assert restore job was deleted + restoreJobs := &batchv1.JobList{} + selectJobs, err := naming.AsSelector(naming.ClusterRestoreJobs(cluster.Name)) + assert.NilError(t, err) + err = errors.WithStack( + r.Client.List(ctx, restoreJobs, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectJobs}, + )) + assert.NilError(t, err) + assert.Equal(t, len(restoreJobs.Items), 0) + + // Assert pvc was annotated + assert.Equal(t, pvc.GetAnnotations()[naming.PGBackRestBackupJobCompletion], backupJob.Status.CompletionTime.Format(time.RFC3339)) + }) + + t.Run("SnapshotsEnabledFailedRestoreExists", func(t *testing.T) { + // Create cluster with snapshots enabled + ns := setupNamespace(t, cc) + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.ObjectMeta.UID = "the-uid-123" + cluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: "my-snapshotclass", + } + assert.NilError(t, r.Client.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, r.Client.Delete(ctx, cluster)) }) + + // Create times for jobs + currentTime := metav1.Now() + earlierTime := metav1.NewTime(currentTime.AddDate(-1, 0, 0)) + + // Create successful backup job + backupJob := testBackupJob(cluster) + err = errors.WithStack(r.setControllerReference(cluster, backupJob)) + assert.NilError(t, err) + err = r.apply(ctx, backupJob) + assert.NilError(t, err) + + backupJob.Status = batchv1.JobStatus{ + Succeeded: 1, + CompletionTime: &earlierTime, + } + err = r.Client.Status().Update(ctx, backupJob) + assert.NilError(t, err) + + // Create failed restore job + restoreJob := testRestoreJob(cluster) + restoreJob.Annotations = 
map[string]string{ + naming.PGBackRestBackupJobCompletion: backupJob.Status.CompletionTime.Format(time.RFC3339), + } + err = errors.WithStack(r.setControllerReference(cluster, restoreJob)) + assert.NilError(t, err) + err = r.apply(ctx, restoreJob) + assert.NilError(t, err) + + restoreJob.Status = batchv1.JobStatus{ + Succeeded: 0, + Failed: 1, + CompletionTime: ¤tTime, + } + err = r.Client.Status().Update(ctx, restoreJob) + assert.NilError(t, err) + + // Setup instances and volumes for reconcile + sts := &appsv1.StatefulSet{} + generateInstanceStatefulSetIntent(ctx, cluster, &cluster.Spec.InstanceSets[0], "pod-service", "service-account", sts, 1) + clusterVolumes := []corev1.PersistentVolumeClaim{} + + // Reconcile + pvc, err := r.reconcileDedicatedSnapshotVolume(ctx, cluster, clusterVolumes) + assert.NilError(t, err) + assert.Assert(t, pvc != nil) + + // Assert warning event was created and has expected attributes + if assert.Check(t, len(recorder.Events) > 0) { + assert.Equal(t, recorder.Events[0].Type, "Warning") + assert.Equal(t, recorder.Events[0].Regarding.Kind, "PostgresCluster") + assert.Equal(t, recorder.Events[0].Regarding.Name, "hippo") + assert.Equal(t, recorder.Events[0].Reason, "DedicatedSnapshotVolumeRestoreJobError") + assert.Assert(t, cmp.Contains(recorder.Events[0].Note, "restore job failed, check the logs")) + } + }) +} + +func TestCreateDedicatedSnapshotVolume(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } + + ns := setupNamespace(t, cc) + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.ObjectMeta.UID = "the-uid-123" + + labelMap := map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelRole: naming.RoleSnapshot, + naming.LabelData: naming.DataPostgres, + } + pvc := &corev1.PersistentVolumeClaim{ObjectMeta: naming.ClusterDedicatedSnapshotVolume(cluster)} + pvc.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim")) + + pvc, err := r.createDedicatedSnapshotVolume(ctx, cluster, labelMap, pvc) + assert.NilError(t, err) + assert.Assert(t, metav1.IsControlledBy(pvc, cluster)) + assert.Equal(t, pvc.Spec.Resources.Requests[corev1.ResourceStorage], resource.MustParse("1Gi")) +} + +func TestDedicatedSnapshotVolumeRestore(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } + + ns := setupNamespace(t, cc) + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.ObjectMeta.UID = "the-uid-123" + + pvc := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "dedicated-snapshot-volume", + }, + } + + sts := &appsv1.StatefulSet{} + generateInstanceStatefulSetIntent(ctx, cluster, &cluster.Spec.InstanceSets[0], "pod-service", "service-account", sts, 1) + currentTime := metav1.Now() + backupJob := testBackupJob(cluster) + backupJob.Status.CompletionTime = ¤tTime + + err := r.dedicatedSnapshotVolumeRestore(ctx, cluster, pvc, backupJob) + assert.NilError(t, err) + + // Assert a restore job was created that has the correct annotation + jobs := &batchv1.JobList{} + selectJobs, err := naming.AsSelector(naming.ClusterRestoreJobs(cluster.Name)) + assert.NilError(t, err) + err = errors.WithStack( + r.Client.List(ctx, jobs, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selectJobs}, + )) + assert.NilError(t, err) + assert.Equal(t, len(jobs.Items), 1) + assert.Equal(t, 
jobs.Items[0].Annotations[naming.PGBackRestBackupJobCompletion], + backupJob.Status.CompletionTime.Format(time.RFC3339)) +} + +func TestGenerateSnapshotOfDedicatedSnapshotVolume(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } + ns := setupNamespace(t, cc) + + cluster := testCluster() + cluster.Namespace = ns.Name + cluster.Spec.Backups.Snapshots = &v1beta1.VolumeSnapshots{ + VolumeSnapshotClassName: "my-snapshot", + } + + pvc := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: map[string]string{ + naming.PGBackRestBackupJobCompletion: "backup-completion-timestamp", + }, + Name: "dedicated-snapshot-volume", + }, + } + + snapshot, err := r.generateSnapshotOfDedicatedSnapshotVolume(cluster, pvc) + assert.NilError(t, err) + assert.Equal(t, snapshot.GetAnnotations()[naming.PGBackRestBackupJobCompletion], + "backup-completion-timestamp") +} + +func TestGenerateVolumeSnapshot(t *testing.T) { + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } + ns := setupNamespace(t, cc) + + cluster := testCluster() + cluster.Namespace = ns.Name + + pvc := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "dedicated-snapshot-volume", + }, + } + volumeSnapshotClassName := "my-snapshot" + + snapshot, err := r.generateVolumeSnapshot(cluster, *pvc, volumeSnapshotClassName) + assert.NilError(t, err) + assert.Equal(t, *snapshot.Spec.VolumeSnapshotClassName, "my-snapshot") + assert.Equal(t, *snapshot.Spec.Source.PersistentVolumeClaimName, "dedicated-snapshot-volume") + assert.Equal(t, snapshot.Labels[naming.LabelCluster], "hippo") + assert.Equal(t, snapshot.ObjectMeta.OwnerReferences[0].Name, "hippo") +} + +func TestGetDedicatedSnapshotVolumeRestoreJob(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } + ns := setupNamespace(t, cc) + + cluster := testCluster() + cluster.Namespace = ns.Name + + t.Run("NoRestoreJobs", func(t *testing.T) { + dsvRestoreJob, err := r.getDedicatedSnapshotVolumeRestoreJob(ctx, cluster) + assert.NilError(t, err) + assert.Check(t, dsvRestoreJob == nil) + }) + + t.Run("NoDsvRestoreJobs", func(t *testing.T) { + job1 := testRestoreJob(cluster) + job1.Namespace = ns.Name + + err := r.apply(ctx, job1) + assert.NilError(t, err) + + dsvRestoreJob, err := r.getDedicatedSnapshotVolumeRestoreJob(ctx, cluster) + assert.NilError(t, err) + assert.Check(t, dsvRestoreJob == nil) + }) + + t.Run("DsvRestoreJobExists", func(t *testing.T) { + job2 := testRestoreJob(cluster) + job2.Name = "restore-job-2" + job2.Namespace = ns.Name + job2.Annotations = map[string]string{ + naming.PGBackRestBackupJobCompletion: "backup-timestamp", + } + + err := r.apply(ctx, job2) + assert.NilError(t, err) + + job3 := testRestoreJob(cluster) + job3.Name = "restore-job-3" + job3.Namespace = ns.Name + + err = r.apply(ctx, job3) + assert.NilError(t, err) + + dsvRestoreJob, err := r.getDedicatedSnapshotVolumeRestoreJob(ctx, cluster) + assert.NilError(t, err) + assert.Assert(t, dsvRestoreJob != nil) + assert.Equal(t, dsvRestoreJob.Name, "restore-job-2") + }) +} + +func TestGetLatestCompleteBackupJob(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + // require.ParallelCapacity(t, 1) + + r := &Reconciler{ + Client: cc, + Owner: 
client.FieldOwner(t.Name()), + } + ns := setupNamespace(t, cc) + + cluster := testCluster() + cluster.Namespace = ns.Name + + t.Run("NoJobs", func(t *testing.T) { + latestCompleteBackupJob, err := r.getLatestCompleteBackupJob(ctx, cluster) + assert.NilError(t, err) + assert.Check(t, latestCompleteBackupJob == nil) + }) + + t.Run("NoCompleteJobs", func(t *testing.T) { + job1 := testBackupJob(cluster) + job1.Namespace = ns.Name + + err := r.apply(ctx, job1) + assert.NilError(t, err) + + latestCompleteBackupJob, err := r.getLatestCompleteBackupJob(ctx, cluster) + assert.NilError(t, err) + assert.Check(t, latestCompleteBackupJob == nil) + }) + + t.Run("OneCompleteBackupJob", func(t *testing.T) { + currentTime := metav1.Now() + + job1 := testBackupJob(cluster) + job1.Namespace = ns.Name + + err := r.apply(ctx, job1) + assert.NilError(t, err) + + job2 := testBackupJob(cluster) + job2.Namespace = ns.Name + job2.Name = "backup-job-2" + + err = r.apply(ctx, job2) + assert.NilError(t, err) + + // Get job1 and update Status. + err = r.Client.Get(ctx, client.ObjectKeyFromObject(job1), job1) + assert.NilError(t, err) + + job1.Status = batchv1.JobStatus{ + Succeeded: 1, + CompletionTime: ¤tTime, + } + err = r.Client.Status().Update(ctx, job1) + assert.NilError(t, err) + + latestCompleteBackupJob, err := r.getLatestCompleteBackupJob(ctx, cluster) + assert.NilError(t, err) + assert.Check(t, latestCompleteBackupJob.Name == "backup-job-1") + }) + + t.Run("TwoCompleteBackupJobs", func(t *testing.T) { + currentTime := metav1.Now() + earlierTime := metav1.NewTime(currentTime.AddDate(-1, 0, 0)) + assert.Check(t, earlierTime.Before(¤tTime)) + + job1 := testBackupJob(cluster) + job1.Namespace = ns.Name + + err := r.apply(ctx, job1) + assert.NilError(t, err) + + job2 := testBackupJob(cluster) + job2.Namespace = ns.Name + job2.Name = "backup-job-2" + + err = r.apply(ctx, job2) + assert.NilError(t, err) + + // Get job1 and update Status. + err = r.Client.Get(ctx, client.ObjectKeyFromObject(job1), job1) + assert.NilError(t, err) + + job1.Status = batchv1.JobStatus{ + Succeeded: 1, + CompletionTime: ¤tTime, + } + err = r.Client.Status().Update(ctx, job1) + assert.NilError(t, err) + + // Get job2 and update Status. 
+ err = r.Client.Get(ctx, client.ObjectKeyFromObject(job2), job2) + assert.NilError(t, err) + + job2.Status = batchv1.JobStatus{ + Succeeded: 1, + CompletionTime: &earlierTime, + } + err = r.Client.Status().Update(ctx, job2) + assert.NilError(t, err) + + latestCompleteBackupJob, err := r.getLatestCompleteBackupJob(ctx, cluster) + assert.NilError(t, err) + assert.Check(t, latestCompleteBackupJob.Name == "backup-job-1") + }) +} + +func TestGetSnapshotWithLatestError(t *testing.T) { + t.Run("NoSnapshots", func(t *testing.T) { + snapshotList := &volumesnapshotv1.VolumeSnapshotList{} + snapshotWithLatestError := getSnapshotWithLatestError(snapshotList) + assert.Check(t, snapshotWithLatestError == nil) + }) + + t.Run("NoSnapshotsWithStatus", func(t *testing.T) { + snapshotList := &volumesnapshotv1.VolumeSnapshotList{ + Items: []volumesnapshotv1.VolumeSnapshot{ + {}, + {}, + }, + } + snapshotWithLatestError := getSnapshotWithLatestError(snapshotList) + assert.Check(t, snapshotWithLatestError == nil) + }) + + t.Run("NoSnapshotsWithErrors", func(t *testing.T) { + snapshotList := &volumesnapshotv1.VolumeSnapshotList{ + Items: []volumesnapshotv1.VolumeSnapshot{ + { + Status: &volumesnapshotv1.VolumeSnapshotStatus{ + ReadyToUse: initialize.Bool(true), + }, + }, + { + Status: &volumesnapshotv1.VolumeSnapshotStatus{ + ReadyToUse: initialize.Bool(false), + }, + }, + }, + } + snapshotWithLatestError := getSnapshotWithLatestError(snapshotList) + assert.Check(t, snapshotWithLatestError == nil) + }) + + t.Run("OneSnapshotWithError", func(t *testing.T) { + currentTime := metav1.Now() + earlierTime := metav1.NewTime(currentTime.AddDate(-1, 0, 0)) + snapshotList := &volumesnapshotv1.VolumeSnapshotList{ + Items: []volumesnapshotv1.VolumeSnapshot{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "good-snapshot", + UID: "the-uid-123", + }, + Status: &volumesnapshotv1.VolumeSnapshotStatus{ + CreationTime: ¤tTime, + ReadyToUse: initialize.Bool(true), + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "bad-snapshot", + UID: "the-uid-456", + }, + Status: &volumesnapshotv1.VolumeSnapshotStatus{ + ReadyToUse: initialize.Bool(false), + Error: &volumesnapshotv1.VolumeSnapshotError{ + Time: &earlierTime, + }, + }, + }, + }, + } + snapshotWithLatestError := getSnapshotWithLatestError(snapshotList) + assert.Equal(t, snapshotWithLatestError.ObjectMeta.Name, "bad-snapshot") + }) + + t.Run("TwoSnapshotsWithErrors", func(t *testing.T) { + currentTime := metav1.Now() + earlierTime := metav1.NewTime(currentTime.AddDate(-1, 0, 0)) + snapshotList := &volumesnapshotv1.VolumeSnapshotList{ + Items: []volumesnapshotv1.VolumeSnapshot{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "first-bad-snapshot", + UID: "the-uid-123", + }, + Status: &volumesnapshotv1.VolumeSnapshotStatus{ + ReadyToUse: initialize.Bool(false), + Error: &volumesnapshotv1.VolumeSnapshotError{ + Time: &earlierTime, + }, + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "second-bad-snapshot", + UID: "the-uid-456", + }, + Status: &volumesnapshotv1.VolumeSnapshotStatus{ + ReadyToUse: initialize.Bool(false), + Error: &volumesnapshotv1.VolumeSnapshotError{ + Time: ¤tTime, + }, + }, + }, + }, + } + snapshotWithLatestError := getSnapshotWithLatestError(snapshotList) + assert.Equal(t, snapshotWithLatestError.ObjectMeta.Name, "second-bad-snapshot") + }) +} + +func TestGetSnapshotsForCluster(t *testing.T) { + ctx := context.Background() + _, cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } 
+ ns := setupNamespace(t, cc) + + cluster := testCluster() + cluster.Namespace = ns.Name + + t.Run("NoSnapshots", func(t *testing.T) { + snapshots, err := r.getSnapshotsForCluster(ctx, cluster) + assert.NilError(t, err) + assert.Equal(t, len(snapshots.Items), 0) + }) + + t.Run("NoSnapshotsForCluster", func(t *testing.T) { + snapshot := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "some-snapshot", + Namespace: ns.Name, + Labels: map[string]string{ + naming.LabelCluster: "rhino", + }, + }, + } + snapshot.Spec.Source.PersistentVolumeClaimName = initialize.String("some-pvc-name") + snapshot.Spec.VolumeSnapshotClassName = initialize.String("some-class-name") + err := r.apply(ctx, snapshot) + assert.NilError(t, err) + + snapshots, err := r.getSnapshotsForCluster(ctx, cluster) + assert.NilError(t, err) + assert.Equal(t, len(snapshots.Items), 0) + }) + + t.Run("OneSnapshotForCluster", func(t *testing.T) { + snapshot1 := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "some-snapshot", + Namespace: ns.Name, + Labels: map[string]string{ + naming.LabelCluster: "rhino", + }, + }, + } + snapshot1.Spec.Source.PersistentVolumeClaimName = initialize.String("some-pvc-name") + snapshot1.Spec.VolumeSnapshotClassName = initialize.String("some-class-name") + err := r.apply(ctx, snapshot1) + assert.NilError(t, err) + + snapshot2 := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "another-snapshot", + Namespace: ns.Name, + Labels: map[string]string{ + naming.LabelCluster: "hippo", + }, + }, + } + snapshot2.Spec.Source.PersistentVolumeClaimName = initialize.String("another-pvc-name") + snapshot2.Spec.VolumeSnapshotClassName = initialize.String("another-class-name") + err = r.apply(ctx, snapshot2) + assert.NilError(t, err) + + snapshots, err := r.getSnapshotsForCluster(ctx, cluster) + assert.NilError(t, err) + assert.Equal(t, len(snapshots.Items), 1) + assert.Equal(t, snapshots.Items[0].Name, "another-snapshot") + }) + + t.Run("TwoSnapshotsForCluster", func(t *testing.T) { + snapshot1 := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "some-snapshot", + Namespace: ns.Name, + Labels: map[string]string{ + naming.LabelCluster: "hippo", + }, + }, + } + snapshot1.Spec.Source.PersistentVolumeClaimName = initialize.String("some-pvc-name") + snapshot1.Spec.VolumeSnapshotClassName = initialize.String("some-class-name") + err := r.apply(ctx, snapshot1) + assert.NilError(t, err) + + snapshot2 := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "another-snapshot", + Namespace: ns.Name, + Labels: map[string]string{ + naming.LabelCluster: "hippo", + }, + }, + } + snapshot2.Spec.Source.PersistentVolumeClaimName = initialize.String("another-pvc-name") + snapshot2.Spec.VolumeSnapshotClassName = initialize.String("another-class-name") + err = r.apply(ctx, snapshot2) + assert.NilError(t, err) + + snapshots, err 
:= r.getSnapshotsForCluster(ctx, cluster)
+        assert.NilError(t, err)
+        assert.Equal(t, len(snapshots.Items), 2)
+    })
+}
+
+func TestGetLatestReadySnapshot(t *testing.T) {
+    t.Run("NoSnapshots", func(t *testing.T) {
+        snapshotList := &volumesnapshotv1.VolumeSnapshotList{}
+        latestReadySnapshot := getLatestReadySnapshot(snapshotList)
+        assert.Assert(t, latestReadySnapshot == nil)
+    })
+
+    t.Run("NoSnapshotsWithStatus", func(t *testing.T) {
+        snapshotList := &volumesnapshotv1.VolumeSnapshotList{
+            Items: []volumesnapshotv1.VolumeSnapshot{
+                {},
+                {},
+            },
+        }
+        latestReadySnapshot := getLatestReadySnapshot(snapshotList)
+        assert.Assert(t, latestReadySnapshot == nil)
+    })
+
+    t.Run("NoReadySnapshots", func(t *testing.T) {
+        snapshotList := &volumesnapshotv1.VolumeSnapshotList{
+            Items: []volumesnapshotv1.VolumeSnapshot{
+                {
+                    Status: &volumesnapshotv1.VolumeSnapshotStatus{
+                        ReadyToUse: initialize.Bool(false),
+                    },
+                },
+                {
+                    Status: &volumesnapshotv1.VolumeSnapshotStatus{
+                        ReadyToUse: initialize.Bool(false),
+                    },
+                },
+            },
+        }
+        latestReadySnapshot := getLatestReadySnapshot(snapshotList)
+        assert.Assert(t, latestReadySnapshot == nil)
+    })
+
+    t.Run("OneReadySnapshot", func(t *testing.T) {
+        currentTime := metav1.Now()
+        earlierTime := metav1.NewTime(currentTime.AddDate(-1, 0, 0))
+        snapshotList := &volumesnapshotv1.VolumeSnapshotList{
+            Items: []volumesnapshotv1.VolumeSnapshot{
+                {
+                    ObjectMeta: metav1.ObjectMeta{
+                        Name: "good-snapshot",
+                        UID:  "the-uid-123",
+                    },
+                    Status: &volumesnapshotv1.VolumeSnapshotStatus{
+                        CreationTime: &earlierTime,
+                        ReadyToUse:   initialize.Bool(true),
+                    },
+                },
+                {
+                    ObjectMeta: metav1.ObjectMeta{
+                        Name: "bad-snapshot",
+                        UID:  "the-uid-456",
+                    },
+                    Status: &volumesnapshotv1.VolumeSnapshotStatus{
+                        CreationTime: &currentTime,
+                        ReadyToUse:   initialize.Bool(false),
+                    },
+                },
+            },
+        }
+        latestReadySnapshot := getLatestReadySnapshot(snapshotList)
+        assert.Equal(t, latestReadySnapshot.ObjectMeta.Name, "good-snapshot")
+    })
+
+    t.Run("TwoReadySnapshots", func(t *testing.T) {
+        currentTime := metav1.Now()
+        earlierTime := metav1.NewTime(currentTime.AddDate(-1, 0, 0))
+        snapshotList := &volumesnapshotv1.VolumeSnapshotList{
+            Items: []volumesnapshotv1.VolumeSnapshot{
+                {
+                    ObjectMeta: metav1.ObjectMeta{
+                        Name: "first-good-snapshot",
+                        UID:  "the-uid-123",
+                    },
+                    Status: &volumesnapshotv1.VolumeSnapshotStatus{
+                        CreationTime: &earlierTime,
+                        ReadyToUse:   initialize.Bool(true),
+                    },
+                },
+                {
+                    ObjectMeta: metav1.ObjectMeta{
+                        Name: "second-good-snapshot",
+                        UID:  "the-uid-456",
+                    },
+                    Status: &volumesnapshotv1.VolumeSnapshotStatus{
+                        CreationTime: &currentTime,
+                        ReadyToUse:   initialize.Bool(true),
+                    },
+                },
+            },
+        }
+        latestReadySnapshot := getLatestReadySnapshot(snapshotList)
+        assert.Equal(t, latestReadySnapshot.ObjectMeta.Name, "second-good-snapshot")
+    })
+}
+
+func TestDeleteSnapshots(t *testing.T) {
+    ctx := context.Background()
+    cfg, cc := setupKubernetes(t)
+    discoveryClient, err := discovery.NewDiscoveryClientForConfig(cfg)
+    assert.NilError(t, err)
+
+    r := &Reconciler{
+        Client:          cc,
+        Owner:           client.FieldOwner(t.Name()),
+        DiscoveryClient: discoveryClient,
+    }
+    ns := setupNamespace(t, cc)
+
+    cluster := testCluster()
+    cluster.Namespace = ns.Name
+    cluster.ObjectMeta.UID = "the-uid-123"
+    assert.NilError(t, r.Client.Create(ctx, cluster))
+
+    rhinoCluster := testCluster()
+    rhinoCluster.Name = "rhino"
+    rhinoCluster.Namespace = ns.Name
+    rhinoCluster.ObjectMeta.UID = "the-uid-456"
+    assert.NilError(t, r.Client.Create(ctx, rhinoCluster))
+
+    t.Cleanup(func() {
+        assert.Check(t,
r.Client.Delete(ctx, cluster)) + assert.Check(t, r.Client.Delete(ctx, rhinoCluster)) + }) + + t.Run("NoSnapshots", func(t *testing.T) { + snapshotList := &volumesnapshotv1.VolumeSnapshotList{} + err := r.deleteSnapshots(ctx, cluster, snapshotList) + assert.NilError(t, err) + }) + + t.Run("NoSnapshotsControlledByHippo", func(t *testing.T) { + pvcName := initialize.String("dedicated-snapshot-volume") + snapshot1 := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "first-snapshot", + Namespace: ns.Name, + }, + Spec: volumesnapshotv1.VolumeSnapshotSpec{ + Source: volumesnapshotv1.VolumeSnapshotSource{ + PersistentVolumeClaimName: pvcName, + }, + }, + } + err := errors.WithStack(r.setControllerReference(rhinoCluster, snapshot1)) + assert.NilError(t, err) + err = r.apply(ctx, snapshot1) + assert.NilError(t, err) + + snapshotList := &volumesnapshotv1.VolumeSnapshotList{ + Items: []volumesnapshotv1.VolumeSnapshot{ + *snapshot1, + }, + } + err = r.deleteSnapshots(ctx, cluster, snapshotList) + assert.NilError(t, err) + existingSnapshots := &volumesnapshotv1.VolumeSnapshotList{} + err = errors.WithStack( + r.Client.List(ctx, existingSnapshots, + client.InNamespace(ns.Namespace), + )) + assert.NilError(t, err) + assert.Equal(t, len(existingSnapshots.Items), 1) + }) + + t.Run("OneSnapshotControlledByHippo", func(t *testing.T) { + pvcName := initialize.String("dedicated-snapshot-volume") + snapshot1 := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "first-snapshot", + Namespace: ns.Name, + }, + Spec: volumesnapshotv1.VolumeSnapshotSpec{ + Source: volumesnapshotv1.VolumeSnapshotSource{ + PersistentVolumeClaimName: pvcName, + }, + }, + } + err := errors.WithStack(r.setControllerReference(rhinoCluster, snapshot1)) + assert.NilError(t, err) + err = r.apply(ctx, snapshot1) + assert.NilError(t, err) + + snapshot2 := &volumesnapshotv1.VolumeSnapshot{ + TypeMeta: metav1.TypeMeta{ + APIVersion: volumesnapshotv1.SchemeGroupVersion.String(), + Kind: "VolumeSnapshot", + }, + ObjectMeta: metav1.ObjectMeta{ + Name: "second-snapshot", + Namespace: ns.Name, + }, + Spec: volumesnapshotv1.VolumeSnapshotSpec{ + Source: volumesnapshotv1.VolumeSnapshotSource{ + PersistentVolumeClaimName: pvcName, + }, + }, + } + err = errors.WithStack(r.setControllerReference(cluster, snapshot2)) + assert.NilError(t, err) + err = r.apply(ctx, snapshot2) + assert.NilError(t, err) + + snapshotList := &volumesnapshotv1.VolumeSnapshotList{ + Items: []volumesnapshotv1.VolumeSnapshot{ + *snapshot1, *snapshot2, + }, + } + err = r.deleteSnapshots(ctx, cluster, snapshotList) + assert.NilError(t, err) + existingSnapshots := &volumesnapshotv1.VolumeSnapshotList{} + err = errors.WithStack( + r.Client.List(ctx, existingSnapshots, + client.InNamespace(ns.Namespace), + )) + assert.NilError(t, err) + assert.Equal(t, len(existingSnapshots.Items), 1) + assert.Equal(t, existingSnapshots.Items[0].Name, "first-snapshot") + }) +} + +func TestClusterUsingTablespaces(t *testing.T) { + ctx := context.Background() + cluster := testCluster() + + t.Run("NoVolumesFeatureEnabled", func(t *testing.T) { + // Enable Tablespaces feature gate + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.TablespaceVolumes: true, + })) + ctx := 
feature.NewContext(ctx, gate) + + assert.Assert(t, !clusterUsingTablespaces(ctx, cluster)) + }) + + t.Run("VolumesInPlaceFeatureDisabled", func(t *testing.T) { + cluster.Spec.InstanceSets[0].TablespaceVolumes = []v1beta1.TablespaceVolume{{ + Name: "volume-1", + }} + + assert.Assert(t, !clusterUsingTablespaces(ctx, cluster)) + }) + + t.Run("VolumesInPlaceAndFeatureEnabled", func(t *testing.T) { + // Enable Tablespaces feature gate + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.TablespaceVolumes: true, + })) + ctx := feature.NewContext(ctx, gate) + + assert.Assert(t, clusterUsingTablespaces(ctx, cluster)) + }) +} diff --git a/internal/controller/postgrescluster/suite_test.go b/internal/controller/postgrescluster/suite_test.go new file mode 100644 index 0000000000..2a0e3d76ec --- /dev/null +++ b/internal/controller/postgrescluster/suite_test.go @@ -0,0 +1,84 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "os" + "path/filepath" + "strings" + "testing" + + . "github.com/onsi/ginkgo/v2" + . "github.com/onsi/gomega" + "k8s.io/apimachinery/pkg/util/version" + "k8s.io/client-go/discovery" + + // Google Kubernetes Engine / Google Cloud Platform authentication provider + _ "k8s.io/client-go/plugin/pkg/client/auth/gcp" + "k8s.io/client-go/rest" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/envtest" + "sigs.k8s.io/controller-runtime/pkg/log" + "sigs.k8s.io/controller-runtime/pkg/manager" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/logging" +) + +var suite struct { + Client client.Client + Config *rest.Config + + Environment *envtest.Environment + ServerVersion *version.Version + + Manager manager.Manager +} + +func TestAPIs(t *testing.T) { + RegisterFailHandler(Fail) + + RunSpecs(t, "Controller Suite") +} + +var _ = BeforeSuite(func() { + if os.Getenv("KUBEBUILDER_ASSETS") == "" && !strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") { + Skip("skipping") + } + + logging.SetLogSink(logging.Logrus(GinkgoWriter, "test", 1, 1)) + log.SetLogger(logging.FromContext(context.Background())) + + By("bootstrapping test environment") + suite.Environment = &envtest.Environment{ + CRDDirectoryPaths: []string{ + filepath.Join("..", "..", "..", "config", "crd", "bases"), + filepath.Join("..", "..", "..", "hack", "tools", "external-snapshotter", "client", "config", "crd"), + }, + } + + _, err := suite.Environment.Start() + Expect(err).ToNot(HaveOccurred()) + + DeferCleanup(suite.Environment.Stop) + + suite.Config = suite.Environment.Config + suite.Client, err = client.New(suite.Config, client.Options{Scheme: runtime.Scheme}) + Expect(err).ToNot(HaveOccurred()) + + dc, err := discovery.NewDiscoveryClientForConfig(suite.Config) + Expect(err).ToNot(HaveOccurred()) + + server, err := dc.ServerVersion() + Expect(err).ToNot(HaveOccurred()) + + suite.ServerVersion, err = version.ParseGeneric(server.GitVersion) + Expect(err).ToNot(HaveOccurred()) +}) + +var _ = AfterSuite(func() { + +}) diff --git a/internal/controller/postgrescluster/topology.go b/internal/controller/postgrescluster/topology.go new file mode 100644 index 0000000000..58778be907 --- /dev/null +++ b/internal/controller/postgrescluster/topology.go @@ -0,0 +1,27 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// defaultTopologySpreadConstraints returns constraints that prefer to schedule +// pods on different nodes and in different zones. +func defaultTopologySpreadConstraints(selector metav1.LabelSelector) []corev1.TopologySpreadConstraint { + return []corev1.TopologySpreadConstraint{ + { + TopologyKey: corev1.LabelHostname, + WhenUnsatisfiable: corev1.ScheduleAnyway, + LabelSelector: &selector, MaxSkew: 1, + }, + { + TopologyKey: corev1.LabelTopologyZone, + WhenUnsatisfiable: corev1.ScheduleAnyway, + LabelSelector: &selector, MaxSkew: 1, + }, + } +} diff --git a/internal/controller/postgrescluster/topology_test.go b/internal/controller/postgrescluster/topology_test.go new file mode 100644 index 0000000000..40c8c0dd7f --- /dev/null +++ b/internal/controller/postgrescluster/topology_test.go @@ -0,0 +1,51 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "testing" + + "gotest.tools/v3/assert" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" +) + +func TestDefaultTopologySpreadConstraints(t *testing.T) { + constraints := defaultTopologySpreadConstraints(metav1.LabelSelector{ + MatchLabels: map[string]string{"basic": "stuff"}, + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: "k1", Operator: "op", Values: []string{"v1", "v2"}}, + }, + }) + + // Entire selector, hostname, zone, and ScheduleAnyway. + assert.Assert(t, cmp.MarshalMatches(constraints, ` +- labelSelector: + matchExpressions: + - key: k1 + operator: op + values: + - v1 + - v2 + matchLabels: + basic: stuff + maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: ScheduleAnyway +- labelSelector: + matchExpressions: + - key: k1 + operator: op + values: + - v1 + - v2 + matchLabels: + basic: stuff + maxSkew: 1 + topologyKey: topology.kubernetes.io/zone + whenUnsatisfiable: ScheduleAnyway + `)) +} diff --git a/internal/controller/postgrescluster/util.go b/internal/controller/postgrescluster/util.go new file mode 100644 index 0000000000..25120ab574 --- /dev/null +++ b/internal/controller/postgrescluster/util.go @@ -0,0 +1,287 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "fmt" + "hash/fnv" + "io" + + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + "k8s.io/apimachinery/pkg/util/rand" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" +) + +var tmpDirSizeLimit = resource.MustParse("16Mi") + +const ( + // devSHMDir is the directory used for allocating shared memory segments, + // which are needed by Postgres + devSHMDir = "/dev/shm" + // nssWrapperDir is the directory in a container for the nss_wrapper passwd and group files + nssWrapperDir = "/tmp/nss_wrapper/%s/%s" + // postgresNSSWrapperPrefix sets the required variables when running the NSS + // wrapper script for the 'postgres' user + postgresNSSWrapperPrefix = `export NSS_WRAPPER_SUBDIR=postgres CRUNCHY_NSS_USERNAME=postgres ` + + `CRUNCHY_NSS_USER_DESC="postgres" ` + // pgAdminNSSWrapperPrefix sets the required variables when running the NSS + // wrapper script for the 'pgadmin' user + pgAdminNSSWrapperPrefix = `export NSS_WRAPPER_SUBDIR=pgadmin CRUNCHY_NSS_USERNAME=pgadmin ` + + `CRUNCHY_NSS_USER_DESC="pgadmin" ` + // nssWrapperScript sets up an nss_wrapper environment in accordance with OpenShift + // guidance for supporting arbitrary user ID's is the script for the configuration + // and startup of the pgAdmin service. + // It is based on the nss_wrapper.sh script from the Crunchy Containers Project. + // - https://github.com/CrunchyData/crunchy-containers/blob/master/bin/common/nss_wrapper.sh + nssWrapperScript = ` +# Define nss_wrapper directory and passwd & group files that will be utilized by nss_wrapper. The +# nss_wrapper_env.sh script (which also sets these vars) isn't sourced here since the nss_wrapper +# has not yet been setup, and we therefore don't yet want the nss_wrapper vars in the environment. +mkdir -p /tmp/nss_wrapper +chmod g+rwx /tmp/nss_wrapper + +NSS_WRAPPER_DIR="/tmp/nss_wrapper/${NSS_WRAPPER_SUBDIR}" +NSS_WRAPPER_PASSWD="${NSS_WRAPPER_DIR}/passwd" +NSS_WRAPPER_GROUP="${NSS_WRAPPER_DIR}/group" + +# create the nss_wrapper directory +mkdir -p "${NSS_WRAPPER_DIR}" + +# grab the current user ID and group ID +USER_ID=$(id -u) +export USER_ID +GROUP_ID=$(id -g) +export GROUP_ID + +# get copies of the passwd and group files +[[ -f "${NSS_WRAPPER_PASSWD}" ]] || cp "/etc/passwd" "${NSS_WRAPPER_PASSWD}" +[[ -f "${NSS_WRAPPER_GROUP}" ]] || cp "/etc/group" "${NSS_WRAPPER_GROUP}" + +# if the username is missing from the passwd file, then add it +if [[ ! $(cat "${NSS_WRAPPER_PASSWD}") =~ ${CRUNCHY_NSS_USERNAME}:x:${USER_ID} ]]; then + echo "nss_wrapper: adding user" + passwd_tmp="${NSS_WRAPPER_DIR}/passwd_tmp" + cp "${NSS_WRAPPER_PASSWD}" "${passwd_tmp}" + sed -i "/${CRUNCHY_NSS_USERNAME}:x:/d" "${passwd_tmp}" + # needed for OCP 4.x because crio updates /etc/passwd with an entry for USER_ID + sed -i "/${USER_ID}:x:/d" "${passwd_tmp}" + printf '${CRUNCHY_NSS_USERNAME}:x:${USER_ID}:${GROUP_ID}:${CRUNCHY_NSS_USER_DESC}:${HOME}:/bin/bash\n' >> "${passwd_tmp}" + envsubst < "${passwd_tmp}" > "${NSS_WRAPPER_PASSWD}" + rm "${passwd_tmp}" +else + echo "nss_wrapper: user exists" +fi + +# if the username (which will be the same as the group name) is missing from group file, then add it +if [[ ! 
$(cat "${NSS_WRAPPER_GROUP}") =~ ${CRUNCHY_NSS_USERNAME}:x:${USER_ID} ]]; then + echo "nss_wrapper: adding group" + group_tmp="${NSS_WRAPPER_DIR}/group_tmp" + cp "${NSS_WRAPPER_GROUP}" "${group_tmp}" + sed -i "/${CRUNCHY_NSS_USERNAME}:x:/d" "${group_tmp}" + printf '${CRUNCHY_NSS_USERNAME}:x:${USER_ID}:${CRUNCHY_NSS_USERNAME}\n' >> "${group_tmp}" + envsubst < "${group_tmp}" > "${NSS_WRAPPER_GROUP}" + rm "${group_tmp}" +else + echo "nss_wrapper: group exists" +fi + +# export the nss_wrapper env vars +# define nss_wrapper directory and passwd & group files that will be utilized by nss_wrapper +NSS_WRAPPER_DIR="/tmp/nss_wrapper/${NSS_WRAPPER_SUBDIR}" +NSS_WRAPPER_PASSWD="${NSS_WRAPPER_DIR}/passwd" +NSS_WRAPPER_GROUP="${NSS_WRAPPER_DIR}/group" + +export LD_PRELOAD=/usr/lib64/libnss_wrapper.so +export NSS_WRAPPER_PASSWD="${NSS_WRAPPER_PASSWD}" +export NSS_WRAPPER_GROUP="${NSS_WRAPPER_GROUP}" + +echo "nss_wrapper: environment configured" +` +) + +// addDevSHM adds the shared memory "directory" to a Pod, which is needed by +// Postgres to allocate shared memory segments. This is a special directory +// called "/dev/shm", and is mounted as an emptyDir over a "memory" medium. This +// is mounted only to the database container. +func addDevSHM(template *corev1.PodTemplateSpec) { + + // do not set a size limit on shared memory. This will be handled by the OS + // layer + template.Spec.Volumes = append(template.Spec.Volumes, corev1.Volume{ + Name: "dshm", + VolumeSource: corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + Medium: corev1.StorageMediumMemory, + }, + }, + }) + + // only give the database container access to shared memory + for i := range template.Spec.Containers { + if template.Spec.Containers[i].Name == naming.ContainerDatabase { + template.Spec.Containers[i].VolumeMounts = append(template.Spec.Containers[i].VolumeMounts, + corev1.VolumeMount{ + Name: "dshm", + MountPath: devSHMDir, + }) + } + } +} + +// addTMPEmptyDir adds a "tmp" EmptyDir volume to the provided Pod template, while then also adding a +// volume mount at /tmp for all containers defined within the Pod template +// The '/tmp' directory is currently utilized for the following: +// - As the pgBackRest lock directory (this is the default lock location for pgBackRest) +// - The location where the replication client certificates can be loaded with the proper +// permissions set +func addTMPEmptyDir(template *corev1.PodTemplateSpec) { + + template.Spec.Volumes = append(template.Spec.Volumes, corev1.Volume{ + Name: "tmp", + VolumeSource: corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + SizeLimit: &tmpDirSizeLimit, + }, + }, + }) + + for i := range template.Spec.Containers { + template.Spec.Containers[i].VolumeMounts = append(template.Spec.Containers[i].VolumeMounts, + corev1.VolumeMount{ + Name: "tmp", + MountPath: "/tmp", + }) + } + + for i := range template.Spec.InitContainers { + template.Spec.InitContainers[i].VolumeMounts = append(template.Spec.InitContainers[i].VolumeMounts, + corev1.VolumeMount{ + Name: "tmp", + MountPath: "/tmp", + }) + } +} + +// addNSSWrapper adds nss_wrapper environment variables to the database and pgBackRest +// containers in the Pod template. Additionally, an init container is added to the Pod template +// as needed to setup the nss_wrapper. Please note that the nss_wrapper is required for +// compatibility with OpenShift: https://access.redhat.com/articles/4859371. 
+func addNSSWrapper(image string, imagePullPolicy corev1.PullPolicy, template *corev1.PodTemplateSpec) { + + nssWrapperCmd := postgresNSSWrapperPrefix + nssWrapperScript + for i, c := range template.Spec.Containers { + switch c.Name { + case naming.ContainerDatabase, naming.PGBackRestRepoContainerName, + naming.PGBackRestRestoreContainerName: + passwd := fmt.Sprintf(nssWrapperDir, "postgres", "passwd") + group := fmt.Sprintf(nssWrapperDir, "postgres", "group") + template.Spec.Containers[i].Env = append(template.Spec.Containers[i].Env, []corev1.EnvVar{ + {Name: "LD_PRELOAD", Value: "/usr/lib64/libnss_wrapper.so"}, + {Name: "NSS_WRAPPER_PASSWD", Value: passwd}, + {Name: "NSS_WRAPPER_GROUP", Value: group}, + }...) + case naming.ContainerPGAdmin: + nssWrapperCmd = pgAdminNSSWrapperPrefix + nssWrapperScript + passwd := fmt.Sprintf(nssWrapperDir, "pgadmin", "passwd") + group := fmt.Sprintf(nssWrapperDir, "pgadmin", "group") + template.Spec.Containers[i].Env = append(template.Spec.Containers[i].Env, []corev1.EnvVar{ + {Name: "LD_PRELOAD", Value: "/usr/lib64/libnss_wrapper.so"}, + {Name: "NSS_WRAPPER_PASSWD", Value: passwd}, + {Name: "NSS_WRAPPER_GROUP", Value: group}, + }...) + } + } + + container := corev1.Container{ + Command: []string{"bash", "-c", nssWrapperCmd}, + Image: image, + ImagePullPolicy: imagePullPolicy, + Name: naming.ContainerNSSWrapperInit, + SecurityContext: initialize.RestrictedSecurityContext(), + } + + // Here we set the NSS wrapper container resources to the 'database', 'pgadmin' + // or 'pgbackrest' container configuration, as appropriate. + + // First, we'll set the NSS wrapper container configuration for any pgAdmin + // containers because pgAdmin Pods won't contain any other containers + containsPGAdmin := false + for i, c := range template.Spec.Containers { + if c.Name == naming.ContainerPGAdmin { + containsPGAdmin = true + container.Resources = template.Spec.Containers[i].Resources + break + } + } + + // If this was a pgAdmin Pod, we don't need to check anything else. + if !containsPGAdmin { + // Because the instance Pod has both a 'database' and 'pgbackrest' container, + // we'll first check for the 'database' container and use those resource + // settings for any instance pods. + containsDatabase := false + for i, c := range template.Spec.Containers { + if c.Name == naming.ContainerDatabase { + containsDatabase = true + container.Resources = template.Spec.Containers[i].Resources + break + } + if c.Name == naming.PGBackRestRestoreContainerName { + container.Resources = template.Spec.Containers[i].Resources + break + } + } + // If 'database' is not found, we need to use the 'pgbackrest' resource + // configuration settings instead + if !containsDatabase { + for i, c := range template.Spec.Containers { + if c.Name == naming.PGBackRestRepoContainerName { + container.Resources = template.Spec.Containers[i].Resources + break + } + } + } + } + template.Spec.InitContainers = append(template.Spec.InitContainers, container) +} + +// jobFailed returns "true" if the Job provided has failed. Otherwise it returns "false". +func jobFailed(job *batchv1.Job) bool { + conditions := job.Status.Conditions + for i := range conditions { + if conditions[i].Type == batchv1.JobFailed { + return (conditions[i].Status == corev1.ConditionTrue) + } + } + return false +} + +// jobCompleted returns "true" if the Job provided completed successfully. Otherwise it returns +// "false". 
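+//
+// Together with jobFailed above, it is typically consulted when deciding
+// whether a reconcile can move on or should wait, for example:
+//
+//	if jobCompleted(job) {
+//		// work is done; nothing further to schedule
+//	} else if !jobFailed(job) {
+//		// still running; requeue and check again later
+//	}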
+func jobCompleted(job *batchv1.Job) bool { + conditions := job.Status.Conditions + for i := range conditions { + if conditions[i].Type == batchv1.JobComplete { + return (conditions[i].Status == corev1.ConditionTrue) + } + } + return false +} + +// safeHash32 runs content and returns a short alphanumeric string that +// represents everything written to w. The string is unlikely to have bad words +// and is safe to store in the Kubernetes API. This is the same algorithm used +// by ControllerRevision's "controller.kubernetes.io/hash". +func safeHash32(content func(w io.Writer) error) (string, error) { + hash := fnv.New32() + if err := content(hash); err != nil { + return "", err + } + return rand.SafeEncodeString(fmt.Sprint(hash.Sum32())), nil +} diff --git a/internal/controller/postgrescluster/util_test.go b/internal/controller/postgrescluster/util_test.go new file mode 100644 index 0000000000..51a32f1e85 --- /dev/null +++ b/internal/controller/postgrescluster/util_test.go @@ -0,0 +1,381 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "errors" + "io" + "testing" + + "gotest.tools/v3/assert" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" +) + +func TestSafeHash32(t *testing.T) { + expected := errors.New("whomp") + + _, err := safeHash32(func(io.Writer) error { return expected }) + assert.Equal(t, err, expected) + + stuff, err := safeHash32(func(w io.Writer) error { + _, _ = w.Write([]byte(`some stuff`)) + return nil + }) + assert.NilError(t, err) + assert.Equal(t, stuff, "574b4c7d87", "expected alphanumeric") + + same, err := safeHash32(func(w io.Writer) error { + _, _ = w.Write([]byte(`some stuff`)) + return nil + }) + assert.NilError(t, err) + assert.Equal(t, same, stuff, "expected deterministic hash") +} + +func TestAddDevSHM(t *testing.T) { + + testCases := []struct { + tcName string + podTemplate *corev1.PodTemplateSpec + expected bool + }{{ + tcName: "database and pgbackrest containers", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "database"}, {Name: "pgbackrest"}, {Name: "dontmodify"}, + }}}, + expected: true, + }, { + tcName: "database container only", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "database"}, {Name: "dontmodify"}}}}, + expected: true, + }, { + tcName: "pgbackest container only", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "dontmodify"}, {Name: "pgbackrest"}}}}, + }, { + tcName: "other containers", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "dontmodify1"}, {Name: "dontmodify2"}}}}, + }} + + for _, tc := range testCases { + t.Run(tc.tcName, func(t *testing.T) { + + template := tc.podTemplate + + addDevSHM(template) + + found := false + + // check there is an empty dir mounted under the dshm volume + for _, v := range template.Spec.Volumes { + if v.Name == "dshm" && v.VolumeSource.EmptyDir != nil && v.VolumeSource.EmptyDir.Medium == corev1.StorageMediumMemory { + found = true + break + } + } + assert.Assert(t, found) + + // check that the database container contains a mount to the shared volume + // directory + found = false + + loop: + for _, c := range 
template.Spec.Containers { + if c.Name == naming.ContainerDatabase { + for _, vm := range c.VolumeMounts { + if vm.Name == "dshm" && vm.MountPath == "/dev/shm" { + found = true + break loop + } + } + } + } + + assert.Equal(t, tc.expected, found) + }) + } +} + +func TestAddNSSWrapper(t *testing.T) { + + image := "test-image" + imagePullPolicy := corev1.PullAlways + + expectedResources := corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("200m"), + }} + + expectedEnv := []corev1.EnvVar{ + {Name: "LD_PRELOAD", Value: "/usr/lib64/libnss_wrapper.so"}, + {Name: "NSS_WRAPPER_PASSWD", Value: "/tmp/nss_wrapper/postgres/passwd"}, + {Name: "NSS_WRAPPER_GROUP", Value: "/tmp/nss_wrapper/postgres/group"}, + } + + expectedPGAdminEnv := []corev1.EnvVar{ + {Name: "LD_PRELOAD", Value: "/usr/lib64/libnss_wrapper.so"}, + {Name: "NSS_WRAPPER_PASSWD", Value: "/tmp/nss_wrapper/pgadmin/passwd"}, + {Name: "NSS_WRAPPER_GROUP", Value: "/tmp/nss_wrapper/pgadmin/group"}, + } + + testCases := []struct { + tcName string + podTemplate *corev1.PodTemplateSpec + pgadmin bool + resourceProvider string + expectedUpdatedContainerCount int + }{{ + tcName: "database container with pgbackrest sidecar", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: naming.ContainerDatabase, Resources: expectedResources}, + {Name: naming.PGBackRestRepoContainerName, Resources: expectedResources}, + {Name: "dontmodify"}, + }}}, + expectedUpdatedContainerCount: 2, + }, { + tcName: "database container only", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: naming.ContainerDatabase, Resources: expectedResources}, + {Name: "dontmodify"}}}}, + expectedUpdatedContainerCount: 1, + }, { + tcName: "pgbackest container only", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: naming.PGBackRestRepoContainerName, Resources: expectedResources}, + {Name: "dontmodify"}, + }}}, + expectedUpdatedContainerCount: 1, + }, { + tcName: "pgadmin container only", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{{Name: "dontmodify"}, {Name: "pgadmin"}}}}, + pgadmin: true, + expectedUpdatedContainerCount: 1, + }, { + tcName: "restore container only", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: naming.PGBackRestRestoreContainerName, Resources: expectedResources}, + {Name: "dontmodify"}, + }}}, + expectedUpdatedContainerCount: 1, + }, { + tcName: "custom database container resources", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "database", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("200m"), + }}}}}}, + resourceProvider: "database", + expectedUpdatedContainerCount: 1, + }, { + tcName: "custom pgbackrest container resources", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "pgbackrest", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("300m"), + }}}}}}, + resourceProvider: "pgbackrest", + expectedUpdatedContainerCount: 1, + }, { + tcName: "custom pgadmin container resources", + podTemplate: &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "pgadmin", + Resources: 
corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("400m"), + }}}}}}, + pgadmin: true, + resourceProvider: "pgadmin", + expectedUpdatedContainerCount: 1, + }} + + for _, tc := range testCases { + t.Run(tc.tcName, func(t *testing.T) { + + template := tc.podTemplate + beforeAddNSS := template.DeepCopy().Spec.Containers + + addNSSWrapper(image, imagePullPolicy, template) + + t.Run("container-updated", func(t *testing.T) { + // Each container that requires the nss_wrapper envs should be updated + var actualUpdatedContainerCount int + for i, c := range template.Spec.Containers { + if c.Name == naming.ContainerDatabase || + c.Name == naming.PGBackRestRepoContainerName || + c.Name == naming.PGBackRestRestoreContainerName { + assert.DeepEqual(t, expectedEnv, c.Env) + actualUpdatedContainerCount++ + } else if c.Name == "pgadmin" { + assert.DeepEqual(t, expectedPGAdminEnv, c.Env) + actualUpdatedContainerCount++ + } else { + assert.DeepEqual(t, beforeAddNSS[i], c) + } + } + // verify database and/or pgbackrest containers updated + assert.Equal(t, actualUpdatedContainerCount, + tc.expectedUpdatedContainerCount) + }) + + t.Run("init-container-added", func(t *testing.T) { + var foundInitContainer bool + // verify init container command, image & name + for _, ic := range template.Spec.InitContainers { + if ic.Name == naming.ContainerNSSWrapperInit { + if tc.pgadmin { + assert.Equal(t, pgAdminNSSWrapperPrefix+nssWrapperScript, ic.Command[2]) // ignore "bash -c" + } else { + assert.Equal(t, postgresNSSWrapperPrefix+nssWrapperScript, ic.Command[2]) // ignore "bash -c" + } + assert.Assert(t, ic.Image == image) + assert.Assert(t, ic.ImagePullPolicy == imagePullPolicy) + assert.Assert(t, !cmp.DeepEqual(ic.SecurityContext, + &corev1.SecurityContext{})().Success()) + + if tc.resourceProvider != "" { + for _, c := range template.Spec.Containers { + if c.Name == tc.resourceProvider { + assert.DeepEqual(t, ic.Resources.Requests, + c.Resources.Requests) + } + } + } + foundInitContainer = true + break + } + } + // verify init container is present + assert.Assert(t, foundInitContainer) + }) + }) + } +} + +func TestJobCompleted(t *testing.T) { + + testCases := []struct { + job *batchv1.Job + expectSuccessful bool + testDesc string + }{{ + job: &batchv1.Job{ + Status: batchv1.JobStatus{ + Conditions: []batchv1.JobCondition{{ + Type: batchv1.JobComplete, + Status: corev1.ConditionTrue, + }}, + }, + }, + expectSuccessful: true, + testDesc: "condition present and true", + }, { + job: &batchv1.Job{ + Status: batchv1.JobStatus{ + Conditions: []batchv1.JobCondition{{ + Type: batchv1.JobComplete, + Status: corev1.ConditionFalse, + }}, + }, + }, + expectSuccessful: false, + testDesc: "condition present but false", + }, { + job: &batchv1.Job{ + Status: batchv1.JobStatus{ + Conditions: []batchv1.JobCondition{{ + Type: batchv1.JobComplete, + Status: corev1.ConditionUnknown, + }}, + }, + }, + expectSuccessful: false, + testDesc: "condition present but unknown", + }, { + job: &batchv1.Job{}, + expectSuccessful: false, + testDesc: "empty conditions", + }} + + for _, tc := range testCases { + t.Run(tc.testDesc, func(t *testing.T) { + // first ensure jobCompleted gives the expected result + isCompleted := jobCompleted(tc.job) + assert.Assert(t, isCompleted == tc.expectSuccessful) + }) + } +} + +func TestJobFailed(t *testing.T) { + + testCases := []struct { + job *batchv1.Job + expectFailed bool + testDesc string + }{{ + job: &batchv1.Job{ + Status: batchv1.JobStatus{ + Conditions: 
[]batchv1.JobCondition{{ + Type: batchv1.JobFailed, + Status: corev1.ConditionTrue, + }}, + }, + }, + expectFailed: true, + testDesc: "condition present and true", + }, { + job: &batchv1.Job{ + Status: batchv1.JobStatus{ + Conditions: []batchv1.JobCondition{{ + Type: batchv1.JobFailed, + Status: corev1.ConditionFalse, + }}, + }, + }, + expectFailed: false, + testDesc: "condition present but false", + }, { + job: &batchv1.Job{ + Status: batchv1.JobStatus{ + Conditions: []batchv1.JobCondition{{ + Type: batchv1.JobFailed, + Status: corev1.ConditionUnknown, + }}, + }, + }, + expectFailed: false, + testDesc: "condition present but unknown", + }, { + job: &batchv1.Job{}, + expectFailed: false, + testDesc: "empty conditions", + }} + + for _, tc := range testCases { + t.Run(tc.testDesc, func(t *testing.T) { + // first ensure jobCompleted gives the expected result + isCompleted := jobFailed(tc.job) + assert.Assert(t, isCompleted == tc.expectFailed) + }) + } +} diff --git a/internal/controller/postgrescluster/volumes.go b/internal/controller/postgrescluster/volumes.go new file mode 100644 index 0000000000..e40710d4ff --- /dev/null +++ b/internal/controller/postgrescluster/volumes.go @@ -0,0 +1,875 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "fmt" + "strconv" + + "github.com/pkg/errors" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/util/validation/field" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pgbackrest" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={list} + +// observePersistentVolumeClaims reads all PVCs for cluster from the Kubernetes +// API and sets the PersistentVolumeResizing condition as appropriate. +func (r *Reconciler) observePersistentVolumeClaims( + ctx context.Context, cluster *v1beta1.PostgresCluster, +) ([]corev1.PersistentVolumeClaim, error) { + volumes := &corev1.PersistentVolumeClaimList{} + + selector, err := naming.AsSelector(naming.Cluster(cluster.Name)) + if err == nil { + err = errors.WithStack( + r.Client.List(ctx, volumes, + client.InNamespace(cluster.Namespace), + client.MatchingLabelsSelector{Selector: selector}, + )) + } + + resizing := metav1.Condition{ + Type: v1beta1.PersistentVolumeResizing, + Message: "One or more volumes are changing size", + + ObservedGeneration: cluster.Generation, + } + + minNotZero := func(a, b metav1.Time) metav1.Time { + if b.IsZero() || (a.Before(&b) && !a.IsZero()) { + return a + } + return b + } + + for _, pvc := range volumes.Items { + for _, condition := range pvc.Status.Conditions { + switch condition.Type { + case + // When the resize controller sees `spec.resources != status.capacity`, + // it sets a "Resizing" condition and invokes the storage provider. 
+ // NOTE: The oldest KEP talks about "ResizeStarted", but that + // changed to "Resizing" during the merge to Kubernetes v1.8. + // - https://git.k8s.io/enhancements/keps/sig-storage/284-enable-volume-expansion + // - https://pr.k8s.io/49727#discussion_r136678508 + corev1.PersistentVolumeClaimResizing, + + // Kubernetes v1.10 added the "FileSystemResizePending" condition + // to indicate when the storage provider has finished its work. + // When a CSI implementation indicates that it performed the + // *entire* resize, this condition does not appear. + // - https://git.k8s.io/enhancements/keps/sig-storage/556-csi-volume-resizing + // - https://pr.k8s.io/58415 + // + // Kubernetes v1.15 ("ExpandInUsePersistentVolumes" feature gate) + // finishes the resize of mounted and writable PVCs that have + // the "FileSystemResizePending" condition. When the work is done, + // the condition is removed and `spec.resources == status.capacity`. + // - https://git.k8s.io/enhancements/keps/sig-storage/531-online-pv-resizing + corev1.PersistentVolumeClaimFileSystemResizePending: + + // Initialize from the first condition. + if resizing.Status == "" { + resizing.Status = metav1.ConditionStatus(condition.Status) + resizing.Reason = condition.Reason + resizing.LastTransitionTime = condition.LastTransitionTime + + // corev1.PersistentVolumeClaimCondition.Reason is optional + // while metav1.Condition.Reason is required. + if resizing.Reason == "" { + resizing.Reason = string(condition.Type) + } + } + + // Use most things from an adverse condition. + if condition.Status != corev1.ConditionTrue { + resizing.Status = metav1.ConditionStatus(condition.Status) + resizing.Reason = condition.Reason + resizing.Message = condition.Message + resizing.LastTransitionTime = condition.LastTransitionTime + + // corev1.PersistentVolumeClaimCondition.Reason is optional + // while metav1.Condition.Reason is required. + if resizing.Reason == "" { + resizing.Reason = string(condition.Type) + } + } + + // Use the oldest transition time of healthy conditions. + if resizing.Status == metav1.ConditionTrue && + condition.Status == corev1.ConditionTrue { + resizing.LastTransitionTime = minNotZero( + resizing.LastTransitionTime, condition.LastTransitionTime) + } + + case + // The "ModifyingVolume" and "ModifyVolumeError" conditions occur + // when the attribute class of a PVC is changing. These attributes + // do not affect the size of a volume, so there's nothing to do. + // See the "VolumeAttributesClass" feature gate. + // - https://git.k8s.io/enhancements/keps/sig-storage/3751-volume-attributes-class + corev1.PersistentVolumeClaimVolumeModifyingVolume, + corev1.PersistentVolumeClaimVolumeModifyVolumeError: + } + } + } + + if resizing.Status != "" { + meta.SetStatusCondition(&cluster.Status.Conditions, resizing) + } else { + // NOTE(cbandy): This clears the condition, but it may immediately + // return with a new LastTransitionTime when a PVC spec is invalid. + meta.RemoveStatusCondition(&cluster.Status.Conditions, resizing.Type) + } + + return volumes.Items, err +} + +// configureExistingPVCs configures the defined pgData, pg_wal and pgBackRest +// repo volumes to be used by the PostgresCluster. In the case of existing +// pgData volumes, an appropriate instance set name is defined that will be +// used for the PostgresCluster. Existing pg_wal volumes MUST be defined along +// with existing pgData volumes to ensure consistent naming and proper +// bootstrapping. 
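+//
+// A rough sketch of the spec that drives this path, assuming the usual
+// lowerCamelCase JSON names for the DataSource.Volumes fields (the PVC names
+// below are placeholders):
+//
+//	spec:
+//	  dataSource:
+//	    volumes:
+//	      pgDataVolume:
+//	        pvcName: existing-pgdata-pvc
+//	        directory: old-data-dir
+//	      pgWALVolume:
+//	        pvcName: existing-wal-pvc
+//	      pgBackRestVolume:
+//	        pvcName: existing-repo-pvc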
+func (r *Reconciler) configureExistingPVCs( + ctx context.Context, cluster *v1beta1.PostgresCluster, + volumes []corev1.PersistentVolumeClaim, +) ([]corev1.PersistentVolumeClaim, error) { + + var err error + + if cluster.Spec.DataSource != nil && + cluster.Spec.DataSource.Volumes != nil && + cluster.Spec.DataSource.Volumes.PGDataVolume != nil { + // If the startup instance name isn't set, use the instance set defined at position zero. + if cluster.Status.StartupInstance == "" { + set := &cluster.Spec.InstanceSets[0] + cluster.Status.StartupInstanceSet = set.Name + cluster.Status.StartupInstance = naming.GenerateStartupInstance(cluster, set).Name + } + volumes, err = r.configureExistingPGVolumes(ctx, cluster, volumes, + cluster.Status.StartupInstance) + + // existing WAL volume must be paired with an existing pgData volume + if cluster.Spec.DataSource != nil && + cluster.Spec.DataSource.Volumes != nil && + cluster.Spec.DataSource.Volumes.PGWALVolume != nil && + err == nil { + volumes, err = r.configureExistingPGWALVolume(ctx, cluster, volumes, + cluster.Status.StartupInstance) + } + } + + if cluster.Spec.DataSource != nil && + cluster.Spec.DataSource.Volumes != nil && + cluster.Spec.DataSource.Volumes.PGBackRestVolume != nil && + err == nil { + + volumes, err = r.configureExistingRepoVolumes(ctx, cluster, volumes) + } + return volumes, err +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,patch} + +// configureExistingPGVolumes first searches the observed volumes list to see +// if the existing pgData volume defined in the spec is already updated. If not, +// this sets the appropriate labels and ownership for the volume to be used in +// the PostgresCluster. +func (r *Reconciler) configureExistingPGVolumes( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + volumes []corev1.PersistentVolumeClaim, + instanceName string, +) ([]corev1.PersistentVolumeClaim, error) { + + // if the volume is already in the list, move on + for i := range volumes { + if cluster.Spec.DataSource.Volumes.PGDataVolume. + PVCName == volumes[i].Name { + return volumes, nil + } + } + + if len(cluster.Spec.InstanceSets) > 0 { + if volName := cluster.Spec.DataSource.Volumes. + PGDataVolume.PVCName; volName != "" { + volume := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: volName, + Namespace: cluster.Namespace, + }, + Spec: cluster.Spec.InstanceSets[0].DataVolumeClaimSpec, + } + + volume.ObjectMeta.Labels = map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: cluster.Spec.InstanceSets[0].Name, + naming.LabelInstance: instanceName, + naming.LabelRole: naming.RolePostgresData, + naming.LabelData: naming.DataPostgres, + } + volume.SetGroupVersionKind(corev1.SchemeGroupVersion. + WithKind("PersistentVolumeClaim")) + if err := r.setControllerReference(cluster, volume); err != nil { + return volumes, err + } + if err := errors.WithStack(r.apply(ctx, volume)); err != nil { + return volumes, err + } + volumes = append(volumes, *volume) + } + } + return volumes, nil +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,patch} + +// configureExistingPGWALVolume first searches the observed volumes list to see +// if the existing pg_wal volume defined in the spec is already updated. If not, +// this sets the appropriate labels and ownership for the volume to be used in +// the PostgresCluster. 
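+//
+// The adopted claim is labeled the same way as the pgData volume above, except
+// that naming.LabelRole is set to naming.RolePostgresWAL.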
+func (r *Reconciler) configureExistingPGWALVolume( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + volumes []corev1.PersistentVolumeClaim, + instanceName string, +) ([]corev1.PersistentVolumeClaim, error) { + + // if the volume is already in the list, move on + for i := range volumes { + if cluster.Spec.DataSource.Volumes.PGWALVolume. + PVCName == volumes[i].Name { + return volumes, nil + } + } + + if volName := cluster.Spec.DataSource.Volumes.PGWALVolume. + PVCName; volName != "" { + + volume := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: volName, + Namespace: cluster.Namespace, + }, + Spec: cluster.Spec.InstanceSets[0].DataVolumeClaimSpec, + } + + volume.ObjectMeta.Labels = map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: cluster.Spec.InstanceSets[0].Name, + naming.LabelInstance: instanceName, + naming.LabelRole: naming.RolePostgresWAL, + naming.LabelData: naming.DataPostgres, + } + volume.SetGroupVersionKind(corev1.SchemeGroupVersion. + WithKind("PersistentVolumeClaim")) + if err := r.setControllerReference(cluster, volume); err != nil { + return volumes, err + } + if err := errors.WithStack(r.apply(ctx, volume)); err != nil { + return volumes, err + } + volumes = append(volumes, *volume) + } + return volumes, nil +} + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,patch} + +// configureExistingRepoVolumes first searches the observed volumes list to see +// if the existing pgBackRest repo volume defined in the spec is already updated. +// If not, this sets the appropriate labels and ownership for the volume to be +// used in the PostgresCluster. +func (r *Reconciler) configureExistingRepoVolumes( + ctx context.Context, + cluster *v1beta1.PostgresCluster, + volumes []corev1.PersistentVolumeClaim, +) ([]corev1.PersistentVolumeClaim, error) { + + // if the volume is already in the list, move on + for i := range volumes { + if cluster.Spec.DataSource.Volumes.PGBackRestVolume. + PVCName == volumes[i].Name { + return volumes, nil + } + } + + if len(cluster.Spec.Backups.PGBackRest.Repos) > 0 { + // there must be at least on pgBackrest repo defined + if volName := cluster.Spec.DataSource.Volumes. + PGBackRestVolume.PVCName; volName != "" { + volume := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: volName, + Namespace: cluster.Namespace, + Labels: naming.PGBackRestRepoVolumeLabels(cluster.Name, + cluster.Spec.Backups.PGBackRest.Repos[0].Name), + }, + Spec: cluster.Spec.Backups.PGBackRest.Repos[0].Volume. + VolumeClaimSpec, + } + + //volume.ObjectMeta = naming.PGBackRestRepoVolume(cluster, cluster.Spec.Backups.PGBackRest.Repos[0].Name) + volume.SetGroupVersionKind(corev1.SchemeGroupVersion. + WithKind("PersistentVolumeClaim")) + if err := r.setControllerReference(cluster, volume); err != nil { + return volumes, err + } + if err := errors.WithStack(r.apply(ctx, volume)); err != nil { + return volumes, err + } + volumes = append(volumes, *volume) + } + } + return volumes, nil +} + +// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={list} + +// reconcileDirMoveJobs creates the existing volume move Jobs as defined in +// the PostgresCluster spec. A boolean value is return to indicate whether +// the main control loop should return early. 
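+//
+// A move Job is only considered for a volume whose spec sets both a PVC name
+// and a directory. The returned boolean mirrors the per-volume reconcilers
+// below: it remains true until every required move Job has run to completion,
+// so the caller can requeue rather than bootstrap against a partially moved
+// volume.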
+func (r *Reconciler) reconcileDirMoveJobs(ctx context.Context, + cluster *v1beta1.PostgresCluster) (bool, error) { + + if cluster.Spec.DataSource != nil && + cluster.Spec.DataSource.Volumes != nil { + + moveJobs := &batchv1.JobList{} + if err := r.Client.List(ctx, moveJobs, &client.ListOptions{ + Namespace: cluster.Namespace, + LabelSelector: naming.DirectoryMoveJobLabels(cluster.Name).AsSelector(), + }); err != nil { + return false, errors.WithStack(err) + } + + var err error + var pgDataReturn, pgWALReturn, repoReturn bool + + if cluster.Spec.DataSource.Volumes.PGDataVolume != nil && + cluster.Spec.DataSource.Volumes.PGDataVolume. + Directory != "" && + cluster.Spec.DataSource.Volumes.PGDataVolume. + PVCName != "" { + pgDataReturn, err = r.reconcileMovePGDataDir(ctx, cluster, moveJobs) + } + + if err == nil && + cluster.Spec.DataSource.Volumes.PGWALVolume != nil && + cluster.Spec.DataSource.Volumes.PGWALVolume. + Directory != "" && + cluster.Spec.DataSource.Volumes.PGWALVolume. + PVCName != "" { + pgWALReturn, err = r.reconcileMoveWALDir(ctx, cluster, moveJobs) + } + + if err == nil && + cluster.Spec.DataSource.Volumes.PGBackRestVolume != nil && + cluster.Spec.DataSource.Volumes.PGBackRestVolume. + Directory != "" && + cluster.Spec.DataSource.Volumes.PGBackRestVolume. + PVCName != "" { + repoReturn, err = r.reconcileMoveRepoDir(ctx, cluster, moveJobs) + } + // if any of the 'return early' values are true, return true + return pgDataReturn || pgWALReturn || repoReturn, err + } + + return false, nil +} + +// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={create,patch,delete} + +// reconcileMovePGDataDir creates a Job to move the provided pgData directory +// in the given volume to the expected location before the PostgresCluster is +// bootstrapped. It returns any errors and a boolean indicating whether the +// main control loop should continue or return early to allow time for the job +// to complete. +func (r *Reconciler) reconcileMovePGDataDir(ctx context.Context, + cluster *v1beta1.PostgresCluster, moveJobs *batchv1.JobList) (bool, error) { + + moveDirJob := &batchv1.Job{} + moveDirJob.ObjectMeta = naming.MovePGDataDirJob(cluster) + + // check for an existing Job + for i := range moveJobs.Items { + if moveJobs.Items[i].Name == moveDirJob.Name { + if jobCompleted(&moveJobs.Items[i]) { + // if the Job is completed, return as this only needs to run once + return false, nil + } + if !jobFailed(&moveJobs.Items[i]) { + // if the Job otherwise exists and has not failed, return and + // give the Job time to finish + return true, nil + } + } + } + + // at this point, the Job either wasn't found or it has failed, so the it + // should be created + moveDirJob.ObjectMeta.Annotations = naming.Merge(cluster.Spec.Metadata. + GetAnnotationsOrNil()) + labels := naming.Merge(cluster.Spec.Metadata.GetLabelsOrNil(), + naming.DirectoryMoveJobLabels(cluster.Name), + map[string]string{ + naming.LabelMovePGDataDir: "", + }) + moveDirJob.ObjectMeta.Labels = labels + + // `patroni.dynamic.json` holds the previous state of the DCS. Since we are + // migrating the volumes, we want to clear out any obsolete configuration info. + script := fmt.Sprintf(`echo "Preparing cluster %s volumes for PGO v5.x" + echo "pgdata_pvc=%s" + echo "Current PG data directory volume contents:" + ls -lh "/pgdata" + echo "Now updating PG data directory..." 
+ [ -d "/pgdata/%s" ] && mv "/pgdata/%s" "/pgdata/pg%s_bootstrap" + rm -f "/pgdata/pg%s/patroni.dynamic.json" + echo "Updated PG data directory contents:" + ls -lh "/pgdata" + echo "PG Data directory preparation complete" + `, cluster.Name, + cluster.Spec.DataSource.Volumes.PGDataVolume.PVCName, + cluster.Spec.DataSource.Volumes.PGDataVolume.Directory, + cluster.Spec.DataSource.Volumes.PGDataVolume.Directory, + strconv.Itoa(cluster.Spec.PostgresVersion), + strconv.Itoa(cluster.Spec.PostgresVersion)) + + container := corev1.Container{ + Command: []string{"bash", "-ceu", script}, + Image: config.PostgresContainerImage(cluster), + ImagePullPolicy: cluster.Spec.ImagePullPolicy, + Name: naming.ContainerJobMovePGDataDir, + SecurityContext: initialize.RestrictedSecurityContext(), + VolumeMounts: []corev1.VolumeMount{postgres.DataVolumeMount()}, + } + if len(cluster.Spec.InstanceSets) > 0 { + container.Resources = cluster.Spec.InstanceSets[0].Resources + } + + jobSpec := &batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{Labels: labels}, + Spec: corev1.PodSpec{ + // Set the image pull secrets, if any exist. + // This is set here rather than using the service account due to the lack + // of propagation to existing pods when the CRD is updated: + // https://github.com/kubernetes/kubernetes/issues/88456 + ImagePullSecrets: cluster.Spec.ImagePullSecrets, + Containers: []corev1.Container{container}, + SecurityContext: postgres.PodSecurityContext(cluster), + // Set RestartPolicy to "Never" since we want a new Pod to be + // created by the Job controller when there is a failure + // (instead of the container simply restarting). + RestartPolicy: corev1.RestartPolicyNever, + // These Jobs don't make Kubernetes API calls, so we can just + // use the default ServiceAccount and not mount its credentials. + AutomountServiceAccountToken: initialize.Bool(false), + EnableServiceLinks: initialize.Bool(false), + Volumes: []corev1.Volume{{ + Name: "postgres-data", + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: cluster.Spec.DataSource.Volumes. + PGDataVolume.PVCName, + }, + }}, + }, + }, + }, + } + // set the priority class name, if it exists + if len(cluster.Spec.InstanceSets) > 0 { + jobSpec.Template.Spec.PriorityClassName = + initialize.FromPointer(cluster.Spec.InstanceSets[0].PriorityClassName) + } + moveDirJob.Spec = *jobSpec + + // set gvk and ownership refs + moveDirJob.SetGroupVersionKind(batchv1.SchemeGroupVersion.WithKind("Job")) + if err := controllerutil.SetControllerReference(cluster, moveDirJob, + r.Client.Scheme()); err != nil { + return true, errors.WithStack(err) + } + + // server-side apply the backup Job intent + if err := r.apply(ctx, moveDirJob); err != nil { + return true, errors.WithStack(err) + } + + return true, nil +} + +// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={create,patch,delete} + +// reconcileMoveWalDir creates a Job to move the provided pg_wal directory +// in the given volume to the expected location before the PostgresCluster is +// bootstrapped. It returns any errors and a boolean indicating whether the +// main control loop should continue or return early to allow time for the job +// to complete. 
+func (r *Reconciler) reconcileMoveWALDir(ctx context.Context, + cluster *v1beta1.PostgresCluster, moveJobs *batchv1.JobList) (bool, error) { + + moveDirJob := &batchv1.Job{} + moveDirJob.ObjectMeta = naming.MovePGWALDirJob(cluster) + + // check for an existing Job + for i := range moveJobs.Items { + if moveJobs.Items[i].Name == moveDirJob.Name { + if jobCompleted(&moveJobs.Items[i]) { + // if the Job is completed, return as this only needs to run once + return false, nil + } + if !jobFailed(&moveJobs.Items[i]) { + // if the Job otherwise exists and has not failed, return and + // give the Job time to finish + return true, nil + } + } + } + + moveDirJob.ObjectMeta.Annotations = naming.Merge(cluster.Spec.Metadata. + GetAnnotationsOrNil()) + labels := naming.Merge(cluster.Spec.Metadata.GetLabelsOrNil(), + naming.DirectoryMoveJobLabels(cluster.Name), + map[string]string{ + naming.LabelMovePGWalDir: "", + }) + moveDirJob.ObjectMeta.Labels = labels + + script := fmt.Sprintf(`echo "Preparing cluster %s volumes for PGO v5.x" + echo "pg_wal_pvc=%s" + echo "Current PG WAL directory volume contents:" + ls -lh "/pgwal" + echo "Now updating PG WAL directory..." + [ -d "/pgwal/%s" ] && mv "/pgwal/%s" "/pgwal/%s-wal" + echo "Updated PG WAL directory contents:" + ls -lh "/pgwal" + echo "PG WAL directory preparation complete" + `, cluster.Name, + cluster.Spec.DataSource.Volumes.PGWALVolume.PVCName, + cluster.Spec.DataSource.Volumes.PGWALVolume.Directory, + cluster.Spec.DataSource.Volumes.PGWALVolume.Directory, + cluster.ObjectMeta.Name) + + container := corev1.Container{ + Command: []string{"bash", "-ceu", script}, + Image: config.PostgresContainerImage(cluster), + ImagePullPolicy: cluster.Spec.ImagePullPolicy, + Name: naming.ContainerJobMovePGWALDir, + SecurityContext: initialize.RestrictedSecurityContext(), + VolumeMounts: []corev1.VolumeMount{postgres.WALVolumeMount()}, + } + if len(cluster.Spec.InstanceSets) > 0 { + container.Resources = cluster.Spec.InstanceSets[0].Resources + } + + jobSpec := &batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{Labels: labels}, + Spec: corev1.PodSpec{ + // Set the image pull secrets, if any exist. + // This is set here rather than using the service account due to the lack + // of propagation to existing pods when the CRD is updated: + // https://github.com/kubernetes/kubernetes/issues/88456 + ImagePullSecrets: cluster.Spec.ImagePullSecrets, + Containers: []corev1.Container{container}, + SecurityContext: postgres.PodSecurityContext(cluster), + // Set RestartPolicy to "Never" since we want a new Pod to be + // created by the Job controller when there is a failure + // (instead of the container simply restarting). + RestartPolicy: corev1.RestartPolicyNever, + // These Jobs don't make Kubernetes API calls, so we can just + // use the default ServiceAccount and not mount its credentials. + AutomountServiceAccountToken: initialize.Bool(false), + EnableServiceLinks: initialize.Bool(false), + Volumes: []corev1.Volume{{ + Name: "postgres-wal", + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: cluster.Spec.DataSource.Volumes. 
+ PGWALVolume.PVCName, + }, + }}, + }, + }, + }, + } + // set the priority class name, if it exists + if len(cluster.Spec.InstanceSets) > 0 { + jobSpec.Template.Spec.PriorityClassName = + initialize.FromPointer(cluster.Spec.InstanceSets[0].PriorityClassName) + } + moveDirJob.Spec = *jobSpec + + // set gvk and ownership refs + moveDirJob.SetGroupVersionKind(batchv1.SchemeGroupVersion.WithKind("Job")) + if err := controllerutil.SetControllerReference(cluster, moveDirJob, + r.Client.Scheme()); err != nil { + return true, errors.WithStack(err) + } + + // server-side apply the backup Job intent + if err := r.apply(ctx, moveDirJob); err != nil { + return true, errors.WithStack(err) + } + + return true, nil +} + +// +kubebuilder:rbac:groups="batch",resources="jobs",verbs={create,patch,delete} + +// reconcileMoveRepoDir creates a Job to move the provided pgBackRest repo +// directory in the given volume to the expected location before the +// PostgresCluster is bootstrapped. It returns any errors and a boolean +// indicating whether the main control loop should continue or return early +// to allow time for the job to complete. +func (r *Reconciler) reconcileMoveRepoDir(ctx context.Context, + cluster *v1beta1.PostgresCluster, moveJobs *batchv1.JobList) (bool, error) { + + moveDirJob := &batchv1.Job{} + moveDirJob.ObjectMeta = naming.MovePGBackRestRepoDirJob(cluster) + + // check for an existing Job + for i := range moveJobs.Items { + if moveJobs.Items[i].Name == moveDirJob.Name { + if jobCompleted(&moveJobs.Items[i]) { + // if the Job is completed, return as this only needs to run once + return false, nil + } + if !jobFailed(&moveJobs.Items[i]) { + // if the Job otherwise exists and has not failed, return and + // give the Job time to finish + return true, nil + } + } + } + + moveDirJob.ObjectMeta.Annotations = naming.Merge( + cluster.Spec.Metadata.GetAnnotationsOrNil()) + labels := naming.Merge(cluster.Spec.Metadata.GetLabelsOrNil(), + naming.DirectoryMoveJobLabels(cluster.Name), + map[string]string{ + naming.LabelMovePGBackRestRepoDir: "", + }) + moveDirJob.ObjectMeta.Labels = labels + + script := fmt.Sprintf(`echo "Preparing cluster %s pgBackRest repo volume for PGO v5.x" + echo "repo_pvc=%s" + echo "pgbackrest directory:" + ls -lh /pgbackrest + echo "Current pgBackRest repo directory volume contents:" + ls -lh "/pgbackrest/%s" + echo "Now updating repo directory..." 
+ [ -d "/pgbackrest/%s" ] && mv -t "/pgbackrest/" "/pgbackrest/%s/archive" + [ -d "/pgbackrest/%s" ] && mv -t "/pgbackrest/" "/pgbackrest/%s/backup" + echo "Updated /pgbackrest directory contents:" + ls -lh "/pgbackrest" + echo "Repo directory preparation complete" + `, cluster.Name, + cluster.Spec.DataSource.Volumes.PGBackRestVolume.PVCName, + cluster.Spec.DataSource.Volumes.PGBackRestVolume.Directory, + cluster.Spec.DataSource.Volumes.PGBackRestVolume.Directory, + cluster.Spec.DataSource.Volumes.PGBackRestVolume.Directory, + cluster.Spec.DataSource.Volumes.PGBackRestVolume.Directory, + cluster.Spec.DataSource.Volumes.PGBackRestVolume.Directory) + + container := corev1.Container{ + Command: []string{"bash", "-ceu", script}, + Image: config.PGBackRestContainerImage(cluster), + ImagePullPolicy: cluster.Spec.ImagePullPolicy, + Name: naming.ContainerJobMovePGBackRestRepoDir, + SecurityContext: initialize.RestrictedSecurityContext(), + VolumeMounts: []corev1.VolumeMount{pgbackrest.RepoVolumeMount()}, + } + if cluster.Spec.Backups.PGBackRest.RepoHost != nil { + container.Resources = cluster.Spec.Backups.PGBackRest.RepoHost.Resources + } + + jobSpec := &batchv1.JobSpec{ + Template: corev1.PodTemplateSpec{ + ObjectMeta: metav1.ObjectMeta{Labels: labels}, + Spec: corev1.PodSpec{ + // Set the image pull secrets, if any exist. + // This is set here rather than using the service account due to the lack + // of propagation to existing pods when the CRD is updated: + // https://github.com/kubernetes/kubernetes/issues/88456 + ImagePullSecrets: cluster.Spec.ImagePullSecrets, + Containers: []corev1.Container{container}, + SecurityContext: postgres.PodSecurityContext(cluster), + // Set RestartPolicy to "Never" since we want a new Pod to be created by the Job + // controller when there is a failure (instead of the container simply restarting). + RestartPolicy: corev1.RestartPolicyNever, + // These Jobs don't make Kubernetes API calls, so we can just + // use the default ServiceAccount and not mount its credentials. + AutomountServiceAccountToken: initialize.Bool(false), + EnableServiceLinks: initialize.Bool(false), + Volumes: []corev1.Volume{{ + Name: "pgbackrest-repo", + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: cluster.Spec.DataSource.Volumes. + PGBackRestVolume.PVCName, + }, + }}, + }, + }, + }, + } + // set the priority class name, if it exists + if repoHost := cluster.Spec.Backups.PGBackRest.RepoHost; repoHost != nil { + jobSpec.Template.Spec.PriorityClassName = initialize.FromPointer(repoHost.PriorityClassName) + } + moveDirJob.Spec = *jobSpec + + // set gvk and ownership refs + moveDirJob.SetGroupVersionKind(batchv1.SchemeGroupVersion.WithKind("Job")) + if err := controllerutil.SetControllerReference(cluster, moveDirJob, + r.Client.Scheme()); err != nil { + return true, errors.WithStack(err) + } + + // server-side apply the backup Job intent + if err := r.apply(ctx, moveDirJob); err != nil { + return true, errors.WithStack(err) + } + return true, nil +} + +// handlePersistentVolumeClaimError inspects err for expected Kubernetes API +// responses to writing a PVC. It turns errors it understands into conditions +// and events. When err is handled it returns nil. Otherwise it returns err. 
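+// A minimal sketch of how a caller might use it when applying a PVC (exact
+// call sites vary; pvc here stands for any PersistentVolumeClaim the
+// reconciler is writing):
+//
+//    if err := r.apply(ctx, pvc); err != nil {
+//        return r.handlePersistentVolumeClaimError(cluster, errors.WithStack(err))
+//    }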
+func (r *Reconciler) handlePersistentVolumeClaimError( + cluster *v1beta1.PostgresCluster, err error, +) error { + var status metav1.Status + if api := apierrors.APIStatus(nil); errors.As(err, &api) { + status = api.Status() + } + + cannotResize := func(err error) { + meta.SetStatusCondition(&cluster.Status.Conditions, metav1.Condition{ + Type: v1beta1.PersistentVolumeResizing, + Status: metav1.ConditionFalse, + Reason: string(apierrors.ReasonForError(err)), + Message: "One or more volumes cannot be resized", + + ObservedGeneration: cluster.Generation, + }) + } + + volumeError := func(err error) { + r.Recorder.Event(cluster, + corev1.EventTypeWarning, "PersistentVolumeError", err.Error()) + } + + // Forbidden means (RBAC is broken or) the API request was rejected by an + // admission controller. Assume it is the latter and raise the issue as a + // condition and event. + // - https://releases.k8s.io/v1.21.0/plugin/pkg/admission/storage/persistentvolume/resize/admission.go + if apierrors.IsForbidden(err) { + cannotResize(err) + volumeError(err) + return nil + } + + if apierrors.IsInvalid(err) && status.Details != nil { + unknownCause := false + for _, cause := range status.Details.Causes { + switch { + // Forbidden "spec" happens when the PVC is waiting to be bound. + // It should resolve on its own and trigger another reconcile. Raise + // the issue as an event. + // - https://releases.k8s.io/v1.21.0/pkg/apis/core/validation/validation.go#L2028 + // + // TODO(cbandy): This can also happen when changing a field other + // than requests within the spec (access modes, storage class, etc). + // That case needs a condition or should be prevented via a webhook. + case + cause.Type == metav1.CauseType(field.ErrorTypeForbidden) && + cause.Field == "spec": + volumeError(err) + + // Forbidden "storage" happens when the change is not allowed. Raise + // the issue as a condition and event. + // - https://releases.k8s.io/v1.21.0/pkg/apis/core/validation/validation.go#L2028 + case + cause.Type == metav1.CauseType(field.ErrorTypeForbidden) && + cause.Field == "spec.resources.requests.storage": + cannotResize(err) + volumeError(err) + + default: + unknownCause = true + } + } + + if len(status.Details.Causes) > 0 && !unknownCause { + // All the causes were identified and handled. + return nil + } + } + + return err +} + +// getRepoPVCNames returns a map containing the names of repo PVCs that have +// the appropriate labels for each defined pgBackRest repo, if found. +func getRepoPVCNames( + cluster *v1beta1.PostgresCluster, + currentRepoPVCs []*corev1.PersistentVolumeClaim, +) map[string]string { + + repoPVCs := make(map[string]string) + for _, repo := range cluster.Spec.Backups.PGBackRest.Repos { + for _, pvc := range currentRepoPVCs { + if pvc.Labels[naming.LabelPGBackRestRepo] == repo.Name { + repoPVCs[repo.Name] = pvc.GetName() + break + } + } + } + + return repoPVCs +} + +// getPGPVCName returns the name of a PVC that has the provided labels, if found. 
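+// For example, the pgData PVC of an instance can be looked up by its labels
+// (a sketch mirroring the tests; instanceName is a placeholder). The empty
+// string is returned when no PVC matches:
+//
+//    name, err := getPGPVCName(map[string]string{
+//        naming.LabelCluster:  cluster.Name,
+//        naming.LabelInstance: instanceName,
+//        naming.LabelRole:     naming.RolePostgresData,
+//    }, clusterVolumes)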
+func getPGPVCName(labelMap map[string]string, + clusterVolumes []corev1.PersistentVolumeClaim, +) (string, error) { + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: labelMap, + }) + if err != nil { + return "", errors.WithStack(err) + } + + for _, pvc := range clusterVolumes { + if selector.Matches(labels.Set(pvc.GetLabels())) { + return pvc.GetName(), nil + } + } + + return "", nil +} diff --git a/internal/controller/postgrescluster/volumes_test.go b/internal/controller/postgrescluster/volumes_test.go new file mode 100644 index 0000000000..96eef5f916 --- /dev/null +++ b/internal/controller/postgrescluster/volumes_test.go @@ -0,0 +1,918 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "errors" + "testing" + "time" + + "gotest.tools/v3/assert" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/validation/field" + "k8s.io/apimachinery/pkg/util/wait" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/events" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestHandlePersistentVolumeClaimError(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &Reconciler{ + Recorder: recorder, + } + + cluster := new(v1beta1.PostgresCluster) + cluster.Namespace = "ns1" + cluster.Name = "pg2" + + reset := func() { + cluster.Status.Conditions = cluster.Status.Conditions[:0] + recorder.Events = recorder.Events[:0] + } + + // It returns any error it does not recognize completely. + t.Run("Unexpected", func(t *testing.T) { + t.Cleanup(reset) + + err := errors.New("whomp") + + assert.Equal(t, err, reconciler.handlePersistentVolumeClaimError(cluster, err)) + assert.Assert(t, len(cluster.Status.Conditions) == 0) + assert.Assert(t, len(recorder.Events) == 0) + + err = apierrors.NewInvalid( + corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim").GroupKind(), + "some-pvc", + field.ErrorList{ + field.Forbidden(field.NewPath("metadata"), "dunno"), + }) + + assert.Equal(t, err, reconciler.handlePersistentVolumeClaimError(cluster, err)) + assert.Assert(t, len(cluster.Status.Conditions) == 0) + assert.Assert(t, len(recorder.Events) == 0) + }) + + // Neither statically nor dynamically provisioned claims can be resized + // before they are bound to a persistent volume. Kubernetes rejects such + // changes during PVC validation. + // + // A static PVC is one with a present-and-blank storage class. It is + // pending until a PV exists that matches its selector, requests, etc. + // - https://docs.k8s.io/concepts/storage/persistent-volumes/#static + // - https://docs.k8s.io/concepts/storage/persistent-volumes/#class-1 + // + // A dynamic PVC is associated with a storage class. Storage classes that + // "WaitForFirstConsumer" do not bind a PV until there is a pod. 
+ // - https://docs.k8s.io/concepts/storage/persistent-volumes/#dynamic + t.Run("Pending", func(t *testing.T) { + t.Run("Grow", func(t *testing.T) { + t.Cleanup(reset) + + err := apierrors.NewInvalid( + corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim").GroupKind(), + "my-pending-pvc", + field.ErrorList{ + // - https://releases.k8s.io/v1.24.0/pkg/apis/core/validation/validation.go#L2184 + field.Forbidden(field.NewPath("spec"), "… immutable … bound claim …"), + }) + + // PVCs will bind eventually. This error should become an event without a condition. + assert.NilError(t, reconciler.handlePersistentVolumeClaimError(cluster, err)) + + assert.Check(t, len(cluster.Status.Conditions) == 0) + assert.Check(t, len(recorder.Events) > 0) + + for _, event := range recorder.Events { + assert.Equal(t, event.Type, "Warning") + assert.Equal(t, event.Reason, "PersistentVolumeError") + assert.Assert(t, cmp.Contains(event.Note, "PersistentVolumeClaim")) + assert.Assert(t, cmp.Contains(event.Note, "my-pending-pvc")) + assert.Assert(t, cmp.Contains(event.Note, "bound claim")) + assert.DeepEqual(t, event.Regarding, corev1.ObjectReference{ + APIVersion: v1beta1.GroupVersion.Identifier(), + Kind: "PostgresCluster", + Namespace: "ns1", Name: "pg2", + }) + } + }) + + t.Run("Shrink", func(t *testing.T) { + t.Cleanup(reset) + + // Requests to make a pending PVC smaller fail for multiple reasons. + err := apierrors.NewInvalid( + corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim").GroupKind(), + "my-pending-pvc", + field.ErrorList{ + // - https://releases.k8s.io/v1.24.0/pkg/apis/core/validation/validation.go#L2184 + field.Forbidden(field.NewPath("spec"), "… immutable … bound claim …"), + + // - https://releases.k8s.io/v1.24.0/pkg/apis/core/validation/validation.go#L2188 + field.Forbidden(field.NewPath("spec", "resources", "requests", "storage"), "… not be less …"), + }) + + // PVCs will bind eventually, but the size is rejected. + assert.NilError(t, reconciler.handlePersistentVolumeClaimError(cluster, err)) + + assert.Check(t, len(cluster.Status.Conditions) > 0) + assert.Check(t, len(recorder.Events) > 0) + + for _, condition := range cluster.Status.Conditions { + assert.Equal(t, condition.Type, "PersistentVolumeResizing") + assert.Equal(t, condition.Status, metav1.ConditionFalse) + assert.Equal(t, condition.Reason, "Invalid") + assert.Assert(t, cmp.Contains(condition.Message, "cannot be resized")) + } + + for _, event := range recorder.Events { + assert.Equal(t, event.Type, "Warning") + assert.Equal(t, event.Reason, "PersistentVolumeError") + assert.Assert(t, cmp.Contains(event.Note, "PersistentVolumeClaim")) + assert.Assert(t, cmp.Contains(event.Note, "my-pending-pvc")) + assert.Assert(t, cmp.Contains(event.Note, "bound claim")) + assert.Assert(t, cmp.Contains(event.Note, "not be less")) + assert.DeepEqual(t, event.Regarding, corev1.ObjectReference{ + APIVersion: v1beta1.GroupVersion.Identifier(), + Kind: "PostgresCluster", + Namespace: "ns1", Name: "pg2", + }) + } + }) + }) + + // Statically provisioned claims cannot be resized. Kubernetes responds + // differently based on the size growing or shrinking. + // + // Dynamically provisioned claims of storage classes that do *not* + // "allowVolumeExpansion" behave the same way. 
+ t.Run("NoExpansion", func(t *testing.T) { + t.Run("Grow", func(t *testing.T) { + t.Cleanup(reset) + + // - https://releases.k8s.io/v1.24.0/plugin/pkg/admission/storage/persistentvolume/resize/admission.go#L108 + err := apierrors.NewForbidden( + corev1.Resource("persistentvolumeclaims"), "my-static-pvc", + errors.New("… only dynamically provisioned …")) + + // This PVC cannot resize. The error should become an event and condition. + assert.NilError(t, reconciler.handlePersistentVolumeClaimError(cluster, err)) + + assert.Check(t, len(cluster.Status.Conditions) > 0) + assert.Check(t, len(recorder.Events) > 0) + + for _, condition := range cluster.Status.Conditions { + assert.Equal(t, condition.Type, "PersistentVolumeResizing") + assert.Equal(t, condition.Status, metav1.ConditionFalse) + assert.Equal(t, condition.Reason, "Forbidden") + assert.Assert(t, cmp.Contains(condition.Message, "cannot be resized")) + } + + for _, event := range recorder.Events { + assert.Equal(t, event.Type, "Warning") + assert.Equal(t, event.Reason, "PersistentVolumeError") + assert.Assert(t, cmp.Contains(event.Note, "persistentvolumeclaim")) + assert.Assert(t, cmp.Contains(event.Note, "my-static-pvc")) + assert.Assert(t, cmp.Contains(event.Note, "only dynamic")) + assert.DeepEqual(t, event.Regarding, corev1.ObjectReference{ + APIVersion: v1beta1.GroupVersion.Identifier(), + Kind: "PostgresCluster", + Namespace: "ns1", Name: "pg2", + }) + } + }) + + // Dynamically provisioned claims of storage classes that *do* + // "allowVolumeExpansion" can grow but cannot shrink. Kubernetes + // rejects such changes during PVC validation, just like static claims. + // + // A future version of Kubernetes will allow `spec.resources` to shrink + // so long as it is greater than `status.capacity`. + // - https://git.k8s.io/enhancements/keps/sig-storage/1790-recover-resize-failure + t.Run("Shrink", func(t *testing.T) { + t.Cleanup(reset) + + err := apierrors.NewInvalid( + corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim").GroupKind(), + "my-static-pvc", + field.ErrorList{ + // - https://releases.k8s.io/v1.24.0/pkg/apis/core/validation/validation.go#L2188 + field.Forbidden(field.NewPath("spec", "resources", "requests", "storage"), "… not be less …"), + }) + + // The PVC size is rejected. This error should become an event and condition. + assert.NilError(t, reconciler.handlePersistentVolumeClaimError(cluster, err)) + + assert.Check(t, len(cluster.Status.Conditions) > 0) + assert.Check(t, len(recorder.Events) > 0) + + for _, condition := range cluster.Status.Conditions { + assert.Equal(t, condition.Type, "PersistentVolumeResizing") + assert.Equal(t, condition.Status, metav1.ConditionFalse) + assert.Equal(t, condition.Reason, "Invalid") + assert.Assert(t, cmp.Contains(condition.Message, "cannot be resized")) + } + + for _, event := range recorder.Events { + assert.Equal(t, event.Type, "Warning") + assert.Equal(t, event.Reason, "PersistentVolumeError") + assert.Assert(t, cmp.Contains(event.Note, "PersistentVolumeClaim")) + assert.Assert(t, cmp.Contains(event.Note, "my-static-pvc")) + assert.Assert(t, cmp.Contains(event.Note, "not be less")) + assert.DeepEqual(t, event.Regarding, corev1.ObjectReference{ + APIVersion: v1beta1.GroupVersion.Identifier(), + Kind: "PostgresCluster", + Namespace: "ns1", Name: "pg2", + }) + } + }) + }) +} + +func TestGetPVCNameMethods(t *testing.T) { + + namespace := "postgres-operator-test-get-pvc-name" + + // Stub to see that handlePersistentVolumeClaimError returns nil. 
+ cluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "testcluster", + Namespace: namespace, + }, + } + cluster.Spec.Backups.PGBackRest.Repos = []v1beta1.PGBackRestRepo{{ + Name: "testrepo1", + Volume: &v1beta1.RepoPVC{}, + }} + + pvc := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "testvolume", + Namespace: namespace, + Labels: map[string]string{ + naming.LabelCluster: cluster.Name, + }, + }, + Spec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{ + "ReadWriteMany", + }, + Resources: corev1.VolumeResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + } + + pgDataPVC := pvc.DeepCopy() + pgDataPVC.Name = "testpgdatavol" + pgDataPVC.Labels = map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: "testinstance1", + naming.LabelInstance: "testinstance1-abcd", + naming.LabelRole: naming.RolePostgresData, + } + + walPVC := pvc.DeepCopy() + walPVC.Name = "testwalvol" + walPVC.Labels = map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: "testinstance1", + naming.LabelInstance: "testinstance1-abcd", + naming.LabelRole: naming.RolePostgresWAL, + } + clusterVolumes := []corev1.PersistentVolumeClaim{*pgDataPVC, *walPVC} + + repoPVC1 := pvc.DeepCopy() + repoPVC1.Name = "testrepovol1" + repoPVC1.Labels = map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelPGBackRest: "", + naming.LabelPGBackRestRepo: "testrepo1", + naming.LabelPGBackRestRepoVolume: "", + } + repoPVCs := []*corev1.PersistentVolumeClaim{repoPVC1} + + repoPVC2 := pvc.DeepCopy() + repoPVC2.Name = "testrepovol2" + repoPVC2.Labels = map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelPGBackRest: "", + naming.LabelPGBackRestRepo: "testrepo2", + naming.LabelPGBackRestRepoVolume: "", + } + // don't create this one yet + + t.Run("get pgdata PVC", func(t *testing.T) { + + pvcNames, err := getPGPVCName(map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: "testinstance1", + naming.LabelInstance: "testinstance1-abcd", + naming.LabelRole: naming.RolePostgresData, + }, clusterVolumes) + assert.NilError(t, err) + + assert.Assert(t, pvcNames == "testpgdatavol") + }) + + t.Run("get wal PVC", func(t *testing.T) { + + pvcNames, err := getPGPVCName(map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: "testinstance1", + naming.LabelInstance: "testinstance1-abcd", + naming.LabelRole: naming.RolePostgresWAL, + }, clusterVolumes) + assert.NilError(t, err) + + assert.Assert(t, pvcNames == "testwalvol") + }) + + t.Run("get one repo PVC", func(t *testing.T) { + expectedMap := map[string]string{ + "testrepo1": "testrepovol1", + } + + assert.DeepEqual(t, getRepoPVCNames(cluster, repoPVCs), expectedMap) + }) + + t.Run("get two repo PVCs", func(t *testing.T) { + repoPVCs2 := append(repoPVCs, repoPVC2) + + cluster.Spec.Backups.PGBackRest.Repos = []v1beta1.PGBackRestRepo{{ + Name: "testrepo1", + Volume: &v1beta1.RepoPVC{}, + }, { + Name: "testrepo2", + Volume: &v1beta1.RepoPVC{}, + }} + + expectedMap := map[string]string{ + "testrepo1": "testrepovol1", + "testrepo2": "testrepovol2", + } + + assert.DeepEqual(t, getRepoPVCNames(cluster, repoPVCs2), expectedMap) + }) +} + +func TestReconcileConfigureExistingPVCs(t *testing.T) { + ctx := context.Background() + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{Client: tClient, 
Owner: client.FieldOwner(t.Name())} + + ns := setupNamespace(t, tClient) + cluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "testcluster", + Namespace: ns.GetName(), + }, + Spec: v1beta1.PostgresClusterSpec{ + PostgresVersion: 13, + Image: "example.com/crunchy-postgres-ha:test", + DataSource: &v1beta1.DataSource{ + Volumes: &v1beta1.DataSourceVolumes{}, + }, + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + Name: "instance1", + DataVolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{ + corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + }}, + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Image: "example.com/crunchy-pgbackrest:test", + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + Volume: &v1beta1.RepoPVC{ + VolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{ + corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: map[corev1.ResourceName]resource. + Quantity{ + corev1.ResourceStorage: resource. + MustParse("1Gi"), + }, + }, + }, + }, + }, + }, + }, + }, + }, + } + + // create base PostgresCluster + assert.NilError(t, tClient.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, tClient.Delete(ctx, cluster)) }) + + t.Run("existing pgdata volume", func(t *testing.T) { + volume := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "pgdatavolume", + Namespace: cluster.Namespace, + Labels: map[string]string{ + "somelabel": "labelvalue-pgdata", + }, + }, + Spec: cluster.Spec.InstanceSets[0].DataVolumeClaimSpec, + } + + assert.NilError(t, tClient.Create(ctx, volume)) + + // add the pgData PVC name to the CRD + cluster.Spec.DataSource.Volumes. + PGDataVolume = &v1beta1.DataSourceVolume{ + PVCName: "pgdatavolume", + } + + clusterVolumes, err := r.observePersistentVolumeClaims(ctx, cluster) + assert.NilError(t, err) + // check that created volume does not show up in observed volumes since + // it does not have appropriate labels + assert.Assert(t, len(clusterVolumes) == 0) + + clusterVolumes, err = r.configureExistingPVCs(ctx, cluster, + clusterVolumes) + assert.NilError(t, err) + + // now, check that the label volume is returned + assert.Assert(t, len(clusterVolumes) == 1) + + // observe again, but allow time for the change to be observed + err = wait.PollUntilContextTimeout(ctx, time.Second/2, Scale(time.Second*15), false, func(ctx context.Context) (bool, error) { + clusterVolumes, err = r.observePersistentVolumeClaims(ctx, cluster) + return len(clusterVolumes) == 1, err + }) + assert.NilError(t, err) + // check that created volume is now in the list + assert.Assert(t, len(clusterVolumes) == 1) + + // validate the expected labels are in place + // expected volume labels, plus the original label + expected := map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: cluster.Spec.InstanceSets[0].Name, + naming.LabelInstance: cluster.Status.StartupInstance, + naming.LabelRole: naming.RolePostgresData, + naming.LabelData: naming.DataPostgres, + "somelabel": "labelvalue-pgdata", + } + + // ensure volume is found and labeled correctly + var found bool + for i := range clusterVolumes { + if clusterVolumes[i].Name == cluster.Spec.DataSource.Volumes. 
+ PGDataVolume.PVCName { + found = true + assert.DeepEqual(t, expected, clusterVolumes[i].Labels) + } + } + assert.Assert(t, found) + }) + + t.Run("existing pg_wal volume", func(t *testing.T) { + pgWALVolume := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "pgwalvolume", + Namespace: cluster.Namespace, + Labels: map[string]string{ + "somelabel": "labelvalue-pgwal", + }, + }, + Spec: cluster.Spec.InstanceSets[0].DataVolumeClaimSpec, + } + + assert.NilError(t, tClient.Create(ctx, pgWALVolume)) + + // add the pg_wal PVC name to the CRD + cluster.Spec.DataSource.Volumes.PGWALVolume = + &v1beta1.DataSourceVolume{ + PVCName: "pgwalvolume", + } + + clusterVolumes, err := r.observePersistentVolumeClaims(ctx, cluster) + assert.NilError(t, err) + // check that created pgwal volume does not show up in observed volumes + // since it does not have appropriate labels, only the previously created + // pgdata volume should be in the observed list + assert.Assert(t, len(clusterVolumes) == 1) + + clusterVolumes, err = r.configureExistingPVCs(ctx, cluster, + clusterVolumes) + assert.NilError(t, err) + + // now, check that the label volume is returned + assert.Assert(t, len(clusterVolumes) == 2) + + // observe again, but allow time for the change to be observed + err = wait.PollUntilContextTimeout(ctx, time.Second/2, Scale(time.Second*15), false, func(ctx context.Context) (bool, error) { + clusterVolumes, err = r.observePersistentVolumeClaims(ctx, cluster) + return len(clusterVolumes) == 2, err + }) + assert.NilError(t, err) + // check that created volume is now in the list + assert.Assert(t, len(clusterVolumes) == 2) + + // validate the expected labels are in place + // expected volume labels, plus the original label + expected := map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelInstanceSet: cluster.Spec.InstanceSets[0].Name, + naming.LabelInstance: cluster.Status.StartupInstance, + naming.LabelRole: naming.RolePostgresWAL, + naming.LabelData: naming.DataPostgres, + "somelabel": "labelvalue-pgwal", + } + + // ensure volume is found and labeled correctly + var found bool + for i := range clusterVolumes { + if clusterVolumes[i].Name == cluster.Spec.DataSource.Volumes. 
+ PGWALVolume.PVCName { + found = true + assert.DeepEqual(t, expected, clusterVolumes[i].Labels) + } + } + assert.Assert(t, found) + }) + + t.Run("existing repo volume", func(t *testing.T) { + volume := &corev1.PersistentVolumeClaim{ + ObjectMeta: metav1.ObjectMeta{ + Name: "repovolume", + Namespace: cluster.Namespace, + Labels: map[string]string{ + "somelabel": "labelvalue-repo", + }, + }, + Spec: cluster.Spec.InstanceSets[0].DataVolumeClaimSpec, + } + + assert.NilError(t, tClient.Create(ctx, volume)) + + // add the pgBackRest repo PVC name to the CRD + cluster.Spec.DataSource.Volumes.PGBackRestVolume = + &v1beta1.DataSourceVolume{ + PVCName: "repovolume", + } + + clusterVolumes, err := r.observePersistentVolumeClaims(ctx, cluster) + assert.NilError(t, err) + // check that created volume does not show up in observed volumes since + // it does not have appropriate labels + // check that created pgBackRest repo volume does not show up in observed + // volumes since it does not have appropriate labels, only the previously + // created pgdata and pg_wal volumes should be in the observed list + assert.Assert(t, len(clusterVolumes) == 2) + + clusterVolumes, err = r.configureExistingPVCs(ctx, cluster, + clusterVolumes) + assert.NilError(t, err) + + // now, check that the label volume is returned + assert.Assert(t, len(clusterVolumes) == 3) + + // observe again, but allow time for the change to be observed + err = wait.PollUntilContextTimeout(ctx, time.Second/2, Scale(time.Second*15), false, func(ctx context.Context) (bool, error) { + clusterVolumes, err = r.observePersistentVolumeClaims(ctx, cluster) + return len(clusterVolumes) == 3, err + }) + assert.NilError(t, err) + // check that created volume is now in the list + assert.Assert(t, len(clusterVolumes) == 3) + + // validate the expected labels are in place + // expected volume labels, plus the original label + expected := map[string]string{ + naming.LabelCluster: cluster.Name, + naming.LabelData: naming.DataPGBackRest, + naming.LabelPGBackRest: "", + naming.LabelPGBackRestRepo: "repo1", + naming.LabelPGBackRestRepoVolume: "", + "somelabel": "labelvalue-repo", + } + + // ensure volume is found and labeled correctly + var found bool + for i := range clusterVolumes { + if clusterVolumes[i].Name == cluster.Spec.DataSource.Volumes. 
+ PGBackRestVolume.PVCName { + found = true + assert.DeepEqual(t, expected, clusterVolumes[i].Labels) + } + } + assert.Assert(t, found) + }) +} + +func TestReconcileMoveDirectories(t *testing.T) { + ctx := context.Background() + _, tClient := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + r := &Reconciler{Client: tClient, Owner: client.FieldOwner(t.Name())} + + ns := setupNamespace(t, tClient) + cluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "testcluster", + Namespace: ns.GetName(), + }, + Spec: v1beta1.PostgresClusterSpec{ + PostgresVersion: 13, + Image: "example.com/crunchy-postgres-ha:test", + ImagePullPolicy: corev1.PullAlways, + ImagePullSecrets: []corev1.LocalObjectReference{{ + Name: "test-secret", + }}, + DataSource: &v1beta1.DataSource{ + Volumes: &v1beta1.DataSourceVolumes{ + PGDataVolume: &v1beta1.DataSourceVolume{ + PVCName: "testpgdata", + Directory: "testpgdatadir", + }, + PGWALVolume: &v1beta1.DataSourceVolume{ + PVCName: "testwal", + Directory: "testwaldir", + }, + PGBackRestVolume: &v1beta1.DataSourceVolume{ + PVCName: "testrepo", + Directory: "testrepodir", + }, + }, + }, + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + Name: "instance1", + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("1m"), + }, + }, + PriorityClassName: initialize.String("some-priority-class"), + DataVolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{ + corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + }}, + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Image: "example.com/crunchy-pgbackrest:test", + RepoHost: &v1beta1.PGBackRestRepoHost{ + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("1m"), + }, + }, + PriorityClassName: initialize.String("some-priority-class"), + }, + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + Volume: &v1beta1.RepoPVC{ + VolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{ + corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: map[corev1.ResourceName]resource. + Quantity{ + corev1.ResourceStorage: resource. 
+ MustParse("1Gi"), + }, + }, + }, + }, + }, + }, + }, + }, + }, + } + + // create PostgresCluster + assert.NilError(t, tClient.Create(ctx, cluster)) + t.Cleanup(func() { assert.Check(t, tClient.Delete(ctx, cluster)) }) + + returnEarly, err := r.reconcileDirMoveJobs(ctx, cluster) + assert.NilError(t, err) + // returnEarly will initially be true because the Jobs will not have + // completed yet + assert.Assert(t, returnEarly) + + moveJobs := &batchv1.JobList{} + err = r.Client.List(ctx, moveJobs, &client.ListOptions{ + Namespace: cluster.Namespace, + LabelSelector: naming.DirectoryMoveJobLabels(cluster.Name).AsSelector(), + }) + assert.NilError(t, err) + + t.Run("check pgdata move job pod spec", func(t *testing.T) { + + for i := range moveJobs.Items { + if moveJobs.Items[i].Name == "testcluster-move-pgdata-dir" { + compare := ` +automountServiceAccountToken: false +containers: +- command: + - bash + - -ceu + - "echo \"Preparing cluster testcluster volumes for PGO v5.x\"\n echo \"pgdata_pvc=testpgdata\"\n + \ echo \"Current PG data directory volume contents:\" \n ls -lh \"/pgdata\"\n + \ echo \"Now updating PG data directory...\"\n [ -d \"/pgdata/testpgdatadir\" + ] && mv \"/pgdata/testpgdatadir\" \"/pgdata/pg13_bootstrap\"\n rm -f \"/pgdata/pg13/patroni.dynamic.json\"\n + \ echo \"Updated PG data directory contents:\" \n ls -lh \"/pgdata\"\n echo + \"PG Data directory preparation complete\"\n " + image: example.com/crunchy-postgres-ha:test + imagePullPolicy: Always + name: pgdata-move-job + resources: + requests: + cpu: 1m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /pgdata + name: postgres-data +dnsPolicy: ClusterFirst +enableServiceLinks: false +imagePullSecrets: +- name: test-secret +priorityClassName: some-priority-class +restartPolicy: Never +schedulerName: default-scheduler +securityContext: + fsGroup: 26 + fsGroupChangePolicy: OnRootMismatch +terminationGracePeriodSeconds: 30 +volumes: +- name: postgres-data + persistentVolumeClaim: + claimName: testpgdata + ` + + assert.Assert(t, cmp.MarshalMatches(moveJobs.Items[i].Spec.Template.Spec, compare+"\n")) + } + } + + }) + + t.Run("check pgwal move job pod spec", func(t *testing.T) { + + for i := range moveJobs.Items { + if moveJobs.Items[i].Name == "testcluster-move-pgwal-dir" { + compare := ` +automountServiceAccountToken: false +containers: +- command: + - bash + - -ceu + - "echo \"Preparing cluster testcluster volumes for PGO v5.x\"\n echo \"pg_wal_pvc=testwal\"\n + \ echo \"Current PG WAL directory volume contents:\"\n ls -lh \"/pgwal\"\n + \ echo \"Now updating PG WAL directory...\"\n [ -d \"/pgwal/testwaldir\" + ] && mv \"/pgwal/testwaldir\" \"/pgwal/testcluster-wal\"\n echo \"Updated PG + WAL directory contents:\"\n ls -lh \"/pgwal\"\n echo \"PG WAL directory + preparation complete\"\n " + image: example.com/crunchy-postgres-ha:test + imagePullPolicy: Always + name: pgwal-move-job + resources: + requests: + cpu: 1m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /pgwal + name: postgres-wal +dnsPolicy: ClusterFirst 
+enableServiceLinks: false +imagePullSecrets: +- name: test-secret +priorityClassName: some-priority-class +restartPolicy: Never +schedulerName: default-scheduler +securityContext: + fsGroup: 26 + fsGroupChangePolicy: OnRootMismatch +terminationGracePeriodSeconds: 30 +volumes: +- name: postgres-wal + persistentVolumeClaim: + claimName: testwal + ` + + assert.Assert(t, cmp.MarshalMatches(moveJobs.Items[i].Spec.Template.Spec, compare+"\n")) + } + } + + }) + + t.Run("check repo move job pod spec", func(t *testing.T) { + + for i := range moveJobs.Items { + if moveJobs.Items[i].Name == "testcluster-move-pgbackrest-repo-dir" { + compare := ` +automountServiceAccountToken: false +containers: +- command: + - bash + - -ceu + - "echo \"Preparing cluster testcluster pgBackRest repo volume for PGO v5.x\"\n + \ echo \"repo_pvc=testrepo\"\n echo \"pgbackrest directory:\"\n ls -lh + /pgbackrest\n echo \"Current pgBackRest repo directory volume contents:\" \n + \ ls -lh \"/pgbackrest/testrepodir\"\n echo \"Now updating repo directory...\"\n + \ [ -d \"/pgbackrest/testrepodir\" ] && mv -t \"/pgbackrest/\" \"/pgbackrest/testrepodir/archive\"\n + \ [ -d \"/pgbackrest/testrepodir\" ] && mv -t \"/pgbackrest/\" \"/pgbackrest/testrepodir/backup\"\n + \ echo \"Updated /pgbackrest directory contents:\"\n ls -lh \"/pgbackrest\"\n + \ echo \"Repo directory preparation complete\"\n " + image: example.com/crunchy-pgbackrest:test + imagePullPolicy: Always + name: repo-move-job + resources: + requests: + cpu: 1m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /pgbackrest + name: pgbackrest-repo +dnsPolicy: ClusterFirst +enableServiceLinks: false +imagePullSecrets: +- name: test-secret +priorityClassName: some-priority-class +restartPolicy: Never +schedulerName: default-scheduler +securityContext: + fsGroup: 26 + fsGroupChangePolicy: OnRootMismatch +terminationGracePeriodSeconds: 30 +volumes: +- name: pgbackrest-repo + persistentVolumeClaim: + claimName: testrepo + ` + assert.Assert(t, cmp.MarshalMatches(moveJobs.Items[i].Spec.Template.Spec, compare+"\n")) + } + } + + }) +} diff --git a/internal/controller/postgrescluster/watches.go b/internal/controller/postgrescluster/watches.go new file mode 100644 index 0000000000..0b5ba5fa87 --- /dev/null +++ b/internal/controller/postgrescluster/watches.go @@ -0,0 +1,76 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + + "k8s.io/client-go/util/workqueue" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/patroni" +) + +// watchPods returns a handler.EventHandler for Pods. +func (*Reconciler) watchPods() handler.Funcs { + return handler.Funcs{ + UpdateFunc: func(ctx context.Context, e event.UpdateEvent, q workqueue.RateLimitingInterface) { + labels := e.ObjectNew.GetLabels() + cluster := labels[naming.LabelCluster] + + // When a Patroni pod stops being standby leader, the entire cluster + // may have come out of standby. 
Queue an event to start applying + // changes if PostgreSQL is now writable. + if len(cluster) != 0 && + patroni.PodIsStandbyLeader(e.ObjectOld) && + !patroni.PodIsStandbyLeader(e.ObjectNew) { + q.Add(reconcile.Request{NamespacedName: client.ObjectKey{ + Namespace: e.ObjectNew.GetNamespace(), + Name: cluster, + }}) + return + } + + // Queue an event when a Patroni pod indicates it needs to restart + // or finished restarting. + if len(cluster) != 0 && + (patroni.PodRequiresRestart(e.ObjectOld) || + patroni.PodRequiresRestart(e.ObjectNew)) { + q.Add(reconcile.Request{NamespacedName: client.ObjectKey{ + Namespace: e.ObjectNew.GetNamespace(), + Name: cluster, + }}) + return + } + + // Queue an event to start applying changes if the PostgreSQL instance + // now has the "master" role. + if len(cluster) != 0 && + !patroni.PodIsPrimary(e.ObjectOld) && + patroni.PodIsPrimary(e.ObjectNew) { + q.Add(reconcile.Request{NamespacedName: client.ObjectKey{ + Namespace: e.ObjectNew.GetNamespace(), + Name: cluster, + }}) + return + } + + oldAnnotations := e.ObjectOld.GetAnnotations() + newAnnotations := e.ObjectNew.GetAnnotations() + // If the suggested-pgdata-pvc-size annotation is added or changes, reconcile. + if len(cluster) != 0 && oldAnnotations["suggested-pgdata-pvc-size"] != newAnnotations["suggested-pgdata-pvc-size"] { + q.Add(reconcile.Request{NamespacedName: client.ObjectKey{ + Namespace: e.ObjectNew.GetNamespace(), + Name: cluster, + }}) + return + } + }, + } +} diff --git a/internal/controller/postgrescluster/watches_test.go b/internal/controller/postgrescluster/watches_test.go new file mode 100644 index 0000000000..fdea498862 --- /dev/null +++ b/internal/controller/postgrescluster/watches_test.go @@ -0,0 +1,184 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgrescluster + +import ( + "context" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/util/workqueue" + "sigs.k8s.io/controller-runtime/pkg/controller/controllertest" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/reconcile" +) + +func TestWatchPodsUpdate(t *testing.T) { + ctx := context.Background() + queue := &controllertest.Queue{Interface: workqueue.New()} + reconciler := &Reconciler{} + + update := reconciler.watchPods().UpdateFunc + assert.Assert(t, update != nil) + + // No metadata; no reconcile. + update(ctx, event.UpdateEvent{ + ObjectOld: &corev1.Pod{}, + ObjectNew: &corev1.Pod{}, + }, queue) + assert.Equal(t, queue.Len(), 0) + + // Cluster label, but nothing else; no reconcile. + update(ctx, event.UpdateEvent{ + ObjectOld: &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "postgres-operator.crunchydata.com/cluster": "starfish", + }, + }, + }, + ObjectNew: &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Labels: map[string]string{ + "postgres-operator.crunchydata.com/cluster": "starfish", + }, + }, + }, + }, queue) + assert.Equal(t, queue.Len(), 0) + + // Cluster standby leader changed; one reconcile by label. 
+ update(ctx, event.UpdateEvent{ + ObjectOld: &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: map[string]string{ + "status": `{"role":"standby_leader"}`, + }, + Labels: map[string]string{ + "postgres-operator.crunchydata.com/cluster": "starfish", + }, + }, + }, + ObjectNew: &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "some-ns", + Labels: map[string]string{ + "postgres-operator.crunchydata.com/cluster": "starfish", + "postgres-operator.crunchydata.com/role": "master", + }, + }, + }, + }, queue) + assert.Equal(t, queue.Len(), 1) + + item, _ := queue.Get() + expected := reconcile.Request{} + expected.Namespace = "some-ns" + expected.Name = "starfish" + assert.Equal(t, item, expected) + queue.Done(item) + + t.Run("PendingRestart", func(t *testing.T) { + expected := reconcile.Request{} + expected.Namespace = "some-ns" + expected.Name = "starfish" + + base := &corev1.Pod{} + base.Namespace = "some-ns" + base.Labels = map[string]string{ + "postgres-operator.crunchydata.com/cluster": "starfish", + } + + pending := base.DeepCopy() + pending.Annotations = map[string]string{ + "status": `{"pending_restart":true}`, + } + + // Newly pending; one reconcile by label. + update(ctx, event.UpdateEvent{ + ObjectOld: base.DeepCopy(), + ObjectNew: pending.DeepCopy(), + }, queue) + assert.Equal(t, queue.Len(), 1, "expected one reconcile") + + item, _ := queue.Get() + assert.Equal(t, item, expected) + queue.Done(item) + + // Still pending; one reconcile by label. + update(ctx, event.UpdateEvent{ + ObjectOld: pending.DeepCopy(), + ObjectNew: pending.DeepCopy(), + }, queue) + assert.Equal(t, queue.Len(), 1, "expected one reconcile") + + item, _ = queue.Get() + assert.Equal(t, item, expected) + queue.Done(item) + + // No longer pending; one reconcile by label. + update(ctx, event.UpdateEvent{ + ObjectOld: pending.DeepCopy(), + ObjectNew: base.DeepCopy(), + }, queue) + assert.Equal(t, queue.Len(), 1, "expected one reconcile") + + item, _ = queue.Get() + assert.Equal(t, item, expected) + queue.Done(item) + }) + + // Pod annotation with arbitrary key; no reconcile. + update(ctx, event.UpdateEvent{ + ObjectOld: &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: map[string]string{ + "clortho": "vince", + }, + Labels: map[string]string{ + "postgres-operator.crunchydata.com/cluster": "starfish", + }, + }, + }, + ObjectNew: &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: map[string]string{ + "clortho": "vin", + }, + Labels: map[string]string{ + "postgres-operator.crunchydata.com/cluster": "starfish", + }, + }, + }, + }, queue) + assert.Equal(t, queue.Len(), 0) + + // Pod annotation with suggested-pgdata-pvc-size; reconcile. + update(ctx, event.UpdateEvent{ + ObjectOld: &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: map[string]string{ + "suggested-pgdata-pvc-size": "5000Mi", + }, + Labels: map[string]string{ + "postgres-operator.crunchydata.com/cluster": "starfish", + }, + }, + }, + ObjectNew: &corev1.Pod{ + ObjectMeta: metav1.ObjectMeta{ + Annotations: map[string]string{ + "suggested-pgdata-pvc-size": "8000Mi", + }, + Labels: map[string]string{ + "postgres-operator.crunchydata.com/cluster": "starfish", + }, + }, + }, + }, queue) + assert.Equal(t, queue.Len(), 1) +} diff --git a/internal/controller/runtime/client.go b/internal/controller/runtime/client.go new file mode 100644 index 0000000000..4cc05c9835 --- /dev/null +++ b/internal/controller/runtime/client.go @@ -0,0 +1,76 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package runtime + +import ( + "context" + + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// Types that implement single methods of the [client.Reader] interface. +type ( + ClientGet func(context.Context, client.ObjectKey, client.Object, ...client.GetOption) error + ClientList func(context.Context, client.ObjectList, ...client.ListOption) error +) + +// ClientReader implements [client.Reader] by composing assignable functions. +type ClientReader struct { + ClientGet + ClientList +} + +var _ client.Reader = ClientReader{} + +// Types that implement single methods of the [client.Writer] interface. +type ( + ClientCreate func(context.Context, client.Object, ...client.CreateOption) error + ClientDelete func(context.Context, client.Object, ...client.DeleteOption) error + ClientPatch func(context.Context, client.Object, client.Patch, ...client.PatchOption) error + ClientDeleteAll func(context.Context, client.Object, ...client.DeleteAllOfOption) error + ClientUpdate func(context.Context, client.Object, ...client.UpdateOption) error +) + +// ClientWriter implements [client.Writer] by composing assignable functions. +type ClientWriter struct { + ClientCreate + ClientDelete + ClientDeleteAll + ClientPatch + ClientUpdate +} + +var _ client.Writer = ClientWriter{} + +// NOTE: The following implementations can go away following https://go.dev/issue/47487. +// The function types above would become single-method interfaces. + +func (fn ClientCreate) Create(ctx context.Context, obj client.Object, opts ...client.CreateOption) error { + return fn(ctx, obj, opts...) +} + +func (fn ClientDelete) Delete(ctx context.Context, obj client.Object, opts ...client.DeleteOption) error { + return fn(ctx, obj, opts...) +} + +func (fn ClientDeleteAll) DeleteAllOf(ctx context.Context, obj client.Object, opts ...client.DeleteAllOfOption) error { + return fn(ctx, obj, opts...) +} + +func (fn ClientGet) Get(ctx context.Context, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error { + return fn(ctx, key, obj, opts...) +} + +func (fn ClientList) List(ctx context.Context, list client.ObjectList, opts ...client.ListOption) error { + return fn(ctx, list, opts...) +} + +func (fn ClientPatch) Patch(ctx context.Context, obj client.Object, patch client.Patch, opts ...client.PatchOption) error { + return fn(ctx, obj, patch, opts...) +} + +func (fn ClientUpdate) Update(ctx context.Context, obj client.Object, opts ...client.UpdateOption) error { + return fn(ctx, obj, opts...) +} diff --git a/internal/controller/runtime/pod_client.go b/internal/controller/runtime/pod_client.go new file mode 100644 index 0000000000..e842601aa7 --- /dev/null +++ b/internal/controller/runtime/pod_client.go @@ -0,0 +1,68 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package runtime + +import ( + "context" + "io" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/runtime/serializer" + "k8s.io/client-go/kubernetes/scheme" + "k8s.io/client-go/rest" + "k8s.io/client-go/tools/remotecommand" + "sigs.k8s.io/controller-runtime/pkg/client/apiutil" +) + +// podExecutor runs command on container in pod in namespace. Non-nil streams +// (stdin, stdout, and stderr) are attached the to the remote process. 
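+// A minimal usage sketch (the container name and command are placeholders):
+//
+//    var stdout, stderr bytes.Buffer
+//    err := exec(ctx, namespace, podName, "database",
+//        nil, &stdout, &stderr, "psql", "--version")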
+type podExecutor func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, +) error + +func newPodClient(config *rest.Config) (rest.Interface, error) { + codecs := serializer.NewCodecFactory(scheme.Scheme) + gvk, _ := apiutil.GVKForObject(&corev1.Pod{}, scheme.Scheme) + httpClient, err := rest.HTTPClientFor(config) + if err != nil { + return nil, err + } + return apiutil.RESTClientForGVK(gvk, false, config, codecs, httpClient) +} + +// +kubebuilder:rbac:groups="",resources="pods/exec",verbs={create} + +func NewPodExecutor(config *rest.Config) (podExecutor, error) { + client, err := newPodClient(config) + + return func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + request := client.Post(). + Resource("pods").SubResource("exec"). + Namespace(namespace).Name(pod). + VersionedParams(&corev1.PodExecOptions{ + Container: container, + Command: command, + Stdin: stdin != nil, + Stdout: stdout != nil, + Stderr: stderr != nil, + }, scheme.ParameterCodec) + + exec, err := remotecommand.NewSPDYExecutor(config, "POST", request.URL()) + + if err == nil { + err = exec.StreamWithContext(ctx, remotecommand.StreamOptions{ + Stdin: stdin, + Stdout: stdout, + Stderr: stderr, + }) + } + + return err + }, err +} diff --git a/internal/controller/runtime/reconcile.go b/internal/controller/runtime/reconcile.go new file mode 100644 index 0000000000..a2196d1626 --- /dev/null +++ b/internal/controller/runtime/reconcile.go @@ -0,0 +1,69 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package runtime + +import ( + "time" + + "sigs.k8s.io/controller-runtime/pkg/reconcile" +) + +// ErrorWithBackoff returns a Result and error that indicate a non-nil err +// should be logged and measured and its [reconcile.Request] should be retried +// later. When err is nil, nothing is logged and the Request is not retried. +// When err unwraps to [reconcile.TerminalError], the Request is not retried. +func ErrorWithBackoff(err error) (reconcile.Result, error) { + // Result should be zero to avoid warning messages. + return reconcile.Result{}, err + + // When error is not nil and not a TerminalError, the controller-runtime Controller + // puts [reconcile.Request] back into the workqueue using AddRateLimited. + // - https://github.com/kubernetes-sigs/controller-runtime/blob/v0.18.4/pkg/internal/controller/controller.go#L317 + // - https://pkg.go.dev/k8s.io/client-go/util/workqueue#RateLimitingInterface +} + +// ErrorWithoutBackoff returns a Result and error that indicate a non-nil err +// should be logged and measured without retrying its [reconcile.Request]. +// When err is nil, nothing is logged and the Request is not retried. +func ErrorWithoutBackoff(err error) (reconcile.Result, error) { + if err != nil { + err = reconcile.TerminalError(err) + } + + // Result should be zero to avoid warning messages. + return reconcile.Result{}, err + + // When error is a TerminalError, the controller-runtime Controller increments + // a counter rather than interact with the workqueue. + // - https://github.com/kubernetes-sigs/controller-runtime/blob/v0.18.4/pkg/internal/controller/controller.go#L314 +} + +// RequeueWithBackoff returns a Result that indicates a [reconcile.Request] +// should be retried later. 
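+// A Reconcile implementation that wants a rate-limited retry without
+// reporting an error can return it directly (illustrative):
+//
+//    return runtime.RequeueWithBackoff(), nil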
+func RequeueWithBackoff() reconcile.Result { + return reconcile.Result{Requeue: true} + + // When [reconcile.Result].Requeue is true, the controller-runtime Controller + // puts [reconcile.Request] back into the workqueue using AddRateLimited. + // - https://github.com/kubernetes-sigs/controller-runtime/blob/v0.18.4/pkg/internal/controller/controller.go#L334 + // - https://pkg.go.dev/k8s.io/client-go/util/workqueue#RateLimitingInterface +} + +// RequeueWithoutBackoff returns a Result that indicates a [reconcile.Request] +// should be retried on or before delay. +func RequeueWithoutBackoff(delay time.Duration) reconcile.Result { + // RequeueAfter must be positive to not backoff. + if delay <= 0 { + delay = time.Nanosecond + } + + // RequeueAfter implies Requeue, but set both to remove any ambiguity. + return reconcile.Result{Requeue: true, RequeueAfter: delay} + + // When [reconcile.Result].RequeueAfter is positive, the controller-runtime Controller + // puts [reconcile.Request] back into the workqueue using AddAfter. + // - https://github.com/kubernetes-sigs/controller-runtime/blob/v0.18.4/pkg/internal/controller/controller.go#L325 + // - https://pkg.go.dev/k8s.io/client-go/util/workqueue#DelayingInterface +} diff --git a/internal/controller/runtime/reconcile_test.go b/internal/controller/runtime/reconcile_test.go new file mode 100644 index 0000000000..925b3cf47d --- /dev/null +++ b/internal/controller/runtime/reconcile_test.go @@ -0,0 +1,57 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package runtime + +import ( + "errors" + "testing" + "time" + + "gotest.tools/v3/assert" + "sigs.k8s.io/controller-runtime/pkg/reconcile" +) + +func TestErrorWithBackoff(t *testing.T) { + result, err := ErrorWithBackoff(nil) + assert.Assert(t, result.IsZero()) + assert.NilError(t, err) + + expected := errors.New("doot") + result, err = ErrorWithBackoff(expected) + assert.Assert(t, result.IsZero()) + assert.Equal(t, err, expected) +} + +func TestErrorWithoutBackoff(t *testing.T) { + result, err := ErrorWithoutBackoff(nil) + assert.Assert(t, result.IsZero()) + assert.NilError(t, err) + + expected := errors.New("doot") + result, err = ErrorWithoutBackoff(expected) + assert.Assert(t, result.IsZero()) + assert.Assert(t, errors.Is(err, reconcile.TerminalError(nil))) + assert.Equal(t, errors.Unwrap(err), expected) +} + +func TestRequeueWithBackoff(t *testing.T) { + result := RequeueWithBackoff() + assert.Assert(t, result.Requeue) + assert.Assert(t, result.RequeueAfter == 0) +} + +func TestRequeueWithoutBackoff(t *testing.T) { + result := RequeueWithoutBackoff(0) + assert.Assert(t, result.Requeue) + assert.Assert(t, result.RequeueAfter > 0) + + result = RequeueWithoutBackoff(-1) + assert.Assert(t, result.Requeue) + assert.Assert(t, result.RequeueAfter > 0) + + result = RequeueWithoutBackoff(time.Minute) + assert.Assert(t, result.Requeue) + assert.Equal(t, result.RequeueAfter, time.Minute) +} diff --git a/internal/controller/runtime/runtime.go b/internal/controller/runtime/runtime.go new file mode 100644 index 0000000000..34bfeabf61 --- /dev/null +++ b/internal/controller/runtime/runtime.go @@ -0,0 +1,76 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package runtime + +import ( + "context" + + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/client-go/kubernetes/scheme" + "k8s.io/client-go/rest" + "sigs.k8s.io/controller-runtime/pkg/cache" + "sigs.k8s.io/controller-runtime/pkg/client/config" + "sigs.k8s.io/controller-runtime/pkg/log" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/manager/signals" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" + + volumesnapshotv1 "github.com/kubernetes-csi/external-snapshotter/client/v8/apis/volumesnapshot/v1" +) + +type ( + CacheConfig = cache.Config + Manager = manager.Manager + Options = manager.Options +) + +// Scheme associates standard Kubernetes API objects and PGO API objects with Go structs. +var Scheme *runtime.Scheme = runtime.NewScheme() + +func init() { + if err := scheme.AddToScheme(Scheme); err != nil { + panic(err) + } + if err := v1beta1.AddToScheme(Scheme); err != nil { + panic(err) + } + if err := volumesnapshotv1.AddToScheme(Scheme); err != nil { + panic(err) + } +} + +// GetConfig returns a Kubernetes client configuration from KUBECONFIG or the +// service account Kubernetes gives to pods. +func GetConfig() (*rest.Config, error) { return config.GetConfig() } + +// NewManager returns a Manager that interacts with the Kubernetes API of config. +// When config is nil, it reads from KUBECONFIG or the local service account. +// When options.Scheme is nil, it uses the Scheme from this package. +func NewManager(config *rest.Config, options manager.Options) (manager.Manager, error) { + var m manager.Manager + var err error + + if config == nil { + config, err = GetConfig() + } + + if options.Scheme == nil { + options.Scheme = Scheme + } + + if err == nil { + m, err = manager.New(config, options) + } + + return m, err +} + +// SetLogger assigns the default Logger used by [sigs.k8s.io/controller-runtime]. +func SetLogger(logger logging.Logger) { log.SetLogger(logger) } + +// SignalHandler returns a Context that is canceled on SIGINT or SIGTERM. +func SignalHandler() context.Context { return signals.SetupSignalHandler() } diff --git a/internal/controller/runtime/ticker.go b/internal/controller/runtime/ticker.go new file mode 100644 index 0000000000..830179eafc --- /dev/null +++ b/internal/controller/runtime/ticker.go @@ -0,0 +1,70 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package runtime + +import ( + "context" + "time" + + "k8s.io/client-go/util/workqueue" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/handler" + "sigs.k8s.io/controller-runtime/pkg/source" +) + +type ticker struct { + time.Duration + event.GenericEvent + Handler handler.EventHandler + Immediate bool +} + +// NewTicker returns a Source that emits e every d. +func NewTicker(d time.Duration, e event.GenericEvent, + h handler.EventHandler) source.Source { + return &ticker{Duration: d, GenericEvent: e, Handler: h} +} + +// NewTickerImmediate returns a Source that emits e at start and every d. +func NewTickerImmediate(d time.Duration, e event.GenericEvent, + h handler.EventHandler) source.Source { + return &ticker{Duration: d, GenericEvent: e, Handler: h, Immediate: true} +} + +func (t ticker) String() string { return "every " + t.Duration.String() } + +// Start is called by controller-runtime Controller and returns quickly. 
+// It cleans up when ctx is cancelled. +func (t ticker) Start( + ctx context.Context, q workqueue.RateLimitingInterface, +) error { + ticker := time.NewTicker(t.Duration) + + // Pass t.GenericEvent to h when it is not filtered out by p. + // - https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/source/internal#EventHandler + emit := func() { + t.Handler.Generic(ctx, t.GenericEvent, q) + } + + if t.Immediate { + emit() + } + + // Repeat until ctx is cancelled. + go func() { + defer ticker.Stop() + + for { + select { + case <-ticker.C: + emit() + case <-ctx.Done(): + return + } + } + }() + + return nil +} diff --git a/internal/controller/runtime/ticker_test.go b/internal/controller/runtime/ticker_test.go new file mode 100644 index 0000000000..49cecd79d7 --- /dev/null +++ b/internal/controller/runtime/ticker_test.go @@ -0,0 +1,70 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package runtime + +import ( + "context" + "testing" + "time" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "k8s.io/client-go/util/workqueue" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/handler" +) + +func TestTickerString(t *testing.T) { + assert.Equal(t, ticker{Duration: time.Millisecond}.String(), "every 1ms") + assert.Equal(t, ticker{Duration: 10 * time.Second}.String(), "every 10s") + assert.Equal(t, ticker{Duration: time.Hour}.String(), "every 1h0m0s") +} + +func TestTicker(t *testing.T) { + t.Parallel() + + var called []event.GenericEvent + expected := event.GenericEvent{Object: new(corev1.ConfigMap)} + + tq := workqueue.NewRateLimitingQueue(workqueue.DefaultItemBasedRateLimiter()) + th := handler.Funcs{GenericFunc: func(ctx context.Context, e event.GenericEvent, q workqueue.RateLimitingInterface) { + called = append(called, e) + + assert.Equal(t, q, tq, "should be called with the queue passed in Start") + }} + + t.Run("NotImmediate", func(t *testing.T) { + called = nil + + ticker := NewTicker(100*time.Millisecond, expected, th) + ctx, cancel := context.WithTimeout(context.Background(), 250*time.Millisecond) + t.Cleanup(cancel) + + // Start the ticker and wait for the deadline to pass. + assert.NilError(t, ticker.Start(ctx, tq)) + <-ctx.Done() + + assert.Equal(t, len(called), 2) + assert.Equal(t, called[0], expected, "expected at 100ms") + assert.Equal(t, called[1], expected, "expected at 200ms") + }) + + t.Run("Immediate", func(t *testing.T) { + called = nil + + ticker := NewTickerImmediate(100*time.Millisecond, expected, th) + ctx, cancel := context.WithTimeout(context.Background(), 250*time.Millisecond) + t.Cleanup(cancel) + + // Start the ticker and wait for the deadline to pass. + assert.NilError(t, ticker.Start(ctx, tq)) + <-ctx.Done() + + assert.Assert(t, len(called) > 2) + assert.Equal(t, called[0], expected, "expected at 0ms") + assert.Equal(t, called[1], expected, "expected at 100ms") + assert.Equal(t, called[2], expected, "expected at 200ms") + }) +} diff --git a/internal/controller/standalone_pgadmin/apply.go b/internal/controller/standalone_pgadmin/apply.go new file mode 100644 index 0000000000..0eaa613df8 --- /dev/null +++ b/internal/controller/standalone_pgadmin/apply.go @@ -0,0 +1,47 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. 
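As a hypothetical illustration of how the ticker Source could be consumed (the periodicReconciler name, object key, and five-minute interval are assumptions, not part of this change), a controller might register it alongside its usual watches so one request is re-queued on a schedule:

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	"github.com/crunchydata/postgres-operator/internal/controller/runtime"
)

type periodicReconciler struct{}

func (periodicReconciler) Reconcile(context.Context, reconcile.Request) (reconcile.Result, error) {
	return reconcile.Result{}, nil
}

func addPeriodicReconcile(mgr ctrl.Manager) error {
	// Emit a GenericEvent at startup and every five minutes; the handler turns
	// each event into a request for one well-known object.
	tick := runtime.NewTickerImmediate(5*time.Minute, event.GenericEvent{},
		handler.EnqueueRequestsFromMapFunc(func(context.Context, client.Object) []reconcile.Request {
			return []reconcile.Request{{NamespacedName: types.NamespacedName{
				Namespace: "default", Name: "example",
			}}}
		}))

	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		WatchesRawSource(tick).
		Complete(periodicReconciler{})
}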
+// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + "reflect" + + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// patch sends patch to object's endpoint in the Kubernetes API and updates +// object with any returned content. The fieldManager is set to r.Owner, but +// can be overridden in options. +// - https://docs.k8s.io/reference/using-api/server-side-apply/#managers +// +// TODO(tjmoore4): This function is duplicated from a version that takes a PostgresCluster object. +func (r *PGAdminReconciler) patch( + ctx context.Context, object client.Object, + patch client.Patch, options ...client.PatchOption, +) error { + options = append([]client.PatchOption{r.Owner}, options...) + return r.Client.Patch(ctx, object, patch, options...) +} + +// apply sends an apply patch to object's endpoint in the Kubernetes API and +// updates object with any returned content. The fieldManager is set to +// r.Owner and the force parameter is true. +// - https://docs.k8s.io/reference/using-api/server-side-apply/#managers +// - https://docs.k8s.io/reference/using-api/server-side-apply/#conflicts +// +// TODO(tjmoore4): This function is duplicated from a version that takes a PostgresCluster object. +func (r *PGAdminReconciler) apply(ctx context.Context, object client.Object) error { + // Generate an apply-patch by comparing the object to its zero value. + zero := reflect.New(reflect.TypeOf(object).Elem()).Interface() + data, err := client.MergeFrom(zero.(client.Object)).Data(object) + apply := client.RawPatch(client.Apply.Type(), data) + + // Send the apply-patch with force=true. + if err == nil { + err = r.patch(ctx, object, apply, client.ForceOwnership) + } + + return err +} diff --git a/internal/controller/standalone_pgadmin/config.go b/internal/controller/standalone_pgadmin/config.go new file mode 100644 index 0000000000..ddd080985b --- /dev/null +++ b/internal/controller/standalone_pgadmin/config.go @@ -0,0 +1,19 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +// Include configs here used by multiple files +const ( + // ConfigMap keys used also in mounting volume to pod + settingsConfigMapKey = "pgadmin-settings.json" + settingsClusterMapKey = "pgadmin-shared-clusters.json" + gunicornConfigKey = "gunicorn-config.json" + + // Port address used to define pod and service + pgAdminPort = 5050 + + // Directory for pgAdmin in container + pgAdminDir = "/usr/local/lib/python3.11/site-packages/pgadmin4" +) diff --git a/internal/controller/standalone_pgadmin/configmap.go b/internal/controller/standalone_pgadmin/configmap.go new file mode 100644 index 0000000000..d1ec39bf13 --- /dev/null +++ b/internal/controller/standalone_pgadmin/configmap.go @@ -0,0 +1,209 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "sort" + "strconv" + + corev1 "k8s.io/api/core/v1" + + "github.com/pkg/errors" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={get} +// +kubebuilder:rbac:groups="",resources="configmaps",verbs={create,delete,patch} + +// reconcilePGAdminConfigMap writes the ConfigMap for pgAdmin. 
+func (r *PGAdminReconciler) reconcilePGAdminConfigMap( + ctx context.Context, pgadmin *v1beta1.PGAdmin, + clusters map[string]*v1beta1.PostgresClusterList, +) (*corev1.ConfigMap, error) { + configmap, err := configmap(pgadmin, clusters) + if err == nil { + err = errors.WithStack(r.setControllerReference(pgadmin, configmap)) + } + if err == nil { + err = errors.WithStack(r.apply(ctx, configmap)) + } + + return configmap, err +} + +// configmap returns a v1.ConfigMap for pgAdmin. +func configmap(pgadmin *v1beta1.PGAdmin, + clusters map[string]*v1beta1.PostgresClusterList, +) (*corev1.ConfigMap, error) { + configmap := &corev1.ConfigMap{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + configmap.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + + configmap.Annotations = pgadmin.Spec.Metadata.GetAnnotationsOrNil() + configmap.Labels = naming.Merge( + pgadmin.Spec.Metadata.GetLabelsOrNil(), + naming.StandalonePGAdminLabels(pgadmin.Name)) + + // TODO(tjmoore4): Populate configuration details. + initialize.Map(&configmap.Data) + configSettings, err := generateConfig(pgadmin) + if err == nil { + configmap.Data[settingsConfigMapKey] = configSettings + } + + clusterSettings, err := generateClusterConfig(clusters) + if err == nil { + configmap.Data[settingsClusterMapKey] = clusterSettings + } + + gunicornSettings, err := generateGunicornConfig(pgadmin) + if err == nil { + configmap.Data[gunicornConfigKey] = gunicornSettings + } + + return configmap, err +} + +// generateConfig generates the config settings for the pgAdmin +func generateConfig(pgadmin *v1beta1.PGAdmin) (string, error) { + settings := map[string]any{ + // Bind to all IPv4 addresses by default. "0.0.0.0" here represents INADDR_ANY. + // - https://flask.palletsprojects.com/en/2.2.x/api/#flask.Flask.run + // - https://flask.palletsprojects.com/en/2.3.x/api/#flask.Flask.run + "DEFAULT_SERVER": "0.0.0.0", + } + + // Copy any specified settings over the defaults. + for k, v := range pgadmin.Spec.Config.Settings { + settings[k] = v + } + + // Write mandatory settings over any specified ones. + // SERVER_MODE must always be enabled when running on a webserver. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-7_7/web/config.py#L110 + settings["SERVER_MODE"] = true + settings["UPGRADE_CHECK_ENABLED"] = false + settings["UPGRADE_CHECK_URL"] = "" + settings["UPGRADE_CHECK_KEY"] = "" + + // To avoid spurious reconciles, the following value must not change when + // the spec does not change. [json.Encoder] and [json.Marshal] do this by + // emitting map keys in sorted order. Indent so the value is not rendered + // as one long line by `kubectl`. + buffer := new(bytes.Buffer) + encoder := json.NewEncoder(buffer) + encoder.SetEscapeHTML(false) + encoder.SetIndent("", " ") + err := encoder.Encode(settings) + + return buffer.String(), err +} + +// generateClusterConfig generates the settings for the servers registered in pgAdmin. +// pgAdmin's `setup.py --load-server` function ingests this list of servers as JSON, +// in the following form: +// +// { +// "Servers": { +// "1": { +// "Name": "Minimally Defined Server", +// "Group": "Server Group 1", +// "Port": 5432, +// "Username": "postgres", +// "Host": "localhost", +// "SSLMode": "prefer", +// "MaintenanceDB": "postgres" +// }, +// "2": { ... } +// } +// } +func generateClusterConfig( + clusters map[string]*v1beta1.PostgresClusterList, +) (string, error) { + // To avoid spurious reconciles, the following value must not change when + // the spec does not change. 
[json.Encoder] and [json.Marshal] do this by + // emitting map keys in sorted order. Indent so the value is not rendered + // as one long line by `kubectl`. + buffer := new(bytes.Buffer) + encoder := json.NewEncoder(buffer) + encoder.SetEscapeHTML(false) + encoder.SetIndent("", " ") + + // To avoid spurious reconciles, we want to keep the `clusters` order consistent + // which we can do by + // a) sorting the ServerGroup name used as a key; and + // b) sorting the clusters by name; + keys := []string{} + for key := range clusters { + keys = append(keys, key) + } + sort.Strings(keys) + + clusterServers := map[int]any{} + for _, serverGroupName := range keys { + sort.Slice(clusters[serverGroupName].Items, + func(i, j int) bool { + return clusters[serverGroupName].Items[i].Name < clusters[serverGroupName].Items[j].Name + }) + for _, cluster := range clusters[serverGroupName].Items { + object := map[string]any{ + "Name": cluster.Name, + "Group": serverGroupName, + "Host": fmt.Sprintf("%s-primary.%s.svc", cluster.Name, cluster.Namespace), + "Port": 5432, + "MaintenanceDB": "postgres", + "Username": cluster.Name, + // `SSLMode` and some other settings may need to be set by the user in the future + "SSLMode": "prefer", + "Shared": true, + } + clusterServers[len(clusterServers)+1] = object + } + } + servers := map[string]any{ + "Servers": clusterServers, + } + err := encoder.Encode(servers) + return buffer.String(), err +} + +// generateGunicornConfig generates the config settings for the gunicorn server +// - https://docs.gunicorn.org/en/latest/settings.html +func generateGunicornConfig(pgadmin *v1beta1.PGAdmin) (string, error) { + settings := map[string]any{ + // Bind to all IPv4 addresses and set 25 threads by default. + // - https://docs.gunicorn.org/en/latest/settings.html#bind + // - https://docs.gunicorn.org/en/latest/settings.html#threads + "bind": "0.0.0.0:" + strconv.Itoa(pgAdminPort), + "threads": 25, + } + + // Copy any specified settings over the defaults. + for k, v := range pgadmin.Spec.Config.Gunicorn { + settings[k] = v + } + + // Write mandatory settings over any specified ones. + // - https://docs.gunicorn.org/en/latest/settings.html#workers + settings["workers"] = 1 + + // To avoid spurious reconciles, the following value must not change when + // the spec does not change. [json.Encoder] and [json.Marshal] do this by + // emitting map keys in sorted order. Indent so the value is not rendered + // as one long line by `kubectl`. + buffer := new(bytes.Buffer) + encoder := json.NewEncoder(buffer) + encoder.SetEscapeHTML(false) + encoder.SetIndent("", " ") + err := encoder.Encode(settings) + + return buffer.String(), err +} diff --git a/internal/controller/standalone_pgadmin/configmap_test.go b/internal/controller/standalone_pgadmin/configmap_test.go new file mode 100644 index 0000000000..5a844e520c --- /dev/null +++ b/internal/controller/standalone_pgadmin/configmap_test.go @@ -0,0 +1,293 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. 
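Both generators rely on encoding/json emitting map keys in sorted order. A tiny, self-contained sketch (values here are only examples) shows why the rendered settings stay byte-for-byte stable across reconciles no matter how the map was built:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	// Insertion order deliberately differs from the output order.
	settings := map[string]any{"workers": 1, "bind": "0.0.0.0:5050", "threads": 25}

	buffer := new(bytes.Buffer)
	encoder := json.NewEncoder(buffer)
	encoder.SetEscapeHTML(false)
	encoder.SetIndent("", "  ")
	_ = encoder.Encode(settings)

	// Always prints keys as bind, threads, workers, so the ConfigMap content
	// only changes when the spec changes.
	fmt.Print(buffer.String())
}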
+// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestGenerateConfig(t *testing.T) { + require.ParallelCapacity(t, 0) + + t.Run("Default", func(t *testing.T) { + pgadmin := new(v1beta1.PGAdmin) + result, err := generateConfig(pgadmin) + + assert.NilError(t, err) + assert.Equal(t, result, `{ + "DEFAULT_SERVER": "0.0.0.0", + "SERVER_MODE": true, + "UPGRADE_CHECK_ENABLED": false, + "UPGRADE_CHECK_KEY": "", + "UPGRADE_CHECK_URL": "" +}`+"\n") + }) + + t.Run("Mandatory", func(t *testing.T) { + pgadmin := new(v1beta1.PGAdmin) + pgadmin.Spec.Config.Settings = map[string]any{ + "SERVER_MODE": false, + "UPGRADE_CHECK_ENABLED": true, + } + result, err := generateConfig(pgadmin) + + assert.NilError(t, err) + assert.Equal(t, result, `{ + "DEFAULT_SERVER": "0.0.0.0", + "SERVER_MODE": true, + "UPGRADE_CHECK_ENABLED": false, + "UPGRADE_CHECK_KEY": "", + "UPGRADE_CHECK_URL": "" +}`+"\n") + }) + + t.Run("Specified", func(t *testing.T) { + pgadmin := new(v1beta1.PGAdmin) + pgadmin.Spec.Config.Settings = map[string]any{ + "ALLOWED_HOSTS": []any{"225.0.0.0/8", "226.0.0.0/7", "228.0.0.0/6"}, + "DEFAULT_SERVER": "::", + } + result, err := generateConfig(pgadmin) + + assert.NilError(t, err) + assert.Equal(t, result, `{ + "ALLOWED_HOSTS": [ + "225.0.0.0/8", + "226.0.0.0/7", + "228.0.0.0/6" + ], + "DEFAULT_SERVER": "::", + "SERVER_MODE": true, + "UPGRADE_CHECK_ENABLED": false, + "UPGRADE_CHECK_KEY": "", + "UPGRADE_CHECK_URL": "" +}`+"\n") + }) +} + +func TestGenerateClusterConfig(t *testing.T) { + require.ParallelCapacity(t, 0) + + cluster := testCluster() + cluster.Namespace = "postgres-operator" + clusterList := &v1beta1.PostgresClusterList{ + Items: []v1beta1.PostgresCluster{*cluster, *cluster}, + } + clusters := map[string]*v1beta1.PostgresClusterList{ + "shared": clusterList, + "test": clusterList, + "hello": clusterList, + } + + expectedString := `{ + "Servers": { + "1": { + "Group": "hello", + "Host": "hippo-primary.postgres-operator.svc", + "MaintenanceDB": "postgres", + "Name": "hippo", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "hippo" + }, + "2": { + "Group": "hello", + "Host": "hippo-primary.postgres-operator.svc", + "MaintenanceDB": "postgres", + "Name": "hippo", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "hippo" + }, + "3": { + "Group": "shared", + "Host": "hippo-primary.postgres-operator.svc", + "MaintenanceDB": "postgres", + "Name": "hippo", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "hippo" + }, + "4": { + "Group": "shared", + "Host": "hippo-primary.postgres-operator.svc", + "MaintenanceDB": "postgres", + "Name": "hippo", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "hippo" + }, + "5": { + "Group": "test", + "Host": "hippo-primary.postgres-operator.svc", + "MaintenanceDB": "postgres", + "Name": "hippo", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "hippo" + }, + "6": { + "Group": "test", + "Host": "hippo-primary.postgres-operator.svc", + "MaintenanceDB": "postgres", + "Name": "hippo", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "hippo" + } + } +} +` + actualString, err := generateClusterConfig(clusters) + 
assert.NilError(t, err) + assert.Equal(t, actualString, expectedString) +} + +func TestGeneratePGAdminConfigMap(t *testing.T) { + require.ParallelCapacity(t, 0) + + pgadmin := new(v1beta1.PGAdmin) + pgadmin.Namespace = "some-ns" + pgadmin.Name = "pg1" + clusters := map[string]*v1beta1.PostgresClusterList{} + t.Run("Data,ObjectMeta,TypeMeta", func(t *testing.T) { + pgadmin := pgadmin.DeepCopy() + + configmap, err := configmap(pgadmin, clusters) + + assert.NilError(t, err) + assert.Assert(t, cmp.MarshalMatches(configmap.TypeMeta, ` +apiVersion: v1 +kind: ConfigMap + `)) + assert.Assert(t, cmp.MarshalMatches(configmap.ObjectMeta, ` +creationTimestamp: null +labels: + postgres-operator.crunchydata.com/pgadmin: pg1 + postgres-operator.crunchydata.com/role: pgadmin +name: pgadmin- +namespace: some-ns + `)) + + assert.Assert(t, len(configmap.Data) > 0, "expected some configuration") + }) + + t.Run("Annotations,Labels", func(t *testing.T) { + pgadmin := pgadmin.DeepCopy() + pgadmin.Spec.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{"a": "v1", "b": "v2"}, + Labels: map[string]string{"c": "v3", "d": "v4"}, + } + + configmap, err := configmap(pgadmin, clusters) + + assert.NilError(t, err) + // Annotations present in the metadata. + assert.DeepEqual(t, configmap.ObjectMeta.Annotations, map[string]string{ + "a": "v1", "b": "v2", + }) + + // Labels present in the metadata. + assert.DeepEqual(t, configmap.ObjectMeta.Labels, map[string]string{ + "c": "v3", "d": "v4", + "postgres-operator.crunchydata.com/pgadmin": "pg1", + "postgres-operator.crunchydata.com/role": "pgadmin", + }) + }) +} + +func TestGenerateGunicornConfig(t *testing.T) { + require.ParallelCapacity(t, 0) + + t.Run("Default", func(t *testing.T) { + pgAdmin := &v1beta1.PGAdmin{} + pgAdmin.Name = "test" + pgAdmin.Namespace = "postgres-operator" + + expectedString := `{ + "bind": "0.0.0.0:5050", + "threads": 25, + "workers": 1 +} +` + actualString, err := generateGunicornConfig(pgAdmin) + assert.NilError(t, err) + assert.Equal(t, actualString, expectedString) + }) + + t.Run("Add Settings", func(t *testing.T) { + pgAdmin := &v1beta1.PGAdmin{} + pgAdmin.Name = "test" + pgAdmin.Namespace = "postgres-operator" + pgAdmin.Spec.Config.Gunicorn = map[string]any{ + "keyfile": "/path/to/keyfile", + "certfile": "/path/to/certfile", + } + + expectedString := `{ + "bind": "0.0.0.0:5050", + "certfile": "/path/to/certfile", + "keyfile": "/path/to/keyfile", + "threads": 25, + "workers": 1 +} +` + actualString, err := generateGunicornConfig(pgAdmin) + assert.NilError(t, err) + assert.Equal(t, actualString, expectedString) + }) + + t.Run("Update Defaults", func(t *testing.T) { + pgAdmin := &v1beta1.PGAdmin{} + pgAdmin.Name = "test" + pgAdmin.Namespace = "postgres-operator" + pgAdmin.Spec.Config.Gunicorn = map[string]any{ + "bind": "127.0.0.1:5051", + "threads": 30, + } + + expectedString := `{ + "bind": "127.0.0.1:5051", + "threads": 30, + "workers": 1 +} +` + actualString, err := generateGunicornConfig(pgAdmin) + assert.NilError(t, err) + assert.Equal(t, actualString, expectedString) + }) + + t.Run("Update Mandatory", func(t *testing.T) { + pgAdmin := &v1beta1.PGAdmin{} + pgAdmin.Name = "test" + pgAdmin.Namespace = "postgres-operator" + pgAdmin.Spec.Config.Gunicorn = map[string]any{ + "workers": "100", + } + + expectedString := `{ + "bind": "0.0.0.0:5050", + "threads": 25, + "workers": 1 +} +` + actualString, err := generateGunicornConfig(pgAdmin) + assert.NilError(t, err) + assert.Equal(t, actualString, expectedString) + }) + +} diff --git 
a/internal/controller/standalone_pgadmin/controller.go b/internal/controller/standalone_pgadmin/controller.go new file mode 100644 index 0000000000..81d5fc2d40 --- /dev/null +++ b/internal/controller/standalone_pgadmin/controller.go @@ -0,0 +1,178 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + "io" + + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/equality" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/tools/record" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil" + + controllerruntime "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// PGAdminReconciler reconciles a PGAdmin object +type PGAdminReconciler struct { + client.Client + Owner client.FieldOwner + PodExec func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error + Recorder record.EventRecorder + IsOpenShift bool +} + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgadmins",verbs={list,watch} +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="postgresclusters",verbs={list,watch} +//+kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={list,watch} +//+kubebuilder:rbac:groups="",resources="secrets",verbs={list,watch} +//+kubebuilder:rbac:groups="",resources="configmaps",verbs={list,watch} +//+kubebuilder:rbac:groups="apps",resources="statefulsets",verbs={list,watch} + +// SetupWithManager sets up the controller with the Manager. +// +// TODO(tjmoore4): This function is duplicated from a version that takes a PostgresCluster object. +func (r *PGAdminReconciler) SetupWithManager(mgr ctrl.Manager) error { + if r.PodExec == nil { + var err error + r.PodExec, err = controllerruntime.NewPodExecutor(mgr.GetConfig()) + if err != nil { + return err + } + } + + return ctrl.NewControllerManagedBy(mgr). + For(&v1beta1.PGAdmin{}). + Owns(&corev1.ConfigMap{}). + Owns(&corev1.PersistentVolumeClaim{}). + Owns(&corev1.Secret{}). + Owns(&appsv1.StatefulSet{}). + Owns(&corev1.Service{}). + Watches( + v1beta1.NewPostgresCluster(), + r.watchPostgresClusters(), + ). + Watches( + &corev1.Secret{}, + r.watchForRelatedSecret(), + ). + Complete(r) +} + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgadmins",verbs={get} +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgadmins/status",verbs={patch} + +// Reconcile which aims to move the current state of the pgAdmin closer to the +// desired state described in a [v1beta1.PGAdmin] identified by request. +func (r *PGAdminReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { + + var err error + log := logging.FromContext(ctx) + + pgAdmin := &v1beta1.PGAdmin{} + if err := r.Get(ctx, req.NamespacedName, pgAdmin); err != nil { + // NotFound cannot be fixed by requeuing so ignore it. During background + // deletion, we receive delete events from pgadmin's dependents after + // pgadmin is deleted. + return ctrl.Result{}, client.IgnoreNotFound(err) + } + + // Write any changes to the pgadmin status on the way out. 
+ before := pgAdmin.DeepCopy() + defer func() { + if !equality.Semantic.DeepEqual(before.Status, pgAdmin.Status) { + statusErr := r.Status().Patch(ctx, pgAdmin, client.MergeFrom(before), r.Owner) + if statusErr != nil { + log.Error(statusErr, "Patching PGAdmin status") + } + if err == nil { + err = statusErr + } + } + }() + + log.V(1).Info("Reconciling pgAdmin") + + // Set defaults if unset + pgAdmin.Default() + + var ( + configmap *corev1.ConfigMap + dataVolume *corev1.PersistentVolumeClaim + clusters map[string]*v1beta1.PostgresClusterList + _ *corev1.Service + ) + + if err == nil { + clusters, err = r.getClustersForPGAdmin(ctx, pgAdmin) + } + if err == nil { + configmap, err = r.reconcilePGAdminConfigMap(ctx, pgAdmin, clusters) + } + if err == nil { + dataVolume, err = r.reconcilePGAdminDataVolume(ctx, pgAdmin) + } + if err == nil { + err = r.reconcilePGAdminService(ctx, pgAdmin) + } + if err == nil { + err = r.reconcilePGAdminStatefulSet(ctx, pgAdmin, configmap, dataVolume) + } + if err == nil { + err = r.reconcilePGAdminUsers(ctx, pgAdmin) + } + + if err == nil { + // at this point everything reconciled successfully, and we can update the + // observedGeneration + pgAdmin.Status.ObservedGeneration = pgAdmin.GetGeneration() + log.V(1).Info("Reconciled pgAdmin") + } + + return ctrl.Result{}, err +} + +// The owner reference created by controllerutil.SetControllerReference blocks +// deletion. The OwnerReferencesPermissionEnforcement plugin requires that the +// creator of such a reference have either "delete" permission on the owner or +// "update" permission on the owner's "finalizers" subresource. +// - https://docs.k8s.io/reference/access-authn-authz/admission-controllers/ +// +kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgadmins/finalizers",verbs={update} + +// setControllerReference sets owner as a Controller OwnerReference on controlled. +// Only one OwnerReference can be a controller, so it returns an error if another +// is already set. +// +// TODO(tjmoore4): This function is duplicated from a version that takes a PostgresCluster object. +func (r *PGAdminReconciler) setControllerReference( + owner *v1beta1.PGAdmin, controlled client.Object, +) error { + return controllerutil.SetControllerReference(owner, controlled, r.Client.Scheme()) +} + +// deleteControlled safely deletes object when it is controlled by pgAdmin. +func (r *PGAdminReconciler) deleteControlled( + ctx context.Context, pgadmin *v1beta1.PGAdmin, object client.Object, +) error { + if metav1.IsControlledBy(object, pgadmin) { + uid := object.GetUID() + version := object.GetResourceVersion() + exactly := client.Preconditions{UID: &uid, ResourceVersion: &version} + + return r.Client.Delete(ctx, object, exactly) + } + + return nil +} diff --git a/internal/controller/standalone_pgadmin/controller_test.go b/internal/controller/standalone_pgadmin/controller_test.go new file mode 100644 index 0000000000..b0fe17cbe6 --- /dev/null +++ b/internal/controller/standalone_pgadmin/controller_test.go @@ -0,0 +1,75 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
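The deferred status update in Reconcile above follows a common controller pattern: capture the incoming status, do the work, then patch only if something changed and without masking an earlier error. A simplified, hypothetical sketch (the reconcileWithStatus name is invented; field-owner options are omitted):

package example

import (
	"context"

	"k8s.io/apimachinery/pkg/api/equality"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1"
)

func reconcileWithStatus(ctx context.Context, c client.Client, pgAdmin *v1beta1.PGAdmin) (err error) {
	// Remember the incoming status so only real changes are sent to the API.
	before := pgAdmin.DeepCopy()
	defer func() {
		if !equality.Semantic.DeepEqual(before.Status, pgAdmin.Status) {
			statusErr := c.Status().Patch(ctx, pgAdmin, client.MergeFrom(before))
			// Do not let a status error hide an earlier reconcile error.
			if err == nil {
				err = statusErr
			}
		}
	}()

	// ... reconcile work that may update pgAdmin.Status goes here ...
	pgAdmin.Status.ObservedGeneration = pgAdmin.GetGeneration()
	return nil
}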
+// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + "strings" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestDeleteControlled(t *testing.T) { + ctx := context.Background() + cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + ns := setupNamespace(t, cc) + reconciler := PGAdminReconciler{Client: cc} + + pgadmin := new(v1beta1.PGAdmin) + pgadmin.Namespace = ns.Name + pgadmin.Name = strings.ToLower(t.Name()) + assert.NilError(t, cc.Create(ctx, pgadmin)) + + t.Run("NoOwnership", func(t *testing.T) { + secret := &corev1.Secret{} + secret.Namespace = ns.Name + secret.Name = "solo" + + assert.NilError(t, cc.Create(ctx, secret)) + + // No-op when there's no ownership + assert.NilError(t, reconciler.deleteControlled(ctx, pgadmin, secret)) + assert.NilError(t, cc.Get(ctx, client.ObjectKeyFromObject(secret), secret)) + }) + + // We aren't currently using setOwnerReference in the pgAdmin controller + // If that changes we can uncomment this code + // t.Run("Owned", func(t *testing.T) { + // secret := &corev1.Secret{} + // secret.Namespace = ns.Name + // secret.Name = "owned" + + // assert.NilError(t, reconciler.setOwnerReference(pgadmin, secret)) + // assert.NilError(t, cc.Create(ctx, secret)) + + // // No-op when not controlled by cluster. + // assert.NilError(t, reconciler.deleteControlled(ctx, pgadmin, secret)) + // assert.NilError(t, cc.Get(ctx, client.ObjectKeyFromObject(secret), secret)) + // }) + + t.Run("Controlled", func(t *testing.T) { + secret := &corev1.Secret{} + secret.Namespace = ns.Name + secret.Name = "controlled" + + assert.NilError(t, reconciler.setControllerReference(pgadmin, secret)) + assert.NilError(t, cc.Create(ctx, secret)) + + // Deletes when controlled by cluster. + assert.NilError(t, reconciler.deleteControlled(ctx, pgadmin, secret)) + + err := cc.Get(ctx, client.ObjectKeyFromObject(secret), secret) + assert.Assert(t, apierrors.IsNotFound(err), "expected NotFound, got %#v", err) + }) +} diff --git a/internal/controller/standalone_pgadmin/helpers_test.go b/internal/controller/standalone_pgadmin/helpers_test.go new file mode 100644 index 0000000000..9096edb5a1 --- /dev/null +++ b/internal/controller/standalone_pgadmin/helpers_test.go @@ -0,0 +1,76 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + "os" + "strconv" + "testing" + "time" + + corev1 "k8s.io/api/core/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/testing/require" +) + +// Scale extends d according to PGO_TEST_TIMEOUT_SCALE. +// +// TODO(tjmoore4): This function is duplicated from a version that takes a PostgresCluster object. 
+var Scale = func(d time.Duration) time.Duration { return d } + +func init() { + setting := os.Getenv("PGO_TEST_TIMEOUT_SCALE") + factor, _ := strconv.ParseFloat(setting, 64) + + if setting != "" { + if factor <= 0 { + panic("PGO_TEST_TIMEOUT_SCALE must be a fractional number greater than zero") + } + + Scale = func(d time.Duration) time.Duration { + return time.Duration(factor * float64(d)) + } + } +} + +// setupKubernetes starts or connects to a Kubernetes API and returns a client +// that uses it. See [require.Kubernetes] for more details. +func setupKubernetes(t testing.TB) client.Client { + t.Helper() + + // Start and/or connect to a Kubernetes API, or Skip when that's not configured. + cc := require.Kubernetes(t) + + // Log the status of any test namespaces after this test fails. + t.Cleanup(func() { + if t.Failed() { + var namespaces corev1.NamespaceList + _ = cc.List(context.Background(), &namespaces, client.HasLabels{"postgres-operator-test"}) + + type shaped map[string]corev1.NamespaceStatus + result := make([]shaped, len(namespaces.Items)) + + for i, ns := range namespaces.Items { + result[i] = shaped{ns.Labels["postgres-operator-test"]: ns.Status} + } + + formatted, _ := yaml.Marshal(result) + t.Logf("Test Namespaces:\n%s", formatted) + } + }) + + return cc +} + +// setupNamespace creates a random namespace that will be deleted by t.Cleanup. +// +// Deprecated: Use [require.Namespace] instead. +func setupNamespace(t testing.TB, cc client.Client) *corev1.Namespace { + t.Helper() + return require.Namespace(t, cc) +} diff --git a/internal/controller/standalone_pgadmin/helpers_unit_test.go b/internal/controller/standalone_pgadmin/helpers_unit_test.go new file mode 100644 index 0000000000..63887385fc --- /dev/null +++ b/internal/controller/standalone_pgadmin/helpers_unit_test.go @@ -0,0 +1,76 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// TODO(benjaminjb): This file is duplicated test help functions +// that could probably be put into a separate test_helper package + +var ( + //TODO(tjmoore4): With the new RELATED_IMAGES defaulting behavior, tests could be refactored + // to reference those environment variables instead of hard coded image values + CrunchyPostgresHAImage = "registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-13.6-1" + CrunchyPGBackRestImage = "registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0" + CrunchyPGBouncerImage = "registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.16-2" +) + +func testCluster() *v1beta1.PostgresCluster { + // Defines a base cluster spec that can be used by tests to generate a + // cluster with an expected number of instances + cluster := v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "hippo", + }, + Spec: v1beta1.PostgresClusterSpec{ + PostgresVersion: 13, + Image: CrunchyPostgresHAImage, + ImagePullSecrets: []corev1.LocalObjectReference{{ + Name: "myImagePullSecret"}, + }, + InstanceSets: []v1beta1.PostgresInstanceSetSpec{{ + Name: "instance1", + Replicas: initialize.Int32(1), + DataVolumeClaimSpec: testVolumeClaimSpec(), + }}, + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Image: CrunchyPGBackRestImage, + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + Volume: &v1beta1.RepoPVC{ + VolumeClaimSpec: testVolumeClaimSpec(), + }, + }}, + }, + }, + Proxy: &v1beta1.PostgresProxySpec{ + PGBouncer: &v1beta1.PGBouncerPodSpec{ + Image: CrunchyPGBouncerImage, + }, + }, + }, + } + return cluster.DeepCopy() +} + +func testVolumeClaimSpec() corev1.PersistentVolumeClaimSpec { + // Defines a volume claim spec that can be used to create instances + return corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + Resources: corev1.VolumeResourceRequirements{ + Requests: map[corev1.ResourceName]resource.Quantity{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + } +} diff --git a/internal/controller/standalone_pgadmin/pod.go b/internal/controller/standalone_pgadmin/pod.go new file mode 100644 index 0000000000..bbb39b9322 --- /dev/null +++ b/internal/controller/standalone_pgadmin/pod.go @@ -0,0 +1,462 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "fmt" + "strings" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + "k8s.io/apimachinery/pkg/util/intstr" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + configMountPath = "/etc/pgadmin/conf.d" + configFilePath = "~postgres-operator/" + settingsConfigMapKey + clusterFilePath = "~postgres-operator/" + settingsClusterMapKey + configDatabaseURIPath = "~postgres-operator/config-database-uri" + ldapFilePath = "~postgres-operator/ldap-bind-password" + gunicornConfigFilePath = "~postgres-operator/" + gunicornConfigKey + + // Nothing should be mounted to this location except the script our initContainer writes + scriptMountPath = "/etc/pgadmin" +) + +// pod populates a PodSpec with the container and volumes needed to run pgAdmin. +func pod( + inPGAdmin *v1beta1.PGAdmin, + inConfigMap *corev1.ConfigMap, + outPod *corev1.PodSpec, + pgAdminVolume *corev1.PersistentVolumeClaim, +) { + const ( + // config and data volume names + configVolumeName = "pgadmin-config" + dataVolumeName = "pgadmin-data" + logVolumeName = "pgadmin-log" + scriptVolumeName = "pgadmin-config-system" + tempVolumeName = "tmp" + ) + + // create the projected volume of config maps for use in + // 1. dynamic server discovery + // 2. adding the config variables during pgAdmin startup + configVolume := corev1.Volume{Name: configVolumeName} + configVolume.VolumeSource = corev1.VolumeSource{ + Projected: &corev1.ProjectedVolumeSource{ + Sources: podConfigFiles(inConfigMap, *inPGAdmin), + }, + } + + // create the data volume for the persistent database + dataVolume := corev1.Volume{Name: dataVolumeName} + dataVolume.VolumeSource = corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: pgAdminVolume.Name, + ReadOnly: false, + }, + } + + // create the temp volume for logs + logVolume := corev1.Volume{Name: logVolumeName} + logVolume.VolumeSource = corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + Medium: corev1.StorageMediumMemory, + }, + } + + // Volume used to write a custom config_system.py file in the initContainer + // which then loads the configs found in the `configVolume` + scriptVolume := corev1.Volume{Name: scriptVolumeName} + scriptVolume.VolumeSource = corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + Medium: corev1.StorageMediumMemory, + + // When this volume is too small, the Pod will be evicted and recreated + // by the StatefulSet controller. + // - https://kubernetes.io/docs/concepts/storage/volumes/#emptydir + // NOTE: tmpfs blocks are PAGE_SIZE, usually 4KiB, and size rounds up. + SizeLimit: resource.NewQuantity(32<<10, resource.BinarySI), + }, + } + + // create a temp volume for restart pid/other/debugging use + // TODO: discuss tmp vol vs. 
persistent vol + tmpVolume := corev1.Volume{Name: tempVolumeName} + tmpVolume.VolumeSource = corev1.VolumeSource{ + EmptyDir: &corev1.EmptyDirVolumeSource{ + Medium: corev1.StorageMediumMemory, + }, + } + + // pgadmin container + container := corev1.Container{ + Name: naming.ContainerPGAdmin, + Command: startupScript(inPGAdmin), + Image: config.StandalonePGAdminContainerImage(inPGAdmin), + ImagePullPolicy: inPGAdmin.Spec.ImagePullPolicy, + Resources: inPGAdmin.Spec.Resources, + SecurityContext: initialize.RestrictedSecurityContext(), + Ports: []corev1.ContainerPort{{ + Name: naming.PortPGAdmin, + ContainerPort: int32(pgAdminPort), + Protocol: corev1.ProtocolTCP, + }}, + Env: []corev1.EnvVar{ + { + Name: "PGADMIN_SETUP_EMAIL", + Value: fmt.Sprintf("admin@%s.%s.svc", inPGAdmin.Name, inPGAdmin.Namespace), + }, + // Setting the KRB5_CONFIG for kerberos + // - https://web.mit.edu/kerberos/krb5-current/doc/admin/conf_files/krb5_conf.html + { + Name: "KRB5_CONFIG", + Value: configMountPath + "/krb5.conf", + }, + // In testing it was determined that we need to set this env var for the replay cache + // otherwise it defaults to the read-only location `/var/tmp/` + // - https://web.mit.edu/kerberos/krb5-current/doc/basic/rcache_def.html#replay-cache-types + { + Name: "KRB5RCACHEDIR", + Value: "/tmp", + }, + }, + VolumeMounts: []corev1.VolumeMount{ + { + Name: configVolumeName, + MountPath: configMountPath, + ReadOnly: true, + }, + { + Name: dataVolumeName, + MountPath: "/var/lib/pgadmin", + }, + { + Name: logVolumeName, + MountPath: "/var/log/pgadmin", + }, + { + Name: scriptVolumeName, + MountPath: scriptMountPath, + ReadOnly: true, + }, + { + Name: tempVolumeName, + MountPath: "/tmp", + }, + }, + } + + // Creating a readiness probe that will check that the pgAdmin `/login` + // endpoint is reachable at the specified port + readinessProbe := &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + HTTPGet: &corev1.HTTPGetAction{ + Port: intstr.FromInt32(pgAdminPort), + Path: "/login", + Scheme: corev1.URISchemeHTTP, + }, + }, + } + gunicornData := inConfigMap.Data[gunicornConfigKey] + // Check the configmap to see if we think TLS is enabled + // If so, update the readiness check scheme to HTTPS + if strings.Contains(gunicornData, "certfile") && strings.Contains(gunicornData, "keyfile") { + readinessProbe.ProbeHandler.HTTPGet.Scheme = corev1.URISchemeHTTPS + } + container.ReadinessProbe = readinessProbe + + startup := corev1.Container{ + Name: naming.ContainerPGAdminStartup, + Command: startupCommand(), + Image: container.Image, + ImagePullPolicy: container.ImagePullPolicy, + Resources: container.Resources, + SecurityContext: initialize.RestrictedSecurityContext(), + VolumeMounts: []corev1.VolumeMount{ + // Volume to write a custom `config_system.py` file to. + { + Name: scriptVolumeName, + MountPath: scriptMountPath, + ReadOnly: false, + }, + }, + } + + // add volumes and containers + outPod.Volumes = []corev1.Volume{ + configVolume, + dataVolume, + logVolume, + scriptVolume, + tmpVolume, + } + outPod.Containers = []corev1.Container{container} + outPod.InitContainers = []corev1.Container{startup} +} + +// podConfigFiles returns projections of pgAdmin's configuration files to +// include in the configuration volume. 
+func podConfigFiles(configmap *corev1.ConfigMap, pgadmin v1beta1.PGAdmin) []corev1.VolumeProjection { + + config := append(append([]corev1.VolumeProjection{}, pgadmin.Spec.Config.Files...), + []corev1.VolumeProjection{ + { + ConfigMap: &corev1.ConfigMapProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: configmap.Name, + }, + Items: []corev1.KeyToPath{ + { + Key: settingsConfigMapKey, + Path: configFilePath, + }, + { + Key: settingsClusterMapKey, + Path: clusterFilePath, + }, + { + Key: gunicornConfigKey, + Path: gunicornConfigFilePath, + }, + }, + }, + }, + }...) + + if pgadmin.Spec.Config.ConfigDatabaseURI != nil { + config = append(config, corev1.VolumeProjection{ + Secret: &corev1.SecretProjection{ + LocalObjectReference: pgadmin.Spec.Config.ConfigDatabaseURI.LocalObjectReference, + Optional: pgadmin.Spec.Config.ConfigDatabaseURI.Optional, + Items: []corev1.KeyToPath{ + { + Key: pgadmin.Spec.Config.ConfigDatabaseURI.Key, + Path: configDatabaseURIPath, + }, + }, + }, + }) + } + + // To enable LDAP authentication for pgAdmin, various LDAP settings must be configured. + // While most of the required configuration can be set using the 'settings' + // feature on the spec (.Spec.UserInterface.PGAdmin.Config.Settings), those + // values are stored in a ConfigMap in plaintext. + // As a special case, here we mount a provided Secret containing the LDAP_BIND_PASSWORD + // for use with the other pgAdmin LDAP configuration. + // - https://www.pgadmin.org/docs/pgadmin4/latest/config_py.html + // - https://www.pgadmin.org/docs/pgadmin4/development/enabling_ldap_authentication.html + if pgadmin.Spec.Config.LDAPBindPassword != nil { + config = append(config, corev1.VolumeProjection{ + Secret: &corev1.SecretProjection{ + LocalObjectReference: pgadmin.Spec.Config.LDAPBindPassword.LocalObjectReference, + Optional: pgadmin.Spec.Config.LDAPBindPassword.Optional, + Items: []corev1.KeyToPath{ + { + Key: pgadmin.Spec.Config.LDAPBindPassword.Key, + Path: ldapFilePath, + }, + }, + }, + }) + } + + return config +} + +func startupScript(pgadmin *v1beta1.PGAdmin) []string { + // loadServerCommandV7 is a python command leveraging the pgadmin v7 setup.py script + // with the `--load-servers` flag to replace the servers registered to the admin user + // with the contents of the `settingsClusterMapKey` file + var loadServerCommandV7 = fmt.Sprintf(`python3 ${PGADMIN_DIR}/setup.py --load-servers %s/%s --user %s --replace`, + configMountPath, + clusterFilePath, + fmt.Sprintf("admin@%s.%s.svc", pgadmin.Name, pgadmin.Namespace)) + + // loadServerCommandV8 is a python command leveraging the pgadmin v8 setup.py script + // with the `load-servers` sub-command to replace the servers registered to the admin user + // with the contents of the `settingsClusterMapKey` file + var loadServerCommandV8 = fmt.Sprintf(`python3 ${PGADMIN_DIR}/setup.py load-servers %s/%s --user %s --replace`, + configMountPath, + clusterFilePath, + fmt.Sprintf("admin@%s.%s.svc", pgadmin.Name, pgadmin.Namespace)) + + // setupCommands (v8 requires the 'setup-db' sub-command) + var setupCommandV7 = "python3 ${PGADMIN_DIR}/setup.py" + var setupCommandV8 = setupCommandV7 + " setup-db" + + // startCommands (v8 image includes Gunicorn) + var startCommandV7 = "pgadmin4 &" + var startCommandV8 = "gunicorn -c /etc/pgadmin/gunicorn_config.py --chdir $PGADMIN_DIR pgAdmin4:app &" + + // This script sets up, starts pgadmin, and runs the appropriate `loadServerCommand` to register the discovered servers. 
+ // pgAdmin is hosted by Gunicorn and uses a config file. + // - https://www.pgadmin.org/docs/pgadmin4/development/server_deployment.html#standalone-gunicorn-configuration + // - https://docs.gunicorn.org/en/latest/configure.html + var startScript = fmt.Sprintf(` +export PGADMIN_SETUP_PASSWORD="$(date +%%s | sha256sum | base64 | head -c 32)" +PGADMIN_DIR=%s +APP_RELEASE=$(cd $PGADMIN_DIR && python3 -c "import config; print(config.APP_RELEASE)") + +echo "Running pgAdmin4 Setup" +if [ $APP_RELEASE -eq 7 ]; then + %s +else + %s +fi + +echo "Starting pgAdmin4" +PGADMIN4_PIDFILE=/tmp/pgadmin4.pid +if [ $APP_RELEASE -eq 7 ]; then + %s +else + %s +fi +echo $! > $PGADMIN4_PIDFILE + +loadServerCommand() { + if [ $APP_RELEASE -eq 7 ]; then + %s + else + %s + fi +} +loadServerCommand +`, pgAdminDir, setupCommandV7, setupCommandV8, startCommandV7, startCommandV8, loadServerCommandV7, loadServerCommandV8) + + // Use a Bash loop to periodically check: + // 1. the mtime of the mounted configuration volume for shared/discovered servers. + // When it changes, reload the shared server configuration. + // 2. that the pgadmin process is still running on the saved proc id. + // When it isn't, we consider pgadmin stopped. + // Restart pgadmin and continue watching. + + // Coreutils `sleep` uses a lot of memory, so the following opens a file + // descriptor and uses the timeout of the builtin `read` to wait. That same + // descriptor gets closed and reopened to use the builtin `[ -nt` to check mtimes. + // - https://unix.stackexchange.com/a/407383 + var reloadScript = ` +exec {fd}<> <(:||:) +while read -r -t 5 -u "${fd}" ||:; do + if [[ "${cluster_file}" -nt "/proc/self/fd/${fd}" ]] && loadServerCommand + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded shared servers dated %y' "${cluster_file}" + fi + if [[ ! -d /proc/$(cat $PGADMIN4_PIDFILE) ]] + then + if [[ $APP_RELEASE -eq 7 ]]; then + ` + startCommandV7 + ` + else + ` + startCommandV8 + ` + fi + echo $! > $PGADMIN4_PIDFILE + echo "Restarting pgAdmin4" + fi +done +` + + wrapper := `monitor() {` + startScript + reloadScript + `}; export cluster_file="$1"; export -f monitor; exec -a "$0" bash -ceu monitor` + + return []string{"bash", "-ceu", "--", wrapper, "pgadmin", fmt.Sprintf("%s/%s", configMountPath, clusterFilePath)} +} + +// startupCommand returns an entrypoint that prepares the filesystem for pgAdmin. +func startupCommand() []string { + // pgAdmin reads from the `/etc/pgadmin/config_system.py` file during startup + // after all other config files. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-7_7/docs/en_US/config_py.rst + // + // This command writes a script in `/etc/pgadmin/config_system.py` that reads from + // the `pgadmin-settings.json` file and the config-database-uri and/or + // `ldap-bind-password` files (if either exists) and sets those variables globally. + // That way those values are available as pgAdmin configurations when pgAdmin starts. + // + // Note: All pgAdmin settings are uppercase alphanumeric with underscores, so ignore + // any keys/names that are not. + // + // Note: set the pgAdmin LDAP_BIND_PASSWORD and CONFIG_DATABASE_URI settings from the + // Secrets last in order to overwrite the respective configurations set via ConfigMap JSON. 
+ + const ( + // ldapFilePath is the path for mounting the LDAP Bind Password + ldapPasswordAbsolutePath = configMountPath + "/" + ldapFilePath + + // configDatabaseURIPath is the path for mounting the database URI connection string + configDatabaseURIPathAbsolutePath = configMountPath + "/" + configDatabaseURIPath + + configSystem = ` +import glob, json, re, os +DEFAULT_BINARY_PATHS = {'pg': sorted([''] + glob.glob('/usr/pgsql-*/bin')).pop()} +with open('` + configMountPath + `/` + configFilePath + `') as _f: + _conf, _data = re.compile(r'[A-Z_0-9]+'), json.load(_f) + if type(_data) is dict: + globals().update({k: v for k, v in _data.items() if _conf.fullmatch(k)}) +if os.path.isfile('` + ldapPasswordAbsolutePath + `'): + with open('` + ldapPasswordAbsolutePath + `') as _f: + LDAP_BIND_PASSWORD = _f.read() +if os.path.isfile('` + configDatabaseURIPathAbsolutePath + `'): + with open('` + configDatabaseURIPathAbsolutePath + `') as _f: + CONFIG_DATABASE_URI = _f.read() +` + // gunicorn reads from the `/etc/pgadmin/gunicorn_config.py` file during startup + // after all other config files. + // - https://docs.gunicorn.org/en/latest/configure.html#configuration-file + // + // This command writes a script in `/etc/pgadmin/gunicorn_config.py` that reads + // from the `gunicorn-config.json` file and sets those variables globally. + // That way those values are available as settings when gunicorn starts. + // + // Note: All gunicorn settings are lowercase with underscores, so ignore + // any keys/names that are not. + gunicornConfig = ` +import json, re +with open('` + configMountPath + `/` + gunicornConfigFilePath + `') as _f: + _conf, _data = re.compile(r'[a-z_]+'), json.load(_f) + if type(_data) is dict: + globals().update({k: v for k, v in _data.items() if _conf.fullmatch(k)}) +` + ) + + args := []string{strings.TrimLeft(configSystem, "\n"), strings.TrimLeft(gunicornConfig, "\n")} + + script := strings.Join([]string{ + // Use the initContainer to create this path to avoid the error noted here: + // - https://issue.k8s.io/121294 + `mkdir -p ` + configMountPath, + // Write the system and server configurations. + `echo "$1" > ` + scriptMountPath + `/config_system.py`, + `echo "$2" > ` + scriptMountPath + `/gunicorn_config.py`, + }, "\n") + + return append([]string{"bash", "-ceu", "--", script, "startup"}, args...) +} + +// podSecurityContext returns a v1.PodSecurityContext for pgadmin that can write +// to PersistentVolumes. +func podSecurityContext(r *PGAdminReconciler) *corev1.PodSecurityContext { + podSecurityContext := initialize.PodSecurityContext() + + // TODO (dsessler7): Add ability to add supplemental groups + + // OpenShift assigns a filesystem group based on a SecurityContextConstraint. + // Otherwise, set a filesystem group so pgAdmin can write to files + // regardless of the UID or GID of a container. + // - https://cloud.redhat.com/blog/a-guide-to-openshift-and-uids + // - https://docs.k8s.io/tasks/configure-pod-container/security-context/ + // - https://docs.openshift.com/container-platform/4.14/authentication/managing-security-context-constraints.html + if !r.IsOpenShift { + podSecurityContext.FSGroup = initialize.Int64(2) + } + + return podSecurityContext +} diff --git a/internal/controller/standalone_pgadmin/pod_test.go b/internal/controller/standalone_pgadmin/pod_test.go new file mode 100644 index 0000000000..19cee52882 --- /dev/null +++ b/internal/controller/standalone_pgadmin/pod_test.go @@ -0,0 +1,447 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestPod(t *testing.T) { + t.Parallel() + + pgadmin := new(v1beta1.PGAdmin) + pgadmin.Name = "pgadmin" + pgadmin.Namespace = "postgres-operator" + config := new(corev1.ConfigMap) + testpod := new(corev1.PodSpec) + pvc := new(corev1.PersistentVolumeClaim) + + call := func() { pod(pgadmin, config, testpod, pvc) } + + t.Run("Defaults", func(t *testing.T) { + + call() + + assert.Assert(t, cmp.MarshalMatches(testpod, ` +containers: +- command: + - bash + - -ceu + - -- + - |- + monitor() { + export PGADMIN_SETUP_PASSWORD="$(date +%s | sha256sum | base64 | head -c 32)" + PGADMIN_DIR=/usr/local/lib/python3.11/site-packages/pgadmin4 + APP_RELEASE=$(cd $PGADMIN_DIR && python3 -c "import config; print(config.APP_RELEASE)") + + echo "Running pgAdmin4 Setup" + if [ $APP_RELEASE -eq 7 ]; then + python3 ${PGADMIN_DIR}/setup.py + else + python3 ${PGADMIN_DIR}/setup.py setup-db + fi + + echo "Starting pgAdmin4" + PGADMIN4_PIDFILE=/tmp/pgadmin4.pid + if [ $APP_RELEASE -eq 7 ]; then + pgadmin4 & + else + gunicorn -c /etc/pgadmin/gunicorn_config.py --chdir $PGADMIN_DIR pgAdmin4:app & + fi + echo $! > $PGADMIN4_PIDFILE + + loadServerCommand() { + if [ $APP_RELEASE -eq 7 ]; then + python3 ${PGADMIN_DIR}/setup.py --load-servers /etc/pgadmin/conf.d/~postgres-operator/pgadmin-shared-clusters.json --user admin@pgadmin.postgres-operator.svc --replace + else + python3 ${PGADMIN_DIR}/setup.py load-servers /etc/pgadmin/conf.d/~postgres-operator/pgadmin-shared-clusters.json --user admin@pgadmin.postgres-operator.svc --replace + fi + } + loadServerCommand + + exec {fd}<> <(:||:) + while read -r -t 5 -u "${fd}" ||:; do + if [[ "${cluster_file}" -nt "/proc/self/fd/${fd}" ]] && loadServerCommand + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded shared servers dated %y' "${cluster_file}" + fi + if [[ ! -d /proc/$(cat $PGADMIN4_PIDFILE) ]] + then + if [[ $APP_RELEASE -eq 7 ]]; then + pgadmin4 & + else + gunicorn -c /etc/pgadmin/gunicorn_config.py --chdir $PGADMIN_DIR pgAdmin4:app & + fi + echo $! 
> $PGADMIN4_PIDFILE + echo "Restarting pgAdmin4" + fi + done + }; export cluster_file="$1"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgadmin + - /etc/pgadmin/conf.d/~postgres-operator/pgadmin-shared-clusters.json + env: + - name: PGADMIN_SETUP_EMAIL + value: admin@pgadmin.postgres-operator.svc + - name: KRB5_CONFIG + value: /etc/pgadmin/conf.d/krb5.conf + - name: KRB5RCACHEDIR + value: /tmp + name: pgadmin + ports: + - containerPort: 5050 + name: pgadmin + protocol: TCP + readinessProbe: + httpGet: + path: /login + port: 5050 + scheme: HTTP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgadmin/conf.d + name: pgadmin-config + readOnly: true + - mountPath: /var/lib/pgadmin + name: pgadmin-data + - mountPath: /var/log/pgadmin + name: pgadmin-log + - mountPath: /etc/pgadmin + name: pgadmin-config-system + readOnly: true + - mountPath: /tmp + name: tmp +initContainers: +- command: + - bash + - -ceu + - -- + - |- + mkdir -p /etc/pgadmin/conf.d + echo "$1" > /etc/pgadmin/config_system.py + echo "$2" > /etc/pgadmin/gunicorn_config.py + - startup + - | + import glob, json, re, os + DEFAULT_BINARY_PATHS = {'pg': sorted([''] + glob.glob('/usr/pgsql-*/bin')).pop()} + with open('/etc/pgadmin/conf.d/~postgres-operator/pgadmin-settings.json') as _f: + _conf, _data = re.compile(r'[A-Z_0-9]+'), json.load(_f) + if type(_data) is dict: + globals().update({k: v for k, v in _data.items() if _conf.fullmatch(k)}) + if os.path.isfile('/etc/pgadmin/conf.d/~postgres-operator/ldap-bind-password'): + with open('/etc/pgadmin/conf.d/~postgres-operator/ldap-bind-password') as _f: + LDAP_BIND_PASSWORD = _f.read() + if os.path.isfile('/etc/pgadmin/conf.d/~postgres-operator/config-database-uri'): + with open('/etc/pgadmin/conf.d/~postgres-operator/config-database-uri') as _f: + CONFIG_DATABASE_URI = _f.read() + - | + import json, re + with open('/etc/pgadmin/conf.d/~postgres-operator/gunicorn-config.json') as _f: + _conf, _data = re.compile(r'[a-z_]+'), json.load(_f) + if type(_data) is dict: + globals().update({k: v for k, v in _data.items() if _conf.fullmatch(k)}) + name: pgadmin-startup + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgadmin + name: pgadmin-config-system +volumes: +- name: pgadmin-config + projected: + sources: + - configMap: + items: + - key: pgadmin-settings.json + path: ~postgres-operator/pgadmin-settings.json + - key: pgadmin-shared-clusters.json + path: ~postgres-operator/pgadmin-shared-clusters.json + - key: gunicorn-config.json + path: ~postgres-operator/gunicorn-config.json +- name: pgadmin-data + persistentVolumeClaim: + claimName: "" +- emptyDir: + medium: Memory + name: pgadmin-log +- emptyDir: + medium: Memory + sizeLimit: 32Ki + name: pgadmin-config-system +- emptyDir: + medium: Memory + name: tmp +`)) + + // No change when called again. 
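+		// Building the spec twice from the same inputs should produce a
+		// deep-equal result, so repeated reconciles do not churn the workload template.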
+ before := testpod.DeepCopy() + call() + assert.DeepEqual(t, before, testpod) + }) + + t.Run("Customizations", func(t *testing.T) { + pgadmin.Spec.ImagePullPolicy = corev1.PullAlways + pgadmin.Spec.Image = initialize.String("new-image") + pgadmin.Spec.Resources.Requests = corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("100m"), + } + + call() + + assert.Assert(t, cmp.MarshalMatches(testpod, ` +containers: +- command: + - bash + - -ceu + - -- + - |- + monitor() { + export PGADMIN_SETUP_PASSWORD="$(date +%s | sha256sum | base64 | head -c 32)" + PGADMIN_DIR=/usr/local/lib/python3.11/site-packages/pgadmin4 + APP_RELEASE=$(cd $PGADMIN_DIR && python3 -c "import config; print(config.APP_RELEASE)") + + echo "Running pgAdmin4 Setup" + if [ $APP_RELEASE -eq 7 ]; then + python3 ${PGADMIN_DIR}/setup.py + else + python3 ${PGADMIN_DIR}/setup.py setup-db + fi + + echo "Starting pgAdmin4" + PGADMIN4_PIDFILE=/tmp/pgadmin4.pid + if [ $APP_RELEASE -eq 7 ]; then + pgadmin4 & + else + gunicorn -c /etc/pgadmin/gunicorn_config.py --chdir $PGADMIN_DIR pgAdmin4:app & + fi + echo $! > $PGADMIN4_PIDFILE + + loadServerCommand() { + if [ $APP_RELEASE -eq 7 ]; then + python3 ${PGADMIN_DIR}/setup.py --load-servers /etc/pgadmin/conf.d/~postgres-operator/pgadmin-shared-clusters.json --user admin@pgadmin.postgres-operator.svc --replace + else + python3 ${PGADMIN_DIR}/setup.py load-servers /etc/pgadmin/conf.d/~postgres-operator/pgadmin-shared-clusters.json --user admin@pgadmin.postgres-operator.svc --replace + fi + } + loadServerCommand + + exec {fd}<> <(:||:) + while read -r -t 5 -u "${fd}" ||:; do + if [[ "${cluster_file}" -nt "/proc/self/fd/${fd}" ]] && loadServerCommand + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded shared servers dated %y' "${cluster_file}" + fi + if [[ ! -d /proc/$(cat $PGADMIN4_PIDFILE) ]] + then + if [[ $APP_RELEASE -eq 7 ]]; then + pgadmin4 & + else + gunicorn -c /etc/pgadmin/gunicorn_config.py --chdir $PGADMIN_DIR pgAdmin4:app & + fi + echo $! 
> $PGADMIN4_PIDFILE + echo "Restarting pgAdmin4" + fi + done + }; export cluster_file="$1"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgadmin + - /etc/pgadmin/conf.d/~postgres-operator/pgadmin-shared-clusters.json + env: + - name: PGADMIN_SETUP_EMAIL + value: admin@pgadmin.postgres-operator.svc + - name: KRB5_CONFIG + value: /etc/pgadmin/conf.d/krb5.conf + - name: KRB5RCACHEDIR + value: /tmp + image: new-image + imagePullPolicy: Always + name: pgadmin + ports: + - containerPort: 5050 + name: pgadmin + protocol: TCP + readinessProbe: + httpGet: + path: /login + port: 5050 + scheme: HTTP + resources: + requests: + cpu: 100m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgadmin/conf.d + name: pgadmin-config + readOnly: true + - mountPath: /var/lib/pgadmin + name: pgadmin-data + - mountPath: /var/log/pgadmin + name: pgadmin-log + - mountPath: /etc/pgadmin + name: pgadmin-config-system + readOnly: true + - mountPath: /tmp + name: tmp +initContainers: +- command: + - bash + - -ceu + - -- + - |- + mkdir -p /etc/pgadmin/conf.d + echo "$1" > /etc/pgadmin/config_system.py + echo "$2" > /etc/pgadmin/gunicorn_config.py + - startup + - | + import glob, json, re, os + DEFAULT_BINARY_PATHS = {'pg': sorted([''] + glob.glob('/usr/pgsql-*/bin')).pop()} + with open('/etc/pgadmin/conf.d/~postgres-operator/pgadmin-settings.json') as _f: + _conf, _data = re.compile(r'[A-Z_0-9]+'), json.load(_f) + if type(_data) is dict: + globals().update({k: v for k, v in _data.items() if _conf.fullmatch(k)}) + if os.path.isfile('/etc/pgadmin/conf.d/~postgres-operator/ldap-bind-password'): + with open('/etc/pgadmin/conf.d/~postgres-operator/ldap-bind-password') as _f: + LDAP_BIND_PASSWORD = _f.read() + if os.path.isfile('/etc/pgadmin/conf.d/~postgres-operator/config-database-uri'): + with open('/etc/pgadmin/conf.d/~postgres-operator/config-database-uri') as _f: + CONFIG_DATABASE_URI = _f.read() + - | + import json, re + with open('/etc/pgadmin/conf.d/~postgres-operator/gunicorn-config.json') as _f: + _conf, _data = re.compile(r'[a-z_]+'), json.load(_f) + if type(_data) is dict: + globals().update({k: v for k, v in _data.items() if _conf.fullmatch(k)}) + image: new-image + imagePullPolicy: Always + name: pgadmin-startup + resources: + requests: + cpu: 100m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgadmin + name: pgadmin-config-system +volumes: +- name: pgadmin-config + projected: + sources: + - configMap: + items: + - key: pgadmin-settings.json + path: ~postgres-operator/pgadmin-settings.json + - key: pgadmin-shared-clusters.json + path: ~postgres-operator/pgadmin-shared-clusters.json + - key: gunicorn-config.json + path: ~postgres-operator/gunicorn-config.json +- name: pgadmin-data + persistentVolumeClaim: + claimName: "" +- emptyDir: + medium: Memory + name: pgadmin-log +- emptyDir: + medium: Memory + sizeLimit: 32Ki + name: pgadmin-config-system +- emptyDir: + medium: Memory + name: tmp +`)) + }) +} + +func TestPodConfigFiles(t *testing.T) { + configmap := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "some-cm"}} + + pgadmin := v1beta1.PGAdmin{ + Spec: v1beta1.PGAdminSpec{ + Config: v1beta1.StandalonePGAdminConfiguration{Files: 
[]corev1.VolumeProjection{{ + Secret: &corev1.SecretProjection{LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-secret", + }}, + }, { + ConfigMap: &corev1.ConfigMapProjection{LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-cm", + }}, + }}}, + }, + } + + projections := podConfigFiles(configmap, pgadmin) + assert.Assert(t, cmp.MarshalMatches(projections, ` +- secret: + name: test-secret +- configMap: + name: test-cm +- configMap: + items: + - key: pgadmin-settings.json + path: ~postgres-operator/pgadmin-settings.json + - key: pgadmin-shared-clusters.json + path: ~postgres-operator/pgadmin-shared-clusters.json + - key: gunicorn-config.json + path: ~postgres-operator/gunicorn-config.json + name: some-cm + `)) +} + +func TestPodSecurityContext(t *testing.T) { + pgAdminReconciler := &PGAdminReconciler{} + + assert.Assert(t, cmp.MarshalMatches(podSecurityContext(pgAdminReconciler), ` +fsGroup: 2 +fsGroupChangePolicy: OnRootMismatch + `)) + + pgAdminReconciler.IsOpenShift = true + assert.Assert(t, cmp.MarshalMatches(podSecurityContext(pgAdminReconciler), + `fsGroupChangePolicy: OnRootMismatch`)) +} diff --git a/internal/controller/standalone_pgadmin/postgrescluster.go b/internal/controller/standalone_pgadmin/postgrescluster.go new file mode 100644 index 0000000000..5327b8ae70 --- /dev/null +++ b/internal/controller/standalone_pgadmin/postgrescluster.go @@ -0,0 +1,91 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" + + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/types" + "sigs.k8s.io/controller-runtime/pkg/client" +) + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgadmins",verbs={list} + +// findPGAdminsForPostgresCluster returns PGAdmins that target a given cluster. +func (r *PGAdminReconciler) findPGAdminsForPostgresCluster( + ctx context.Context, cluster client.Object, +) []*v1beta1.PGAdmin { + var ( + matching []*v1beta1.PGAdmin + pgadmins v1beta1.PGAdminList + ) + + // NOTE: If this becomes slow due to a large number of pgadmins in a single + // namespace, we can configure the [ctrl.Manager] field indexer and pass a + // [fields.Selector] here. 
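+	// (For example, a hypothetical index over each ServerGroup's
+	// PostgresClusterName would let this lookup filter server-side.)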
+ // - https://book.kubebuilder.io/reference/watching-resources/externally-managed.html + if r.List(ctx, &pgadmins, &client.ListOptions{ + Namespace: cluster.GetNamespace(), + }) == nil { + for i := range pgadmins.Items { + for _, serverGroup := range pgadmins.Items[i].Spec.ServerGroups { + if serverGroup.PostgresClusterName == cluster.GetName() { + matching = append(matching, &pgadmins.Items[i]) + continue + } + if selector, err := naming.AsSelector(serverGroup.PostgresClusterSelector); err == nil { + if selector.Matches(labels.Set(cluster.GetLabels())) { + matching = append(matching, &pgadmins.Items[i]) + } + } + } + } + } + return matching +} + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="postgresclusters",verbs={list,watch} + +// getClustersForPGAdmin returns clusters managed by the given pgAdmin +func (r *PGAdminReconciler) getClustersForPGAdmin( + ctx context.Context, + pgAdmin *v1beta1.PGAdmin, +) (map[string]*v1beta1.PostgresClusterList, error) { + matching := make(map[string]*v1beta1.PostgresClusterList) + var err error + var selector labels.Selector + + for _, serverGroup := range pgAdmin.Spec.ServerGroups { + cluster := &v1beta1.PostgresCluster{} + if serverGroup.PostgresClusterName != "" { + err = r.Get(ctx, types.NamespacedName{ + Name: serverGroup.PostgresClusterName, + Namespace: pgAdmin.GetNamespace(), + }, cluster) + if err == nil { + matching[serverGroup.Name] = &v1beta1.PostgresClusterList{ + Items: []v1beta1.PostgresCluster{*cluster}, + } + } + continue + } + if selector, err = naming.AsSelector(serverGroup.PostgresClusterSelector); err == nil { + var filteredList v1beta1.PostgresClusterList + err = r.List(ctx, &filteredList, + client.InNamespace(pgAdmin.Namespace), + client.MatchingLabelsSelector{Selector: selector}, + ) + if err == nil { + matching[serverGroup.Name] = &filteredList + } + } + } + + return matching, err +} diff --git a/internal/controller/standalone_pgadmin/service.go b/internal/controller/standalone_pgadmin/service.go new file mode 100644 index 0000000000..2453a6a1fa --- /dev/null +++ b/internal/controller/standalone_pgadmin/service.go @@ -0,0 +1,140 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/intstr" + "sigs.k8s.io/controller-runtime/pkg/client" + + apierrors "k8s.io/apimachinery/pkg/api/errors" + + "github.com/pkg/errors" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// +kubebuilder:rbac:groups="",resources="services",verbs={get} +// +kubebuilder:rbac:groups="",resources="services",verbs={create,delete,patch} + +// reconcilePGAdminService will reconcile a ClusterIP service that points to +// pgAdmin. +func (r *PGAdminReconciler) reconcilePGAdminService( + ctx context.Context, + pgadmin *v1beta1.PGAdmin, +) error { + log := logging.FromContext(ctx) + + // Since spec.Service only accepts a single service name, we shouldn't ever + // have more than one service. However, if the user changes ServiceName, we + // need to delete any existing service(s). At the start of every reconcile + // get all services that match the current pgAdmin labels. 
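+	// For example, a PGAdmin named "example" matches Services labeled
+	// postgres-operator.crunchydata.com/pgadmin=example and
+	// postgres-operator.crunchydata.com/role=pgadmin, i.e. the values behind
+	// the naming constants used below.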
+ services := corev1.ServiceList{} + if err := r.Client.List(ctx, &services, + client.InNamespace(pgadmin.Namespace), + client.MatchingLabels{ + naming.LabelStandalonePGAdmin: pgadmin.Name, + naming.LabelRole: naming.RolePGAdmin, + }); err != nil { + return err + } + + // Delete any controlled and labeled service that is not defined in the spec. + for i := range services.Items { + if services.Items[i].Name != pgadmin.Spec.ServiceName { + log.V(1).Info( + "Deleting service(s) not defined in spec.ServiceName that are owned by pgAdmin", + "serviceName", services.Items[i].Name) + if err := r.deleteControlled(ctx, pgadmin, &services.Items[i]); err != nil { + return err + } + } + } + + // At this point only a service defined by spec.ServiceName should exist. + // Check if the user has requested a service through ServiceName + if pgadmin.Spec.ServiceName != "" { + // Look for an existing service with name ServiceName in the namespace + existingService := &corev1.Service{} + err := r.Client.Get(ctx, types.NamespacedName{ + Name: pgadmin.Spec.ServiceName, + Namespace: pgadmin.GetNamespace(), + }, existingService) + if client.IgnoreNotFound(err) != nil { + return err + } + + // If we found an existing service in our namespace with ServiceName + if !apierrors.IsNotFound(err) { + + // Check if the existing service has ownerReferences. + // If it doesn't we can go ahead and reconcile the service. + // If it does then we need to check if we are the controller. + if len(existingService.OwnerReferences) != 0 { + + // If the service is not controlled by this pgAdmin then we shouldn't reconcile + if !metav1.IsControlledBy(existingService, pgadmin) { + err := errors.New("Service is controlled by another object") + log.V(1).Error(err, "PGO does not force ownership on existing services", + "ServiceName", pgadmin.Spec.ServiceName) + r.Recorder.Event(pgadmin, + corev1.EventTypeWarning, "InvalidServiceWarning", + "Failed to reconcile Service ServiceName: "+pgadmin.Spec.ServiceName) + + return err + } + } + } + + // A service has been requested and we are allowed to create or reconcile + service := service(pgadmin) + + // Set the controller reference on the service + if err := errors.WithStack(r.setControllerReference(pgadmin, service)); err != nil { + return err + } + + return errors.WithStack(r.apply(ctx, service)) + } + + // If we get here then ServiceName was not provided through the spec + return nil +} + +// Generate a corev1.Service for pgAdmin +func service(pgadmin *v1beta1.PGAdmin) *corev1.Service { + + service := &corev1.Service{} + service.ObjectMeta = metav1.ObjectMeta{ + Name: pgadmin.Spec.ServiceName, + Namespace: pgadmin.Namespace, + } + service.SetGroupVersionKind( + corev1.SchemeGroupVersion.WithKind("Service")) + + service.Annotations = pgadmin.Spec.Metadata.GetAnnotationsOrNil() + service.Labels = naming.Merge( + pgadmin.Spec.Metadata.GetLabelsOrNil(), + naming.StandalonePGAdminLabels(pgadmin.Name)) + + service.Spec.Type = corev1.ServiceTypeClusterIP + service.Spec.Selector = map[string]string{ + naming.LabelStandalonePGAdmin: pgadmin.Name, + } + service.Spec.Ports = []corev1.ServicePort{{ + Name: "pgadmin-port", + Port: pgAdminPort, + Protocol: corev1.ProtocolTCP, + TargetPort: intstr.FromInt(pgAdminPort), + }} + + return service +} diff --git a/internal/controller/standalone_pgadmin/service_test.go b/internal/controller/standalone_pgadmin/service_test.go new file mode 100644 index 0000000000..24b20c8247 --- /dev/null +++ b/internal/controller/standalone_pgadmin/service_test.go @@ -0,0 +1,61 @@ 
+// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestService(t *testing.T) { + pgadmin := new(v1beta1.PGAdmin) + pgadmin.Name = "daisy" + pgadmin.Namespace = "daisy-service-ns" + pgadmin.Spec.ServiceName = "daisy-service" + pgadmin.Spec.Metadata = &v1beta1.Metadata{ + Labels: map[string]string{ + "test-label": "test-label-val", + "postgres-operator.crunchydata.com/pgadmin": "bad-val", + "postgres-operator.crunchydata.com/role": "bad-val", + }, + Annotations: map[string]string{ + "test-annotation": "test-annotation-val", + }, + } + + service := service(pgadmin) + assert.Assert(t, service != nil) + assert.Assert(t, cmp.MarshalMatches(service.TypeMeta, ` +apiVersion: v1 +kind: Service + `)) + + assert.Assert(t, cmp.MarshalMatches(service.ObjectMeta, ` +annotations: + test-annotation: test-annotation-val +creationTimestamp: null +labels: + postgres-operator.crunchydata.com/pgadmin: daisy + postgres-operator.crunchydata.com/role: pgadmin + test-label: test-label-val +name: daisy-service +namespace: daisy-service-ns + `)) + + assert.Assert(t, cmp.MarshalMatches(service.Spec, ` +ports: +- name: pgadmin-port + port: 5050 + protocol: TCP + targetPort: 5050 +selector: + postgres-operator.crunchydata.com/pgadmin: daisy +type: ClusterIP + `)) +} diff --git a/internal/controller/standalone_pgadmin/statefulset.go b/internal/controller/standalone_pgadmin/statefulset.go new file mode 100644 index 0000000000..e086e333f4 --- /dev/null +++ b/internal/controller/standalone_pgadmin/statefulset.go @@ -0,0 +1,118 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/pkg/errors" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// reconcilePGAdminStatefulSet writes the StatefulSet that runs pgAdmin. +func (r *PGAdminReconciler) reconcilePGAdminStatefulSet( + ctx context.Context, pgadmin *v1beta1.PGAdmin, + configmap *corev1.ConfigMap, dataVolume *corev1.PersistentVolumeClaim, +) error { + sts := statefulset(r, pgadmin, configmap, dataVolume) + + // Previous versions of PGO used a StatefulSet Pod Management Policy that could leave the Pod + // in a failed state. When we see that it has the wrong policy, we will delete the StatefulSet + // and then recreate it with the correct policy, as this is not a property that can be patched. + // When we delete the StatefulSet, we will leave its Pods in place. They will be claimed by + // the StatefulSet that gets created in the next reconcile. 
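+	// The steps below implement that: fetch any existing StatefulSet, and when
+	// its PodManagementPolicy differs from the desired one, delete it with the
+	// Orphan propagation policy so its Pods keep running until the recreated
+	// StatefulSet adopts them.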
+ existing := &appsv1.StatefulSet{} + if err := errors.WithStack(r.Client.Get(ctx, client.ObjectKeyFromObject(sts), existing)); err != nil { + if !apierrors.IsNotFound(err) { + return err + } + } else { + if existing.Spec.PodManagementPolicy != sts.Spec.PodManagementPolicy { + // We want to delete the STS without affecting the Pods, so we set the PropagationPolicy to Orphan. + // The orphaned Pods will be claimed by the StatefulSet that will be created in the next reconcile. + uid := existing.GetUID() + version := existing.GetResourceVersion() + exactly := client.Preconditions{UID: &uid, ResourceVersion: &version} + propagate := client.PropagationPolicy(metav1.DeletePropagationOrphan) + + return errors.WithStack(client.IgnoreNotFound(r.Client.Delete(ctx, existing, exactly, propagate))) + } + } + + if err := errors.WithStack(r.setControllerReference(pgadmin, sts)); err != nil { + return err + } + return errors.WithStack(r.apply(ctx, sts)) +} + +// statefulset defines the StatefulSet needed to run pgAdmin. +func statefulset( + r *PGAdminReconciler, + pgadmin *v1beta1.PGAdmin, + configmap *corev1.ConfigMap, + dataVolume *corev1.PersistentVolumeClaim, +) *appsv1.StatefulSet { + sts := &appsv1.StatefulSet{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + sts.SetGroupVersionKind(appsv1.SchemeGroupVersion.WithKind("StatefulSet")) + + sts.Annotations = pgadmin.Spec.Metadata.GetAnnotationsOrNil() + sts.Labels = naming.Merge( + pgadmin.Spec.Metadata.GetLabelsOrNil(), + naming.StandalonePGAdminDataLabels(pgadmin.Name), + ) + sts.Spec.Selector = &metav1.LabelSelector{ + MatchLabels: naming.StandalonePGAdminLabels(pgadmin.Name), + } + sts.Spec.Template.Annotations = pgadmin.Spec.Metadata.GetAnnotationsOrNil() + sts.Spec.Template.Labels = naming.Merge( + pgadmin.Spec.Metadata.GetLabelsOrNil(), + naming.StandalonePGAdminDataLabels(pgadmin.Name), + ) + + // Don't clutter the namespace with extra ControllerRevisions. + sts.Spec.RevisionHistoryLimit = initialize.Int32(0) + + // Use StatefulSet's "RollingUpdate" strategy and "Parallel" policy to roll + // out changes to pods even when not Running or not Ready. + // - https://docs.k8s.io/concepts/workloads/controllers/statefulset/#rolling-updates + // - https://docs.k8s.io/concepts/workloads/controllers/statefulset/#forced-rollback + // - https://kep.k8s.io/3541 + sts.Spec.PodManagementPolicy = appsv1.ParallelPodManagement + sts.Spec.UpdateStrategy.Type = appsv1.RollingUpdateStatefulSetStrategyType + + // Use scheduling constraints from the cluster spec. + sts.Spec.Template.Spec.Affinity = pgadmin.Spec.Affinity + sts.Spec.Template.Spec.Tolerations = pgadmin.Spec.Tolerations + sts.Spec.Template.Spec.PriorityClassName = initialize.FromPointer(pgadmin.Spec.PriorityClassName) + + // Restart containers any time they stop, die, are killed, etc. + // - https://docs.k8s.io/concepts/workloads/pods/pod-lifecycle/#restart-policy + sts.Spec.Template.Spec.RestartPolicy = corev1.RestartPolicyAlways + + // pgAdmin does not make any Kubernetes API calls. Use the default + // ServiceAccount and do not mount its credentials. + sts.Spec.Template.Spec.AutomountServiceAccountToken = initialize.Bool(false) + + // Do not add environment variables describing services in this namespace. 
+ sts.Spec.Template.Spec.EnableServiceLinks = initialize.Bool(false) + + // set the image pull secrets, if any exist + sts.Spec.Template.Spec.ImagePullSecrets = pgadmin.Spec.ImagePullSecrets + + sts.Spec.Template.Spec.SecurityContext = podSecurityContext(r) + + pod(pgadmin, configmap, &sts.Spec.Template.Spec, dataVolume) + + return sts +} diff --git a/internal/controller/standalone_pgadmin/statefulset_test.go b/internal/controller/standalone_pgadmin/statefulset_test.go new file mode 100644 index 0000000000..52c501b357 --- /dev/null +++ b/internal/controller/standalone_pgadmin/statefulset_test.go @@ -0,0 +1,207 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + "testing" + + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestReconcilePGAdminStatefulSet(t *testing.T) { + ctx := context.Background() + cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + reconciler := &PGAdminReconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } + + ns := setupNamespace(t, cc) + pgadmin := new(v1beta1.PGAdmin) + pgadmin.Name = "test-standalone-pgadmin" + pgadmin.Namespace = ns.Name + + assert.NilError(t, cc.Create(ctx, pgadmin)) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, pgadmin)) }) + + configmap := &corev1.ConfigMap{} + configmap.Name = "test-cm" + + pvc := &corev1.PersistentVolumeClaim{} + pvc.Name = "test-pvc" + + t.Run("verify StatefulSet", func(t *testing.T) { + err := reconciler.reconcilePGAdminStatefulSet(ctx, pgadmin, configmap, pvc) + assert.NilError(t, err) + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelStandalonePGAdmin: pgadmin.Name, + }, + }) + assert.NilError(t, err) + + list := appsv1.StatefulSetList{} + assert.NilError(t, cc.List(ctx, &list, client.InNamespace(pgadmin.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + assert.Equal(t, len(list.Items), 1) + + template := list.Items[0].Spec.Template.DeepCopy() + + // Containers and Volumes should be populated. + assert.Assert(t, len(template.Spec.Containers) != 0) + assert.Assert(t, len(template.Spec.Volumes) != 0) + + // Ignore Containers and Volumes in the comparison below. 
+ template.Spec.Containers = nil + template.Spec.InitContainers = nil + template.Spec.Volumes = nil + + assert.Assert(t, cmp.MarshalMatches(template.ObjectMeta, ` +creationTimestamp: null +labels: + postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/pgadmin: test-standalone-pgadmin + postgres-operator.crunchydata.com/role: pgadmin + `)) + + compare := ` +automountServiceAccountToken: false +containers: null +dnsPolicy: ClusterFirst +enableServiceLinks: false +restartPolicy: Always +schedulerName: default-scheduler +securityContext: + fsGroup: 2 + fsGroupChangePolicy: OnRootMismatch +terminationGracePeriodSeconds: 30 + ` + + assert.Assert(t, cmp.MarshalMatches(template.Spec, compare)) + }) + + t.Run("verify customized deployment", func(t *testing.T) { + + custompgadmin := new(v1beta1.PGAdmin) + + // add pod level customizations + custompgadmin.Name = "custom-pgadmin" + custompgadmin.Namespace = ns.Name + + // annotation and label + custompgadmin.Spec.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{ + "annotation1": "annotationvalue", + }, + Labels: map[string]string{ + "label1": "labelvalue", + }, + } + + // scheduling constraints + custompgadmin.Spec.Affinity = &corev1.Affinity{ + NodeAffinity: &corev1.NodeAffinity{ + RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ + NodeSelectorTerms: []corev1.NodeSelectorTerm{{ + MatchExpressions: []corev1.NodeSelectorRequirement{{ + Key: "key", + Operator: "Exists", + }}, + }}, + }, + }, + } + custompgadmin.Spec.Tolerations = []corev1.Toleration{ + {Key: "sometoleration"}, + } + + if pgadmin.Spec.PriorityClassName != nil { + custompgadmin.Spec.PriorityClassName = initialize.String("testpriorityclass") + } + + // set an image pull secret + custompgadmin.Spec.ImagePullSecrets = []corev1.LocalObjectReference{{ + Name: "myImagePullSecret"}} + + assert.NilError(t, cc.Create(ctx, custompgadmin)) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, custompgadmin)) }) + + err := reconciler.reconcilePGAdminStatefulSet(ctx, custompgadmin, configmap, pvc) + assert.NilError(t, err) + + selector, err := naming.AsSelector(metav1.LabelSelector{ + MatchLabels: map[string]string{ + naming.LabelStandalonePGAdmin: custompgadmin.Name, + }, + }) + assert.NilError(t, err) + + list := appsv1.StatefulSetList{} + assert.NilError(t, cc.List(ctx, &list, client.InNamespace(custompgadmin.Namespace), + client.MatchingLabelsSelector{Selector: selector})) + assert.Equal(t, len(list.Items), 1) + + template := list.Items[0].Spec.Template.DeepCopy() + + // Containers and Volumes should be populated. + assert.Assert(t, len(template.Spec.Containers) != 0) + + // Ignore Containers and Volumes in the comparison below. 
+ template.Spec.Containers = nil + template.Spec.InitContainers = nil + template.Spec.Volumes = nil + + assert.Assert(t, cmp.MarshalMatches(template.ObjectMeta, ` +annotations: + annotation1: annotationvalue +creationTimestamp: null +labels: + label1: labelvalue + postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/pgadmin: custom-pgadmin + postgres-operator.crunchydata.com/role: pgadmin + `)) + + compare := ` +affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: key + operator: Exists +automountServiceAccountToken: false +containers: null +dnsPolicy: ClusterFirst +enableServiceLinks: false +imagePullSecrets: +- name: myImagePullSecret +restartPolicy: Always +schedulerName: default-scheduler +securityContext: + fsGroup: 2 + fsGroupChangePolicy: OnRootMismatch +terminationGracePeriodSeconds: 30 +tolerations: +- key: sometoleration +` + + assert.Assert(t, cmp.MarshalMatches(template.Spec, compare)) + }) +} diff --git a/internal/controller/standalone_pgadmin/users.go b/internal/controller/standalone_pgadmin/users.go new file mode 100644 index 0000000000..3c9a3ce05b --- /dev/null +++ b/internal/controller/standalone_pgadmin/users.go @@ -0,0 +1,308 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "io" + "strconv" + "strings" + + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +type Executor func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, +) error + +// pgAdminUserForJson is used for user data that is put in the users.json file in the +// pgAdmin secret. IsAdmin and Username come from the user spec, whereas Password is +// generated when the user is created. +type pgAdminUserForJson struct { + // Whether the user has admin privileges or not. + IsAdmin bool `json:"isAdmin"` + + // The user's password + Password string `json:"password"` + + // The username for User in pgAdmin. + // Must be unique in the pgAdmin's users list. + Username string `json:"username"` +} + +// reconcilePGAdminUsers reconciles the users listed in the pgAdmin spec, adding them +// to the pgAdmin secret, and creating/updating them in pgAdmin when appropriate. +func (r *PGAdminReconciler) reconcilePGAdminUsers(ctx context.Context, pgadmin *v1beta1.PGAdmin) error { + const container = naming.ContainerPGAdmin + var podExecutor Executor + log := logging.FromContext(ctx) + + // Find the running pgAdmin container. When there is none, return early. 
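+	// Only one pgAdmin Pod is expected, so the Pod name is the StatefulSet
+	// name with the ordinal suffix "-0" appended below.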
+ pod := &corev1.Pod{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + pod.Name += "-0" + + err := errors.WithStack(r.Client.Get(ctx, client.ObjectKeyFromObject(pod), pod)) + if err != nil { + return client.IgnoreNotFound(err) + } + + var running bool + var pgAdminImageSha string + for _, status := range pod.Status.ContainerStatuses { + if status.Name == container { + running = status.State.Running != nil + pgAdminImageSha = status.ImageID + } + } + if terminating := pod.DeletionTimestamp != nil; running && !terminating { + ctx = logging.NewContext(ctx, logging.FromContext(ctx).WithValues("pod", pod.Name)) + + podExecutor = func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return r.PodExec(ctx, pod.Namespace, pod.Name, container, stdin, stdout, stderr, command...) + } + } + if podExecutor == nil { + return nil + } + + // If the pgAdmin version is not in the status or the image SHA has changed, get + // the pgAdmin version and store it in the status. + var pgadminVersion int + if pgadmin.Status.MajorVersion == 0 || pgadmin.Status.ImageSHA != pgAdminImageSha { + pgadminVersion, err = r.reconcilePGAdminMajorVersion(ctx, podExecutor) + if err != nil { + return err + } + pgadmin.Status.MajorVersion = pgadminVersion + pgadmin.Status.ImageSHA = pgAdminImageSha + } else { + pgadminVersion = pgadmin.Status.MajorVersion + } + + // If the pgAdmin version is not v8 or higher, return early as user management is + // only supported for pgAdmin v8 and higher. + if pgadminVersion < 8 { + // If pgAdmin version is less than v8 and user management is being attempted, + // log a message clarifying that it is only supported for pgAdmin v8 and higher. + if len(pgadmin.Spec.Users) > 0 { + log.Info("User management is only supported for pgAdmin v8 and higher.", + "pgadminVersion", pgadminVersion) + } + return err + } + + return r.writePGAdminUsers(ctx, pgadmin, podExecutor) +} + +// reconcilePGAdminMajorVersion execs into the pgAdmin pod and retrieves the pgAdmin major version +func (r *PGAdminReconciler) reconcilePGAdminMajorVersion(ctx context.Context, exec Executor) (int, error) { + script := fmt.Sprintf(` +PGADMIN_DIR=%s +cd $PGADMIN_DIR && python3 -c "import config; print(config.APP_RELEASE)" +`, pgAdminDir) + + var stdin, stdout, stderr bytes.Buffer + + err := exec(ctx, &stdin, &stdout, &stderr, + []string{"bash", "-ceu", "--", script}...) + + if err != nil { + return 0, err + } + + return strconv.Atoi(strings.TrimSpace(stdout.String())) +} + +// writePGAdminUsers takes the users in the pgAdmin spec and writes (adds or updates) their data +// to both pgAdmin and the users.json file that is stored in the pgAdmin secret. If a user is +// removed from the spec, its data is removed from users.json, but it is not deleted from pgAdmin. 
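+// An illustrative users.json payload (one pgAdminUserForJson entry per spec
+// user, with the password value taken from the referenced Secret) might look like:
+//
+//	[{"isAdmin": true, "password": "<from-secret>", "username": "admin@example.com"}]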
+func (r *PGAdminReconciler) writePGAdminUsers(ctx context.Context, pgadmin *v1beta1.PGAdmin, + exec Executor) error { + log := logging.FromContext(ctx) + + existingUserSecret := &corev1.Secret{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + err := errors.WithStack( + r.Client.Get(ctx, client.ObjectKeyFromObject(existingUserSecret), existingUserSecret)) + if client.IgnoreNotFound(err) != nil { + return err + } + + intentUserSecret := &corev1.Secret{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + intentUserSecret.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("Secret")) + + intentUserSecret.Annotations = naming.Merge( + pgadmin.Spec.Metadata.GetAnnotationsOrNil(), + ) + intentUserSecret.Labels = naming.Merge( + pgadmin.Spec.Metadata.GetLabelsOrNil(), + naming.StandalonePGAdminLabels(pgadmin.Name)) + + // Initialize secret data map, or copy existing data if not nil + intentUserSecret.Data = make(map[string][]byte) + + setupScript := fmt.Sprintf(` +PGADMIN_DIR=%s +cd $PGADMIN_DIR +`, pgAdminDir) + + var existingUsersArr []pgAdminUserForJson + if existingUserSecret.Data["users.json"] != nil { + err := json.Unmarshal(existingUserSecret.Data["users.json"], &existingUsersArr) + if err != nil { + return err + } + } + existingUsersMap := make(map[string]pgAdminUserForJson) + for _, user := range existingUsersArr { + existingUsersMap[user.Username] = user + } + intentUsers := []pgAdminUserForJson{} + for _, user := range pgadmin.Spec.Users { + var stdin, stdout, stderr bytes.Buffer + typeFlag := "--nonadmin" + isAdmin := false + if user.Role == "Administrator" { + typeFlag = "--admin" + isAdmin = true + } + + // Get password from secret + userPasswordSecret := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{ + Namespace: pgadmin.Namespace, + Name: user.PasswordRef.LocalObjectReference.Name, + }} + err := errors.WithStack( + r.Client.Get(ctx, client.ObjectKeyFromObject(userPasswordSecret), userPasswordSecret)) + if err != nil { + log.Error(err, "Could not get user password secret") + continue + } + + // Make sure the password isn't nil or empty + password := userPasswordSecret.Data[user.PasswordRef.Key] + if password == nil { + log.Error(nil, `Could not retrieve password from secret. Make sure secret name and key are correct.`) + continue + } + if len(password) == 0 { + log.Error(nil, `Password must not be empty.`) + continue + } + + // Assemble user that will be used in add/update command and in updating + // the users.json file in the secret + intentUser := pgAdminUserForJson{ + Username: user.Username, + Password: string(password), + IsAdmin: isAdmin, + } + // If the user already exists in users.json and isAdmin or password has + // changed, run the update-user command. If the user already exists in + // users.json, but it hasn't changed, do nothing. If the user doesn't + // exist in users.json, run the add-user command. + if existingUser, present := existingUsersMap[user.Username]; present { + // If Password or IsAdmin have changed, attempt update-user command + if intentUser.IsAdmin != existingUser.IsAdmin || intentUser.Password != existingUser.Password { + script := setupScript + fmt.Sprintf(`python3 setup.py update-user %s --password "%s" "%s"`, + typeFlag, intentUser.Password, intentUser.Username) + "\n" + err = exec(ctx, &stdin, &stdout, &stderr, + []string{"bash", "-ceu", "--", script}...) + + // If any errors occurred during update, we want to log a message, + // add the existing user to users.json since the update was + // unsuccessful, and continue reconciling users. 
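+			// (Keeping the previous entry means the next reconcile still sees a
+			// difference and will retry the update.)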
+ if err != nil { + log.Error(err, "PodExec failed: ") + intentUsers = append(intentUsers, existingUser) + continue + } else if strings.TrimSpace(stderr.String()) != "" { + log.Error(errors.New(stderr.String()), fmt.Sprintf("pgAdmin setup.py error for %s: ", + intentUser.Username)) + intentUsers = append(intentUsers, existingUser) + continue + } + // If update user fails due to user not found or password length: + // https://github.com/pgadmin-org/pgadmin4/blob/REL-8_5/web/setup.py#L263 + // https://github.com/pgadmin-org/pgadmin4/blob/REL-8_5/web/setup.py#L246 + if strings.Contains(stdout.String(), "User not found") || + strings.Contains(stdout.String(), "Password must be") { + + log.Info("Failed to update pgAdmin user", "user", intentUser.Username, "error", stdout.String()) + r.Recorder.Event(pgadmin, + corev1.EventTypeWarning, "InvalidUserWarning", + fmt.Sprintf("Failed to update pgAdmin user %s: %s", + intentUser.Username, stdout.String())) + intentUsers = append(intentUsers, existingUser) + continue + } + } + } else { + // New user, so attempt add-user command + script := setupScript + fmt.Sprintf(`python3 setup.py add-user %s -- "%s" "%s"`, + typeFlag, intentUser.Username, intentUser.Password) + "\n" + err = exec(ctx, &stdin, &stdout, &stderr, + []string{"bash", "-ceu", "--", script}...) + + // If any errors occurred when attempting to add user, we want to log a message, + // and continue reconciling users. + if err != nil { + log.Error(err, "PodExec failed: ") + continue + } + if strings.TrimSpace(stderr.String()) != "" { + log.Error(errors.New(stderr.String()), fmt.Sprintf("pgAdmin setup.py error for %s: ", + intentUser.Username)) + continue + } + // If add user fails due to invalid username or password length: + // https://github.com/pgadmin-org/pgadmin4/blob/REL-8_5/web/pgadmin/tools/user_management/__init__.py#L457 + // https://github.com/pgadmin-org/pgadmin4/blob/REL-8_5/web/setup.py#L374 + if strings.Contains(stdout.String(), "Invalid email address") || + strings.Contains(stdout.String(), "Password must be") { + + log.Info(fmt.Sprintf("Failed to create pgAdmin user %s: %s", + intentUser.Username, stdout.String())) + r.Recorder.Event(pgadmin, + corev1.EventTypeWarning, "InvalidUserWarning", + fmt.Sprintf("Failed to create pgAdmin user %s: %s", + intentUser.Username, stdout.String())) + continue + } + } + // If we've gotten here, the user was successfully added or updated or nothing was done + // to the user at all, so we want to add it to the slice of users that will be put in the + // users.json file in the secret. + intentUsers = append(intentUsers, intentUser) + } + + // We've at least attempted to reconcile all users in the spec. If errors occurred when attempting + // to add a user, that user will not be in intentUsers. If errors occurred when attempting to + // update a user, the user will be in intentUsers as it existed before. We now want to marshal the + // intentUsers to json and write the users.json file to the secret. 
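+	// The Secret carries a controller reference to the PGAdmin, so it is
+	// garbage-collected automatically when the PGAdmin is deleted.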
+ usersJSON, err := json.Marshal(intentUsers) + if err != nil { + return err + } + intentUserSecret.Data["users.json"] = usersJSON + + err = errors.WithStack(r.setControllerReference(pgadmin, intentUserSecret)) + if err == nil { + err = errors.WithStack(r.apply(ctx, intentUserSecret)) + } + + return err +} diff --git a/internal/controller/standalone_pgadmin/users_test.go b/internal/controller/standalone_pgadmin/users_test.go new file mode 100644 index 0000000000..409fcea701 --- /dev/null +++ b/internal/controller/standalone_pgadmin/users_test.go @@ -0,0 +1,709 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + "encoding/json" + "fmt" + "io" + "strings" + "testing" + + "github.com/pkg/errors" + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/events" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestReconcilePGAdminUsers(t *testing.T) { + ctx := context.Background() + + pgadmin := &v1beta1.PGAdmin{} + pgadmin.Namespace = "ns1" + pgadmin.Name = "pgadmin1" + pgadmin.UID = "123" + pgadmin.Spec.Users = []v1beta1.PGAdminUser{ + { + Username: "testuser", + Role: "Administrator", + }, + } + + t.Run("NoPods", func(t *testing.T) { + r := new(PGAdminReconciler) + r.Client = fake.NewClientBuilder().Build() + assert.NilError(t, r.reconcilePGAdminUsers(ctx, pgadmin)) + }) + + // Pod in the namespace + pod := corev1.Pod{} + pod.Namespace = pgadmin.Namespace + pod.Name = fmt.Sprintf("pgadmin-%s-0", pgadmin.UID) + + t.Run("ContainerNotRunning", func(t *testing.T) { + pod := pod.DeepCopy() + + pod.DeletionTimestamp = nil + pod.Status.ContainerStatuses = nil + + r := new(PGAdminReconciler) + r.Client = fake.NewClientBuilder().WithObjects(pod).Build() + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, pgadmin)) + }) + + t.Run("PodTerminating", func(t *testing.T) { + pod := pod.DeepCopy() + + // Must add finalizer when adding deletion timestamp otherwise fake client will panic: + // https://github.com/kubernetes-sigs/controller-runtime/pull/2316 + pod.Finalizers = append(pod.Finalizers, "some-finalizer") + + pod.DeletionTimestamp = new(metav1.Time) + *pod.DeletionTimestamp = metav1.Now() + pod.Status.ContainerStatuses = + []corev1.ContainerStatus{{Name: naming.ContainerPGAdmin}} + pod.Status.ContainerStatuses[0].State.Running = + new(corev1.ContainerStateRunning) + + r := new(PGAdminReconciler) + r.Client = fake.NewClientBuilder().WithObjects(pod).Build() + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, pgadmin)) + }) + + // We only test v7 because if we did v8 then the writePGAdminUsers would + // be called and that method has its own tests later in this file + t.Run("PodHealthyVersionNotSet", func(t *testing.T) { + pgadmin := pgadmin.DeepCopy() + pod := pod.DeepCopy() + + pod.DeletionTimestamp = nil + pod.Status.ContainerStatuses = + []corev1.ContainerStatus{{Name: naming.ContainerPGAdmin}} + pod.Status.ContainerStatuses[0].State.Running = + new(corev1.ContainerStateRunning) + pod.Status.ContainerStatuses[0].ImageID = "fakeSHA" + 
+ r := new(PGAdminReconciler) + r.Client = fake.NewClientBuilder().WithObjects(pod).Build() + + calls := 0 + r.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + assert.Equal(t, pod, "pgadmin-123-0") + assert.Equal(t, namespace, pgadmin.Namespace) + assert.Equal(t, container, naming.ContainerPGAdmin) + + // Simulate a v7 version of pgAdmin by setting stdout to "7" for + // podexec call in reconcilePGAdminMajorVersion + stdout.Write([]byte("7")) + return nil + } + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, pgadmin)) + assert.Equal(t, calls, 1, "PodExec should be called once") + assert.Equal(t, pgadmin.Status.MajorVersion, 7) + assert.Equal(t, pgadmin.Status.ImageSHA, "fakeSHA") + }) + + t.Run("PodHealthyShaChanged", func(t *testing.T) { + pgadmin := pgadmin.DeepCopy() + pgadmin.Status.MajorVersion = 7 + pgadmin.Status.ImageSHA = "fakeSHA" + pod := pod.DeepCopy() + + pod.DeletionTimestamp = nil + pod.Status.ContainerStatuses = + []corev1.ContainerStatus{{Name: naming.ContainerPGAdmin}} + pod.Status.ContainerStatuses[0].State.Running = + new(corev1.ContainerStateRunning) + pod.Status.ContainerStatuses[0].ImageID = "newFakeSHA" + + r := new(PGAdminReconciler) + r.Client = fake.NewClientBuilder().WithObjects(pod).Build() + + calls := 0 + r.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + // Simulate a v7 version of pgAdmin by setting stdout to "7" for + // podexec call in reconcilePGAdminMajorVersion + stdout.Write([]byte("7")) + return nil + } + + assert.NilError(t, r.reconcilePGAdminUsers(ctx, pgadmin)) + assert.Equal(t, calls, 1, "PodExec should be called once") + assert.Equal(t, pgadmin.Status.MajorVersion, 7) + assert.Equal(t, pgadmin.Status.ImageSHA, "newFakeSHA") + }) +} + +func TestReconcilePGAdminMajorVersion(t *testing.T) { + ctx := context.Background() + pod := corev1.Pod{} + pod.Namespace = "test-namespace" + pod.Name = "pgadmin-123-0" + reconciler := &PGAdminReconciler{} + + podExecutor := func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return reconciler.PodExec(ctx, pod.Namespace, pod.Name, "pgadmin", stdin, stdout, stderr, command...) 
+ } + + t.Run("SuccessfulRetrieval", func(t *testing.T) { + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Equal(t, pod, "pgadmin-123-0") + assert.Equal(t, namespace, "test-namespace") + assert.Equal(t, container, naming.ContainerPGAdmin) + + // Simulate a v7 version of pgAdmin by setting stdout to "7" for + // podexec call in reconcilePGAdminMajorVersion + stdout.Write([]byte("7")) + return nil + } + + version, err := reconciler.reconcilePGAdminMajorVersion(ctx, podExecutor) + assert.NilError(t, err) + assert.Equal(t, version, 7) + }) + + t.Run("FailedRetrieval", func(t *testing.T) { + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + // Simulate the python call giving bad data (not a version int) + stdout.Write([]byte("asdfjkl;")) + return nil + } + + version, err := reconciler.reconcilePGAdminMajorVersion(ctx, podExecutor) + assert.Check(t, err != nil) + assert.Equal(t, version, 0) + }) + + t.Run("PodExecError", func(t *testing.T) { + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return errors.New("PodExecError") + } + + version, err := reconciler.reconcilePGAdminMajorVersion(ctx, podExecutor) + assert.Check(t, err != nil) + assert.Equal(t, version, 0) + }) +} + +func TestWritePGAdminUsers(t *testing.T) { + ctx := context.Background() + cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &PGAdminReconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + Recorder: recorder, + } + + ns := setupNamespace(t, cc) + pgadmin := new(v1beta1.PGAdmin) + pgadmin.Name = "test-standalone-pgadmin" + pgadmin.Namespace = ns.Name + assert.NilError(t, cc.Create(ctx, pgadmin)) + + userPasswordSecret1 := &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "user-password-secret1", + Namespace: ns.Name, + }, + Data: map[string][]byte{ + "password": []byte(`asdf`), + }, + } + assert.NilError(t, cc.Create(ctx, userPasswordSecret1)) + + userPasswordSecret2 := &corev1.Secret{ + ObjectMeta: metav1.ObjectMeta{ + Name: "user-password-secret2", + Namespace: ns.Name, + }, + Data: map[string][]byte{ + "password": []byte(`qwer`), + }, + } + assert.NilError(t, cc.Create(ctx, userPasswordSecret2)) + + t.Cleanup(func() { + assert.Check(t, cc.Delete(ctx, pgadmin)) + assert.Check(t, cc.Delete(ctx, userPasswordSecret1)) + assert.Check(t, cc.Delete(ctx, userPasswordSecret2)) + }) + + pod := corev1.Pod{} + pod.Namespace = pgadmin.Namespace + pod.Name = fmt.Sprintf("pgadmin-%s-0", pgadmin.UID) + + podExecutor := func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + return reconciler.PodExec(ctx, pod.Namespace, pod.Name, "pgadmin", stdin, stdout, stderr, command...) 
+ } + + t.Run("CreateOneUser", func(t *testing.T) { + pgadmin.Spec.Users = []v1beta1.PGAdminUser{ + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "user-password-secret1", + }, + Key: "password", + }, + Username: "testuser1", + Role: "Administrator", + }, + } + + calls := 0 + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + assert.Equal(t, pod, fmt.Sprintf("pgadmin-%s-0", pgadmin.UID)) + assert.Equal(t, namespace, pgadmin.Namespace) + assert.Equal(t, container, naming.ContainerPGAdmin) + assert.Equal(t, strings.Contains(strings.Join(command, " "), + `python3 setup.py add-user --admin -- "testuser1" "asdf"`), true) + + return nil + } + + assert.NilError(t, reconciler.writePGAdminUsers(ctx, pgadmin, podExecutor)) + assert.Equal(t, calls, 1, "PodExec should be called once") + + secret := &corev1.Secret{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + assert.NilError(t, errors.WithStack( + reconciler.Client.Get(ctx, client.ObjectKeyFromObject(secret), secret))) + if assert.Check(t, secret.Data["users.json"] != nil) { + var usersArr []pgAdminUserForJson + assert.NilError(t, json.Unmarshal(secret.Data["users.json"], &usersArr)) + assert.Equal(t, len(usersArr), 1) + assert.Equal(t, usersArr[0].Username, "testuser1") + assert.Equal(t, usersArr[0].IsAdmin, true) + assert.Equal(t, usersArr[0].Password, "asdf") + } + }) + + t.Run("AddAnotherUserEditExistingUser", func(t *testing.T) { + pgadmin.Spec.Users = []v1beta1.PGAdminUser{ + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "user-password-secret1", + }, + Key: "password", + }, + Username: "testuser1", + Role: "User", + }, + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "user-password-secret2", + }, + Key: "password", + }, + Username: "testuser2", + Role: "Administrator", + }, + } + + calls := 0 + addUserCalls := 0 + updateUserCalls := 0 + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + if strings.Contains(strings.Join(command, " "), "python3 setup.py add-user") { + addUserCalls++ + } + if strings.Contains(strings.Join(command, " "), "python3 setup.py update-user") { + updateUserCalls++ + } + + return nil + } + + assert.NilError(t, reconciler.writePGAdminUsers(ctx, pgadmin, podExecutor)) + assert.Equal(t, calls, 2, "PodExec should be called twice") + assert.Equal(t, addUserCalls, 1, "The add-user command should be executed once") + assert.Equal(t, updateUserCalls, 1, "The update-user command should be executed once") + + secret := &corev1.Secret{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + assert.NilError(t, errors.WithStack( + reconciler.Client.Get(ctx, client.ObjectKeyFromObject(secret), secret))) + if assert.Check(t, secret.Data["users.json"] != nil) { + var usersArr []pgAdminUserForJson + assert.NilError(t, json.Unmarshal(secret.Data["users.json"], &usersArr)) + assert.Equal(t, len(usersArr), 2) + assert.Equal(t, usersArr[0].Username, "testuser1") + assert.Equal(t, usersArr[0].IsAdmin, false) + assert.Equal(t, usersArr[0].Password, "asdf") + assert.Equal(t, usersArr[1].Username, "testuser2") + assert.Equal(t, usersArr[1].IsAdmin, true) + assert.Equal(t, usersArr[1].Password, "qwer") + } + }) + + 
t.Run("AddOneEditOneLeaveOneAlone", func(t *testing.T) { + pgadmin.Spec.Users = []v1beta1.PGAdminUser{ + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "user-password-secret1", + }, + Key: "password", + }, + Username: "testuser1", + Role: "User", + }, + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "user-password-secret1", + }, + Key: "password", + }, + Username: "testuser2", + Role: "User", + }, + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "user-password-secret2", + }, + Key: "password", + }, + Username: "testuser3", + Role: "Administrator", + }, + } + calls := 0 + addUserCalls := 0 + updateUserCalls := 0 + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + if strings.Contains(strings.Join(command, " "), "python3 setup.py add-user") { + addUserCalls++ + } + if strings.Contains(strings.Join(command, " "), "python3 setup.py update-user") { + updateUserCalls++ + } + + return nil + } + + assert.NilError(t, reconciler.writePGAdminUsers(ctx, pgadmin, podExecutor)) + assert.Equal(t, calls, 2, "PodExec should be called twice") + assert.Equal(t, addUserCalls, 1, "The add-user command should be executed once") + assert.Equal(t, updateUserCalls, 1, "The update-user command should be executed once") + + secret := &corev1.Secret{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + assert.NilError(t, errors.WithStack( + reconciler.Client.Get(ctx, client.ObjectKeyFromObject(secret), secret))) + if assert.Check(t, secret.Data["users.json"] != nil) { + var usersArr []pgAdminUserForJson + assert.NilError(t, json.Unmarshal(secret.Data["users.json"], &usersArr)) + assert.Equal(t, len(usersArr), 3) + assert.Equal(t, usersArr[0].Username, "testuser1") + assert.Equal(t, usersArr[0].IsAdmin, false) + assert.Equal(t, usersArr[0].Password, "asdf") + assert.Equal(t, usersArr[1].Username, "testuser2") + assert.Equal(t, usersArr[1].IsAdmin, false) + assert.Equal(t, usersArr[1].Password, "asdf") + assert.Equal(t, usersArr[2].Username, "testuser3") + assert.Equal(t, usersArr[2].IsAdmin, true) + assert.Equal(t, usersArr[2].Password, "qwer") + } + }) + + t.Run("DeleteUsers", func(t *testing.T) { + pgadmin.Spec.Users = []v1beta1.PGAdminUser{ + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "user-password-secret1", + }, + Key: "password", + }, + Username: "testuser1", + Role: "User", + }, + } + calls := 0 + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + return nil + } + + assert.NilError(t, reconciler.writePGAdminUsers(ctx, pgadmin, podExecutor)) + assert.Equal(t, calls, 0, "PodExec should be called zero times") + + secret := &corev1.Secret{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + assert.NilError(t, errors.WithStack( + reconciler.Client.Get(ctx, client.ObjectKeyFromObject(secret), secret))) + if assert.Check(t, secret.Data["users.json"] != nil) { + var usersArr []pgAdminUserForJson + assert.NilError(t, json.Unmarshal(secret.Data["users.json"], &usersArr)) + assert.Equal(t, len(usersArr), 1) + assert.Equal(t, usersArr[0].Username, "testuser1") + assert.Equal(t, usersArr[0].IsAdmin, false) + assert.Equal(t, usersArr[0].Password, "asdf") + } + }) 
+ + t.Run("ErrorsWhenUpdating", func(t *testing.T) { + pgadmin.Spec.Users = []v1beta1.PGAdminUser{ + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "user-password-secret1", + }, + Key: "password", + }, + Username: "testuser1", + Role: "Administrator", + }, + } + + // PodExec error + calls := 0 + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + return errors.New("podexec failure") + } + + assert.NilError(t, reconciler.writePGAdminUsers(ctx, pgadmin, podExecutor)) + assert.Equal(t, calls, 1, "PodExec should be called once") + + // User in users.json should be unchanged + secret := &corev1.Secret{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + assert.NilError(t, errors.WithStack( + reconciler.Client.Get(ctx, client.ObjectKeyFromObject(secret), secret))) + if assert.Check(t, secret.Data["users.json"] != nil) { + var usersArr []pgAdminUserForJson + assert.NilError(t, json.Unmarshal(secret.Data["users.json"], &usersArr)) + assert.Equal(t, len(usersArr), 1) + assert.Equal(t, usersArr[0].Username, "testuser1") + assert.Equal(t, usersArr[0].IsAdmin, false) + assert.Equal(t, usersArr[0].Password, "asdf") + } + + // setup.py error in stderr + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + stderr.Write([]byte("issue running setup.py update-user command")) + + return nil + } + + assert.NilError(t, reconciler.writePGAdminUsers(ctx, pgadmin, podExecutor)) + assert.Equal(t, calls, 2, "PodExec should be called once more") + + // User in users.json should be unchanged + assert.NilError(t, errors.WithStack( + reconciler.Client.Get(ctx, client.ObjectKeyFromObject(secret), secret))) + if assert.Check(t, secret.Data["users.json"] != nil) { + var usersArr []pgAdminUserForJson + assert.NilError(t, json.Unmarshal(secret.Data["users.json"], &usersArr)) + assert.Equal(t, len(usersArr), 1) + assert.Equal(t, usersArr[0].Username, "testuser1") + assert.Equal(t, usersArr[0].IsAdmin, false) + assert.Equal(t, usersArr[0].Password, "asdf") + } + }) + + t.Run("ErrorsWhenAdding", func(t *testing.T) { + pgadmin.Spec.Users = []v1beta1.PGAdminUser{ + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "user-password-secret1", + }, + Key: "password", + }, + Username: "testuser1", + Role: "User", + }, + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "user-password-secret2", + }, + Key: "password", + }, + Username: "testuser2", + Role: "Administrator", + }, + } + + // PodExec error + calls := 0 + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + return errors.New("podexec failure") + } + + assert.NilError(t, reconciler.writePGAdminUsers(ctx, pgadmin, podExecutor)) + assert.Equal(t, calls, 1, "PodExec should be called once") + + // User in users.json should be unchanged and attempt to add user should not + // have succeeded + secret := &corev1.Secret{ObjectMeta: naming.StandalonePGAdmin(pgadmin)} + assert.NilError(t, errors.WithStack( + reconciler.Client.Get(ctx, client.ObjectKeyFromObject(secret), secret))) + if assert.Check(t, secret.Data["users.json"] != nil) { + var usersArr 
[]pgAdminUserForJson + assert.NilError(t, json.Unmarshal(secret.Data["users.json"], &usersArr)) + assert.Equal(t, len(usersArr), 1) + assert.Equal(t, usersArr[0].Username, "testuser1") + assert.Equal(t, usersArr[0].IsAdmin, false) + assert.Equal(t, usersArr[0].Password, "asdf") + } + + // setup.py error in stderr + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + stderr.Write([]byte("issue running setup.py add-user command")) + + return nil + } + + assert.NilError(t, reconciler.writePGAdminUsers(ctx, pgadmin, podExecutor)) + assert.Equal(t, calls, 2, "PodExec should be called once more") + + // User in users.json should be unchanged and attempt to add user should not + // have succeeded + assert.NilError(t, errors.WithStack( + reconciler.Client.Get(ctx, client.ObjectKeyFromObject(secret), secret))) + if assert.Check(t, secret.Data["users.json"] != nil) { + var usersArr []pgAdminUserForJson + assert.NilError(t, json.Unmarshal(secret.Data["users.json"], &usersArr)) + assert.Equal(t, len(usersArr), 1) + assert.Equal(t, usersArr[0].Username, "testuser1") + assert.Equal(t, usersArr[0].IsAdmin, false) + assert.Equal(t, usersArr[0].Password, "asdf") + } + + // setup.py error in stdout regarding email address + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + stdout.Write([]byte("Invalid email address")) + + return nil + } + + assert.NilError(t, reconciler.writePGAdminUsers(ctx, pgadmin, podExecutor)) + assert.Equal(t, calls, 3, "PodExec should be called once more") + + // User in users.json should be unchanged and attempt to add user should not + // have succeeded + assert.NilError(t, errors.WithStack( + reconciler.Client.Get(ctx, client.ObjectKeyFromObject(secret), secret))) + if assert.Check(t, secret.Data["users.json"] != nil) { + var usersArr []pgAdminUserForJson + assert.NilError(t, json.Unmarshal(secret.Data["users.json"], &usersArr)) + assert.Equal(t, len(usersArr), 1) + assert.Equal(t, usersArr[0].Username, "testuser1") + assert.Equal(t, usersArr[0].IsAdmin, false) + assert.Equal(t, usersArr[0].Password, "asdf") + } + assert.Equal(t, len(recorder.Events), 1) + + // setup.py error in stdout regarding password + reconciler.PodExec = func( + ctx context.Context, namespace, pod, container string, + stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + + stdout.Write([]byte("Password must be at least 6 characters long")) + + return nil + } + + assert.NilError(t, reconciler.writePGAdminUsers(ctx, pgadmin, podExecutor)) + assert.Equal(t, calls, 4, "PodExec should be called once more") + + // User in users.json should be unchanged and attempt to add user should not + // have succeeded + assert.NilError(t, errors.WithStack( + reconciler.Client.Get(ctx, client.ObjectKeyFromObject(secret), secret))) + if assert.Check(t, secret.Data["users.json"] != nil) { + var usersArr []pgAdminUserForJson + assert.NilError(t, json.Unmarshal(secret.Data["users.json"], &usersArr)) + assert.Equal(t, len(usersArr), 1) + assert.Equal(t, usersArr[0].Username, "testuser1") + assert.Equal(t, usersArr[0].IsAdmin, false) + assert.Equal(t, usersArr[0].Password, "asdf") + } + assert.Equal(t, len(recorder.Events), 2) + }) +} diff --git a/internal/controller/standalone_pgadmin/volume.go b/internal/controller/standalone_pgadmin/volume.go new file mode 
100644 index 0000000000..7615f6142b --- /dev/null +++ b/internal/controller/standalone_pgadmin/volume.go @@ -0,0 +1,136 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/validation/field" + + "github.com/pkg/errors" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// +kubebuilder:rbac:groups="",resources="persistentvolumeclaims",verbs={create,patch} + +// reconcilePGAdminDataVolume writes the PersistentVolumeClaim for instance's +// pgAdmin data volume. +func (r *PGAdminReconciler) reconcilePGAdminDataVolume( + ctx context.Context, pgadmin *v1beta1.PGAdmin, +) (*corev1.PersistentVolumeClaim, error) { + + pvc := pvc(pgadmin) + + err := errors.WithStack(r.setControllerReference(pgadmin, pvc)) + + if err == nil { + err = r.handlePersistentVolumeClaimError(pgadmin, + errors.WithStack(r.apply(ctx, pvc))) + } + + return pvc, err +} + +// pvc defines the data volume for pgAdmin. +func pvc(pgadmin *v1beta1.PGAdmin) *corev1.PersistentVolumeClaim { + pvc := &corev1.PersistentVolumeClaim{ + ObjectMeta: naming.StandalonePGAdmin(pgadmin), + } + pvc.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim")) + + pvc.Annotations = pgadmin.Spec.Metadata.GetAnnotationsOrNil() + pvc.Labels = naming.Merge( + pgadmin.Spec.Metadata.GetLabelsOrNil(), + naming.StandalonePGAdminDataLabels(pgadmin.Name), + ) + pvc.Spec = pgadmin.Spec.DataVolumeClaimSpec + + return pvc +} + +// handlePersistentVolumeClaimError inspects err for expected Kubernetes API +// responses to writing a PVC. It turns errors it understands into conditions +// and events. When err is handled it returns nil. Otherwise it returns err. +// +// TODO(tjmoore4): This function is duplicated from a version that takes a PostgresCluster object. +func (r *PGAdminReconciler) handlePersistentVolumeClaimError( + pgadmin *v1beta1.PGAdmin, err error, +) error { + var status metav1.Status + if api := apierrors.APIStatus(nil); errors.As(err, &api) { + status = api.Status() + } + + cannotResize := func(err error) { + meta.SetStatusCondition(&pgadmin.Status.Conditions, metav1.Condition{ + Type: v1beta1.PersistentVolumeResizing, + Status: metav1.ConditionFalse, + Reason: string(apierrors.ReasonForError(err)), + Message: "One or more volumes cannot be resized", + + ObservedGeneration: pgadmin.Generation, + }) + } + + volumeError := func(err error) { + r.Recorder.Event(pgadmin, + corev1.EventTypeWarning, "PersistentVolumeError", err.Error()) + } + + // Forbidden means (RBAC is broken or) the API request was rejected by an + // admission controller. Assume it is the latter and raise the issue as a + // condition and event. + // - https://releases.k8s.io/v1.21.0/plugin/pkg/admission/storage/persistentvolume/resize/admission.go + if apierrors.IsForbidden(err) { + cannotResize(err) + volumeError(err) + return nil + } + + if apierrors.IsInvalid(err) && status.Details != nil { + unknownCause := false + for _, cause := range status.Details.Causes { + switch { + // Forbidden "spec" happens when the PVC is waiting to be bound. + // It should resolve on its own and trigger another reconcile. Raise + // the issue as an event. 
+ // - https://releases.k8s.io/v1.21.0/pkg/apis/core/validation/validation.go#L2028 + // + // TODO(cbandy): This can also happen when changing a field other + // than requests within the spec (access modes, storage class, etc). + // That case needs a condition or should be prevented via a webhook. + case + cause.Type == metav1.CauseType(field.ErrorTypeForbidden) && + cause.Field == "spec": + volumeError(err) + + // Forbidden "storage" happens when the change is not allowed. Raise + // the issue as a condition and event. + // - https://releases.k8s.io/v1.21.0/pkg/apis/core/validation/validation.go#L2028 + case + cause.Type == metav1.CauseType(field.ErrorTypeForbidden) && + cause.Field == "spec.resources.requests.storage": + cannotResize(err) + volumeError(err) + + default: + unknownCause = true + } + } + + if len(status.Details.Causes) > 0 && !unknownCause { + // All the causes were identified and handled. + return nil + } + } + + return err +} diff --git a/internal/controller/standalone_pgadmin/volume_test.go b/internal/controller/standalone_pgadmin/volume_test.go new file mode 100644 index 0000000000..645c228277 --- /dev/null +++ b/internal/controller/standalone_pgadmin/volume_test.go @@ -0,0 +1,291 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/validation/field" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/pkg/errors" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/events" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestReconcilePGAdminDataVolume(t *testing.T) { + ctx := context.Background() + cc := setupKubernetes(t) + require.ParallelCapacity(t, 1) + + reconciler := &PGAdminReconciler{ + Client: cc, + Owner: client.FieldOwner(t.Name()), + } + + ns := setupNamespace(t, cc) + pgadmin := &v1beta1.PGAdmin{ + ObjectMeta: metav1.ObjectMeta{ + Name: "test-standalone-pgadmin", + Namespace: ns.Name, + }, + Spec: v1beta1.PGAdminSpec{ + DataVolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce}, + Resources: corev1.VolumeResourceRequirements{ + Requests: map[corev1.ResourceName]resource.Quantity{ + corev1.ResourceStorage: resource.MustParse("1Gi")}}, + StorageClassName: initialize.String("storage-class-for-data"), + }}} + + assert.NilError(t, cc.Create(ctx, pgadmin)) + t.Cleanup(func() { assert.Check(t, cc.Delete(ctx, pgadmin)) }) + + t.Run("DataVolume", func(t *testing.T) { + pvc, err := reconciler.reconcilePGAdminDataVolume(ctx, pgadmin) + assert.NilError(t, err) + + assert.Assert(t, metav1.IsControlledBy(pvc, pgadmin)) + + assert.Equal(t, pvc.Labels[naming.LabelStandalonePGAdmin], pgadmin.Name) + assert.Equal(t, pvc.Labels[naming.LabelRole], naming.RolePGAdmin) + assert.Equal(t, pvc.Labels[naming.LabelData], naming.DataPGAdmin) + + assert.Assert(t, 
cmp.MarshalMatches(pvc.Spec, ` +accessModes: +- ReadWriteOnce +resources: + requests: + storage: 1Gi +storageClassName: storage-class-for-data +volumeMode: Filesystem + `)) + }) +} + +func TestHandlePersistentVolumeClaimError(t *testing.T) { + recorder := events.NewRecorder(t, runtime.Scheme) + reconciler := &PGAdminReconciler{ + Recorder: recorder, + } + + pgadmin := new(v1beta1.PGAdmin) + pgadmin.Namespace = "ns1" + pgadmin.Name = "pg2" + + reset := func() { + pgadmin.Status.Conditions = pgadmin.Status.Conditions[:0] + recorder.Events = recorder.Events[:0] + } + + // It returns any error it does not recognize completely. + t.Run("Unexpected", func(t *testing.T) { + t.Cleanup(reset) + + err := errors.New("whomp") + + assert.Equal(t, err, reconciler.handlePersistentVolumeClaimError(pgadmin, err)) + assert.Assert(t, len(pgadmin.Status.Conditions) == 0) + assert.Assert(t, len(recorder.Events) == 0) + + err = apierrors.NewInvalid( + corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim").GroupKind(), + "some-pvc", + field.ErrorList{ + field.Forbidden(field.NewPath("metadata"), "dunno"), + }) + + assert.Equal(t, err, reconciler.handlePersistentVolumeClaimError(pgadmin, err)) + assert.Assert(t, len(pgadmin.Status.Conditions) == 0) + assert.Assert(t, len(recorder.Events) == 0) + }) + + // Neither statically nor dynamically provisioned claims can be resized + // before they are bound to a persistent volume. Kubernetes rejects such + // changes during PVC validation. + // + // A static PVC is one with a present-and-blank storage class. It is + // pending until a PV exists that matches its selector, requests, etc. + // - https://docs.k8s.io/concepts/storage/persistent-volumes/#static + // - https://docs.k8s.io/concepts/storage/persistent-volumes/#class-1 + // + // A dynamic PVC is associated with a storage class. Storage classes that + // "WaitForFirstConsumer" do not bind a PV until there is a pod. + // - https://docs.k8s.io/concepts/storage/persistent-volumes/#dynamic + t.Run("Pending", func(t *testing.T) { + t.Run("Grow", func(t *testing.T) { + t.Cleanup(reset) + + err := apierrors.NewInvalid( + corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim").GroupKind(), + "my-pending-pvc", + field.ErrorList{ + // - https://releases.k8s.io/v1.24.0/pkg/apis/core/validation/validation.go#L2184 + field.Forbidden(field.NewPath("spec"), "… immutable … bound claim …"), + }) + + // PVCs will bind eventually. This error should become an event without a condition. + assert.NilError(t, reconciler.handlePersistentVolumeClaimError(pgadmin, err)) + + assert.Check(t, len(pgadmin.Status.Conditions) == 0) + assert.Check(t, len(recorder.Events) > 0) + + for _, event := range recorder.Events { + assert.Equal(t, event.Type, "Warning") + assert.Equal(t, event.Reason, "PersistentVolumeError") + assert.Assert(t, cmp.Contains(event.Note, "PersistentVolumeClaim")) + assert.Assert(t, cmp.Contains(event.Note, "my-pending-pvc")) + assert.Assert(t, cmp.Contains(event.Note, "bound claim")) + assert.DeepEqual(t, event.Regarding, corev1.ObjectReference{ + APIVersion: v1beta1.GroupVersion.Identifier(), + Kind: "PGAdmin", + Namespace: "ns1", Name: "pg2", + }) + } + }) + + t.Run("Shrink", func(t *testing.T) { + t.Cleanup(reset) + + // Requests to make a pending PVC smaller fail for multiple reasons. 
+ err := apierrors.NewInvalid( + corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim").GroupKind(), + "my-pending-pvc", + field.ErrorList{ + // - https://releases.k8s.io/v1.24.0/pkg/apis/core/validation/validation.go#L2184 + field.Forbidden(field.NewPath("spec"), "… immutable … bound claim …"), + + // - https://releases.k8s.io/v1.24.0/pkg/apis/core/validation/validation.go#L2188 + field.Forbidden(field.NewPath("spec", "resources", "requests", "storage"), "… not be less …"), + }) + + // PVCs will bind eventually, but the size is rejected. + assert.NilError(t, reconciler.handlePersistentVolumeClaimError(pgadmin, err)) + + assert.Check(t, len(pgadmin.Status.Conditions) > 0) + assert.Check(t, len(recorder.Events) > 0) + + for _, condition := range pgadmin.Status.Conditions { + assert.Equal(t, condition.Type, "PersistentVolumeResizing") + assert.Equal(t, condition.Status, metav1.ConditionFalse) + assert.Equal(t, condition.Reason, "Invalid") + assert.Assert(t, cmp.Contains(condition.Message, "cannot be resized")) + } + + for _, event := range recorder.Events { + assert.Equal(t, event.Type, "Warning") + assert.Equal(t, event.Reason, "PersistentVolumeError") + assert.Assert(t, cmp.Contains(event.Note, "PersistentVolumeClaim")) + assert.Assert(t, cmp.Contains(event.Note, "my-pending-pvc")) + assert.Assert(t, cmp.Contains(event.Note, "bound claim")) + assert.Assert(t, cmp.Contains(event.Note, "not be less")) + assert.DeepEqual(t, event.Regarding, corev1.ObjectReference{ + APIVersion: v1beta1.GroupVersion.Identifier(), + Kind: "PGAdmin", + Namespace: "ns1", Name: "pg2", + }) + } + }) + }) + + // Statically provisioned claims cannot be resized. Kubernetes responds + // differently based on the size growing or shrinking. + // + // Dynamically provisioned claims of storage classes that do *not* + // "allowVolumeExpansion" behave the same way. + t.Run("NoExpansion", func(t *testing.T) { + t.Run("Grow", func(t *testing.T) { + t.Cleanup(reset) + + // - https://releases.k8s.io/v1.24.0/plugin/pkg/admission/storage/persistentvolume/resize/admission.go#L108 + err := apierrors.NewForbidden( + corev1.Resource("persistentvolumeclaims"), "my-static-pvc", + errors.New("… only dynamically provisioned …")) + + // This PVC cannot resize. The error should become an event and condition. + assert.NilError(t, reconciler.handlePersistentVolumeClaimError(pgadmin, err)) + + assert.Check(t, len(pgadmin.Status.Conditions) > 0) + assert.Check(t, len(recorder.Events) > 0) + + for _, condition := range pgadmin.Status.Conditions { + assert.Equal(t, condition.Type, "PersistentVolumeResizing") + assert.Equal(t, condition.Status, metav1.ConditionFalse) + assert.Equal(t, condition.Reason, "Forbidden") + assert.Assert(t, cmp.Contains(condition.Message, "cannot be resized")) + } + + for _, event := range recorder.Events { + assert.Equal(t, event.Type, "Warning") + assert.Equal(t, event.Reason, "PersistentVolumeError") + assert.Assert(t, cmp.Contains(event.Note, "persistentvolumeclaim")) + assert.Assert(t, cmp.Contains(event.Note, "my-static-pvc")) + assert.Assert(t, cmp.Contains(event.Note, "only dynamic")) + assert.DeepEqual(t, event.Regarding, corev1.ObjectReference{ + APIVersion: v1beta1.GroupVersion.Identifier(), + Kind: "PGAdmin", + Namespace: "ns1", Name: "pg2", + }) + } + }) + + // Dynamically provisioned claims of storage classes that *do* + // "allowVolumeExpansion" can grow but cannot shrink. Kubernetes + // rejects such changes during PVC validation, just like static claims. 
+ // + // A future version of Kubernetes will allow `spec.resources` to shrink + // so long as it is greater than `status.capacity`. + // - https://git.k8s.io/enhancements/keps/sig-storage/1790-recover-resize-failure + t.Run("Shrink", func(t *testing.T) { + t.Cleanup(reset) + + err := apierrors.NewInvalid( + corev1.SchemeGroupVersion.WithKind("PersistentVolumeClaim").GroupKind(), + "my-static-pvc", + field.ErrorList{ + // - https://releases.k8s.io/v1.24.0/pkg/apis/core/validation/validation.go#L2188 + field.Forbidden(field.NewPath("spec", "resources", "requests", "storage"), "… not be less …"), + }) + + // The PVC size is rejected. This error should become an event and condition. + assert.NilError(t, reconciler.handlePersistentVolumeClaimError(pgadmin, err)) + + assert.Check(t, len(pgadmin.Status.Conditions) > 0) + assert.Check(t, len(recorder.Events) > 0) + + for _, condition := range pgadmin.Status.Conditions { + assert.Equal(t, condition.Type, "PersistentVolumeResizing") + assert.Equal(t, condition.Status, metav1.ConditionFalse) + assert.Equal(t, condition.Reason, "Invalid") + assert.Assert(t, cmp.Contains(condition.Message, "cannot be resized")) + } + + for _, event := range recorder.Events { + assert.Equal(t, event.Type, "Warning") + assert.Equal(t, event.Reason, "PersistentVolumeError") + assert.Assert(t, cmp.Contains(event.Note, "PersistentVolumeClaim")) + assert.Assert(t, cmp.Contains(event.Note, "my-static-pvc")) + assert.Assert(t, cmp.Contains(event.Note, "not be less")) + assert.DeepEqual(t, event.Regarding, corev1.ObjectReference{ + APIVersion: v1beta1.GroupVersion.Identifier(), + Kind: "PGAdmin", + Namespace: "ns1", Name: "pg2", + }) + } + }) + }) +} diff --git a/internal/controller/standalone_pgadmin/watches.go b/internal/controller/standalone_pgadmin/watches.go new file mode 100644 index 0000000000..49ac1ebd29 --- /dev/null +++ b/internal/controller/standalone_pgadmin/watches.go @@ -0,0 +1,102 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + + "k8s.io/client-go/util/workqueue" + ctrl "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/handler" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// watchPostgresClusters returns a [handler.EventHandler] for PostgresClusters. 
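+//
+// As a minimal sketch, assuming the controller is assembled with the
+// controller-runtime builder as elsewhere in this operator, the handler is
+// intended to be registered roughly like so (mgr is the controller manager):
+//
+//	ctrl.NewControllerManagedBy(mgr).
+//		For(&v1beta1.PGAdmin{}).
+//		Watches(&v1beta1.PostgresCluster{}, r.watchPostgresClusters()).
+//		Complete(r)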
+func (r *PGAdminReconciler) watchPostgresClusters() handler.Funcs { + handle := func(ctx context.Context, cluster client.Object, q workqueue.RateLimitingInterface) { + for _, pgadmin := range r.findPGAdminsForPostgresCluster(ctx, cluster) { + + q.Add(ctrl.Request{ + NamespacedName: client.ObjectKeyFromObject(pgadmin), + }) + } + } + + return handler.Funcs{ + CreateFunc: func(ctx context.Context, e event.CreateEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.Object, q) + }, + UpdateFunc: func(ctx context.Context, e event.UpdateEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.ObjectNew, q) + }, + DeleteFunc: func(ctx context.Context, e event.DeleteEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.Object, q) + }, + } +} + +// watchForRelatedSecret handles create/update/delete events for secrets, +// passing the Secret ObjectKey to findPGAdminsForSecret +func (r *PGAdminReconciler) watchForRelatedSecret() handler.EventHandler { + handle := func(ctx context.Context, secret client.Object, q workqueue.RateLimitingInterface) { + key := client.ObjectKeyFromObject(secret) + + for _, pgadmin := range r.findPGAdminsForSecret(ctx, key) { + q.Add(ctrl.Request{ + NamespacedName: client.ObjectKeyFromObject(pgadmin), + }) + } + } + + return handler.Funcs{ + CreateFunc: func(ctx context.Context, e event.CreateEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.Object, q) + }, + UpdateFunc: func(ctx context.Context, e event.UpdateEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.ObjectNew, q) + }, + // If the secret is deleted, we want to reconcile + // in order to emit an event/status about this problem. + // We will also emit a matching event/status about this problem + // when we reconcile the cluster and can't find the secret. + // That way, users will get two alerts: one when the secret is deleted + // and another when the cluster is being reconciled. + DeleteFunc: func(ctx context.Context, e event.DeleteEvent, q workqueue.RateLimitingInterface) { + handle(ctx, e.Object, q) + }, + } +} + +//+kubebuilder:rbac:groups="postgres-operator.crunchydata.com",resources="pgadmins",verbs={list} + +// findPGAdminsForSecret returns PGAdmins that have a user or users that have their password +// stored in the Secret +func (r *PGAdminReconciler) findPGAdminsForSecret( + ctx context.Context, secret client.ObjectKey, +) []*v1beta1.PGAdmin { + var matching []*v1beta1.PGAdmin + var pgadmins v1beta1.PGAdminList + + // NOTE: If this becomes slow due to a large number of PGAdmins in a single + // namespace, we can configure the [ctrl.Manager] field indexer and pass a + // [fields.Selector] here. + // - https://book.kubebuilder.io/reference/watching-resources/externally-managed.html + if err := r.List(ctx, &pgadmins, &client.ListOptions{ + Namespace: secret.Namespace, + }); err == nil { + for i := range pgadmins.Items { + for j := range pgadmins.Items[i].Spec.Users { + if pgadmins.Items[i].Spec.Users[j].PasswordRef.LocalObjectReference.Name == secret.Name { + matching = append(matching, &pgadmins.Items[i]) + break + } + } + } + } + return matching +} diff --git a/internal/controller/standalone_pgadmin/watches_test.go b/internal/controller/standalone_pgadmin/watches_test.go new file mode 100644 index 0000000000..1419eb9efa --- /dev/null +++ b/internal/controller/standalone_pgadmin/watches_test.go @@ -0,0 +1,122 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package standalone_pgadmin + +import ( + "context" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestFindPGAdminsForSecret(t *testing.T) { + ctx := context.Background() + tClient := setupKubernetes(t) + require.ParallelCapacity(t, 0) + + ns := setupNamespace(t, tClient) + reconciler := &PGAdminReconciler{Client: tClient} + + secret1 := &corev1.Secret{} + secret1.Namespace = ns.Name + secret1.Name = "first-password-secret" + + assert.NilError(t, tClient.Create(ctx, secret1)) + secretObjectKey := client.ObjectKeyFromObject(secret1) + + t.Run("NoPGAdmins", func(t *testing.T) { + pgadmins := reconciler.findPGAdminsForSecret(ctx, secretObjectKey) + + assert.Equal(t, len(pgadmins), 0) + }) + + t.Run("OnePGAdmin", func(t *testing.T) { + pgadmin1 := new(v1beta1.PGAdmin) + pgadmin1.Namespace = ns.Name + pgadmin1.Name = "first-pgadmin" + pgadmin1.Spec.Users = []v1beta1.PGAdminUser{ + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "first-password-secret", + }, + Key: "password", + }, + Username: "testuser", + Role: "Administrator", + }, + } + assert.NilError(t, tClient.Create(ctx, pgadmin1)) + + pgadmins := reconciler.findPGAdminsForSecret(ctx, secretObjectKey) + + assert.Equal(t, len(pgadmins), 1) + assert.Equal(t, pgadmins[0].Name, "first-pgadmin") + }) + + t.Run("TwoPGAdmins", func(t *testing.T) { + pgadmin2 := new(v1beta1.PGAdmin) + pgadmin2.Namespace = ns.Name + pgadmin2.Name = "second-pgadmin" + pgadmin2.Spec.Users = []v1beta1.PGAdminUser{ + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "first-password-secret", + }, + Key: "password", + }, + Username: "testuser2", + Role: "Administrator", + }, + } + assert.NilError(t, tClient.Create(ctx, pgadmin2)) + + pgadmins := reconciler.findPGAdminsForSecret(ctx, secretObjectKey) + + assert.Equal(t, len(pgadmins), 2) + pgadminCount := map[string]int{} + for _, pgadmin := range pgadmins { + pgadminCount[pgadmin.Name] += 1 + } + assert.Equal(t, pgadminCount["first-pgadmin"], 1) + assert.Equal(t, pgadminCount["second-pgadmin"], 1) + }) + + t.Run("PGAdminWithDifferentSecretNameNotIncluded", func(t *testing.T) { + pgadmin3 := new(v1beta1.PGAdmin) + pgadmin3.Namespace = ns.Name + pgadmin3.Name = "third-pgadmin" + pgadmin3.Spec.Users = []v1beta1.PGAdminUser{ + { + PasswordRef: &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "other-password-secret", + }, + Key: "password", + }, + Username: "testuser2", + Role: "Administrator", + }, + } + assert.NilError(t, tClient.Create(ctx, pgadmin3)) + + pgadmins := reconciler.findPGAdminsForSecret(ctx, secretObjectKey) + + assert.Equal(t, len(pgadmins), 2) + pgadminCount := map[string]int{} + for _, pgadmin := range pgadmins { + pgadminCount[pgadmin.Name] += 1 + } + assert.Equal(t, pgadminCount["first-pgadmin"], 1) + assert.Equal(t, pgadminCount["second-pgadmin"], 1) + assert.Equal(t, pgadminCount["third-pgadmin"], 0) + }) +} diff --git a/internal/feature/features.go b/internal/feature/features.go new file mode 100644 index 0000000000..db424ead42 --- /dev/null +++ b/internal/feature/features.go @@ -0,0 +1,132 @@ +// Copyright 2017 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +/* +Package feature provides types and functions to enable and disable features +of the Postgres Operator. + +To add a new feature, export its name as a constant string and configure it +in [NewGate]. Choose a name that is clear to end users, as they will use it +to enable or disable the feature. + +# Stages + +Each feature must be configured with a maturity called a stage. We follow the +Kubernetes convention that features in the "Alpha" stage are disabled by default, +while those in the "Beta" stage are enabled by default. + - https://docs.k8s.io/reference/command-line-tools-reference/feature-gates/#feature-stages + +NOTE: Since Kubernetes 1.24, APIs (not features) in the "Beta" stage are disabled by default: + - https://blog.k8s.io/2022/05/03/kubernetes-1-24-release-announcement/#beta-apis-off-by-default + - https://git.k8s.io/enhancements/keps/sig-architecture/3136-beta-apis-off-by-default#goals + +# Using Features + +We initialize and configure one [MutableGate] in main() and add it to the Context +passed to Reconcilers and other Runnables. Those can then interrogate it using [Enabled]: + + if !feature.Enabled(ctx, feature.Excellent) { return } + +Tests should create and configure their own [MutableGate] and inject it using +[NewContext]. For example, the following enables one feature and disables another: + + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.Excellent: true, + feature.Uncommon: false, + })) + ctx := feature.NewContext(context.Background(), gate) +*/ +package feature + +import ( + "context" + + "k8s.io/component-base/featuregate" +) + +type Feature = featuregate.Feature + +// Gate indicates what features exist and which are enabled. +type Gate interface { + Enabled(Feature) bool + String() string +} + +// MutableGate contains features that can be enabled or disabled. +type MutableGate interface { + Gate + // Set enables or disables features by parsing a string like "feature1=true,feature2=false". + Set(string) error + // SetFromMap enables or disables features by boolean values. + SetFromMap(map[string]bool) error +} + +const ( + // Support appending custom queries to default PGMonitor queries + AppendCustomQueries = "AppendCustomQueries" + + // Enables automatic creation of user schema + AutoCreateUserSchema = "AutoCreateUserSchema" + + // Support automatically growing volumes + AutoGrowVolumes = "AutoGrowVolumes" + + BridgeIdentifiers = "BridgeIdentifiers" + + // Support custom sidecars for PostgreSQL instance Pods + InstanceSidecars = "InstanceSidecars" + + // Support custom sidecars for pgBouncer Pods + PGBouncerSidecars = "PGBouncerSidecars" + + // Support tablespace volumes + TablespaceVolumes = "TablespaceVolumes" + + // Support VolumeSnapshots + VolumeSnapshots = "VolumeSnapshots" +) + +// NewGate returns a MutableGate with the Features defined in this package. 
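+//
+// As a minimal sketch of how main() might configure the gate from the
+// environment (the PGO_FEATURE_GATES variable name is an assumption used
+// here only for illustration):
+//
+//	gate := feature.NewGate()
+//	if err := gate.Set(os.Getenv("PGO_FEATURE_GATES")); err != nil {
+//		log.Fatalf("invalid feature gate configuration: %v", err)
+//	}
+//	ctx := feature.NewContext(context.Background(), gate)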
+func NewGate() MutableGate { + gate := featuregate.NewFeatureGate() + + if err := gate.Add(map[Feature]featuregate.FeatureSpec{ + AppendCustomQueries: {Default: false, PreRelease: featuregate.Alpha}, + AutoCreateUserSchema: {Default: true, PreRelease: featuregate.Beta}, + AutoGrowVolumes: {Default: false, PreRelease: featuregate.Alpha}, + BridgeIdentifiers: {Default: false, PreRelease: featuregate.Alpha}, + InstanceSidecars: {Default: false, PreRelease: featuregate.Alpha}, + PGBouncerSidecars: {Default: false, PreRelease: featuregate.Alpha}, + TablespaceVolumes: {Default: false, PreRelease: featuregate.Alpha}, + VolumeSnapshots: {Default: false, PreRelease: featuregate.Alpha}, + }); err != nil { + panic(err) + } + + return gate +} + +type contextKey struct{} + +// Enabled indicates if a Feature is enabled in the Gate contained in ctx. It +// returns false when there is no Gate. +func Enabled(ctx context.Context, f Feature) bool { + gate, ok := ctx.Value(contextKey{}).(Gate) + return ok && gate.Enabled(f) +} + +// NewContext returns a copy of ctx containing gate. Check it using [Enabled]. +func NewContext(ctx context.Context, gate Gate) context.Context { + return context.WithValue(ctx, contextKey{}, gate) +} + +func ShowGates(ctx context.Context) string { + featuresEnabled := "" + gate, ok := ctx.Value(contextKey{}).(Gate) + if ok { + featuresEnabled = gate.String() + } + return featuresEnabled +} diff --git a/internal/feature/features_test.go b/internal/feature/features_test.go new file mode 100644 index 0000000000..f76dd216e6 --- /dev/null +++ b/internal/feature/features_test.go @@ -0,0 +1,65 @@ +// Copyright 2017 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package feature + +import ( + "context" + "testing" + + "gotest.tools/v3/assert" +) + +func TestDefaults(t *testing.T) { + t.Parallel() + gate := NewGate() + + assert.Assert(t, false == gate.Enabled(AppendCustomQueries)) + assert.Assert(t, true == gate.Enabled(AutoCreateUserSchema)) + assert.Assert(t, false == gate.Enabled(AutoGrowVolumes)) + assert.Assert(t, false == gate.Enabled(BridgeIdentifiers)) + assert.Assert(t, false == gate.Enabled(InstanceSidecars)) + assert.Assert(t, false == gate.Enabled(PGBouncerSidecars)) + assert.Assert(t, false == gate.Enabled(TablespaceVolumes)) + assert.Assert(t, false == gate.Enabled(VolumeSnapshots)) + + assert.Equal(t, gate.String(), "") +} + +func TestStringFormat(t *testing.T) { + t.Parallel() + gate := NewGate() + + assert.NilError(t, gate.Set("")) + assert.NilError(t, gate.Set("TablespaceVolumes=true")) + assert.Equal(t, gate.String(), "TablespaceVolumes=true") + assert.Assert(t, true == gate.Enabled(TablespaceVolumes)) + + err := gate.Set("NotAGate=true") + assert.ErrorContains(t, err, "unrecognized feature gate") + assert.ErrorContains(t, err, "NotAGate") + + err = gate.Set("GateNotSet") + assert.ErrorContains(t, err, "missing bool") + assert.ErrorContains(t, err, "GateNotSet") + + err = gate.Set("GateNotSet=foo") + assert.ErrorContains(t, err, "invalid value") + assert.ErrorContains(t, err, "GateNotSet") +} + +func TestContext(t *testing.T) { + t.Parallel() + gate := NewGate() + ctx := NewContext(context.Background(), gate) + assert.Equal(t, ShowGates(ctx), "") + + assert.NilError(t, gate.Set("TablespaceVolumes=true")) + assert.Assert(t, true == Enabled(ctx, TablespaceVolumes)) + assert.Equal(t, ShowGates(ctx), "TablespaceVolumes=true") + + assert.NilError(t, gate.SetFromMap(map[string]bool{TablespaceVolumes: false})) + assert.Assert(t, false == 
Enabled(ctx, TablespaceVolumes)) + assert.Equal(t, ShowGates(ctx), "TablespaceVolumes=false") +} diff --git a/internal/initialize/doc.go b/internal/initialize/doc.go new file mode 100644 index 0000000000..aedd85846f --- /dev/null +++ b/internal/initialize/doc.go @@ -0,0 +1,6 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +// Package initialize provides functions to initialize some common fields and types. +package initialize diff --git a/internal/initialize/metadata.go b/internal/initialize/metadata.go new file mode 100644 index 0000000000..d62530736a --- /dev/null +++ b/internal/initialize/metadata.go @@ -0,0 +1,23 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package initialize + +import ( + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// Annotations initializes the Annotations of object when they are nil. +func Annotations(object metav1.Object) { + if object != nil && object.GetAnnotations() == nil { + object.SetAnnotations(make(map[string]string)) + } +} + +// Labels initializes the Labels of object when they are nil. +func Labels(object metav1.Object) { + if object != nil && object.GetLabels() == nil { + object.SetLabels(make(map[string]string)) + } +} diff --git a/internal/initialize/metadata_test.go b/internal/initialize/metadata_test.go new file mode 100644 index 0000000000..735e455a2e --- /dev/null +++ b/internal/initialize/metadata_test.go @@ -0,0 +1,66 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package initialize_test + +import ( + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/internal/initialize" +) + +func TestAnnotations(t *testing.T) { + // Ignores nil interface. + initialize.Annotations(nil) + + pod := new(corev1.Pod) + + // Starts nil. + assert.Assert(t, pod.Annotations == nil) + + // Gets initialized. + initialize.Annotations(pod) + assert.DeepEqual(t, pod.Annotations, map[string]string{}) + + // Now writable. + pod.Annotations["x"] = "y" + + // Doesn't overwrite. + initialize.Annotations(pod) + assert.DeepEqual(t, pod.Annotations, map[string]string{"x": "y"}) + + // Works with PodTemplate, too. + template := new(corev1.PodTemplate) + initialize.Annotations(template) + assert.DeepEqual(t, template.Annotations, map[string]string{}) +} + +func TestLabels(t *testing.T) { + // Ignores nil interface. + initialize.Labels(nil) + + pod := new(corev1.Pod) + + // Starts nil. + assert.Assert(t, pod.Labels == nil) + + // Gets initialized. + initialize.Labels(pod) + assert.DeepEqual(t, pod.Labels, map[string]string{}) + + // Now writable. + pod.Labels["x"] = "y" + + // Doesn't overwrite. + initialize.Labels(pod) + assert.DeepEqual(t, pod.Labels, map[string]string{"x": "y"}) + + // Works with PodTemplate, too. + template := new(corev1.PodTemplate) + initialize.Labels(template) + assert.DeepEqual(t, template.Labels, map[string]string{}) +} diff --git a/internal/initialize/primitives.go b/internal/initialize/primitives.go new file mode 100644 index 0000000000..9bc264f88c --- /dev/null +++ b/internal/initialize/primitives.go @@ -0,0 +1,39 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package initialize + +// Bool returns a pointer to v. +func Bool(v bool) *bool { return &v } + +// FromPointer returns the value that p points to. +// When p is nil, it returns the zero value of T. 
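+//
+// For example:
+//
+//	FromPointer((*int32)(nil))     // 0
+//	FromPointer(Pointer(int32(7))) // 7
+//	FromPointer(Pointer("sup"))    // "sup"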
+func FromPointer[T any](p *T) T { + var v T + if p != nil { + v = *p + } + return v +} + +// Int32 returns a pointer to v. +func Int32(v int32) *int32 { return &v } + +// Int64 returns a pointer to v. +func Int64(v int64) *int64 { return &v } + +// Map initializes m when it points to nil. +func Map[M ~map[K]V, K comparable, V any](m *M) { + // See https://pkg.go.dev/maps for similar type constraints. + + if m != nil && *m == nil { + *m = make(M) + } +} + +// Pointer returns a pointer to v. +func Pointer[T any](v T) *T { return &v } + +// String returns a pointer to v. +func String(v string) *string { return &v } diff --git a/internal/initialize/primitives_test.go b/internal/initialize/primitives_test.go new file mode 100644 index 0000000000..e39898b4fe --- /dev/null +++ b/internal/initialize/primitives_test.go @@ -0,0 +1,203 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package initialize_test + +import ( + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/initialize" +) + +func TestBool(t *testing.T) { + n := initialize.Bool(false) + if assert.Check(t, n != nil) { + assert.Equal(t, *n, false) + } + + y := initialize.Bool(true) + if assert.Check(t, y != nil) { + assert.Equal(t, *y, true) + } +} + +func TestFromPointer(t *testing.T) { + t.Run("bool", func(t *testing.T) { + assert.Equal(t, initialize.FromPointer((*bool)(nil)), false) + assert.Equal(t, initialize.FromPointer(initialize.Pointer(false)), false) + assert.Equal(t, initialize.FromPointer(initialize.Pointer(true)), true) + }) + + t.Run("int32", func(t *testing.T) { + assert.Equal(t, initialize.FromPointer((*int32)(nil)), int32(0)) + assert.Equal(t, initialize.FromPointer(initialize.Pointer(int32(0))), int32(0)) + assert.Equal(t, initialize.FromPointer(initialize.Pointer(int32(-99))), int32(-99)) + assert.Equal(t, initialize.FromPointer(initialize.Pointer(int32(42))), int32(42)) + }) + + t.Run("int64", func(t *testing.T) { + assert.Equal(t, initialize.FromPointer((*int64)(nil)), int64(0)) + assert.Equal(t, initialize.FromPointer(initialize.Pointer(int64(0))), int64(0)) + assert.Equal(t, initialize.FromPointer(initialize.Pointer(int64(-99))), int64(-99)) + assert.Equal(t, initialize.FromPointer(initialize.Pointer(int64(42))), int64(42)) + }) + + t.Run("string", func(t *testing.T) { + assert.Equal(t, initialize.FromPointer((*string)(nil)), "") + assert.Equal(t, initialize.FromPointer(initialize.Pointer("")), "") + assert.Equal(t, initialize.FromPointer(initialize.Pointer("sup")), "sup") + }) +} + +func TestInt32(t *testing.T) { + z := initialize.Int32(0) + if assert.Check(t, z != nil) { + assert.Equal(t, *z, int32(0)) + } + + n := initialize.Int32(-99) + if assert.Check(t, n != nil) { + assert.Equal(t, *n, int32(-99)) + } + + p := initialize.Int32(42) + if assert.Check(t, p != nil) { + assert.Equal(t, *p, int32(42)) + } +} + +func TestInt64(t *testing.T) { + z := initialize.Int64(0) + if assert.Check(t, z != nil) { + assert.Equal(t, *z, int64(0)) + } + + n := initialize.Int64(-99) + if assert.Check(t, n != nil) { + assert.Equal(t, *n, int64(-99)) + } + + p := initialize.Int64(42) + if assert.Check(t, p != nil) { + assert.Equal(t, *p, int64(42)) + } +} + +func TestMap(t *testing.T) { + t.Run("map[string][]byte", func(t *testing.T) { + // Ignores nil pointer. + initialize.Map((*map[string][]byte)(nil)) + + var m map[string][]byte + + // Starts nil. + assert.Assert(t, m == nil) + + // Gets initialized. 
+ initialize.Map(&m) + assert.DeepEqual(t, m, map[string][]byte{}) + + // Now writable. + m["x"] = []byte("y") + + // Doesn't overwrite. + initialize.Map(&m) + assert.DeepEqual(t, m, map[string][]byte{"x": []byte("y")}) + }) + + t.Run("map[string]string", func(t *testing.T) { + // Ignores nil pointer. + initialize.Map((*map[string]string)(nil)) + + var m map[string]string + + // Starts nil. + assert.Assert(t, m == nil) + + // Gets initialized. + initialize.Map(&m) + assert.DeepEqual(t, m, map[string]string{}) + + // Now writable. + m["x"] = "y" + + // Doesn't overwrite. + initialize.Map(&m) + assert.DeepEqual(t, m, map[string]string{"x": "y"}) + }) +} + +func TestPointer(t *testing.T) { + t.Run("bool", func(t *testing.T) { + n := initialize.Pointer(false) + if assert.Check(t, n != nil) { + assert.Equal(t, *n, false) + } + + y := initialize.Pointer(true) + if assert.Check(t, y != nil) { + assert.Equal(t, *y, true) + } + }) + + t.Run("int32", func(t *testing.T) { + z := initialize.Pointer(int32(0)) + if assert.Check(t, z != nil) { + assert.Equal(t, *z, int32(0)) + } + + n := initialize.Pointer(int32(-99)) + if assert.Check(t, n != nil) { + assert.Equal(t, *n, int32(-99)) + } + + p := initialize.Pointer(int32(42)) + if assert.Check(t, p != nil) { + assert.Equal(t, *p, int32(42)) + } + }) + + t.Run("int64", func(t *testing.T) { + z := initialize.Pointer(int64(0)) + if assert.Check(t, z != nil) { + assert.Equal(t, *z, int64(0)) + } + + n := initialize.Pointer(int64(-99)) + if assert.Check(t, n != nil) { + assert.Equal(t, *n, int64(-99)) + } + + p := initialize.Pointer(int64(42)) + if assert.Check(t, p != nil) { + assert.Equal(t, *p, int64(42)) + } + }) + + t.Run("string", func(t *testing.T) { + z := initialize.Pointer("") + if assert.Check(t, z != nil) { + assert.Equal(t, *z, "") + } + + n := initialize.Pointer("sup") + if assert.Check(t, n != nil) { + assert.Equal(t, *n, "sup") + } + }) +} + +func TestString(t *testing.T) { + z := initialize.String("") + if assert.Check(t, z != nil) { + assert.Equal(t, *z, "") + } + + n := initialize.String("sup") + if assert.Check(t, n != nil) { + assert.Equal(t, *n, "sup") + } +} diff --git a/internal/initialize/security.go b/internal/initialize/security.go new file mode 100644 index 0000000000..5dd52d7b1e --- /dev/null +++ b/internal/initialize/security.go @@ -0,0 +1,48 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package initialize + +import ( + corev1 "k8s.io/api/core/v1" +) + +// PodSecurityContext returns a v1.PodSecurityContext with some defaults. +func PodSecurityContext() *corev1.PodSecurityContext { + onRootMismatch := corev1.FSGroupChangeOnRootMismatch + return &corev1.PodSecurityContext{ + // If set to "OnRootMismatch", if the root of the volume already has + // the correct permissions, the recursive permission change can be skipped + FSGroupChangePolicy: &onRootMismatch, + } +} + +// RestrictedSecurityContext returns a v1.SecurityContext with safe defaults. +// See https://docs.k8s.io/concepts/security/pod-security-standards/ +func RestrictedSecurityContext() *corev1.SecurityContext { + return &corev1.SecurityContext{ + // Prevent any container processes from gaining privileges. + AllowPrivilegeEscalation: Bool(false), + + // Drop any capabilities granted by the container runtime. + // This must be uppercase to pass Pod Security Admission. 
+ // - https://releases.k8s.io/v1.24.0/staging/src/k8s.io/pod-security-admission/policy/check_capabilities_restricted.go + Capabilities: &corev1.Capabilities{ + Drop: []corev1.Capability{"ALL"}, + }, + + // Processes in privileged containers are essentially root on the host. + Privileged: Bool(false), + + // Limit filesystem changes to volumes that are mounted read-write. + ReadOnlyRootFilesystem: Bool(true), + + // Fail to start the container if its image runs as UID 0 (root). + RunAsNonRoot: Bool(true), + + SeccompProfile: &corev1.SeccompProfile{ + Type: corev1.SeccompProfileTypeRuntimeDefault, + }, + } +} diff --git a/internal/initialize/security_test.go b/internal/initialize/security_test.go new file mode 100644 index 0000000000..0a6409cf41 --- /dev/null +++ b/internal/initialize/security_test.go @@ -0,0 +1,122 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package initialize_test + +import ( + "fmt" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/internal/initialize" +) + +func TestPodSecurityContext(t *testing.T) { + psc := initialize.PodSecurityContext() + + if assert.Check(t, psc.FSGroupChangePolicy != nil) { + assert.Equal(t, string(*psc.FSGroupChangePolicy), "OnRootMismatch") + } + + // Kubernetes describes recommended security profiles: + // - https://docs.k8s.io/concepts/security/pod-security-standards/ + + // > The Baseline policy is aimed at ease of adoption for common + // > containerized workloads while preventing known privilege escalations. + // > This policy is targeted at application operators and developers of + // > non-critical applications. + t.Run("Baseline", func(t *testing.T) { + assert.Assert(t, psc.SELinuxOptions == nil, + `Setting a custom SELinux user or role option is forbidden.`) + + assert.Assert(t, psc.Sysctls == nil, + `Sysctls can disable security mechanisms or affect all containers on a host, and should be disallowed except for an allowed "safe" subset.`) + }) + + // > The Restricted policy is aimed at enforcing current Pod hardening best + // > practices, at the expense of some compatibility. It is targeted at + // > operators and developers of security-critical applications, as well as + // > lower-trust users. + t.Run("Restricted", func(t *testing.T) { + if assert.Check(t, psc.RunAsNonRoot == nil) { + assert.Assert(t, initialize.RestrictedSecurityContext().RunAsNonRoot != nil, + `RunAsNonRoot should be delegated to the container-level v1.SecurityContext`) + } + + assert.Assert(t, psc.RunAsUser == nil, + `Containers must not set runAsUser to 0`) + + if assert.Check(t, psc.SeccompProfile == nil) { + assert.Assert(t, initialize.RestrictedSecurityContext().SeccompProfile != nil, + `SeccompProfile should be delegated to the container-level v1.SecurityContext`) + } + }) +} + +func TestRestrictedSecurityContext(t *testing.T) { + sc := initialize.RestrictedSecurityContext() + + // Kubernetes describes recommended security profiles: + // - https://docs.k8s.io/concepts/security/pod-security-standards/ + + // > The Baseline policy is aimed at ease of adoption for common + // > containerized workloads while preventing known privilege escalations. + // > This policy is targeted at application operators and developers of + // > non-critical applications. 
+ t.Run("Baseline", func(t *testing.T) { + if assert.Check(t, sc.Privileged != nil) { + assert.Assert(t, *sc.Privileged == false, + "Privileged Pods disable most security mechanisms and must be disallowed.") + } + + if assert.Check(t, sc.Capabilities != nil) { + assert.Assert(t, sc.Capabilities.Add == nil, + "Adding additional capabilities … must be disallowed.") + } + + assert.Assert(t, sc.SELinuxOptions == nil, + "Setting a custom SELinux user or role option is forbidden.") + + assert.Assert(t, sc.ProcMount == nil, + "The default /proc masks are set up to reduce attack surface, and should be required.") + }) + + // > The Restricted policy is aimed at enforcing current Pod hardening best + // > practices, at the expense of some compatibility. It is targeted at + // > operators and developers of security-critical applications, as well as + // > lower-trust users. + t.Run("Restricted", func(t *testing.T) { + if assert.Check(t, sc.AllowPrivilegeEscalation != nil) { + assert.Assert(t, *sc.AllowPrivilegeEscalation == false, + "Privilege escalation (such as via set-user-ID or set-group-ID file mode) should not be allowed.") + } + + if assert.Check(t, sc.Capabilities != nil) { + assert.Assert(t, fmt.Sprint(sc.Capabilities.Drop) == `[ALL]`, + "Containers must drop ALL capabilities, and are only permitted to add back the NET_BIND_SERVICE capability.") + } + + if assert.Check(t, sc.RunAsNonRoot != nil) { + assert.Assert(t, *sc.RunAsNonRoot == true, + "Containers must be required to run as non-root users.") + } + + assert.Assert(t, sc.RunAsUser == nil, + `Containers must not set runAsUser to 0`) + + // NOTE: The "restricted" Security Context Constraint (SCC) of OpenShift 4.10 + // and earlier does not allow any profile to be set. The "restricted-v2" SCC + // of OpenShift 4.11 uses the "runtime/default" profile. + // - https://docs.openshift.com/container-platform/4.10/security/seccomp-profiles.html + // - https://docs.openshift.com/container-platform/4.11/security/seccomp-profiles.html + assert.Assert(t, sc.SeccompProfile.Type == corev1.SeccompProfileTypeRuntimeDefault, + `Seccomp profile must be explicitly set to one of the allowed values. Both the Unconfined profile and the absence of a profile are prohibited.`) + }) + + if assert.Check(t, sc.ReadOnlyRootFilesystem != nil) { + assert.Assert(t, *sc.ReadOnlyRootFilesystem == true) + } +} diff --git a/internal/kubeapi/client_config.go b/internal/kubeapi/client_config.go deleted file mode 100644 index 5d070fc25e..0000000000 --- a/internal/kubeapi/client_config.go +++ /dev/null @@ -1,92 +0,0 @@ -package kubeapi - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/kubernetes/scheme" - "k8s.io/client-go/rest" - "k8s.io/client-go/tools/clientcmd" - - crunchydata "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - crunchydatascheme "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/scheme" - crunchydatav1 "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/typed/crunchydata.com/v1" -) - -func init() { - // Register all types of our clientset into the standard scheme. - _ = crunchydatascheme.AddToScheme(scheme.Scheme) -} - -type Interface interface { - kubernetes.Interface - CrunchydataV1() crunchydatav1.CrunchydataV1Interface -} - -// Interface should satisfy both our typed Interface and the standard one. -var _ crunchydata.Interface = Interface(nil) -var _ kubernetes.Interface = Interface(nil) - -// Client provides methods for interacting with Kubernetes resources. -// It implements both kubernetes and crunchydata clientset Interfaces. -type Client struct { - *rest.Config - *kubernetes.Clientset - - crunchydataV1 *crunchydatav1.CrunchydataV1Client -} - -// Client should satisfy Interface. -var _ Interface = &Client{} - -// CrunchydataV1 retrieves the CrunchydataV1Client -func (c *Client) CrunchydataV1() crunchydatav1.CrunchydataV1Interface { return c.crunchydataV1 } - -func loadClientConfig() (*rest.Config, error) { - // The default loading rules try to read from the files specified in the - // environment or from the home directory. - loader := clientcmd.NewDefaultClientConfigLoadingRules() - - // The deferred loader tries an in-cluster config if the default loading - // rules produce no results. - return clientcmd.NewNonInteractiveDeferredLoadingClientConfig( - loader, &clientcmd.ConfigOverrides{}, - ).ClientConfig() -} - -// NewClient returns a kubernetes.Clientset and its underlying configuration. -func NewClient() (*Client, error) { - config, err := loadClientConfig() - if err != nil { - return nil, err - } - - // Match the settings applied by sigs.k8s.io/controller-runtime@v0.6.0; - // see https://github.com/kubernetes-sigs/controller-runtime/issues/365. - if config.QPS == 0.0 { - config.QPS = 20.0 - config.Burst = 30.0 - } - - client := &Client{Config: config} - client.Clientset, err = kubernetes.NewForConfig(client.Config) - - if err == nil { - client.crunchydataV1, err = crunchydatav1.NewForConfig(client.Config) - } - - return client, err -} diff --git a/internal/kubeapi/deployment.go b/internal/kubeapi/deployment.go deleted file mode 100644 index ec13795e09..0000000000 --- a/internal/kubeapi/deployment.go +++ /dev/null @@ -1,58 +0,0 @@ -package kubeapi - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - - jsonpatch "github.com/evanphx/json-patch" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/apps/v1" - - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/kubernetes" -) - -func AddLabelToDeployment(clientset kubernetes.Interface, origDeployment *v1.Deployment, key, value, namespace string) error { - var newData, patchBytes []byte - var err error - - //get the original data before we change it - origData, err := json.Marshal(origDeployment) - if err != nil { - return err - } - - origDeployment.ObjectMeta.Labels[key] = value - - newData, err = json.Marshal(origDeployment) - if err != nil { - return err - } - - patchBytes, err = jsonpatch.CreateMergePatch(origData, newData) - if err != nil { - return err - } - - _, err = clientset.AppsV1().Deployments(namespace).Patch(origDeployment.Name, types.MergePatchType, patchBytes) - if err != nil { - log.Error(err) - log.Errorf("error add label to Deployment %s=%s", key, value) - } - log.Debugf("add label to deployment %s=%v", key, value) - return err -} diff --git a/internal/kubeapi/endpoints.go b/internal/kubeapi/endpoints.go deleted file mode 100644 index b4cd8ece62..0000000000 --- a/internal/kubeapi/endpoints.go +++ /dev/null @@ -1,69 +0,0 @@ -package kubeapi - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - log "github.com/sirupsen/logrus" - "k8s.io/api/core/v1" - meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" -) - -// GetEndpointRequest is used for the GetEndpoint function, which includes the -// current Kubernetes request context, as well as the namespace / endpoint name -// being requested -type GetEndpointRequest struct { - Clientset kubernetes.Interface // Kubernetes Clientset that interfaces with the Kubernetes cluster - Name string // Name of the endpoint that is being queried - Namespace string // Namespace the endpoint being queried resides in -} - -// GetEndpointResponse contains the results from a successful request to the -// endpoint API, including the Kubernetes Endpoint as well as the original -// request data -type GetEndpointResponse struct { - Endpoint *v1.Endpoints // Kubernetes Endpoint object that specifics about the endpoint - Name string // Name of the endpoint - Namespace string // Namespace that the endpoint is in -} - -// GetEndpoint tries to find an individual endpoint in a namespace. 
Returns the -// endpoint object if it can be IsNotFound -// If no endpoint can be found, then an error is returned -func GetEndpoint(request *GetEndpointRequest) (*GetEndpointResponse, error) { - log.Debugf("GetEndpointResponse Called: (%s,%s,%s)", request.Clientset, request.Name, request.Namespace) - // set the endpoints interfaces that will be used to make the query - endpointsInterface := request.Clientset.CoreV1().Endpoints(request.Namespace) - // make the query to Kubernetes to see if the specific endpoint exists - endpoint, err := endpointsInterface.Get(request.Name, meta_v1.GetOptions{}) - // return at this point if there is an error - if err != nil { - log.Errorf("GetEndpointResponse(%s,%s): Endpoint Not Found: %s", - request.Name, request.Namespace, err.Error()) - return nil, err - } - // create a response and return - response := &GetEndpointResponse{ - Endpoint: endpoint, - Name: request.Name, - Namespace: request.Namespace, - } - - log.Debugf("GetEndpointResponse Response: (%s,%s,%s)", - response.Namespace, response.Name, response.Endpoint) - - return response, nil -} diff --git a/internal/kubeapi/errors.go b/internal/kubeapi/errors.go deleted file mode 100644 index 829ca9f097..0000000000 --- a/internal/kubeapi/errors.go +++ /dev/null @@ -1,24 +0,0 @@ -package kubeapi - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import "k8s.io/apimachinery/pkg/api/errors" - -// IsAlreadyExists returns true if the err indicates that a resource already exists. -func IsAlreadyExists(err error) bool { return errors.IsAlreadyExists(err) } - -// IsNotFound returns true if err indicates that a resource was not found. -func IsNotFound(err error) bool { return errors.IsNotFound(err) } diff --git a/internal/kubeapi/exec.go b/internal/kubeapi/exec.go deleted file mode 100644 index b2e994d84d..0000000000 --- a/internal/kubeapi/exec.go +++ /dev/null @@ -1,81 +0,0 @@ -package kubeapi - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "io" - - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" - "k8s.io/client-go/tools/remotecommand" -) - -// ExecToPodThroughAPI uninterractively exec to the pod with the command specified. -// :param string command: list of the str which specify the command. -// :param string pod_name: Pod name -// :param string namespace: namespace of the Pod. 
-// :param io.Reader stdin: Standerd Input if necessary, otherwise `nil` -// :return: string: Output of the command. (STDOUT) -// string: Errors. (STDERR) -// error: If any error has occurred otherwise `nil` -func ExecToPodThroughAPI(config *rest.Config, clientset kubernetes.Interface, command []string, containerName, podName, namespace string, stdin io.Reader) (string, string, error) { - req := clientset.CoreV1().RESTClient().Post(). - Resource("pods"). - Name(podName). - Namespace(namespace). - SubResource("exec") - scheme := runtime.NewScheme() - if err := v1.AddToScheme(scheme); err != nil { - log.Error(err) - return "", "", err - } - - parameterCodec := runtime.NewParameterCodec(scheme) - req.VersionedParams(&v1.PodExecOptions{ - Command: command, - Container: containerName, - Stdin: stdin != nil, - Stdout: true, - Stderr: true, - TTY: false, - }, parameterCodec) - - log.Debugf("Request URL: %s", req.URL().String()) - - exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL()) - if err != nil { - log.Error(err) - return "", "", err - } - - var stdout, stderr bytes.Buffer - err = exec.Stream(remotecommand.StreamOptions{ - Stdin: stdin, - Stdout: &stdout, - Stderr: &stderr, - Tty: false, - }) - if err != nil { - log.Error(err) - return stdout.String(), stderr.String(), err - } - - return stdout.String(), stderr.String(), nil -} diff --git a/internal/kubeapi/fake/clientset.go b/internal/kubeapi/fake/clientset.go deleted file mode 100644 index 7fbd74b802..0000000000 --- a/internal/kubeapi/fake/clientset.go +++ /dev/null @@ -1,36 +0,0 @@ -package fake - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - fakekubernetes "k8s.io/client-go/kubernetes/fake" - - "github.com/crunchydata/postgres-operator/internal/kubeapi" - fakecrunchydata "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/fake" - crunchydatav1 "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/typed/crunchydata.com/v1" -) - -type Clientset struct { - *fakekubernetes.Clientset - PGOClientset *fakecrunchydata.Clientset -} - -var _ kubeapi.Interface = &Clientset{} - -// CrunchydataV1 retrieves the CrunchydataV1Client -func (c *Clientset) CrunchydataV1() crunchydatav1.CrunchydataV1Interface { - return c.PGOClientset.CrunchydataV1() -} diff --git a/internal/kubeapi/fake/fakeclients.go b/internal/kubeapi/fake/fakeclients.go deleted file mode 100644 index 6a263818d4..0000000000 --- a/internal/kubeapi/fake/fakeclients.go +++ /dev/null @@ -1,123 +0,0 @@ -package fake - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "errors" - "io/ioutil" - "os" - - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - fakekube "k8s.io/client-go/kubernetes/fake" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - fakecrunchy "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/fake" -) - -const ( - // defaultPGOInstallationName is the default installation name for a fake PGO client - defaultPGOInstallationName = "test" - // pgoNamespace is the default operator namespace for a fake PGO client - defaultPGONamespace = "pgo" - // defaultTargetNamespaces are the default target namespaces for a fake PGO client - defaultTargetNamespaces = "pgouser1,pgouser2" -) - -var ( - // pgoRoot represents the root of the PostgreSQL Operator project repository - pgoRoot = os.Getenv("PGOROOT") - // templatePath defines the default location for the PostgreSQL Operator templates relative to - // env var PGOROOT - templatePath = pgoRoot + "/installers/ansible/roles/pgo-operator/files/pgo-configs/" - // pgoYAMLPath defines the default location for the default pgo.yaml configuration file - // relative to env var PGOROOT - pgoYAMLPath = pgoRoot + "/conf/postgres-operator/pgo.yaml" -) - -// NewFakePGOClient creates a fake PostgreSQL Operator client. Specifically, it creates -// a fake client containing a 'pgo-config' ConfigMap as needed to initialize the Operator -// (i.e. call the 'operator' packages 'Initialize()' function). This allows for the proper -// initialization of the Operator in various unit tests where the various resources loaded -// during initialization (e.g. templates, config and/or global variables) are required. -func NewFakePGOClient() (kubeapi.Interface, error) { - - if pgoRoot == "" { - return nil, errors.New("Environment variable PGOROOT must be set to the root directory " + - "of the PostgreSQL Operator project repository in order to create a fake client") - } - - os.Setenv("CRUNCHY_DEBUG", "false") - os.Setenv("NAMESPACE", defaultTargetNamespaces) - os.Setenv("PGO_INSTALLATION_NAME", defaultPGOInstallationName) - os.Setenv("PGO_OPERATOR_NAMESPACE", defaultPGONamespace) - - // create a fake 'pgo-config' ConfigMap containing the operator templates and pgo.yaml - pgoConfig, err := createMockPGOConfigMap(defaultPGONamespace) - if err != nil { - return nil, err - } - - // now create and return a fake client containing the ConfigMap - return &Clientset{ - Clientset: fakekube.NewSimpleClientset(pgoConfig), - PGOClientset: fakecrunchy.NewSimpleClientset(), - }, nil -} - -// createMockPGOConfigMap creates a mock 'pgo-config' ConfigMap containing the default pgo.yaml -// and templates included in the PostgreSQL Operator project repository. This ConfigMap can be -// utilized when testing to similate and environment containing the various PostgreSQL Operator -// configuration files (e.g. templates) required to run the Operator. 
-func createMockPGOConfigMap(pgoNamespace string) (*v1.ConfigMap, error) { - - // create a configMap that will hold the default configs - pgoConfigMap := &v1.ConfigMap{ - Data: make(map[string]string), - ObjectMeta: metav1.ObjectMeta{ - Name: config.CustomConfigMapName, - Namespace: pgoNamespace, - }, - } - - // get all templates from the default template directory - templates, err := ioutil.ReadDir(templatePath) - if err != nil { - return nil, err - } - - // grab all file content so that it can be added to the ConfigMap - fileContent := make(map[string]string) - for _, t := range templates { - content, err := ioutil.ReadFile(templatePath + t.Name()) - if err != nil { - return nil, err - } - fileContent[t.Name()] = string(content) - } - - // add the default pgo.yaml - pgoContent, err := ioutil.ReadFile(pgoYAMLPath) - if err != nil { - return nil, err - } - fileContent["pgo.yaml"] = string(pgoContent) - - pgoConfigMap.Data = fileContent - - return pgoConfigMap, nil -} diff --git a/internal/kubeapi/patch.go b/internal/kubeapi/patch.go new file mode 100644 index 0000000000..973852c17a --- /dev/null +++ b/internal/kubeapi/patch.go @@ -0,0 +1,177 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package kubeapi + +import ( + "strings" + + "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/json" + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// escapeJSONPointer encodes '~' and '/' according to RFC 6901. +var escapeJSONPointer = strings.NewReplacer( + "~", "~0", + "/", "~1", +).Replace + +// JSON6902 represents a JSON Patch according to RFC 6902; the same as +// k8s.io/apimachinery/pkg/types.JSONPatchType. +type JSON6902 []interface{} + +// NewJSONPatch creates a new JSON Patch according to RFC 6902; the same as +// k8s.io/apimachinery/pkg/types.JSONPatchType. +func NewJSONPatch() *JSON6902 { return &JSON6902{} } + +func (*JSON6902) pointer(tokens ...string) string { + var b strings.Builder + + for _, t := range tokens { + _ = b.WriteByte('/') + _, _ = b.WriteString(escapeJSONPointer(t)) + } + + return b.String() +} + +// Add appends an "add" operation to patch. +// +// > The "add" operation performs one of the following functions, +// > depending upon what the target location references: +// > +// > o If the target location specifies an array index, a new value is +// > inserted into the array at the specified index. +// > +// > o If the target location specifies an object member that does not +// > already exist, a new member is added to the object. +// > +// > o If the target location specifies an object member that does exist, +// > that member's value is replaced. +func (patch *JSON6902) Add(path ...string) func(value interface{}) *JSON6902 { + i := len(*patch) + f := func(value interface{}) *JSON6902 { + (*patch)[i] = map[string]interface{}{ + "op": "add", + "path": patch.pointer(path...), + "value": value, + } + return patch + } + + *patch = append(*patch, f) + + return f +} + +// Remove appends a "remove" operation to patch. +// +// > The "remove" operation removes the value at the target location. +// > +// > The target location MUST exist for the operation to be successful. +func (patch *JSON6902) Remove(path ...string) *JSON6902 { + *patch = append(*patch, map[string]interface{}{ + "op": "remove", + "path": patch.pointer(path...), + }) + + return patch +} + +// Replace appends a "replace" operation to patch. +// +// > The "replace" operation replaces the value at the target location +// > with a new value. 
+// > +// > The target location MUST exist for the operation to be successful. +func (patch *JSON6902) Replace(path ...string) func(value interface{}) *JSON6902 { + i := len(*patch) + f := func(value interface{}) *JSON6902 { + (*patch)[i] = map[string]interface{}{ + "op": "replace", + "path": patch.pointer(path...), + "value": value, + } + return patch + } + + *patch = append(*patch, f) + + return f +} + +// Bytes returns the JSON representation of patch. +func (patch JSON6902) Bytes() ([]byte, error) { return patch.Data(nil) } + +// Data returns the JSON representation of patch. +func (patch JSON6902) Data(client.Object) ([]byte, error) { return json.Marshal(patch) } + +// IsEmpty returns true when patch has no operations. +func (patch JSON6902) IsEmpty() bool { return len(patch) == 0 } + +// Type returns k8s.io/apimachinery/pkg/types.JSONPatchType. +func (patch JSON6902) Type() types.PatchType { return types.JSONPatchType } + +// Merge7386 represents a JSON Merge Patch according to RFC 7386; the same as +// k8s.io/apimachinery/pkg/types.MergePatchType. +type Merge7386 map[string]interface{} + +// NewMergePatch creates a new JSON Merge Patch according to RFC 7386; the same +// as k8s.io/apimachinery/pkg/types.MergePatchType. +func NewMergePatch() *Merge7386 { return &Merge7386{} } + +// Add modifies patch to indicate that the member at path should be added or +// replaced with value. +// +// > If the provided merge patch contains members that do not appear +// > within the target, those members are added. If the target does +// > contain the member, the value is replaced. Null values in the merge +// > patch are given special meaning to indicate the removal of existing +// > values in the target. +func (patch *Merge7386) Add(path ...string) func(value interface{}) *Merge7386 { + position := *patch + + for len(path) > 1 { + p, ok := position[path[0]].(Merge7386) + if !ok { + p = Merge7386{} + position[path[0]] = p + } + + position = p + path = path[1:] + } + + if len(path) < 1 { + return func(interface{}) *Merge7386 { return patch } + } + + f := func(value interface{}) *Merge7386 { + position[path[0]] = value + return patch + } + + position[path[0]] = f + + return f +} + +// Remove modifies patch to indicate that the member at path should be removed +// if it exists. +func (patch *Merge7386) Remove(path ...string) *Merge7386 { + return patch.Add(path...)(nil) +} + +// Bytes returns the JSON representation of patch. +func (patch Merge7386) Bytes() ([]byte, error) { return patch.Data(nil) } + +// Data returns the JSON representation of patch. +func (patch Merge7386) Data(client.Object) ([]byte, error) { return json.Marshal(patch) } + +// IsEmpty returns true when patch has no modifications. +func (patch Merge7386) IsEmpty() bool { return len(patch) == 0 } + +// Type returns k8s.io/apimachinery/pkg/types.MergePatchType. +func (patch Merge7386) Type() types.PatchType { return types.MergePatchType } diff --git a/internal/kubeapi/patch_test.go b/internal/kubeapi/patch_test.go new file mode 100644 index 0000000000..52f5787b8f --- /dev/null +++ b/internal/kubeapi/patch_test.go @@ -0,0 +1,265 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package kubeapi + +import ( + "encoding/json" + "reflect" + "testing" + + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/types" +) + +func assertJSON(t testing.TB, expected interface{}, actual []byte) { + t.Helper() + + var e, a interface{} + var err error + + if b, ok := expected.([]byte); ok { + err = json.Unmarshal(b, &e) + } else if s, ok := expected.(string); ok { + err = json.Unmarshal([]byte(s), &e) + } else { + t.Fatalf("bug in test: unexpected type %T", expected) + } + + if err != nil { + t.Fatalf("bug in test: %v", err) + } + + if err = json.Unmarshal(actual, &a); err != nil { + t.Fatal(err) + } + + if !reflect.DeepEqual(e, a) { + t.Errorf("\n--- Expected\n+++ Actual\n- %#v\n+ %#v", e, a) + } +} + +func TestEscapeJSONPointer(t *testing.T) { + t.Parallel() + + for _, tt := range []struct{ input, expected string }{ + {"~1", "~01"}, + {"~~", "~0~0"}, + {"/1", "~11"}, + {"//", "~1~1"}, + {"~/", "~0~1"}, + {"some/label", "some~1label"}, + } { + actual := escapeJSONPointer(tt.input) + if actual != tt.expected { + t.Errorf("expected %q, got %q", tt.expected, actual) + } + } +} + +func TestJSON6902(t *testing.T) { + t.Parallel() + + if actual := NewJSONPatch().Type(); actual != types.JSONPatchType { + t.Fatalf("expected %q, got %q", types.JSONPatchType, actual) + } + + // An empty patch is valid. + { + patch := NewJSONPatch() + if !patch.IsEmpty() { + t.Fatal("expected empty") + } + b, err := patch.Bytes() + if err != nil { + t.Fatalf("expected no error, got %v", err) + } + assertJSON(t, `[]`, b) + } + + // Calling Add without its value is an error. + { + patch := NewJSONPatch() + patch.Add("a") + _, err := patch.Bytes() + if err == nil { + t.Fatal("expected an error, got none") + } + } + { + b, err := NewJSONPatch().Add("a", "x/y", "0")(9).Bytes() + if err != nil { + t.Fatalf("expected no error, got %v", err) + } + assertJSON(t, `[{"op":"add","path":"/a/x~1y/0","value":9}]`, b) + } + + // Remove takes no value. + { + b, err := NewJSONPatch().Remove("b", "m/n/o").Bytes() + if err != nil { + t.Fatalf("expected no error, got %v", err) + } + assertJSON(t, `[{"op":"remove","path":"/b/m~1n~1o"}]`, b) + } + + // Calling Replace without its value is an error. + { + patch := NewJSONPatch() + patch.Replace("a") + _, err := patch.Bytes() + if err == nil { + t.Fatal("expected an error, got none") + } + } + { + b, err := NewJSONPatch().Replace("metadata", "labels", "some/thing")("5").Bytes() + if err != nil { + t.Fatalf("expected no error, got %v", err) + } + assertJSON(t, `[{"op":"replace","path":"/metadata/labels/some~1thing","value":"5"}]`, b) + } + + // Calls are chainable. + { + b, err := NewJSONPatch(). + Add("a", "b", "c")(1). + Remove("x", "y", "z"). + Replace("1", "2", "3")(nil). + Bytes() + if err != nil { + t.Fatalf("expected no error, got %v", err) + } + assertJSON(t, `[ + {"op":"add","path":"/a/b/c","value":1}, + {"op":"remove","path":"/x/y/z"}, + {"op":"replace","path":"/1/2/3","value":null} + ]`, b) + } +} + +func TestMerge7386(t *testing.T) { + t.Parallel() + + if actual := NewMergePatch().Type(); actual != types.MergePatchType { + t.Fatalf("expected %q, got %q", types.MergePatchType, actual) + } + + // An empty patch is valid. + { + patch := NewMergePatch() + if !patch.IsEmpty() { + t.Fatal("expected empty") + } + b, err := patch.Bytes() + if err != nil { + t.Fatalf("expected no error, got %v", err) + } + assertJSON(t, `{}`, b) + } + + // Calling Add without a path does nothing. 
+ { + b, err := NewMergePatch().Add()("anything").Bytes() + if err != nil { + t.Fatalf("expected no error, got %v", err) + } + assertJSON(t, `{}`, b) + } + + // Calling Add without its value is an error. + { + patch := NewMergePatch() + patch.Add("a") + _, err := patch.Bytes() + if err == nil { + t.Fatal("expected an error, got none") + } + } + { + b, err := NewMergePatch().Add("a", "x/y", "0")(9).Bytes() + if err != nil { + t.Fatalf("expected no error, got %v", err) + } + assertJSON(t, `{"a":{"x/y":{"0":9}}}`, b) + } + + // Remove takes no value. + { + b, err := NewMergePatch().Remove("b", "m/n/o").Bytes() + if err != nil { + t.Fatalf("expected no error, got %v", err) + } + assertJSON(t, `{"b":{"m/n/o":null}}`, b) + } + + // Calls are chainable. + { + b, err := NewMergePatch(). + Add("a", "b", "c")(1). + Remove("x", "y", "z"). + Bytes() + if err != nil { + t.Fatalf("expected no error, got %v", err) + } + assertJSON(t, `{ + "a":{"b":{"c":1}}, + "x":{"y":{"z":null}} + }`, b) + } +} + +// TestMerge7386Equivalence demonstrates that the same effect can be spelled +// different ways with Merge7386. +func TestMerge7386Equivalence(t *testing.T) { + t.Parallel() + + expected := `{ + "metadata": { + "labels": {"lk":"lv"}, + "annotations": {"ak1":"av1", "ak2":"av2"} + } + }` + + patches := []*Merge7386{ + // multiple calls to Add + NewMergePatch(). + Add("metadata", "labels", "lk")("lv"). + Add("metadata", "annotations", "ak1")("av1"). + Add("metadata", "annotations", "ak2")("av2"), + + // fewer calls using the patch type + NewMergePatch(). + Add("metadata", "labels")(Merge7386{"lk": "lv"}). + Add("metadata", "annotations")(Merge7386{"ak1": "av1", "ak2": "av2"}), + + // fewer calls using other types + NewMergePatch(). + Add("metadata", "labels")(labels.Set{"lk": "lv"}). + Add("metadata", "annotations")(map[string]string{"ak1": "av1", "ak2": "av2"}), + + // one call using the patch type + NewMergePatch(). + Add("metadata")(Merge7386{ + "labels": Merge7386{"lk": "lv"}, + "annotations": Merge7386{"ak1": "av1", "ak2": "av2"}, + }), + + // one call using other types + NewMergePatch(). + Add("metadata")(map[string]interface{}{ + "labels": labels.Set{"lk": "lv"}, + "annotations": map[string]string{"ak1": "av1", "ak2": "av2"}, + }), + } + + for i, patch := range patches { + b, err := patch.Bytes() + if err != nil { + t.Fatalf("expected no error for %v, got %v", i, err) + } + + assertJSON(t, expected, b) + } +} diff --git a/internal/kubeapi/pod.go b/internal/kubeapi/pod.go deleted file mode 100644 index 1dc5539089..0000000000 --- a/internal/kubeapi/pod.go +++ /dev/null @@ -1,57 +0,0 @@ -package kubeapi - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
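// Editor's sketch, not part of this changeset: because JSON6902 and Merge7386
// (added above) define Type() and Data(client.Object), they satisfy the
// controller-runtime client.Patch interface and can be handed directly to a
// client. The Pod arguments, label and annotation keys, and the "cc" client are
// illustrative assumptions.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"github.com/crunchydata/postgres-operator/internal/kubeapi"
)

func labelPod(ctx context.Context, cc client.Client, pod *corev1.Pod) error {
	// The "/" in "example.com/zone" is escaped to "example.com~1zone" in the
	// resulting JSON Pointer, per RFC 6901. Note that a JSON Patch "add"
	// requires the parent object (/metadata/labels here) to already exist.
	patch := kubeapi.NewJSONPatch().
		Add("metadata", "labels", "example.com/zone")("us-east-1a")

	return cc.Patch(ctx, pod, patch)
}

func annotatePod(ctx context.Context, cc client.Client, pod *corev1.Pod) error {
	// A merge patch builds the equivalent nested JSON document instead, and
	// creates intermediate objects when they are missing from the target.
	patch := kubeapi.NewMergePatch().
		Add("metadata", "annotations", "example.com/owner")("editor-sketch")

	return cc.Patch(ctx, pod, patch)
}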
-*/ - -import ( - "encoding/json" - - jsonpatch "github.com/evanphx/json-patch" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/kubernetes" -) - -func AddLabelToPod(clientset kubernetes.Interface, origPod *v1.Pod, key, value, namespace string) error { - var newData, patchBytes []byte - var err error - - //get the original data before we change it - origData, err := json.Marshal(origPod) - if err != nil { - return err - } - - origPod.ObjectMeta.Labels[key] = value - - newData, err = json.Marshal(origPod) - if err != nil { - return err - } - - patchBytes, err = jsonpatch.CreateMergePatch(origData, newData) - if err != nil { - return err - } - - _, err = clientset.CoreV1().Pods(namespace).Patch(origPod.Name, types.MergePatchType, patchBytes) - if err != nil { - log.Error(err) - log.Errorf("error add label to Pod %s %s=%s", origPod.Name, key, value) - } - log.Debugf("add label to Pod %s %s=%v", origPod.Name, key, value) - return err -} diff --git a/internal/kubeapi/volumes.go b/internal/kubeapi/volumes.go deleted file mode 100644 index 05412672ac..0000000000 --- a/internal/kubeapi/volumes.go +++ /dev/null @@ -1,45 +0,0 @@ -package kubeapi - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import v1 "k8s.io/api/core/v1" - -// FindOrAppendVolume returns a pointer to the Volume in volumes named name. -// If no such Volume exists, it creates one with that name and returns it. -func FindOrAppendVolume(volumes *[]v1.Volume, name string) *v1.Volume { - for i := range *volumes { - if (*volumes)[i].Name == name { - return &(*volumes)[i] - } - } - - *volumes = append(*volumes, v1.Volume{Name: name}) - return &(*volumes)[len(*volumes)-1] -} - -// FindOrAppendVolumeMount returns a pointer to the VolumeMount in mounts named -// name. If no such VolumeMount exists, it creates one with that name and -// returns it. -func FindOrAppendVolumeMount(mounts *[]v1.VolumeMount, name string) *v1.VolumeMount { - for i := range *mounts { - if (*mounts)[i].Name == name { - return &(*mounts)[i] - } - } - - *mounts = append(*mounts, v1.VolumeMount{Name: name}) - return &(*mounts)[len(*mounts)-1] -} diff --git a/internal/kubeapi/volumes_test.go b/internal/kubeapi/volumes_test.go deleted file mode 100644 index b793ac5269..0000000000 --- a/internal/kubeapi/volumes_test.go +++ /dev/null @@ -1,108 +0,0 @@ -package kubeapi - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "testing" - - v1 "k8s.io/api/core/v1" -) - -func TestFindOrAppendVolume(t *testing.T) { - t.Parallel() - - t.Run("empty", func(t *testing.T) { - var volumes []v1.Volume - var volume = FindOrAppendVolume(&volumes, "v1") - if expected, actual := 1, len(volumes); expected != actual { - t.Fatalf("expected appended volume, got %v", actual) - } - if volume != &volumes[0] { - t.Fatal("expected appended volume") - } - if expected, actual := "v1", volume.Name; expected != actual { - t.Fatalf("expected name to be appended, got %q", actual) - } - }) - - t.Run("missing", func(t *testing.T) { - volumes := []v1.Volume{{Name: "v1"}, {Name: "v2"}} - volume := FindOrAppendVolume(&volumes, "v3") - if expected, actual := 3, len(volumes); expected != actual { - t.Fatalf("expected appended volume, got %v", actual) - } - if volume != &volumes[2] { - t.Fatal("expected appended volume") - } - if expected, actual := "v3", volume.Name; expected != actual { - t.Fatalf("expected name to be appended, got %q", actual) - } - }) - - t.Run("present", func(t *testing.T) { - volumes := []v1.Volume{{Name: "v1"}, {Name: "v2"}} - volume := FindOrAppendVolume(&volumes, "v2") - if expected, actual := 2, len(volumes); expected != actual { - t.Fatalf("expected nothing to be appended, got %v", actual) - } - if volume != &volumes[1] { - t.Fatal("expected existing volume") - } - }) -} - -func TestFindOrAppendVolumeMount(t *testing.T) { - t.Parallel() - - t.Run("empty", func(t *testing.T) { - var mounts []v1.VolumeMount - var mount = FindOrAppendVolumeMount(&mounts, "v1") - if expected, actual := 1, len(mounts); expected != actual { - t.Fatalf("expected appended mount, got %v", actual) - } - if mount != &mounts[0] { - t.Fatal("expected appended mount") - } - if expected, actual := "v1", mount.Name; expected != actual { - t.Fatalf("expected name to be appended, got %q", actual) - } - }) - - t.Run("missing", func(t *testing.T) { - mounts := []v1.VolumeMount{{Name: "v1"}, {Name: "v2"}} - mount := FindOrAppendVolumeMount(&mounts, "v3") - if expected, actual := 3, len(mounts); expected != actual { - t.Fatalf("expected appended mount, got %v", actual) - } - if mount != &mounts[2] { - t.Fatal("expected appended mount") - } - if expected, actual := "v3", mount.Name; expected != actual { - t.Fatalf("expected name to be appended, got %q", actual) - } - }) - - t.Run("present", func(t *testing.T) { - mounts := []v1.VolumeMount{{Name: "v1"}, {Name: "v2"}} - mount := FindOrAppendVolumeMount(&mounts, "v2") - if expected, actual := 2, len(mounts); expected != actual { - t.Fatalf("expected nothing to be appended, got %v", actual) - } - if mount != &mounts[1] { - t.Fatal("expected existing mount") - } - }) -} diff --git a/internal/logging/loglib.go b/internal/logging/loglib.go deleted file mode 100644 index c109d435f5..0000000000 --- a/internal/logging/loglib.go +++ /dev/null @@ -1,94 +0,0 @@ -//Package logging Functions to set unique configuration for use with the logrus logger -package logging - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - "regexp" - "runtime" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -func SetParameters() LogValues { - var logval LogValues - - logval.version = msgs.PGO_VERSION - - return logval -} - -//LogValues holds the standard log value types -type LogValues struct { - version string -} - -// formatter adds default fields to each log entry. -type formatter struct { - fields log.Fields - lf log.Formatter -} - -// Format satisfies the logrus.Formatter interface. -func (f *formatter) Format(e *log.Entry) ([]byte, error) { - for k, v := range f.fields { - e.Data[k] = v - } - return f.lf.Format(e) -} - -//CrunchyLogger adds the customized logging fields to the logrus instance context -func CrunchyLogger(logDetails LogValues) { - //Sets calling method as a field - log.SetReportCaller(true) - - crunchyTextFormatter := &log.TextFormatter{ - CallerPrettyfier: func(f *runtime.Frame) (string, string) { - filename := f.File - function := f.Function - re1 := regexp.MustCompile(`postgres-operator/(.*go)`) - result1 := re1.FindStringSubmatch(f.File) - if len(result1) > 1 { - filename = result1[1] - } - - re2 := regexp.MustCompile(`postgres-operator/(.*)`) - result2 := re2.FindStringSubmatch(f.Function) - if len(result2) > 1 { - function = result2[1] - } - return fmt.Sprintf("%s()", function), fmt.Sprintf("%s:%d", filename, f.Line) - }, - FullTimestamp: true, - } - - log.SetFormatter(&formatter{ - fields: log.Fields{ - "version": logDetails.version, - }, - lf: crunchyTextFormatter, - }) - - // Output to stdout instead of the default stderr - // Can be any io.Writer, see below for File example - log.SetOutput(os.Stdout) - - // Only log the debug severity or above. - log.SetLevel(log.DebugLevel) -} diff --git a/internal/logging/logr.go b/internal/logging/logr.go new file mode 100644 index 0000000000..c907997d40 --- /dev/null +++ b/internal/logging/logr.go @@ -0,0 +1,97 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package logging + +import ( + "context" + + "github.com/go-logr/logr" + "go.opentelemetry.io/otel/trace" +) + +var global = logr.Discard() + +// Logger is an interface to an abstract logging implementation. +type Logger = logr.Logger + +// Discard returns a Logger that discards all messages logged to it. +func Discard() Logger { return logr.Discard() } + +// SetLogSink replaces the global Logger with sink. Before this is called, +// the global Logger is a no-op. +func SetLogSink(sink logr.LogSink) { global = logr.New(sink) } + +// NewContext returns a copy of ctx containing logger. Retrieve it using FromContext. +func NewContext(ctx context.Context, logger Logger) context.Context { + return logr.NewContext(ctx, logger) +} + +// FromContext returns the global Logger or the one stored by a prior call +// to NewContext. +func FromContext(ctx context.Context) Logger { + log, err := logr.FromContext(ctx) + if err != nil { + log = global + } + + // Add trace context, if any, according to OpenTelemetry recommendations. + // Omit trace flags for now because they don't seem relevant. 
+ // - https://github.com/open-telemetry/opentelemetry-specification/blob/v0.7.0/specification/logs/overview.md + if sc := trace.SpanFromContext(ctx).SpanContext(); sc.IsValid() { + log = log.WithValues("spanid", sc.SpanID(), "traceid", sc.TraceID()) + } + + return log +} + +// sink implements logr.LogSink using two function pointers. +type sink struct { + depth int + verbosity int + names []string + values []interface{} + + // TODO(cbandy): add names or frame to the functions below. + + fnError func(error, string, ...interface{}) + fnInfo func(int, string, ...interface{}) +} + +var _ logr.LogSink = (*sink)(nil) + +func (s *sink) Enabled(level int) bool { return level <= s.verbosity } +func (s *sink) Init(info logr.RuntimeInfo) { s.depth = info.CallDepth } + +func (s sink) combineValues(kv ...interface{}) []interface{} { + if len(kv) == 0 { + return s.values + } + if n := len(s.values); n > 0 { + return append(s.values[:n:n], kv...) + } + return kv +} + +func (s *sink) Error(err error, msg string, kv ...interface{}) { + s.fnError(err, msg, s.combineValues(kv...)...) +} + +func (s *sink) Info(level int, msg string, kv ...interface{}) { + s.fnInfo(level, msg, s.combineValues(kv...)...) +} + +func (s *sink) WithName(name string) logr.LogSink { + n := len(s.names) + out := *s + out.names = append(out.names[:n:n], name) + return &out +} + +func (s *sink) WithValues(kv ...interface{}) logr.LogSink { + n := len(s.values) + out := *s + out.values = append(out.values[:n:n], kv...) + return &out +} diff --git a/internal/logging/logr_test.go b/internal/logging/logr_test.go new file mode 100644 index 0000000000..1cbc818ad9 --- /dev/null +++ b/internal/logging/logr_test.go @@ -0,0 +1,73 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package logging + +import ( + "context" + "testing" + + "github.com/go-logr/logr" + "go.opentelemetry.io/otel/sdk/trace" + "gotest.tools/v3/assert" +) + +func TestDiscard(t *testing.T) { + assert.Equal(t, Discard(), logr.Discard()) +} + +func TestFromContext(t *testing.T) { + global = logr.Discard() + + // Defaults to global. + log := FromContext(context.Background()) + assert.Equal(t, log, global) + + // Retrieves from NewContext. + double := logr.New(&sink{}) + log = FromContext(NewContext(context.Background(), double)) + assert.Equal(t, log, double) +} + +func TestFromContextTraceContext(t *testing.T) { + var calls []map[string]interface{} + + SetLogSink(&sink{ + fnInfo: func(_ int, _ string, kv ...interface{}) { + m := make(map[string]interface{}) + for i := 0; i < len(kv); i += 2 { + m[kv[i].(string)] = kv[i+1] + } + calls = append(calls, m) + }, + }) + + ctx := context.Background() + + // Nothing when there's no trace. + FromContext(ctx).Info("") + assert.Equal(t, calls[0]["spanid"], nil) + assert.Equal(t, calls[0]["traceid"], nil) + + ctx, span := trace.NewTracerProvider().Tracer("").Start(ctx, "test-span") + defer span.End() + + // OpenTelemetry trace context when there is. 
+ FromContext(ctx).Info("") + assert.Equal(t, calls[1]["spanid"], span.SpanContext().SpanID()) + assert.Equal(t, calls[1]["traceid"], span.SpanContext().TraceID()) +} + +func TestSetLogSink(t *testing.T) { + var calls []string + + SetLogSink(&sink{ + fnInfo: func(_ int, m string, _ ...interface{}) { + calls = append(calls, m) + }, + }) + + global.Info("called") + assert.DeepEqual(t, calls, []string{"called"}) +} diff --git a/internal/logging/logrus.go b/internal/logging/logrus.go new file mode 100644 index 0000000000..9683a104d1 --- /dev/null +++ b/internal/logging/logrus.go @@ -0,0 +1,114 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package logging + +import ( + "fmt" + "io" + "path/filepath" + "runtime" + "strings" + + "github.com/go-logr/logr" + "github.com/pkg/errors" + "github.com/sirupsen/logrus" +) + +// Logrus creates a sink that writes to out using a logrus format. Log entries +// are emitted when their level is at or below verbosity. (Only the most +// important entries are emitted when verbosity is zero.) Error entries get a +// logrus.ErrorLevel, Info entries with verbosity less than debug get a +// logrus.InfoLevel, and Info entries with verbosity of debug or more get a +// logrus.DebugLevel. +func Logrus(out io.Writer, version string, debug, verbosity int) logr.LogSink { + root := logrus.New() + + root.SetLevel(logrus.TraceLevel) + root.SetOutput(out) + + root.SetFormatter(&logrus.TextFormatter{ + FullTimestamp: true, + }) + + _, module, _, _ := runtime.Caller(0) + module = strings.TrimSuffix(module, "internal/logging/logrus.go") + + return &sink{ + verbosity: verbosity, + + fnError: func(err error, message string, kv ...interface{}) { + entry := root.WithField("version", version) + entry = logrusFields(entry, kv...) + + if v, ok := entry.Data[logrus.ErrorKey]; ok { + entry.Data["fields."+logrus.ErrorKey] = v + } + entry = entry.WithError(err) + + var t interface{ StackTrace() errors.StackTrace } + if errors.As(err, &t) { + if st := t.StackTrace(); len(st) > 0 { + frame, _ := runtime.CallersFrames([]uintptr{uintptr(st[0])}).Next() + logrusFrame(entry, frame, module) + } + } + entry.Log(logrus.ErrorLevel, message) + }, + + fnInfo: func(level int, message string, kv ...interface{}) { + entry := root.WithField("version", version) + entry = logrusFields(entry, kv...) + + if level >= debug { + entry.Log(logrus.DebugLevel, message) + } else { + entry.Log(logrus.InfoLevel, message) + } + }, + } +} + +// logrusFields structures and adds the key/value interface to the logrus.Entry; +// for instance, if a key is not a string, this formats the key as a string. 
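// Editor's sketch, not part of this changeset: wiring the logging pieces above
// together. Logrus builds a logr.LogSink, SetLogSink installs it as the global
// logger, and FromContext retrieves a Logger (enriched with spanid/traceid when
// an OpenTelemetry span is active). The version string, verbosity values, and
// message fields are illustrative.
package sketch

import (
	"context"
	"os"

	"github.com/crunchydata/postgres-operator/internal/logging"
)

func setupLogging(ctx context.Context) {
	// With debug=1 and verbosity=2: entries logged at V(1) or higher are
	// emitted at logrus DEBUG level, V(0) entries at INFO level, and anything
	// above verbosity 2 is discarded entirely.
	logging.SetLogSink(logging.Logrus(os.Stdout, "v1", 1, 2))

	log := logging.FromContext(ctx)
	log.Info("reconciling", "cluster", "hippo")
	log.V(1).Info("details visible only at debug verbosity")
}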
+func logrusFields(entry *logrus.Entry, kv ...interface{}) *logrus.Entry { + if len(kv) == 0 { + return entry + } + if len(kv)%2 == 1 { + kv = append(kv, nil) + } + + m := make(map[string]interface{}, len(kv)/2) + + for i := 0; i < len(kv); i += 2 { + key, ok := kv[i].(string) + if !ok { + key = fmt.Sprintf("!(%#v)", kv[i]) + } + m[key] = kv[i+1] + } + + return entry.WithFields(m) +} + +// logrusFrame adds the file and func to the logrus.Entry, +// for use in logging errors +func logrusFrame(entry *logrus.Entry, frame runtime.Frame, module string) { + if frame.File != "" { + filename := strings.TrimPrefix(frame.File, module) + fileline := fmt.Sprintf("%s:%d", filename, frame.Line) + if v, ok := entry.Data["file"]; ok { + entry.Data["fields.file"] = v + } + entry.Data["file"] = fileline + } + if frame.Function != "" { + _, function := filepath.Split(frame.Function) + if v, ok := entry.Data["func"]; ok { + entry.Data["fields.func"] = v + } + entry.Data["func"] = function + } +} diff --git a/internal/logging/logrus_test.go b/internal/logging/logrus_test.go new file mode 100644 index 0000000000..3e73193d1a --- /dev/null +++ b/internal/logging/logrus_test.go @@ -0,0 +1,84 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package logging + +import ( + "bytes" + "fmt" + "runtime" + "strings" + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/pkg/errors" + "gotest.tools/v3/assert" +) + +func assertLogrusContains(t testing.TB, actual, expected string) { + t.Helper() + + if !strings.Contains(actual, expected) { + t.Fatalf("missing from logrus:\n%s", cmp.Diff(expected, strings.Fields(actual))) + } +} + +func TestLogrus(t *testing.T) { + t.Parallel() + + out := new(bytes.Buffer) + logrus := Logrus(out, "v1", 1, 2) + + // Configured verbosity discards. + assert.Assert(t, logrus.Enabled(1)) + assert.Assert(t, logrus.Enabled(2)) + assert.Assert(t, !logrus.Enabled(3)) + + // Default level is INFO. + // Version field is always present. + out.Reset() + logrus.Info(0, "") + assertLogrusContains(t, out.String(), `level=info version=v1`) + + // Configured level or higher is DEBUG. + out.Reset() + logrus.Info(1, "") + assertLogrusContains(t, out.String(), `level=debug`) + out.Reset() + logrus.Info(2, "") + assertLogrusContains(t, out.String(), `level=debug`) + + // Any error is ERROR level. + out.Reset() + logrus.Error(fmt.Errorf("%s", "dang"), "") + assertLogrusContains(t, out.String(), `level=error error=dang`) + + // A wrapped error includes one frame of its stack. + out.Reset() + _, _, baseline, _ := runtime.Caller(0) + logrus.Error(errors.New("dang"), "") + assertLogrusContains(t, out.String(), fmt.Sprintf(`file="internal/logging/logrus_test.go:%d"`, baseline+1)) + assertLogrusContains(t, out.String(), `func=logging.TestLogrus`) + + out.Reset() + logrus.Info(0, "", "k1", "str", "k2", 13, "k3", false) + assertLogrusContains(t, out.String(), `k1=str k2=13 k3=false`) + + out.Reset() + logrus.Info(0, "banana") + assertLogrusContains(t, out.String(), `msg=banana`) + + // Fields don't overwrite builtins. 
+ out.Reset() + logrus.Error(errors.New("dang"), "banana", + "error", "not-err", + "file", "not-file", + "func", "not-func", + "level", "not-lvl", + "msg", "not-msg", + ) + assertLogrusContains(t, out.String(), `level=error msg=banana error=dang`) + assertLogrusContains(t, out.String(), `fields.error=not-err fields.file=not-file fields.func=not-func`) + assertLogrusContains(t, out.String(), `fields.level=not-lvl fields.msg=not-msg`) +} diff --git a/internal/naming/annotations.go b/internal/naming/annotations.go new file mode 100644 index 0000000000..2179a5f084 --- /dev/null +++ b/internal/naming/annotations.go @@ -0,0 +1,71 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +const ( + annotationPrefix = labelPrefix + + // Finalizer marks an object to be garbage collected by this module. + Finalizer = annotationPrefix + "finalizer" + + // PatroniSwitchover is the annotation added to a PostgresCluster to initiate a manual + // Patroni Switchover (or Failover). + PatroniSwitchover = annotationPrefix + "trigger-switchover" + + // PGBackRestBackup is the annotation that is added to a PostgresCluster to initiate a manual + // backup. The value of the annotation will be a unique identifier for a backup Job (e.g. a + // timestamp), which will be stored in the PostgresCluster status to properly track completion + // of the Job. Also used to annotate the backup Job itself as needed to identify the backup + // ID associated with a specific manual backup Job. + PGBackRestBackup = annotationPrefix + "pgbackrest-backup" + + // PGBackRestBackupJobCompletion is the annotation that is added to restore jobs, pvcs, and + // VolumeSnapshots that are involved in the volume snapshot creation process. The annotation + // holds a RFC3339 formatted timestamp that corresponds to the completion time of the associated + // backup job. + PGBackRestBackupJobCompletion = annotationPrefix + "pgbackrest-backup-job-completion" + + // PGBackRestConfigHash is an annotation used to specify the hash value associated with a + // repo configuration as needed to detect configuration changes that invalidate running Jobs + // (and therefore must be recreated) + PGBackRestConfigHash = annotationPrefix + "pgbackrest-hash" + + // PGBackRestRestore is the annotation that is added to a PostgresCluster to initiate an in-place + // restore. The value of the annotation will be a unique identifier for a restore Job (e.g. a + // timestamp), which will be stored in the PostgresCluster status to properly track completion + // of the Job. + PGBackRestRestore = annotationPrefix + "pgbackrest-restore" + + // PGBackRestIPVersion is an annotation used to indicate whether an IPv6 wildcard address should be + // used for the pgBackRest "tls-server-address" or not. If the user wants to use IPv6, the value + // should be "IPv6". As of right now, if the annotation is not present or if the annotation's value + // is anything other than "IPv6", the "tls-server-address" will default to IPv4 (0.0.0.0). The need + // for this annotation is due to an issue in pgBackRest (#1841) where using a wildcard address to + // bind all addresses does not work in certain IPv6 environments. + PGBackRestIPVersion = annotationPrefix + "pgbackrest-ip-version" + + // PostgresExporterCollectorsAnnotation is an annotation used to allow users to control whether or + // not postgres_exporter default metrics, settings, and collectors are enabled. The value "None" + // disables all postgres_exporter defaults. 
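// Editor's sketch, not part of this changeset: requesting a manual pgBackRest
// backup by stamping the trigger annotation above onto a PostgresCluster. Per
// the annotation's comment, the value only needs to be a unique identifier such
// as a timestamp. The cluster is passed as a generic client.Object to avoid
// assuming its Go type; the "cc" client is an illustrative assumption.
package sketch

import (
	"context"
	"time"

	"sigs.k8s.io/controller-runtime/pkg/client"

	"github.com/crunchydata/postgres-operator/internal/kubeapi"
	"github.com/crunchydata/postgres-operator/internal/naming"
)

func requestManualBackup(ctx context.Context, cc client.Client, cluster client.Object) error {
	// Reuse the Merge7386 helper from internal/kubeapi to set the annotation.
	patch := kubeapi.NewMergePatch().
		Add("metadata", "annotations", naming.PGBackRestBackup)(
		time.Now().UTC().Format(time.RFC3339))

	return cc.Patch(ctx, cluster, patch)
}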
Disabling the defaults may cause errors in dashboards. + PostgresExporterCollectorsAnnotation = annotationPrefix + "postgres-exporter-collectors" + + // CrunchyBridgeClusterAdoptionAnnotation is an annotation used to allow users to "adopt" or take + // control over an existing Bridge Cluster with a CrunchyBridgeCluster CR. Essentially, if a + // CrunchyBridgeCluster CR does not have a status.ID, but the name matches the name of an existing + // bridge cluster, the user must add this annotation to the CR to allow the CR to take control of + // the Bridge Cluster. The Value assigned to the annotation must be the ID of existing cluster. + CrunchyBridgeClusterAdoptionAnnotation = annotationPrefix + "adopt-bridge-cluster" + + // AutoCreateUserSchemaAnnotation is an annotation used to allow users to control whether the cluster + // has schemas automatically created for the users defined in `spec.users` for all of the databases + // listed for that user. + AutoCreateUserSchemaAnnotation = annotationPrefix + "autoCreateUserSchema" + + // AuthorizeBackupRemovalAnnotation is an annotation used to allow users + // to delete PVC-based backups when changing from a cluster with backups + // to a cluster without backups. As usual with the operator, we do not + // touch cloud-based backups. + AuthorizeBackupRemovalAnnotation = annotationPrefix + "authorizeBackupRemoval" +) diff --git a/internal/naming/annotations_test.go b/internal/naming/annotations_test.go new file mode 100644 index 0000000000..318dd5ab5c --- /dev/null +++ b/internal/naming/annotations_test.go @@ -0,0 +1,26 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +import ( + "testing" + + "gotest.tools/v3/assert" + "k8s.io/apimachinery/pkg/util/validation" +) + +func TestAnnotationsValid(t *testing.T) { + assert.Assert(t, nil == validation.IsQualifiedName(AuthorizeBackupRemovalAnnotation)) + assert.Assert(t, nil == validation.IsQualifiedName(AutoCreateUserSchemaAnnotation)) + assert.Assert(t, nil == validation.IsQualifiedName(CrunchyBridgeClusterAdoptionAnnotation)) + assert.Assert(t, nil == validation.IsQualifiedName(Finalizer)) + assert.Assert(t, nil == validation.IsQualifiedName(PatroniSwitchover)) + assert.Assert(t, nil == validation.IsQualifiedName(PGBackRestBackup)) + assert.Assert(t, nil == validation.IsQualifiedName(PGBackRestBackupJobCompletion)) + assert.Assert(t, nil == validation.IsQualifiedName(PGBackRestConfigHash)) + assert.Assert(t, nil == validation.IsQualifiedName(PGBackRestIPVersion)) + assert.Assert(t, nil == validation.IsQualifiedName(PGBackRestRestore)) + assert.Assert(t, nil == validation.IsQualifiedName(PostgresExporterCollectorsAnnotation)) +} diff --git a/internal/naming/controllers.go b/internal/naming/controllers.go new file mode 100644 index 0000000000..3d492e8a3a --- /dev/null +++ b/internal/naming/controllers.go @@ -0,0 +1,10 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +const ( + ControllerBridge = "bridge-controller" + ControllerPGAdmin = "pgadmin-controller" +) diff --git a/internal/naming/dns.go b/internal/naming/dns.go new file mode 100644 index 0000000000..d3351a5d70 --- /dev/null +++ b/internal/naming/dns.go @@ -0,0 +1,88 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +import ( + "context" + "net" + "strings" + + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" +) + +// InstancePodDNSNames returns the possible DNS names for instance. The first +// name is the fully qualified domain name (FQDN). +func InstancePodDNSNames(ctx context.Context, instance *appsv1.StatefulSet) []string { + var ( + domain = KubernetesClusterDomain(ctx) + namespace = instance.Namespace + name = instance.Name + "-0." + instance.Spec.ServiceName + ) + + // We configure our instances with a subdomain so that Pods get stable DNS + // names in the form "{pod}.{service}.{namespace}.svc.{cluster-domain}". + // - https://docs.k8s.io/concepts/services-networking/dns-pod-service/#pods + return []string{ + name + "." + namespace + ".svc." + domain, + name + "." + namespace + ".svc", + name + "." + namespace, + name, + } +} + +// RepoHostPodDNSNames returns the possible DNS names for a pgBackRest repository host Pod. +// The first name is the fully qualified domain name (FQDN). +func RepoHostPodDNSNames(ctx context.Context, repoHost *appsv1.StatefulSet) []string { + var ( + domain = KubernetesClusterDomain(ctx) + namespace = repoHost.Namespace + name = repoHost.Name + "-0." + repoHost.Spec.ServiceName + ) + + // We configure our repository hosts with a subdomain so that Pods get stable + // DNS names in the form "{pod}.{service}.{namespace}.svc.{cluster-domain}". + // - https://docs.k8s.io/concepts/services-networking/dns-pod-service/#pods + return []string{ + name + "." + namespace + ".svc." + domain, + name + "." + namespace + ".svc", + name + "." + namespace, + name, + } +} + +// ServiceDNSNames returns the possible DNS names for service. The first name +// is the fully qualified domain name (FQDN). +func ServiceDNSNames(ctx context.Context, service *corev1.Service) []string { + domain := KubernetesClusterDomain(ctx) + + return []string{ + service.Name + "." + service.Namespace + ".svc." + domain, + service.Name + "." + service.Namespace + ".svc", + service.Name + "." + service.Namespace, + service.Name, + } +} + +// KubernetesClusterDomain looks up the Kubernetes cluster domain name. +func KubernetesClusterDomain(ctx context.Context) string { + ctx, span := tracer.Start(ctx, "kubernetes-domain-lookup") + defer span.End() + + // Lookup an existing Service to determine its fully qualified domain name. + // This is inexpensive because the "net" package uses OS-level DNS caching. + // - https://golang.org/issue/24796 + api := "kubernetes.default.svc" + cname, err := net.DefaultResolver.LookupCNAME(ctx, api) + + if err == nil { + return strings.TrimPrefix(cname, api+".") + } + + span.RecordError(err) + // The kubeadm default is "cluster.local" and is adequate when not running + // in an actual Kubernetes cluster. + return "cluster.local." +} diff --git a/internal/naming/dns_test.go b/internal/naming/dns_test.go new file mode 100644 index 0000000000..e7e2ea9dc6 --- /dev/null +++ b/internal/naming/dns_test.go @@ -0,0 +1,61 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +import ( + "context" + "strings" + "testing" + "time" + + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" +) + +func TestInstancePodDNSNames(t *testing.T) { + ctx, cancel := context.WithTimeout(context.Background(), time.Second) + defer cancel() + + instance := &appsv1.StatefulSet{} + instance.Namespace = "some-place" + instance.Name = "cluster-name-id" + instance.Spec.ServiceName = "cluster-pods" + + names := InstancePodDNSNames(ctx, instance) + assert.Assert(t, len(names) > 0) + + assert.DeepEqual(t, names[1:], []string{ + "cluster-name-id-0.cluster-pods.some-place.svc", + "cluster-name-id-0.cluster-pods.some-place", + "cluster-name-id-0.cluster-pods", + }) + + assert.Assert(t, len(names[0]) > len(names[1]), "expected FQDN first, got %q", names[0]) + assert.Assert(t, strings.HasPrefix(names[0], names[1]+"."), "wrong FQDN: %q", names[0]) + assert.Assert(t, strings.HasSuffix(names[0], "."), "expected root, got %q", names[0]) +} + +func TestServiceDNSNames(t *testing.T) { + ctx, cancel := context.WithTimeout(context.Background(), time.Second) + defer cancel() + + service := &corev1.Service{} + service.Namespace = "baltia" + service.Name = "the-primary" + + names := ServiceDNSNames(ctx, service) + assert.Assert(t, len(names) > 0) + + assert.DeepEqual(t, names[1:], []string{ + "the-primary.baltia.svc", + "the-primary.baltia", + "the-primary", + }) + + assert.Assert(t, len(names[0]) > len(names[1]), "expected FQDN first, got %q", names[0]) + assert.Assert(t, strings.HasPrefix(names[0], names[1]+"."), "wrong FQDN: %q", names[0]) + assert.Assert(t, strings.HasSuffix(names[0], "."), "expected root, got %q", names[0]) +} diff --git a/internal/naming/doc.go b/internal/naming/doc.go new file mode 100644 index 0000000000..72cab8b0b0 --- /dev/null +++ b/internal/naming/doc.go @@ -0,0 +1,7 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +// Package naming provides functions and constants for the postgres-operator +// naming and labeling scheme. +package naming diff --git a/internal/naming/labels.go b/internal/naming/labels.go new file mode 100644 index 0000000000..f25993122b --- /dev/null +++ b/internal/naming/labels.go @@ -0,0 +1,326 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +import ( + "k8s.io/apimachinery/pkg/labels" +) + +const ( + labelPrefix = "postgres-operator.crunchydata.com/" + + // LabelCluster et al. provides the fundamental labels for Postgres instances + LabelCluster = labelPrefix + "cluster" + LabelInstance = labelPrefix + "instance" + LabelInstanceSet = labelPrefix + "instance-set" + + // LabelRepoName is used to specify the name of a pgBackRest repository + LabelRepoName = labelPrefix + "name" + + LabelPatroni = labelPrefix + "patroni" + LabelRole = labelPrefix + "role" + + // LabelClusterCertificate is used to identify a secret containing a cluster certificate + LabelClusterCertificate = labelPrefix + "cluster-certificate" + + // LabelData is used to identify Pods and Volumes store Postgres data. + LabelData = labelPrefix + "data" + + // LabelMoveJob is used to identify a directory move Job. + LabelMoveJob = labelPrefix + "move-job" + + // LabelMovePGBackRestRepoDir is used to identify the Job that moves an existing pgBackRest repo directory. 
+ LabelMovePGBackRestRepoDir = labelPrefix + "move-pgbackrest-repo-dir" + + // LabelMovePGDataDir is used to identify the Job that moves an existing pgData directory. + LabelMovePGDataDir = labelPrefix + "move-pgdata-dir" + + // LabelMovePGWalDir is used to identify the Job that moves an existing pg_wal directory. + LabelMovePGWalDir = labelPrefix + "move-pgwal-dir" + + // LabelPGBackRest is used to indicate that a resource is for pgBackRest + LabelPGBackRest = labelPrefix + "pgbackrest" + + // LabelPGBackRestBackup is used to indicate that a resource is for a pgBackRest backup + LabelPGBackRestBackup = labelPrefix + "pgbackrest-backup" + + // LabelPGBackRestConfig is used to indicate that a ConfigMap or Secret is for pgBackRest + LabelPGBackRestConfig = labelPrefix + "pgbackrest-config" + + // LabelPGBackRestDedicated is used to indicate that a ConfigMap is for a pgBackRest dedicated + // repository host + LabelPGBackRestDedicated = labelPrefix + "pgbackrest-dedicated" + + // LabelPGBackRestRepo is used to indicate that a Deployment or Pod is for a pgBackRest + // repository + LabelPGBackRestRepo = labelPrefix + "pgbackrest-repo" + + // LabelPGBackRestRepoVolume is used to indicate that a resource for a pgBackRest + // repository + LabelPGBackRestRepoVolume = labelPrefix + "pgbackrest-volume" + + LabelPGBackRestCronJob = labelPrefix + "pgbackrest-cronjob" + + // LabelPGBackRestRestore is used to indicate that a Job or Pod is for a pgBackRest restore + LabelPGBackRestRestore = labelPrefix + "pgbackrest-restore" + + // LabelPGBackRestRestoreConfig is used to indicate that a configuration + // resource (e.g. a ConfigMap or Secret) is for a pgBackRest restore + LabelPGBackRestRestoreConfig = labelPrefix + "pgbackrest-restore-config" + + // LabelPGMonitorDiscovery is the label added to Pods running the "exporter" container to + // support discovery by Prometheus according to pgMonitor configuration + LabelPGMonitorDiscovery = labelPrefix + "crunchy-postgres-exporter" + + // LabelPostgresUser identifies the PostgreSQL user an object is for or about. + LabelPostgresUser = labelPrefix + "pguser" + + // LabelStartupInstance is used to indicate the startup instance associated with a resource + LabelStartupInstance = labelPrefix + "startup-instance" + + RolePrimary = "primary" + RoleReplica = "replica" + + // RolePatroniLeader is the LabelRole that Patroni sets on the Pod that is + // currently the leader. + RolePatroniLeader = "master" + + // RolePatroniReplica is a LabelRole value that Patroni sets on Pods that are + // following the leader. + RolePatroniReplica = "replica" + + // RolePGBouncer is the LabelRole applied to PgBouncer objects. + RolePGBouncer = "pgbouncer" + + // RolePGAdmin is the LabelRole applied to pgAdmin objects. + RolePGAdmin = "pgadmin" + + // RolePostgresData is the LabelRole applied to PostgreSQL data volumes. + RolePostgresData = "pgdata" + + // RolePostgresUser is the LabelRole applied to PostgreSQL user secrets. + RolePostgresUser = "pguser" + + // RolePostgresWAL is the LabelRole applied to PostgreSQL WAL volumes. + RolePostgresWAL = "pgwal" + + // RoleMonitoring is the LabelRole applied to Monitoring resources + RoleMonitoring = "monitoring" + + // RoleSnapshot is the LabelRole applied to Snapshot resources. + RoleSnapshot = "snapshot" +) + +const ( + // LabelCrunchyBridgeClusterPostgresRole identifies the PostgreSQL user an object is for or about. 
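+ // For example, the fully qualified key is "postgres-operator.crunchydata.com/cbc-pgrole".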
+ LabelCrunchyBridgeClusterPostgresRole = labelPrefix + "cbc-pgrole" + + // RoleCrunchyBridgeClusterPostgresRole is the LabelRole applied to CBC PostgreSQL role secrets. + RoleCrunchyBridgeClusterPostgresRole = "cbc-pgrole" +) + +const ( + // DataPGAdmin is a LabelData value that indicates the object has pgAdmin data. + DataPGAdmin = "pgadmin" + + // DataPGBackRest is a LabelData value that indicates the object has pgBackRest data. + DataPGBackRest = "pgbackrest" + + // DataPostgres is a LabelData value that indicates the object has PostgreSQL data. + DataPostgres = "postgres" +) + +// BackupJobType represents different types of backups (e.g. ad-hoc backups, scheduled backups, +// the backup for pgBackRest replica creation, etc.) +type BackupJobType string + +const ( + // BackupManual is the backup type utilized for manual backups + BackupManual BackupJobType = "manual" + + // BackupReplicaCreate is the backup type for the backup taken to enable pgBackRest replica + // creation + BackupReplicaCreate BackupJobType = "replica-create" + + // BackupScheduled is the backup type utilized for scheduled backups + BackupScheduled BackupJobType = "scheduled" +) + +const ( + + // LabelStandalonePGAdmin is used to indicate a resource for a standalone-pgadmin instance. + LabelStandalonePGAdmin = labelPrefix + "pgadmin" +) + +// Merge takes sets of labels and merges them. The last set +// provided will win in case of conflicts. +func Merge(sets ...map[string]string) labels.Set { + merged := labels.Set{} + for _, set := range sets { + merged = labels.Merge(merged, set) + } + return merged +} + +// DirectoryMoveJobLabels provides labels for PVC move Jobs. +func DirectoryMoveJobLabels(clusterName string) labels.Set { + jobLabels := map[string]string{ + LabelCluster: clusterName, + LabelMoveJob: "", + } + return jobLabels +} + +// PGBackRestLabels provides common labels for pgBackRest resources. +func PGBackRestLabels(clusterName string) labels.Set { + return map[string]string{ + LabelCluster: clusterName, + LabelPGBackRest: "", + } +} + +// PGBackRestBackupJobLabels provides labels for pgBackRest backup Jobs. +func PGBackRestBackupJobLabels(clusterName, repoName string, + backupType BackupJobType) labels.Set { + repoLabels := PGBackRestLabels(clusterName) + jobLabels := map[string]string{ + LabelPGBackRestRepo: repoName, + LabelPGBackRestBackup: string(backupType), + } + return labels.Merge(jobLabels, repoLabels) +} + +// PGBackRestBackupJobSelector provides a selector for querying all pgBackRest +// resources +func PGBackRestBackupJobSelector(clusterName, repoName string, + backupType BackupJobType) labels.Selector { + return PGBackRestBackupJobLabels(clusterName, repoName, backupType).AsSelector() +} + +// PGBackRestRestoreConfigLabels provides labels for configuration (e.g. ConfigMaps and Secrets) +// generated to perform a pgBackRest restore. +// +// Deprecated: Store restore data in the pgBackRest ConfigMap and Secret, +// [PGBackRestConfig] and [PGBackRestSecret]. +func PGBackRestRestoreConfigLabels(clusterName string) labels.Set { + commonLabels := PGBackRestLabels(clusterName) + jobLabels := map[string]string{ + LabelPGBackRestRestoreConfig: "", + } + return labels.Merge(jobLabels, commonLabels) +} + +// PGBackRestRestoreConfigSelector provides selector for querying pgBackRest restore config +// resources. 
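+// A hypothetical sketch ("hippo" is an illustrative cluster name):
+// PGBackRestRestoreConfigSelector("hippo").Matches(PGBackRestRestoreConfigLabels("hippo")) is true.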
+func PGBackRestRestoreConfigSelector(clusterName string) labels.Selector { + return PGBackRestRestoreConfigLabels(clusterName).AsSelector() +} + +// PGBackRestRestoreJobLabels provides labels for pgBackRest restore Jobs and +// associated configuration ConfigMaps and Secrets. +func PGBackRestRestoreJobLabels(clusterName string) labels.Set { + commonLabels := PGBackRestLabels(clusterName) + jobLabels := map[string]string{ + LabelPGBackRestRestore: "", + } + return labels.Merge(jobLabels, commonLabels) +} + +// PGBackRestRestoreJobSelector provides selector for querying pgBackRest restore Jobs. +func PGBackRestRestoreJobSelector(clusterName string) labels.Selector { + return PGBackRestRestoreJobLabels(clusterName).AsSelector() +} + +// PGBackRestRepoLabels provides common labels for pgBackRest repository +// resources. +func PGBackRestRepoLabels(clusterName, repoName string) labels.Set { + commonLabels := PGBackRestLabels(clusterName) + repoLabels := map[string]string{ + LabelPGBackRestRepo: repoName, + } + return labels.Merge(commonLabels, repoLabels) +} + +// PGBackRestSelector provides a selector for querying all pgBackRest +// resources +func PGBackRestSelector(clusterName string) labels.Selector { + return PGBackRestLabels(clusterName).AsSelector() +} + +// PGBackRestConfigLabels provides labels for the pgBackRest configuration created and used by +// the PostgreSQL Operator +func PGBackRestConfigLabels(clusterName string) labels.Set { + repoLabels := PGBackRestLabels(clusterName) + operatorConfigLabels := map[string]string{ + LabelPGBackRestConfig: "", + } + return labels.Merge(repoLabels, operatorConfigLabels) +} + +// PGBackRestCronJobLabels provides common labels for pgBackRest CronJobs +func PGBackRestCronJobLabels(clusterName, repoName, backupType string) labels.Set { + commonLabels := PGBackRestLabels(clusterName) + cronJobLabels := map[string]string{ + LabelPGBackRestRepo: repoName, + LabelPGBackRestCronJob: backupType, + LabelPGBackRestBackup: string(BackupScheduled), + } + return labels.Merge(commonLabels, cronJobLabels) +} + +// PGBackRestDedicatedLabels provides labels for a pgBackRest dedicated repository host +func PGBackRestDedicatedLabels(clusterName string) labels.Set { + commonLabels := PGBackRestLabels(clusterName) + operatorConfigLabels := map[string]string{ + LabelPGBackRestDedicated: "", + } + return labels.Merge(commonLabels, operatorConfigLabels) +} + +// PGBackRestDedicatedSelector provides a selector for querying pgBackRest dedicated +// repository host resources +func PGBackRestDedicatedSelector(clusterName string) labels.Selector { + return PGBackRestDedicatedLabels(clusterName).AsSelector() +} + +// PGBackRestRepoVolumeLabels the labels for a pgBackRest repository volume. 
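+// For example, for cluster "hippo" and repository "repo1", the returned set includes the
+// cluster, pgBackRest, repo, repo-volume, and data labels.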
+func PGBackRestRepoVolumeLabels(clusterName, repoName string) labels.Set { + repoLabels := PGBackRestRepoLabels(clusterName, repoName) + repoVolLabels := map[string]string{ + LabelPGBackRestRepoVolume: "", + LabelData: DataPGBackRest, + } + return labels.Merge(repoLabels, repoVolLabels) +} + +// StandalonePGAdminLabels return labels for standalone pgAdmin resources +func StandalonePGAdminLabels(pgAdminName string) labels.Set { + return map[string]string{ + LabelStandalonePGAdmin: pgAdminName, + LabelRole: RolePGAdmin, + } +} + +// StandalonePGAdminSelector provides a selector for standalone pgAdmin resources +func StandalonePGAdminSelector(pgAdminName string) labels.Selector { + return StandalonePGAdminLabels(pgAdminName).AsSelector() +} + +// StandalonePGAdminDataLabels returns the labels for standalone pgAdmin resources +// that contain or mount data +func StandalonePGAdminDataLabels(pgAdminName string) labels.Set { + return labels.Merge( + StandalonePGAdminLabels(pgAdminName), + map[string]string{ + LabelData: DataPGAdmin, + }, + ) +} + +// StandalonePGAdminDataSelector returns a selector for standalone pgAdmin resources +// that contain or mount data +func StandalonePGAdminDataSelector(pgAdmiName string) labels.Selector { + return StandalonePGAdminDataLabels(pgAdmiName).AsSelector() +} diff --git a/internal/naming/labels_test.go b/internal/naming/labels_test.go new file mode 100644 index 0000000000..b8a7779858 --- /dev/null +++ b/internal/naming/labels_test.go @@ -0,0 +1,231 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +import ( + "testing" + + "gotest.tools/v3/assert" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/util/validation" +) + +func TestLabelsValid(t *testing.T) { + assert.Assert(t, nil == validation.IsQualifiedName(LabelCluster)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelData)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelInstance)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelInstanceSet)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelMoveJob)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelMovePGBackRestRepoDir)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelMovePGDataDir)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelMovePGWalDir)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPatroni)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelRole)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPGBackRest)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPGBackRestBackup)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPGBackRestConfig)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPGBackRestDedicated)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPGBackRestRepo)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPGBackRestRepoVolume)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPGBackRestRestore)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPGBackRestRestoreConfig)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPGMonitorDiscovery)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelPostgresUser)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelStandalonePGAdmin)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelStartupInstance)) + assert.Assert(t, nil == validation.IsQualifiedName(LabelCrunchyBridgeClusterPostgresRole)) +} + +func 
TestLabelValuesValid(t *testing.T) { + assert.Assert(t, nil == validation.IsValidLabelValue(DataPGAdmin)) + assert.Assert(t, nil == validation.IsValidLabelValue(DataPGBackRest)) + assert.Assert(t, nil == validation.IsValidLabelValue(DataPostgres)) + assert.Assert(t, nil == validation.IsValidLabelValue(RolePatroniLeader)) + assert.Assert(t, nil == validation.IsValidLabelValue(RolePatroniReplica)) + assert.Assert(t, nil == validation.IsValidLabelValue(RolePGAdmin)) + assert.Assert(t, nil == validation.IsValidLabelValue(RolePGBouncer)) + assert.Assert(t, nil == validation.IsValidLabelValue(RolePostgresData)) + assert.Assert(t, nil == validation.IsValidLabelValue(RolePostgresUser)) + assert.Assert(t, nil == validation.IsValidLabelValue(RolePostgresWAL)) + assert.Assert(t, nil == validation.IsValidLabelValue(RolePrimary)) + assert.Assert(t, nil == validation.IsValidLabelValue(RoleReplica)) + assert.Assert(t, nil == validation.IsValidLabelValue(string(BackupManual))) + assert.Assert(t, nil == validation.IsValidLabelValue(string(BackupReplicaCreate))) + assert.Assert(t, nil == validation.IsValidLabelValue(string(BackupScheduled))) + assert.Assert(t, nil == validation.IsValidLabelValue(RoleMonitoring)) + assert.Assert(t, nil == validation.IsValidLabelValue(RoleCrunchyBridgeClusterPostgresRole)) +} + +func TestMerge(t *testing.T) { + for _, test := range []struct { + name string + sets []map[string]string + expect labels.Set + }{{ + name: "no sets", + sets: []map[string]string{}, + expect: labels.Set{}, + }, { + name: "nil map", + sets: []map[string]string{ + map[string]string(nil), + }, + expect: labels.Set{}, + }, { + name: "has empty sets", + sets: []map[string]string{ + {"label.one": "one"}, + {}, + }, + expect: labels.Set{ + "label.one": "one", + }, + }, { + name: "two sets with no overlap", + sets: []map[string]string{ + {"label.one": "one"}, + {"label.two": "two"}, + }, + expect: labels.Set{ + "label.one": "one", + "label.two": "two", + }, + }, { + name: "two sets with overlap", + sets: []map[string]string{ + {LabelCluster: "bad", "label.one": "one"}, + {LabelCluster: "good", "label.two": "two"}, + }, + expect: labels.Set{ + "label.one": "one", + "label.two": "two", + LabelCluster: "good", + }, + }, { + name: "three sets with no overlap", + sets: []map[string]string{ + {"label.one": "one"}, + {"label.two": "two"}, + {"label.three": "three"}, + }, + expect: labels.Set{ + "label.one": "one", + "label.two": "two", + "label.three": "three", + }, + }, { + name: "three sets with overlap", + sets: []map[string]string{ + {LabelCluster: "bad-one", "label.one": "one"}, + {LabelCluster: "bad-two", "label.two": "two"}, + {LabelCluster: "good", "label.three": "three"}, + }, + expect: labels.Set{ + "label.one": "one", + "label.two": "two", + "label.three": "three", + LabelCluster: "good", + }, + }} { + t.Run(test.name, func(t *testing.T) { + merged := Merge(test.sets...) 
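+ // Later sets win on conflicting keys, per the Merge contract above.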
+ assert.DeepEqual(t, merged, test.expect) + }) + } +} + +// validate various functions that return pgBackRest labels +func TestPGBackRestLabelFuncs(t *testing.T) { + + clusterName := "hippo" + repoName := "hippo-repo" + + // verify the labels that identify pgBackRest resources + pgBackRestLabels := PGBackRestLabels(clusterName) + assert.Equal(t, pgBackRestLabels.Get(LabelCluster), clusterName) + assert.Check(t, pgBackRestLabels.Has(LabelPGBackRest)) + + // verify that the labels selector is created as expected + pgBackRestSelector := PGBackRestSelector(clusterName) + assert.Check(t, pgBackRestSelector.Matches(pgBackRestLabels)) + + // verify the labels that identify pgBackRest backup resources + pgBackRestReplicaBackupLabels := PGBackRestBackupJobLabels(clusterName, repoName, + BackupReplicaCreate) + assert.Equal(t, pgBackRestReplicaBackupLabels.Get(LabelCluster), clusterName) + assert.Check(t, pgBackRestReplicaBackupLabels.Has(LabelPGBackRest)) + assert.Equal(t, pgBackRestReplicaBackupLabels.Get(LabelPGBackRestRepo), repoName) + assert.Equal(t, pgBackRestReplicaBackupLabels.Get(LabelPGBackRestBackup), + string(BackupReplicaCreate)) + + // verify the pgBackRest label selector function + // PGBackRestBackupJobSelector + pgBackRestBackupJobSelector := PGBackRestBackupJobSelector(clusterName, repoName, + BackupReplicaCreate) + assert.Check(t, pgBackRestBackupJobSelector.Matches(pgBackRestReplicaBackupLabels)) + + // verify the labels that identify pgBackRest repo resources + pgBackRestRepoLabels := PGBackRestRepoLabels(clusterName, repoName) + assert.Equal(t, pgBackRestRepoLabels.Get(LabelCluster), clusterName) + assert.Check(t, pgBackRestRepoLabels.Has(LabelPGBackRest)) + assert.Equal(t, pgBackRestRepoLabels.Get(LabelPGBackRestRepo), repoName) + + // verify the labels that identify pgBackRest configuration resources + pgBackRestConfigLabels := PGBackRestConfigLabels(clusterName) + assert.Equal(t, pgBackRestConfigLabels.Get(LabelCluster), clusterName) + assert.Check(t, pgBackRestConfigLabels.Has(LabelPGBackRest)) + assert.Check(t, pgBackRestConfigLabels.Has(LabelPGBackRestConfig)) + + // verify the labels that identify pgBackRest repo resources + pgBackRestCronJobLabels := PGBackRestCronJobLabels(clusterName, repoName, + "testBackupType") + assert.Equal(t, pgBackRestCronJobLabels.Get(LabelCluster), clusterName) + assert.Check(t, pgBackRestCronJobLabels.Has(LabelPGBackRest)) + assert.Equal(t, pgBackRestCronJobLabels.Get(LabelPGBackRestRepo), repoName) + assert.Equal(t, pgBackRestCronJobLabels.Get(LabelPGBackRestBackup), string(BackupScheduled)) + + // verify the labels that identify pgBackRest dedicated repository host resources + pgBackRestDedicatedLabels := PGBackRestDedicatedLabels(clusterName) + assert.Equal(t, pgBackRestDedicatedLabels.Get(LabelCluster), clusterName) + assert.Check(t, pgBackRestDedicatedLabels.Has(LabelPGBackRest)) + assert.Check(t, pgBackRestDedicatedLabels.Has(LabelPGBackRestDedicated)) + + // verify that the dedicated labels selector is created as expected + pgBackRestDedicatedSelector := PGBackRestDedicatedSelector(clusterName) + assert.Check(t, pgBackRestDedicatedSelector.Matches(pgBackRestDedicatedLabels)) + + // verify the labels that identify pgBackRest repository volume resources + pgBackRestRepoVolumeLabels := PGBackRestRepoVolumeLabels(clusterName, repoName) + assert.Equal(t, pgBackRestRepoVolumeLabels.Get(LabelCluster), clusterName) + assert.Check(t, pgBackRestRepoVolumeLabels.Has(LabelPGBackRest)) + assert.Equal(t, 
pgBackRestRepoVolumeLabels.Get(LabelPGBackRestRepo), repoName) + assert.Check(t, pgBackRestRepoVolumeLabels.Has(LabelPGBackRestRepoVolume)) + + // verify the labels that identify pgBackRest repository volume resources + pgBackRestRestoreJobLabels := PGBackRestRestoreJobLabels(clusterName) + assert.Equal(t, pgBackRestRestoreJobLabels.Get(LabelCluster), clusterName) + assert.Check(t, pgBackRestRestoreJobLabels.Has(LabelPGBackRest)) + assert.Check(t, pgBackRestRestoreJobLabels.Has(LabelPGBackRestRestore)) + + // verify the labels that identify pgBackRest restore configuration resources + pgBackRestRestoreConfigLabels := PGBackRestRestoreConfigLabels(clusterName) + assert.Equal(t, pgBackRestRestoreConfigLabels.Get(LabelCluster), clusterName) + assert.Check(t, pgBackRestRestoreConfigLabels.Has(LabelPGBackRest)) + assert.Check(t, pgBackRestRestoreConfigLabels.Has(LabelPGBackRestRestoreConfig)) + + pgBackRestRestoreConfigSelector := PGBackRestRestoreConfigSelector(clusterName) + assert.Check(t, pgBackRestRestoreConfigSelector.Matches(pgBackRestRestoreConfigLabels)) +} + +// validate the DirectoryMoveJobLabels function +func TestMoveJobLabelFunc(t *testing.T) { + + clusterName := "hippo" + + // verify the labels that identify directory move jobs + dirMoveJobLabels := DirectoryMoveJobLabels(clusterName) + assert.Equal(t, dirMoveJobLabels.Get(LabelCluster), clusterName) + assert.Check(t, dirMoveJobLabels.Has(LabelMoveJob)) +} diff --git a/internal/naming/limitations.md b/internal/naming/limitations.md new file mode 100644 index 0000000000..ba607215f7 --- /dev/null +++ b/internal/naming/limitations.md @@ -0,0 +1,105 @@ + + +# Definitions + +[k8s-names]: https://docs.k8s.io/concepts/overview/working-with-objects/names/ + +### DNS subdomain + +Most resource types require this kind of name. It must be 253 characters or less, +lowercase, and alphanumeric with hyphens U+002D and dots U+002E allowed in between. + +- [k8s.io/apimachinery/pkg/util/validation.IsDNS1123Subdomain](https://pkg.go.dev/k8s.io/apimachinery/pkg/util/validation#IsDNS1123Subdomain) + +### DNS label + +Some resource types require this kind of name. It must be 63 characters or less, +lowercase, and alphanumeric with hyphens U+002D allowed in between. + +Some have a stricter requirement to start with an alphabetic (nonnumerical) character. + +- [k8s.io/apimachinery/pkg/util/validation.IsDNS1123Label](https://pkg.go.dev/k8s.io/apimachinery/pkg/util/validation#IsDNS1123Label) +- [k8s.io/apimachinery/pkg/util/validation.IsDNS1035Label](https://pkg.go.dev/k8s.io/apimachinery/pkg/util/validation#IsDNS1035Label) + + +# Labels + +[k8s-labels]: https://docs.k8s.io/concepts/overview/working-with-objects/labels/ + +Label names must be 317 characters or less. The portion before an optional slash U+002F +must be a DNS subdomain. The portion after must be 63 characters or less. + +Label values must be 63 characters or less and can be empty. + +Both label names and values must be alphanumeric with hyphens U+002D, underscores U+005F, +and dots U+002E allowed in between. + +- [k8s.io/apimachinerypkg/util/validation.IsQualifiedName](https://pkg.go.dev/k8s.io/apimachinery/pkg/util/validation#IsQualifiedName) +- [k8s.io/apimachinerypkg/util/validation.IsValidLabelValue](https://pkg.go.dev/k8s.io/apimachinery/pkg/util/validation#IsValidLabelValue) + + +# Annotations + +[k8s-annotations]: https://docs.k8s.io/concepts/overview/working-with-objects/annotations/ + +Annotation names must be 317 characters or less. 
The portion before an optional slash U+002F +must be a DNS subdomain. The portion after must be 63 characters or less and alphanumeric with +hyphens U+002D, underscores U+005F, and dots U+002E allowed in between. + +Annotation values may contain anything, but the combined size of *all* names and values +must be 256 KiB or less. + +- [https://pkg.go.dev/k8s.io/apimachinery/pkg/api/validation.ValidateAnnotations](https://pkg.go.dev/k8s.io/apimachinery/pkg/api/validation#ValidateAnnotations) + + +# Specifics + +The Kubernetes API validates custom resource metadata. +[Custom resource names are DNS subdomains](https://releases.k8s.io/v1.23.0/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/validator.go#L60). +It may be possible to limit this further through validation. This is a stated +goal of [CEL expression validation](https://docs.k8s.io/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules). + +[ConfigMap names are DNS subdomains](https://releases.k8s.io/v1.23.0/pkg/apis/core/validation/validation.go#L5618). + +[CronJob names are DNS subdomains](https://docs.k8s.io/concepts/workloads/controllers/cron-jobs/) +but must be [52 characters or less](https://releases.k8s.io/v1.23.0/pkg/apis/batch/validation/validation.go#L281). + +[Deployment names are DNS subdomains](https://releases.k8s.io/v1.23.0/pkg/apis/apps/validation/validation.go#L632). + +[Job names are DNS subdomains](https://releases.k8s.io/v1.23.0/pkg/apis/batch/validation/validation.go#L86). +When `.spec.completionMode = Indexed`, the name must be shorter (closer to 61 characters, it depends). +When `.spec.manualSelector` is unset, its Pods get (and must have) a "job-name" label, limiting the +name to 63 characters or less. + +[Namespace names are DNS labels](https://releases.k8s.io/v1.23.0/pkg/apis/core/validation/validation.go#L5963). + +[PersistentVolumeClaim (PVC) names are DNS subdomains](https://releases.k8s.io/v1.23.0/pkg/apis/core/validation/validation.go#L2066). + +[Pod names are DNS subdomains](https://releases.k8s.io/v1.23.0/pkg/apis/core/validation/validation.go#L3443). +The strategy for [generating Pod names](https://releases.k8s.io/v1.23.0/pkg/registry/core/pod/strategy.go#L62) truncates to 63 characters. +The `.spec.hostname` field must be 63 characters or less. + +PodDisruptionBudget (PDB) + +[ReplicaSet names are DNS subdomains](https://releases.k8s.io/v1.23.0/pkg/apis/apps/validation/validation.go#L655). + +Role + +RoleBinding + +[Secret names are DNS subdomains](https://releases.k8s.io/v1.23.0/pkg/apis/core/validation/validation.go#L5515). + +[Service names are DNS labels](https://docs.k8s.io/concepts/services-networking/service/) +that must begin with a letter. + +ServiceAccount (subdomain) + +[StatefulSet names are DNS subdomains](https://docs.k8s.io/concepts/workloads/controllers/statefulset/), +but its Pods get [hostnames](https://releases.k8s.io/v1.23.0/pkg/apis/core/validation/validation.go#L3561) +so it must be shorter (closer to 61 characters, it depends). Its Pods also get a "controller-revision-hash" +label with [11 characters appended](https://issue.k8s.io/64023), limiting the name to 52 characters or less. + diff --git a/internal/naming/names.go b/internal/naming/names.go new file mode 100644 index 0000000000..369591de91 --- /dev/null +++ b/internal/naming/names.go @@ -0,0 +1,593 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +import ( + "fmt" + "hash/fnv" + + appsv1 "k8s.io/api/apps/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/rand" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // ContainerDatabase is the name of the container running PostgreSQL and + // supporting tools: Patroni, pgBackRest, etc. + ContainerDatabase = "database" + + // ContainerPGAdmin is the name of a container running pgAdmin. + ContainerPGAdmin = "pgadmin" + + // ContainerPGAdminStartup is the name of the initialization container + // that prepares the filesystem for pgAdmin. + ContainerPGAdminStartup = "pgadmin-startup" + + // ContainerPGBackRestConfig is the name of a container supporting pgBackRest. + ContainerPGBackRestConfig = "pgbackrest-config" + + // ContainerPGBouncer is the name of a container running PgBouncer. + ContainerPGBouncer = "pgbouncer" + // ContainerPGBouncerConfig is the name of a container supporting PgBouncer. + ContainerPGBouncerConfig = "pgbouncer-config" + + // ContainerPostgresStartup is the name of the initialization container + // that prepares the filesystem for PostgreSQL. + ContainerPostgresStartup = "postgres-startup" + + // ContainerClientCertCopy is the name of the container that is responsible for copying and + // setting proper permissions on the client certificate and key after initialization whenever + // there is a change in the certificates or key + ContainerClientCertCopy = "replication-cert-copy" + // ContainerNSSWrapperInit is the name of the init container utilized to configure support + // for the nss_wrapper + ContainerNSSWrapperInit = "nss-wrapper-init" + + // ContainerPGBackRestLogDirInit is the name of the init container utilized to make + // a pgBackRest log directory when using a dedicated repo host. + ContainerPGBackRestLogDirInit = "pgbackrest-log-dir" + + // ContainerPGMonitorExporter is the name of a container running postgres_exporter + ContainerPGMonitorExporter = "exporter" + + // ContainerJobMovePGDataDir is the name of the job container utilized to copy v4 Operator + // pgData directories to the v5 default location + ContainerJobMovePGDataDir = "pgdata-move-job" + // ContainerJobMovePGWALDir is the name of the job container utilized to copy v4 Operator + // pg_wal directories to the v5 default location + ContainerJobMovePGWALDir = "pgwal-move-job" + // ContainerJobMovePGBackRestRepoDir is the name of the job container utilized to copy v4 + // Operator pgBackRest repo directories to the v5 default location + ContainerJobMovePGBackRestRepoDir = "repo-move-job" +) + +const ( + // PortExporter is the named port for the "exporter" container + PortExporter = "exporter" + // PortPGAdmin is the name of a port that connects to pgAdmin. + PortPGAdmin = "pgadmin" + // PortPGBouncer is the name of a port that connects to PgBouncer. + PortPGBouncer = "pgbouncer" + // PortPostgreSQL is the name of a port that connects to PostgreSQL. 
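+ // Named ports let Services and probes refer to a container port by name rather than number.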
+ PortPostgreSQL = "postgres" +) + +const ( + // RootCertSecret is the default root certificate secret name + RootCertSecret = "pgo-root-cacert" /* #nosec */ + // ClusterCertSecret is the default cluster leaf certificate secret name + ClusterCertSecret = "%s-cluster-cert" /* #nosec */ +) + +const ( + // CertVolume is the name of the Certificate volume and volume mount in a + // PostgreSQL instance Pod + CertVolume = "cert-volume" + + // CertMountPath is the path for mounting the postgrescluster certificates + // and key + CertMountPath = "/pgconf/tls" + + // ReplicationDirectory is the directory at CertMountPath where the replication + // certificates and key are mounted + ReplicationDirectory = "/replication" + + // ReplicationTmp is the directory where the replication certificates and key can + // have the proper permissions set due to: + // https://github.com/kubernetes/kubernetes/issues/57923 + ReplicationTmp = "/tmp/replication" + + // ReplicationCert is the secret key to the postgrescluster's + // replication/rewind user's client certificate + ReplicationCert = "tls.crt" + + // ReplicationCertPath is the path to the postgrescluster's replication/rewind + // user's client certificate + ReplicationCertPath = "replication/tls.crt" + + // ReplicationPrivateKey is the secret key to the postgrescluster's + // replication/rewind user's client private key + ReplicationPrivateKey = "tls.key" + + // ReplicationPrivateKeyPath is the path to the postgrescluster's + // replication/rewind user's client private key + ReplicationPrivateKeyPath = "replication/tls.key" + + // ReplicationCACert is the key name of the postgrescluster's replication/rewind + // user's client CA certificate + // Note: when using auto-generated certificates, this will be identical to the + // server CA cert + ReplicationCACert = "ca.crt" + + // ReplicationCACertPath is the path to the postgrescluster's replication/rewind + // user's client CA certificate + ReplicationCACertPath = "replication/ca.crt" +) + +const ( + // PGBackRestRepoContainerName is the name assigned to the container used to run pgBackRest + PGBackRestRepoContainerName = "pgbackrest" + + // PGBackRestRestoreContainerName is the name assigned to the container used to run pgBackRest + // restores + PGBackRestRestoreContainerName = "pgbackrest-restore" + + // PGBackRestRepoName is the name used for a pgbackrest repository + PGBackRestRepoName = "%s-pgbackrest-repo-%s" + + // PGBackRestPGDataLogPath is the pgBackRest default log path configuration used by the + // PostgreSQL instance. + PGBackRestPGDataLogPath = "/pgdata/pgbackrest/log" + + // PGBackRestRepoLogPath is the pgBackRest default log path configuration used by the + // dedicated repo host, if configured. + PGBackRestRepoLogPath = "/pgbackrest/%s/log" + + // suffix used with postgrescluster name for associated configmap. + // for instance, if the cluster is named 'mycluster', the + // configmap will be named 'mycluster-pgbackrest-config' + cmNameSuffix = "%s-pgbackrest-config" + + // suffix used with postgrescluster name for associated configmap. + // for instance, if the cluster is named 'mycluster', the + // configmap will be named 'mycluster-ssh-config' + // Deprecated: Repository hosts use mTLS for encryption, authentication, and authorization. + // TODO(tjmoore4): Once we no longer need this for cleanup purposes, this should be removed. + sshCMNameSuffix = "%s-ssh-config" + + // suffix used with postgrescluster name for associated secret. 
+ // for instance, if the cluster is named 'mycluster', the + // secret will be named 'mycluster-ssh' + // Deprecated: Repository hosts use mTLS for encryption, authentication, and authorization. + // TODO(tjmoore4): Once we no longer need this for cleanup purposes, this should be removed. + sshSecretNameSuffix = "%s-ssh" + + // RestoreConfigCopySuffix is the suffix used for ConfigMap or Secret configuration + // resources needed when restoring from a PostgresCluster data source. If, for + // example, a Secret is named 'mysecret' and is the first item in the configuration + // slice, the copied Secret will be named 'mysecret-restorecopy-0' + RestoreConfigCopySuffix = "%s-restorecopy-%d" +) + +// AsObjectKey converts the ObjectMeta API type to a client.ObjectKey. +// When you have a client.Object, use client.ObjectKeyFromObject() instead. +func AsObjectKey(m metav1.ObjectMeta) client.ObjectKey { + return client.ObjectKey{Namespace: m.Namespace, Name: m.Name} +} + +// ClusterConfigMap returns the ObjectMeta necessary to lookup +// cluster's shared ConfigMap. +func ClusterConfigMap(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-config", + } +} + +// ClusterInstanceRBAC returns the ObjectMeta necessary to lookup the +// ServiceAccount, Role, and RoleBinding for cluster's PostgreSQL instances. +func ClusterInstanceRBAC(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-instance", + } +} + +// ClusterPGAdmin returns the ObjectMeta necessary to lookup the ConfigMap, +// Service, StatefulSet, or Volume for the cluster's pgAdmin user interface. +func ClusterPGAdmin(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-pgadmin", + } +} + +// ClusterPGBouncer returns the ObjectMeta necessary to lookup the ConfigMap, +// Deployment, Secret, PodDisruptionBudget or Service that is cluster's +// PgBouncer proxy. +func ClusterPGBouncer(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-pgbouncer", + } +} + +// ClusterPodService returns the ObjectMeta necessary to lookup the Service +// that is responsible for the network identity of Pods. +func ClusterPodService(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + // The hyphen below ensures that the DNS name will not be interpreted as a + // top-level domain. Partially qualified requests for "{pod}.{cluster}-pods" + // should not leave the Kubernetes cluster, and if they do they are less + // likely to resolve. + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-pods", + } +} + +// ClusterPrimaryService returns the ObjectMeta necessary to lookup the Service +// that exposes the PostgreSQL primary instance. +func ClusterPrimaryService(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-primary", + } +} + +// ClusterReplicaService returns the ObjectMeta necessary to lookup the Service +// that exposes PostgreSQL replica instances. 
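+// For example, a cluster named "hippo" has the replica Service "hippo-replicas".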
+func ClusterReplicaService(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-replicas", + } +} + +// ClusterDedicatedSnapshotVolume returns the ObjectMeta for the dedicated Snapshot +// volume for a cluster. +func ClusterDedicatedSnapshotVolume(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: cluster.GetName() + "-snapshot", + } +} + +// ClusterVolumeSnapshot returns the ObjectMeta, including a random name, for a +// new pgdata VolumeSnapshot. +func ClusterVolumeSnapshot(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-pgdata-snapshot-" + rand.String(4), + } +} + +// GenerateInstance returns a random name for a member of cluster and set. +func GenerateInstance( + cluster *v1beta1.PostgresCluster, set *v1beta1.PostgresInstanceSetSpec, +) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-" + set.Name + "-" + rand.String(4), + } +} + +// GenerateStartupInstance returns a stable name that's shaped like +// GenerateInstance above. The stable name is based on a four character +// hash of the cluster name and instance set name +func GenerateStartupInstance( + cluster *v1beta1.PostgresCluster, set *v1beta1.PostgresInstanceSetSpec, +) metav1.ObjectMeta { + // Calculate a stable name that's shaped like GenerateInstance above. + // hash.Hash.Write never returns an error: https://pkg.go.dev/hash#Hash. + hash := fnv.New32() + _, _ = hash.Write([]byte(cluster.Name + set.Name)) + suffix := rand.SafeEncodeString(fmt.Sprint(hash.Sum32()))[:4] + + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-" + set.Name + "-" + suffix, + } +} + +// InstanceConfigMap returns the ObjectMeta necessary to lookup +// instance's shared ConfigMap. +func InstanceConfigMap(instance metav1.Object) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: instance.GetNamespace(), + Name: instance.GetName() + "-config", + } +} + +// InstanceCertificates returns the ObjectMeta necessary to lookup the Secret +// containing instance's certificates. +func InstanceCertificates(instance metav1.Object) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: instance.GetNamespace(), + Name: instance.GetName() + "-certs", + } +} + +// InstanceSet returns the ObjectMeta necessary to lookup the objects +// associated with a single instance set. Includes PodDisruptionBudgets +func InstanceSet(cluster *v1beta1.PostgresCluster, + set *v1beta1.PostgresInstanceSetSpec) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Name: cluster.Name + "-set-" + set.Name, + Namespace: cluster.Namespace, + } +} + +// InstancePostgresDataVolume returns the ObjectMeta for the PostgreSQL data +// volume for instance. +func InstancePostgresDataVolume(instance *appsv1.StatefulSet) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: instance.GetNamespace(), + Name: instance.GetName() + "-pgdata", + } +} + +// InstanceTablespaceDataVolume returns the ObjectMeta for the tablespace data +// volume for instance. 
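+// For example, an instance named "hippo-instance1-abcd" with a tablespace named
+// "books" uses the PVC "hippo-instance1-abcd-books-tablespace".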
+func InstanceTablespaceDataVolume(instance *appsv1.StatefulSet, tablespaceName string) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: instance.GetNamespace(), + Name: instance.GetName() + + "-" + tablespaceName + + "-tablespace", + } +} + +// InstancePostgresWALVolume returns the ObjectMeta for the PostgreSQL WAL +// volume for instance. +func InstancePostgresWALVolume(instance *appsv1.StatefulSet) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: instance.GetNamespace(), + Name: instance.GetName() + "-pgwal", + } +} + +// MonitoringUserSecret returns ObjectMeta necessary to lookup the Secret +// containing authentication credentials for monitoring tools. +func MonitoringUserSecret(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-monitoring", + } +} + +// ExporterWebConfigMap returns ObjectMeta necessary to lookup and create the +// exporter web configmap. This configmap is used to configure the exporter +// web server. +func ExporterWebConfigMap(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-exporter-web-config", + } +} + +// ExporterQueriesConfigMap returns ObjectMeta necessary to lookup and create the +// exporter queries configmap. This configmap is used to pass the default queries +// to the exporter. +func ExporterQueriesConfigMap(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-exporter-queries-config", + } +} + +// OperatorConfigurationSecret returns the ObjectMeta necessary to lookup the +// Secret containing PGO configuration. +func OperatorConfigurationSecret() metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: config.PGONamespace(), + Name: "pgo-config", + } +} + +// ReplicationClientCertSecret returns ObjectMeta necessary to lookup the Secret +// containing the Patroni client authentication certificate information. +func ReplicationClientCertSecret(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-replication-cert", + } +} + +// PatroniDistributedConfiguration returns the ObjectMeta necessary to lookup +// the DCS created by Patroni for cluster. This same name is used for both +// ConfigMap and Endpoints. See Patroni DCS "config_path". +func PatroniDistributedConfiguration(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: PatroniScope(cluster) + "-config", + } +} + +// PatroniLeaderConfigMap returns the ObjectMeta necessary to lookup the +// ConfigMap created by Patroni for the leader election of cluster. +// See Patroni DCS "leader_path". +func PatroniLeaderConfigMap(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: PatroniScope(cluster) + "-leader", + } +} + +// PatroniLeaderEndpoints returns the ObjectMeta necessary to lookup the +// Endpoints created by Patroni for the leader election of cluster. +// See Patroni DCS "leader_path". +func PatroniLeaderEndpoints(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: PatroniScope(cluster), + } +} + +// PatroniScope returns the "scope" Patroni uses for cluster. 
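+// For example, a cluster named "hippo" has the Patroni scope "hippo-ha".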
+func PatroniScope(cluster *v1beta1.PostgresCluster) string { + return cluster.Name + "-ha" +} + +// PatroniTrigger returns the ObjectMeta necessary to lookup the ConfigMap or +// Endpoints Patroni creates for cluster to initiate a controlled change of the +// leader. See Patroni DCS "failover_path". +func PatroniTrigger(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: PatroniScope(cluster) + "-failover", + } +} + +// PGBackRestConfig returns the ObjectMeta for a pgBackRest ConfigMap +func PGBackRestConfig(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: fmt.Sprintf(cmNameSuffix, cluster.GetName()), + } +} + +// PGBackRestBackupJob returns the ObjectMeta for the pgBackRest backup Job utilized +// to create replicas using pgBackRest +func PGBackRestBackupJob(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Name: cluster.GetName() + "-backup-" + rand.String(4), + Namespace: cluster.GetNamespace(), + } +} + +// PGBackRestCronJob returns the ObjectMeta for a pgBackRest CronJob +func PGBackRestCronJob(cluster *v1beta1.PostgresCluster, backuptype, repoName string) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: cluster.Name + "-" + repoName + "-" + backuptype, + } +} + +// PGBackRestRestoreJob returns the ObjectMeta for a pgBackRest restore Job +func PGBackRestRestoreJob(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: cluster.Name + "-pgbackrest-restore", + } +} + +// PGBackRestRBAC returns the ObjectMeta necessary to lookup the ServiceAccount, Role, and +// RoleBinding for pgBackRest Jobs +func PGBackRestRBAC(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-pgbackrest", + } +} + +// PGBackRestRepoVolume returns the ObjectMeta for a pgBackRest repository volume +func PGBackRestRepoVolume(cluster *v1beta1.PostgresCluster, + repoName string) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Name: fmt.Sprintf("%s-%s", cluster.GetName(), repoName), + Namespace: cluster.GetNamespace(), + } +} + +// PGBackRestSSHConfig returns the ObjectMeta for a pgBackRest SSHD ConfigMap +// Deprecated: Repository hosts use mTLS for encryption, authentication, and authorization. +// TODO(tjmoore4): Once we no longer need this for cleanup purposes, this should be removed. +func PGBackRestSSHConfig(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Name: fmt.Sprintf(sshCMNameSuffix, cluster.GetName()), + Namespace: cluster.GetNamespace(), + } +} + +// PGBackRestSSHSecret returns the ObjectMeta for a pgBackRest SSHD Secret +// Deprecated: Repository hosts use mTLS for encryption, authentication, and authorization. +// TODO(tjmoore4): Once we no longer need this for cleanup purposes, this should be removed. 
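+// For example, a cluster named "hippo" would have the (deprecated) Secret "hippo-ssh".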
+func PGBackRestSSHSecret(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Name: fmt.Sprintf(sshSecretNameSuffix, cluster.GetName()), + Namespace: cluster.GetNamespace(), + } +} + +// PGBackRestSecret returns the ObjectMeta for a pgBackRest Secret +func PGBackRestSecret(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Name: cluster.GetName() + "-pgbackrest", + Namespace: cluster.GetNamespace(), + } +} + +// DeprecatedPostgresUserSecret returns the ObjectMeta necessary to lookup the +// old Secret containing the default Postgres user and connection information. +// Use PostgresUserSecret instead. +func DeprecatedPostgresUserSecret(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-pguser", + } +} + +// PostgresUserSecret returns the ObjectMeta necessary to lookup a Secret +// containing a PostgreSQL user and its connection information. +func PostgresUserSecret(cluster *v1beta1.PostgresCluster, username string) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-pguser-" + username, + } +} + +// PostgresTLSSecret returns the ObjectMeta necessary to lookup the Secret +// containing the default Postgres TLS certificates and key +func PostgresTLSSecret(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.Namespace, + Name: cluster.Name + "-cluster-cert", + } +} + +// MovePGDataDirJob returns the ObjectMeta for a pgData directory move Job +func MovePGDataDirJob(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: cluster.Name + "-move-pgdata-dir", + } +} + +// MovePGWALDirJob returns the ObjectMeta for a pg_wal directory move Job +func MovePGWALDirJob(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: cluster.Name + "-move-pgwal-dir", + } +} + +// MovePGBackRestRepoDirJob returns the ObjectMeta for a pgBackRest repo directory move Job +func MovePGBackRestRepoDirJob(cluster *v1beta1.PostgresCluster) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: cluster.GetNamespace(), + Name: cluster.Name + "-move-pgbackrest-repo-dir", + } +} + +// StandalonePGAdmin returns the ObjectMeta necessary to lookup the ConfigMap, +// Service, StatefulSet, or Volume for the cluster's pgAdmin user interface. +func StandalonePGAdmin(pgadmin *v1beta1.PGAdmin) metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: pgadmin.Namespace, + Name: fmt.Sprintf("pgadmin-%s", pgadmin.UID), + } +} + +// UpgradeCheckConfigMap returns the ObjectMeta for the PGO ConfigMap +func UpgradeCheckConfigMap() metav1.ObjectMeta { + return metav1.ObjectMeta{ + Namespace: config.PGONamespace(), + Name: "pgo-upgrade-check", + } +} diff --git a/internal/naming/names_test.go b/internal/naming/names_test.go new file mode 100644 index 0000000000..27835c3e5d --- /dev/null +++ b/internal/naming/names_test.go @@ -0,0 +1,327 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +import ( + "strings" + "testing" + + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/sets" + "k8s.io/apimachinery/pkg/util/validation" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestAsObjectKey(t *testing.T) { + assert.Equal(t, AsObjectKey( + metav1.ObjectMeta{Namespace: "ns1", Name: "thing"}), + client.ObjectKey{Namespace: "ns1", Name: "thing"}) +} + +func TestContainerNamesUniqueAndValid(t *testing.T) { + // Container names have to be unique within a Pod. The number of containers + // we deploy should be few enough that we can name them uniquely across all + // pods. + // - https://docs.k8s.io/reference/kubernetes-api/workload-resources/pod-v1/ + + names := sets.NewString() + for _, name := range []string{ + ContainerDatabase, + ContainerNSSWrapperInit, + ContainerPGAdmin, + ContainerPGAdminStartup, + ContainerPGBackRestConfig, + ContainerPGBackRestLogDirInit, + ContainerPGBouncer, + ContainerPGBouncerConfig, + ContainerPostgresStartup, + ContainerPGMonitorExporter, + } { + assert.Assert(t, !names.Has(name), "%q defined already", name) + assert.Assert(t, nil == validation.IsDNS1123Label(name)) + names.Insert(name) + } +} + +func TestClusterNamesUniqueAndValid(t *testing.T) { + cluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "ns1", Name: "pg0", + }, + } + repoName := "hippo-repo" + instanceSet := &v1beta1.PostgresInstanceSetSpec{ + Name: "set-1", + } + + type test struct { + name string + value metav1.ObjectMeta + } + + testUniqueAndValid := func(t *testing.T, tests []test) sets.Set[string] { + names := sets.Set[string]{} + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + assert.Equal(t, tt.value.Namespace, cluster.Namespace) + assert.Assert(t, tt.value.Name != cluster.Name, "may collide") + assert.Assert(t, !names.Has(tt.value.Name), "%q defined already", tt.value.Name) + assert.Assert(t, nil == validation.IsDNS1123Label(tt.value.Name)) + names.Insert(tt.value.Name) + }) + } + return names + } + + t.Run("ConfigMaps", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"ClusterConfigMap", ClusterConfigMap(cluster)}, + {"ClusterPGAdmin", ClusterPGAdmin(cluster)}, + {"ClusterPGBouncer", ClusterPGBouncer(cluster)}, + {"PatroniDistributedConfiguration", PatroniDistributedConfiguration(cluster)}, + {"PatroniLeaderConfigMap", PatroniLeaderConfigMap(cluster)}, + {"PatroniTrigger", PatroniTrigger(cluster)}, + {"PGBackRestConfig", PGBackRestConfig(cluster)}, + {"PGBackRestSSHConfig", PGBackRestSSHConfig(cluster)}, + }) + }) + + t.Run("CronJobs", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"PGBackRestCronJon", PGBackRestCronJob(cluster, "full", "repo1")}, + {"PGBackRestCronJon", PGBackRestCronJob(cluster, "incr", "repo2")}, + {"PGBackRestCronJon", PGBackRestCronJob(cluster, "diff", "repo3")}, + {"PGBackRestCronJon", PGBackRestCronJob(cluster, "full", "repo4")}, + }) + }) + + t.Run("Deployments", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"ClusterPGBouncer", ClusterPGBouncer(cluster)}, + }) + }) + + t.Run("Jobs", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"PGBackRestBackupJob", PGBackRestBackupJob(cluster)}, + {"PGBackRestRestoreJob", PGBackRestRestoreJob(cluster)}, + }) + }) + + t.Run("PodDisruptionBudgets", func(t *testing.T) { + 
testUniqueAndValid(t, []test{ + {"InstanceSetPDB", InstanceSet(cluster, instanceSet)}, + {"PGBouncerPDB", ClusterPGBouncer(cluster)}, + }) + }) + + t.Run("RoleBindings", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"ClusterInstanceRBAC", ClusterInstanceRBAC(cluster)}, + {"PGBackRestRBAC", PGBackRestRBAC(cluster)}, + }) + }) + + t.Run("Roles", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"ClusterInstanceRBAC", ClusterInstanceRBAC(cluster)}, + {"PGBackRestRBAC", PGBackRestRBAC(cluster)}, + }) + }) + + t.Run("Secrets", func(t *testing.T) { + names := testUniqueAndValid(t, []test{ + {"ClusterPGBouncer", ClusterPGBouncer(cluster)}, + {"DeprecatedPostgresUserSecret", DeprecatedPostgresUserSecret(cluster)}, + {"PostgresTLSSecret", PostgresTLSSecret(cluster)}, + {"ReplicationClientCertSecret", ReplicationClientCertSecret(cluster)}, + {"PGBackRestSSHSecret", PGBackRestSSHSecret(cluster)}, + {"MonitoringUserSecret", MonitoringUserSecret(cluster)}, + }) + + // NOTE: This does not fail when a conflict is introduced. When adding a + // Secret, be sure to compare it to the function below. + t.Run("OperatorConfiguration", func(t *testing.T) { + other := OperatorConfigurationSecret().Name + assert.Assert(t, !names.Has(other), "%q defined already", other) + }) + + t.Run("PostgresUserSecret", func(t *testing.T) { + value := PostgresUserSecret(cluster, "some-user") + + assert.Equal(t, value.Namespace, cluster.Namespace) + assert.Assert(t, nil == validation.IsDNS1123Label(value.Name)) + + prefix := PostgresUserSecret(cluster, "").Name + for _, name := range sets.List(names) { + assert.Assert(t, !strings.HasPrefix(name, prefix), "%q may collide", name) + } + }) + }) + + t.Run("ServiceAccounts", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"ClusterInstanceRBAC", ClusterInstanceRBAC(cluster)}, + {"PGBackRestRBAC", PGBackRestRBAC(cluster)}, + }) + }) + + t.Run("Services", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"ClusterPGBouncer", ClusterPGBouncer(cluster)}, + {"ClusterPGAdmin", ClusterPGAdmin(cluster)}, + {"ClusterPodService", ClusterPodService(cluster)}, + {"ClusterPrimaryService", ClusterPrimaryService(cluster)}, + {"ClusterReplicaService", ClusterReplicaService(cluster)}, + // Patroni can use Endpoints which relate directly to a Service. 
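+ // Their names must therefore be unique among the Service names as well.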
+ {"PatroniDistributedConfiguration", PatroniDistributedConfiguration(cluster)}, + {"PatroniLeaderEndpoints", PatroniLeaderEndpoints(cluster)}, + {"PatroniTrigger", PatroniTrigger(cluster)}, + }) + }) + + t.Run("StatefulSets", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"ClusterPGAdmin", ClusterPGAdmin(cluster)}, + }) + }) + + t.Run("Volumes", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"ClusterPGAdmin", ClusterPGAdmin(cluster)}, + {"PGBackRestRepoVolume", PGBackRestRepoVolume(cluster, repoName)}, + }) + }) + + t.Run("VolumeSnapshots", func(t *testing.T) { + testUniqueAndValid(t, []test{ + {"ClusterVolumeSnapshot", ClusterVolumeSnapshot(cluster)}, + }) + }) +} + +func TestInstanceNamesUniqueAndValid(t *testing.T) { + instance := &appsv1.StatefulSet{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "ns", Name: "some-such", + }, + } + + type test struct { + name string + value metav1.ObjectMeta + } + + t.Run("ConfigMaps", func(t *testing.T) { + names := sets.NewString() + for _, tt := range []test{ + {"InstanceConfigMap", InstanceConfigMap(instance)}, + } { + t.Run(tt.name, func(t *testing.T) { + assert.Equal(t, tt.value.Namespace, instance.Namespace) + assert.Assert(t, tt.value.Name != instance.Name, "may collide") + assert.Assert(t, !names.Has(tt.value.Name), "%q defined already", tt.value.Name) + assert.Assert(t, nil == validation.IsDNS1123Label(tt.value.Name)) + names.Insert(tt.value.Name) + }) + } + }) + + t.Run("PVCs", func(t *testing.T) { + names := sets.NewString() + for _, tt := range []test{ + {"InstancePostgresDataVolume", InstancePostgresDataVolume(instance)}, + {"InstancePostgresWALVolume", InstancePostgresWALVolume(instance)}, + } { + t.Run(tt.name, func(t *testing.T) { + assert.Equal(t, tt.value.Namespace, instance.Namespace) + assert.Assert(t, tt.value.Name != instance.Name, "may collide") + assert.Assert(t, !names.Has(tt.value.Name), "%q defined already", tt.value.Name) + assert.Assert(t, nil == validation.IsDNS1123Label(tt.value.Name)) + names.Insert(tt.value.Name) + }) + } + }) + + t.Run("Secrets", func(t *testing.T) { + names := sets.NewString() + for _, tt := range []test{ + {"InstanceCertificates", InstanceCertificates(instance)}, + } { + t.Run(tt.name, func(t *testing.T) { + assert.Equal(t, tt.value.Namespace, instance.Namespace) + assert.Assert(t, tt.value.Name != instance.Name, "may collide") + assert.Assert(t, !names.Has(tt.value.Name), "%q defined already", tt.value.Name) + assert.Assert(t, nil == validation.IsDNS1123Label(tt.value.Name)) + names.Insert(tt.value.Name) + }) + } + }) +} + +func TestGenerateInstance(t *testing.T) { + cluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "ns1", Name: "pg0", + }, + } + set := &v1beta1.PostgresInstanceSetSpec{Name: "hippos"} + + instance := GenerateInstance(cluster, set) + + assert.Equal(t, cluster.Namespace, instance.Namespace) + assert.Assert(t, strings.HasPrefix(instance.Name, cluster.Name+"-"+set.Name+"-")) +} + +// TestGenerateStartupInstance ensures that a consistent ObjectMeta will be +// provided assuming the same cluster name and instance set name is passed +// into GenerateStartupInstance +func TestGenerateStartupInstance(t *testing.T) { + cluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "ns1", Name: "pg0", + }, + } + set := &v1beta1.PostgresInstanceSetSpec{Name: "hippos"} + + instanceOne := GenerateStartupInstance(cluster, set) + + assert.Equal(t, cluster.Namespace, instanceOne.Namespace) + assert.Assert(t, 
strings.HasPrefix(instanceOne.Name, cluster.Name+"-"+set.Name+"-")) + + instanceTwo := GenerateStartupInstance(cluster, set) + assert.DeepEqual(t, instanceOne, instanceTwo) + +} + +func TestOperatorConfigurationSecret(t *testing.T) { + t.Setenv("PGO_NAMESPACE", "cheese") + + value := OperatorConfigurationSecret() + assert.Equal(t, value.Namespace, "cheese") + assert.Assert(t, nil == validation.IsDNS1123Label(value.Name)) +} + +func TestPortNamesUniqueAndValid(t *testing.T) { + // Port names have to be unique within a Pod. The number of ports we employ + // should be few enough that we can name them uniquely across all pods. + // - https://docs.k8s.io/reference/kubernetes-api/workload-resources/pod-v1/#ports + + names := sets.NewString() + for _, name := range []string{ + PortExporter, + PortPGAdmin, + PortPGBouncer, + PortPostgreSQL, + } { + assert.Assert(t, !names.Has(name), "%q defined already", name) + assert.Assert(t, nil == validation.IsValidPortName(name)) + names.Insert(name) + } +} diff --git a/internal/naming/selectors.go b/internal/naming/selectors.go new file mode 100644 index 0000000000..94dbc3a9fa --- /dev/null +++ b/internal/naming/selectors.go @@ -0,0 +1,164 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +import ( + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// AsSelector is a wrapper around metav1.LabelSelectorAsSelector() which converts +// the LabelSelector API type into something that implements labels.Selector. +func AsSelector(s metav1.LabelSelector) (labels.Selector, error) { + return metav1.LabelSelectorAsSelector(&s) +} + +// AnyCluster selects things for any PostgreSQL cluster. +func AnyCluster() metav1.LabelSelector { + return metav1.LabelSelector{ + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: LabelCluster, Operator: metav1.LabelSelectorOpExists}, + }, + } +} + +// Cluster selects things for cluster. +func Cluster(cluster string) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster, + }, + } +} + +// ClusterRestoreJobs selects all existing restore jobs in a cluster. +func ClusterRestoreJobs(cluster string) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster, + }, + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: LabelPGBackRestRestore, Operator: metav1.LabelSelectorOpExists}, + }, + } +} + +// ClusterBackupJobs selects things for all existing backup jobs in cluster. +func ClusterBackupJobs(cluster string) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster, + }, + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: LabelPGBackRestBackup, Operator: metav1.LabelSelectorOpExists}, + }, + } +} + +// ClusterDataForPostgresAndPGBackRest selects things for PostgreSQL data and +// things for pgBackRest data. +func ClusterDataForPostgresAndPGBackRest(cluster string) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster, + }, + MatchExpressions: []metav1.LabelSelectorRequirement{{ + Key: LabelData, + Operator: metav1.LabelSelectorOpIn, + Values: []string{DataPostgres, DataPGBackRest}, + }}, + } +} + +// ClusterInstance selects things for a single instance in a cluster. 
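+// A minimal usage sketch (the cluster and instance names are illustrative only):
+//
+//	sel, err := AsSelector(ClusterInstance("hippo", "hippo-instance1-abcd"))
+//	// on success, sel implements labels.Selector and can be supplied to a List call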
+func ClusterInstance(cluster, instance string) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster, + LabelInstance: instance, + }, + } +} + +// ClusterInstances selects things for PostgreSQL instances in cluster. +func ClusterInstances(cluster string) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster, + }, + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: LabelInstance, Operator: metav1.LabelSelectorOpExists}, + }, + } +} + +// ClusterInstanceSet selects things for set in cluster. +func ClusterInstanceSet(cluster, set string) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster, + LabelInstanceSet: set, + }, + } +} + +// ClusterInstanceSets selects things for sets in a cluster. +func ClusterInstanceSets(cluster string) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster, + }, + MatchExpressions: []metav1.LabelSelectorRequirement{ + {Key: LabelInstanceSet, Operator: metav1.LabelSelectorOpExists}, + }, + } +} + +// ClusterPatronis selects things labeled for Patroni in cluster. +func ClusterPatronis(cluster *v1beta1.PostgresCluster) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster.Name, + LabelPatroni: PatroniScope(cluster), + }, + } +} + +// ClusterPGBouncerSelector selects things labeled for PGBouncer in cluster. +func ClusterPGBouncerSelector(cluster *v1beta1.PostgresCluster) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster.Name, + LabelRole: RolePGBouncer, + }, + } +} + +// ClusterPostgresUsers selects things labeled for PostgreSQL users in cluster. +func ClusterPostgresUsers(cluster string) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: cluster, + }, + MatchExpressions: []metav1.LabelSelectorRequirement{ + // The now-deprecated default PostgreSQL user secret lacks a LabelRole. + // The existence of a LabelPostgresUser matches it and current secrets. + {Key: LabelPostgresUser, Operator: metav1.LabelSelectorOpExists}, + }, + } +} + +// CrunchyBridgeClusterPostgresRoles selects things labeled for CrunchyBridgeCluster +// PostgreSQL roles in cluster. +func CrunchyBridgeClusterPostgresRoles(clusterName string) metav1.LabelSelector { + return metav1.LabelSelector{ + MatchLabels: map[string]string{ + LabelCluster: clusterName, + LabelRole: RoleCrunchyBridgeClusterPostgresRole, + }, + } +} diff --git a/internal/naming/selectors_test.go b/internal/naming/selectors_test.go new file mode 100644 index 0000000000..1f5f42ad96 --- /dev/null +++ b/internal/naming/selectors_test.go @@ -0,0 +1,161 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +import ( + "strings" + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestAnyCluster(t *testing.T) { + s, err := AsSelector(AnyCluster()) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster", + }, ",")) +} + +func TestCluster(t *testing.T) { + s, err := AsSelector(Cluster("something")) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=something", + }, ",")) + + _, err = AsSelector(Cluster("--whoa/yikes")) + assert.ErrorContains(t, err, "Invalid") +} + +func TestClusterBackupJobs(t *testing.T) { + s, err := AsSelector(ClusterBackupJobs("something")) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=something", + "postgres-operator.crunchydata.com/pgbackrest-backup", + }, ",")) + + _, err = AsSelector(Cluster("--whoa/yikes")) + assert.ErrorContains(t, err, "Invalid") +} + +func TestClusterDataForPostgresAndPGBackRest(t *testing.T) { + s, err := AsSelector(ClusterDataForPostgresAndPGBackRest("something")) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=something", + "postgres-operator.crunchydata.com/data in (pgbackrest,postgres)", + }, ",")) + + _, err = AsSelector(ClusterDataForPostgresAndPGBackRest("--whoa/yikes")) + assert.ErrorContains(t, err, "Invalid") +} + +func TestClusterInstance(t *testing.T) { + s, err := AsSelector(ClusterInstance("daisy", "dog")) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=daisy", + "postgres-operator.crunchydata.com/instance=dog", + }, ",")) + + _, err = AsSelector(ClusterInstance("--whoa/son", "--whoa/yikes")) + assert.ErrorContains(t, err, "Invalid") +} + +func TestClusterInstances(t *testing.T) { + s, err := AsSelector(ClusterInstances("something")) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=something", + "postgres-operator.crunchydata.com/instance", + }, ",")) + + _, err = AsSelector(ClusterInstances("--whoa/yikes")) + assert.ErrorContains(t, err, "Invalid") +} + +func TestClusterInstanceSet(t *testing.T) { + s, err := AsSelector(ClusterInstanceSet("something", "also")) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=something", + "postgres-operator.crunchydata.com/instance-set=also", + }, ",")) + + _, err = AsSelector(ClusterInstanceSet("--whoa/yikes", "ok")) + assert.ErrorContains(t, err, "Invalid") +} + +func TestClusterInstanceSets(t *testing.T) { + s, err := AsSelector(ClusterInstanceSets("something")) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=something", + "postgres-operator.crunchydata.com/instance-set", + }, ",")) + + _, err = AsSelector(ClusterInstanceSets("--whoa/yikes")) + assert.ErrorContains(t, err, "Invalid") +} + +func TestClusterPatronis(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + cluster.Name = "something" + + s, err := AsSelector(ClusterPatronis(cluster)) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), 
strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=something", + "postgres-operator.crunchydata.com/patroni=something-ha", + }, ",")) + + cluster.Name = "--nope--" + _, err = AsSelector(ClusterPatronis(cluster)) + assert.ErrorContains(t, err, "Invalid") +} + +func TestClusterPGBouncerSelector(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + cluster.Name = "something" + + s, err := AsSelector(ClusterPGBouncerSelector(cluster)) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=something", + "postgres-operator.crunchydata.com/role=pgbouncer", + }, ",")) + + cluster.Name = "--bad--dog" + _, err = AsSelector(ClusterPGBouncerSelector(cluster)) + assert.ErrorContains(t, err, "Invalid") +} + +func TestClusterPostgresUsers(t *testing.T) { + s, err := AsSelector(ClusterPostgresUsers("something")) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=something", + "postgres-operator.crunchydata.com/pguser", + }, ",")) + + _, err = AsSelector(ClusterPostgresUsers("--nope--")) + assert.ErrorContains(t, err, "Invalid") +} + +func TestCrunchyBridgeClusterPostgresRoles(t *testing.T) { + s, err := AsSelector(CrunchyBridgeClusterPostgresRoles("something")) + assert.NilError(t, err) + assert.DeepEqual(t, s.String(), strings.Join([]string{ + "postgres-operator.crunchydata.com/cluster=something", + "postgres-operator.crunchydata.com/role=cbc-pgrole", + }, ",")) + + _, err = AsSelector(CrunchyBridgeClusterPostgresRoles("--nope--")) + assert.ErrorContains(t, err, "Invalid") +} diff --git a/internal/naming/telemetry.go b/internal/naming/telemetry.go new file mode 100644 index 0000000000..5825d6299f --- /dev/null +++ b/internal/naming/telemetry.go @@ -0,0 +1,9 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package naming + +import "go.opentelemetry.io/otel" + +var tracer = otel.Tracer("github.com/crunchydata/postgres-operator/naming") diff --git a/internal/ns/nslogic.go b/internal/ns/nslogic.go deleted file mode 100644 index e579b6d47b..0000000000 --- a/internal/ns/nslogic.go +++ /dev/null @@ -1,743 +0,0 @@ -package ns - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "errors" - "fmt" - "os" - "reflect" - "strings" - "text/template" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/pkg/events" - - log "github.com/sirupsen/logrus" - authv1 "k8s.io/api/authorization/v1" - corev1 "k8s.io/api/core/v1" - v1 "k8s.io/api/core/v1" - rbacv1 "k8s.io/api/rbac/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/util/validation" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/kubernetes/fake" -) - -const OPERATOR_SERVICE_ACCOUNT = "postgres-operator" -const PGO_DEFAULT_SERVICE_ACCOUNT = "pgo-default" - -const PGO_TARGET_ROLE = "pgo-target-role" -const PGO_TARGET_ROLE_BINDING = "pgo-target-role-binding" -const PGO_TARGET_SERVICE_ACCOUNT = "pgo-target" - -const PGO_BACKREST_ROLE = "pgo-backrest-role" -const PGO_BACKREST_SERVICE_ACCOUNT = "pgo-backrest" -const PGO_BACKREST_ROLE_BINDING = "pgo-backrest-role-binding" - -const PGO_PG_ROLE = "pgo-pg-role" -const PGO_PG_ROLE_BINDING = "pgo-pg-role-binding" -const PGO_PG_SERVICE_ACCOUNT = "pgo-pg" - -// PgoServiceAccount is used to populate the following ServiceAccount templates: -// pgo-default-sa.json -// pgo-target-sa.json -// pgo-backrest-sa.json -// pgo-pg-sa.json -type PgoServiceAccount struct { - TargetNamespace string -} - -// PgoRole is used to populate the following Role templates: -// pgo-target-role.json -// pgo-backrest-role.json -// pgo-pg-role.json -type PgoRole struct { - TargetNamespace string -} - -// PgoRoleBinding is used to populate the following RoleBinding templates: -// pgo-target-role-binding.json -// pgo-backrest-role-binding.json -// pgo-pg-role-binding.json -type PgoRoleBinding struct { - TargetNamespace string - OperatorNamespace string -} - -// NamespaceOperatingMode defines the different namespace operating modes for the Operator -type NamespaceOperatingMode string - -const ( - // NamespaceOperatingModeDynamic enables full dynamic namespace capabilities, in which the - // Operator can create, delete and update any namespaces within the Kubernetes cluster. - // Additionally, while in can listen for namespace events (e.g. namespace additions, updates - // and deletions), and then create or remove controllers for various namespaces as those - // namespaces are added or removed from the Kubernetes cluster. - NamespaceOperatingModeDynamic NamespaceOperatingMode = "dynamic" - // NamespaceOperatingModeReadOnly allows the Operator to listen for namespace events within the - // Kubernetetes cluster, and then create and run and/or remove controllers as namespaces are - // added and deleted. - NamespaceOperatingModeReadOnly NamespaceOperatingMode = "readonly" - // NamespaceOperatingModeDisabled causes namespace capabilities to be disabled altogether. In - // this mode the Operator will simply attempt to work with the target namespaces specified - // during installation. If no target namespaces are specified, then it will be configured to - // work within the namespace in which the Operator is deployed. - NamespaceOperatingModeDisabled NamespaceOperatingMode = "disabled" - - // DNS-1123 formatting and error message for validating namespace names - dns1123Fmt string = "[a-z0-9]([-a-z0-9]*[a-z0-9])?" 
- dns1123ErrMsg string = "A namespace name must consist of lower case" + - "alphanumeric characters or '-', and must start and end with an alphanumeric character" -) - -var ( - // namespacePrivsCoreDynamic defines the privileges in the Core API group required for the - // Operator to run using the NamespaceOperatingModeDynamic namespace operating mode - namespacePrivsCoreDynamic = map[string][]string{ - "namespaces": {"create", "update", "delete"}, - } - // namespacePrivsReadOnly defines the privileges in the Core API group required for the - // Operator to run using the NamespaceOperatingModeReadOnly namespace operating mode - namespacePrivsCoreReadOnly = map[string][]string{ - "namespaces": {"get", "list", "watch"}, - } - - // ErrInvalidNamespaceName defines the error that is thrown when a namespace does not meet the - // requirements for naming set by Kubernetes - ErrInvalidNamespaceName = errors.New(validation.RegexError(dns1123ErrMsg, dns1123Fmt, - "my-name", "123-abc")) - // ErrNamespaceNotWatched defines the error that is thrown when a namespace does not meet the - // requirements for naming set by Kubernetes - ErrNamespaceNotWatched = errors.New("The namespaces are not watched by the " + - "current PostgreSQL Operator installation") -) - -// CreateFakeNamespaceClient creates a fake namespace client for use with the "disabled" namespace -// operating mode -func CreateFakeNamespaceClient(installationName string) (kubernetes.Interface, error) { - - var namespaces []runtime.Object - for _, namespace := range getNamespacesFromEnv() { - namespaces = append(namespaces, &v1.Namespace{ - ObjectMeta: metav1.ObjectMeta{ - Name: namespace, - Labels: map[string]string{ - config.LABEL_VENDOR: config.LABEL_CRUNCHY, - config.LABEL_PGO_INSTALLATION_NAME: installationName, - }, - }, - }) - } - - fakeClient := fake.NewSimpleClientset(namespaces...) - - return fakeClient, nil -} - -// CreateNamespace creates a new namespace that is owned by the Operator. -func CreateNamespace(clientset kubernetes.Interface, installationName, pgoNamespace, - createdBy, newNs string) error { - - log.Debugf("CreateNamespace %s %s %s", pgoNamespace, createdBy, newNs) - - //define the new namespace - n := v1.Namespace{} - n.ObjectMeta.Labels = make(map[string]string) - n.ObjectMeta.Labels[config.LABEL_VENDOR] = config.LABEL_CRUNCHY - n.ObjectMeta.Labels[config.LABEL_PGO_CREATED_BY] = createdBy - n.ObjectMeta.Labels[config.LABEL_PGO_INSTALLATION_NAME] = installationName - - n.Name = newNs - - if _, err := clientset.CoreV1().Namespaces().Create(&n); err != nil { - log.Error(err) - return err - } - - log.Debugf("CreateNamespace %s created by %s", newNs, createdBy) - - //publish event - topics := make([]string, 1) - topics[0] = events.EventTopicPGO - - f := events.EventPGOCreateNamespaceFormat{ - EventHeader: events.EventHeader{ - Namespace: pgoNamespace, - Username: createdBy, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventPGOCreateNamespace, - }, - CreatedNamespace: newNs, - } - - return events.Publish(f) -} - -// DeleteNamespace deletes the namespace specified. 
-func DeleteNamespace(clientset kubernetes.Interface, installationName, pgoNamespace, deletedBy, ns string) error { - - err := clientset.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}) - if err != nil { - log.Error(err) - return err - } - - log.Debugf("DeleteNamespace %s deleted by %s", ns, deletedBy) - - //publish the namespace delete event - topics := make([]string, 1) - topics[0] = events.EventTopicPGO - - f := events.EventPGODeleteNamespaceFormat{ - EventHeader: events.EventHeader{ - Namespace: pgoNamespace, - Username: deletedBy, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventPGODeleteNamespace, - }, - DeletedNamespace: ns, - } - - return events.Publish(f) -} - -// CopySecret copies a secret from the Operator namespace to target namespace -func CopySecret(clientset kubernetes.Interface, secretName, operatorNamespace, targetNamespace string) error { - secret, err := clientset.CoreV1().Secrets(operatorNamespace).Get(secretName, metav1.GetOptions{}) - - if err == nil { - secret.ObjectMeta = metav1.ObjectMeta{ - Annotations: secret.ObjectMeta.Annotations, - Labels: secret.ObjectMeta.Labels, - Name: secret.ObjectMeta.Name, - } - - if _, err = clientset.CoreV1().Secrets(targetNamespace).Create(secret); kerrors.IsAlreadyExists(err) { - _, err = clientset.CoreV1().Secrets(targetNamespace).Update(secret) - } - } - - if !kubeapi.IsNotFound(err) { - return err - } - - return nil -} - -// ReconcileRole reconciles a Role required by the operator in a target namespace -func ReconcileRole(clientset kubernetes.Interface, role, targetNamespace string, - roleTemplate *template.Template) error { - - var createRole bool - - currRole, err := clientset.RbacV1().Roles(targetNamespace).Get( - role, metav1.GetOptions{}) - if err != nil { - if kerrors.IsNotFound(err) { - log.Debugf("Role %s in namespace %s does not exist and will be created", - role, targetNamespace) - createRole = true - } else { - return err - } - } - - var buffer bytes.Buffer - if err := roleTemplate.Execute(&buffer, - PgoRole{TargetNamespace: targetNamespace}); err != nil { - return err - } - - templatedRole := rbacv1.Role{} - if err := json.Unmarshal(buffer.Bytes(), &templatedRole); err != nil { - return err - } - - if createRole { - if _, err := clientset.RbacV1().Roles(targetNamespace).Create( - &templatedRole); err != nil { - return err - } - return nil - } - - if !reflect.DeepEqual(currRole.Rules, templatedRole.Rules) { - - log.Debugf("Role %s in namespace %s is invalid and will now be reconciled", - currRole.Name, targetNamespace) - - currRole.Rules = templatedRole.Rules - - if _, err := clientset.RbacV1().Roles(targetNamespace).Update( - currRole); err != nil { - return err - } - } - - return nil -} - -// ReconcileRoleBinding reconciles a RoleBinding required by the operator in a target namespace -func ReconcileRoleBinding(clientset kubernetes.Interface, pgoNamespace, - roleBinding, targetNamespace string, roleBindingTemplate *template.Template) error { - - var createRoleBinding bool - - currRoleBinding, err := clientset.RbacV1().RoleBindings(targetNamespace).Get( - roleBinding, metav1.GetOptions{}) - if err != nil { - if kerrors.IsNotFound(err) { - log.Debugf("RoleBinding %s in namespace %s does not exist and will be created", - roleBinding, targetNamespace) - createRoleBinding = true - } else { - return err - } - } - - var buffer bytes.Buffer - if err := roleBindingTemplate.Execute(&buffer, - PgoRoleBinding{ - TargetNamespace: targetNamespace, - OperatorNamespace: pgoNamespace, - }); err != nil { - 
return err - } - - templatedRoleBinding := rbacv1.RoleBinding{} - if err := json.Unmarshal(buffer.Bytes(), &templatedRoleBinding); err != nil { - return err - } - - if createRoleBinding { - if _, err := clientset.RbacV1().RoleBindings(targetNamespace).Create( - &templatedRoleBinding); err != nil { - return err - } - return nil - } - - if !reflect.DeepEqual(currRoleBinding.Subjects, - templatedRoleBinding.Subjects) || - !reflect.DeepEqual(currRoleBinding.RoleRef, - templatedRoleBinding.RoleRef) { - - log.Debugf("RoleBinding %s in namespace %s is invalid and will now be reconciled", - currRoleBinding.Name, targetNamespace) - - currRoleBinding.Subjects = templatedRoleBinding.Subjects - currRoleBinding.RoleRef = templatedRoleBinding.RoleRef - - if _, err := clientset.RbacV1().RoleBindings(targetNamespace).Update( - currRoleBinding); err != nil { - return err - } - } - - return nil -} - -// ReconcileServiceAccount reconciles a ServiceAccount required by the operator in a target -// namespace -func ReconcileServiceAccount(clientset kubernetes.Interface, - serviceAccount, targetNamespace string, serviceAccountTemplate *template.Template, - imagePullSecrets []v1.LocalObjectReference) (bool, error) { - - var createServiceAccount, createdOrUpdated bool - - currServiceAccount, err := clientset.CoreV1().ServiceAccounts( - targetNamespace).Get(serviceAccount, metav1.GetOptions{}) - if err != nil { - if kerrors.IsNotFound(err) { - log.Debugf("ServiceAccount %s in namespace %s does not exist and will be created", - serviceAccount, targetNamespace) - createServiceAccount = true - } else { - return createdOrUpdated, err - } - } - - var buffer bytes.Buffer - if err := serviceAccountTemplate.Execute(&buffer, - PgoServiceAccount{TargetNamespace: targetNamespace}); err != nil { - return createdOrUpdated, err - } - - templatedServiceAccount := corev1.ServiceAccount{} - if err := json.Unmarshal(buffer.Bytes(), &templatedServiceAccount); err != nil { - return createdOrUpdated, err - } - - if createServiceAccount { - templatedServiceAccount.ImagePullSecrets = imagePullSecrets - if _, err := clientset.CoreV1().ServiceAccounts(targetNamespace).Create( - &templatedServiceAccount); err != nil { - return createdOrUpdated, err - } - createdOrUpdated = true - return createdOrUpdated, nil - } - - if !reflect.DeepEqual(currServiceAccount.ImagePullSecrets, imagePullSecrets) { - - log.Debugf("ServiceAccout %s in namespace %s is invalid and will now be reconciled", - currServiceAccount.Name, targetNamespace) - - currServiceAccount.ImagePullSecrets = imagePullSecrets - - if _, err := clientset.CoreV1().ServiceAccounts(targetNamespace).Update( - currServiceAccount); err != nil { - return createdOrUpdated, err - } - createdOrUpdated = true - } - - return createdOrUpdated, nil -} - -// UpdateNamespace updates a new namespace to be owned by the Operator. 
-func UpdateNamespace(clientset kubernetes.Interface, installationName, pgoNamespace, - updatedBy, ns string) error { - - log.Debugf("UpdateNamespace %s %s %s %s", installationName, pgoNamespace, updatedBy, ns) - - theNs, err := clientset.CoreV1().Namespaces().Get(ns, metav1.GetOptions{}) - if err != nil { - return err - } - - if theNs.ObjectMeta.Labels == nil { - theNs.ObjectMeta.Labels = make(map[string]string) - } - theNs.ObjectMeta.Labels[config.LABEL_VENDOR] = config.LABEL_CRUNCHY - theNs.ObjectMeta.Labels[config.LABEL_PGO_INSTALLATION_NAME] = installationName - - if _, err := clientset.CoreV1().Namespaces().Update(theNs); err != nil { - log.Error(err) - return err - } - - //publish event - topics := make([]string, 1) - topics[0] = events.EventTopicPGO - - f := events.EventPGOCreateNamespaceFormat{ - EventHeader: events.EventHeader{ - Namespace: pgoNamespace, - Username: updatedBy, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventPGOCreateNamespace, - }, - CreatedNamespace: ns, - } - - return events.Publish(f) -} - -// ConfigureInstallNamespaces is responsible for properly configuring up any namespaces provided for -// the installation of the Operator. This includes creating or updating those namespaces so they can -// be utilized by the Operator to deploy PG clusters. -func ConfigureInstallNamespaces(clientset kubernetes.Interface, installationName, pgoNamespace string, - namespaceNames []string, namespaceOperatingMode NamespaceOperatingMode) error { - - // now loop through all namespaces and either create or update them - for _, namespaceName := range namespaceNames { - - nameSpaceExists := true - // if we can get namespaces, make sure this one isn't part of another install - if namespaceOperatingMode != NamespaceOperatingModeDisabled { - - namespace, err := clientset.CoreV1().Namespaces().Get(namespaceName, - metav1.GetOptions{}) - if err != nil { - if kerrors.IsNotFound(err) { - nameSpaceExists = false - } else { - return err - } - } - - if nameSpaceExists { - // continue if already owned by this install, or if owned by another install - labels := namespace.ObjectMeta.Labels - if labels != nil && labels[config.LABEL_VENDOR] == config.LABEL_CRUNCHY && - labels[config.LABEL_PGO_INSTALLATION_NAME] != installationName { - log.Errorf("Configure install namespaces: namespace %s owned by another "+ - "installation, will not update it", namespaceName) - continue - } - } - } - - // if using the "dynamic" namespace mode we can now update the namespace to ensure it is - // properly owned by this install - if namespaceOperatingMode == NamespaceOperatingModeDynamic { - if nameSpaceExists { - // if not part of this or another install, then update the namespace to be owned by this - // Operator install - log.Infof("Configure install namespaces: namespace %s will be updated to be owned by this "+ - "installation", namespaceName) - if err := UpdateNamespace(clientset, installationName, pgoNamespace, - "operator-bootstrap", namespaceName); err != nil { - return err - } - } else { - log.Infof("Configure install namespaces: namespace %s will be created for this "+ - "installation", namespaceName) - if err := CreateNamespace(clientset, installationName, pgoNamespace, - "operator-bootstrap", namespaceName); err != nil { - return err - } - } - } - } - - return nil -} - -// GetCurrentNamespaceList returns the current list namespaces being managed by the current -// Operateor installation. 
When the current namespace mode is "dynamic" or "readOnly", this -// involves querying the Kube cluster for an namespaces with the "vendor" and -// "pgo-installation-name" labels corresponding to the current Operator install. When the -// namespace mode is "disabled", a list of namespaces specified using the NAMESPACE env var during -// installation is returned (with the list defaulting to the Operator's own namespace in the event -// that NAMESPACE is empty). -func GetCurrentNamespaceList(clientset kubernetes.Interface, - installationName string, namespaceOperatingMode NamespaceOperatingMode) ([]string, error) { - - if namespaceOperatingMode == NamespaceOperatingModeDisabled { - return getNamespacesFromEnv(), nil - } - - ns := make([]string, 0) - - nsList, err := clientset.CoreV1().Namespaces().List(metav1.ListOptions{}) - if err != nil { - log.Error(err.Error()) - return nil, err - } - - for _, v := range nsList.Items { - labels := v.ObjectMeta.Labels - if labels[config.LABEL_VENDOR] == config.LABEL_CRUNCHY && - labels[config.LABEL_PGO_INSTALLATION_NAME] == installationName { - ns = append(ns, v.Name) - } - } - - return ns, nil -} - -// ValidateNamespacesWatched validates whether or not the namespaces provided are being watched by -// the current Operator installation. When the current namespace mode is "dynamic" or "readOnly", -// this involves ensuring the namespace specified has the proper "vendor" and -// "pgo-installation-name" labels corresponding to the current Operator install. When the -// namespace mode is "disabled", this means ensuring the namespace is in the list of those -// specifiedusing the NAMESPACE env var during installation (with the list defaulting to the -// Operator's own namespace in the event that NAMESPACE is empty). If any namespaces are found to -// be invalid, an ErrNamespaceNotWatched error is returned containing an error message listing -// the invalid namespaces. -func ValidateNamespacesWatched(clientset kubernetes.Interface, - namespaceOperatingMode NamespaceOperatingMode, - installationName string, namespaces ...string) error { - - var err error - var currNSList []string - if namespaceOperatingMode != NamespaceOperatingModeDisabled { - currNSList, err = GetCurrentNamespaceList(clientset, installationName, - namespaceOperatingMode) - if err != nil { - return err - } - } else { - currNSList = getNamespacesFromEnv() - } - - var invalidNamespaces []string - for _, ns := range namespaces { - var validNS bool - for _, currNS := range currNSList { - if ns == currNS { - validNS = true - break - } - } - if !validNS { - invalidNamespaces = append(invalidNamespaces, ns) - } - } - - if len(invalidNamespaces) > 0 { - return fmt.Errorf("The following namespaces are invalid: %v. %w", invalidNamespaces, - ErrNamespaceNotWatched) - } - - return nil -} - -// getNamespacesFromEnv returns a slice containing the namespaces strored the NAMESPACE env var in -// csv format. If NAMESPACE is empty, then the Operator namespace as specified in env var -// PGO_OPERATOR_NAMESPACE is returned. -func getNamespacesFromEnv() []string { - namespaceEnvVar := os.Getenv("NAMESPACE") - if namespaceEnvVar == "" { - defaultNs := os.Getenv("PGO_OPERATOR_NAMESPACE") - return []string{defaultNs} - } - return strings.Split(namespaceEnvVar, ",") -} - -// ValidateNamespaceNames validates one or more namespace names to ensure they are valid per Kubernetes -// naming requirements. 
-func ValidateNamespaceNames(namespace ...string) error { - var invalidNamespaces []string - for _, ns := range namespace { - if validation.IsDNS1123Label(ns) != nil { - invalidNamespaces = append(invalidNamespaces, ns) - } - } - - if len(invalidNamespaces) > 0 { - return fmt.Errorf("The following namespaces are invalid %v. %w", invalidNamespaces, - ErrInvalidNamespaceName) - } - - return nil -} - -// GetNamespaceOperatingMode is responsible for returning the proper namespace operating mode for -// the current Operator install. This is done by submitting a SubjectAccessReview in the local -// Kubernetes cluster to determine whether or not certain cluster-level privileges have been -// assigned to the Operator Service Account. Based on the privileges identified, one of the -// a the proper NamespaceOperatingMode will be returned as applicable for those privileges -// (please see the various NamespaceOperatingMode types for a detailed explanation of each -// operating mode). -func GetNamespaceOperatingMode(clientset kubernetes.Interface) (NamespaceOperatingMode, error) { - - // first check to see if dynamic namespace capabilities can be enabled - isDynamic, err := CheckAccessPrivs(clientset, namespacePrivsCoreDynamic, "", "") - if err != nil { - return "", err - } - - // next check to see if readonly namespace capabilities can be enabled - isReadOnly, err := CheckAccessPrivs(clientset, namespacePrivsCoreReadOnly, "", "") - if err != nil { - return "", err - } - - // return the proper namespace operating mode based on the access privs identified - switch { - case isDynamic && isReadOnly: - return NamespaceOperatingModeDynamic, nil - case !isDynamic && isReadOnly: - return NamespaceOperatingModeReadOnly, nil - default: - return NamespaceOperatingModeDisabled, nil - } -} - -// CheckAccessPrivs checks to see if the ServiceAccount currently running the operator has -// the permissions defined for various resources as specified in the provided permissions -// map. If an empty namespace is provided then it is assumed the resource is cluster-scoped. -// If the ServiceAccount has all of the permissions defined in the permissions map, then "true" -// is returned. Otherwise, if the Service Account is missing any of the permissions specified, -// or if an error is encountered while attempting to deterine the permissions for the service -// account, then "false" is returned (along with the error in the event an error is encountered). -func CheckAccessPrivs(clientset kubernetes.Interface, - privs map[string][]string, apiGroup, namespace string) (bool, error) { - - for resource, verbs := range privs { - for _, verb := range verbs { - sar, err := clientset. - AuthorizationV1().SelfSubjectAccessReviews(). - Create(&authv1.SelfSubjectAccessReview{ - Spec: authv1.SelfSubjectAccessReviewSpec{ - ResourceAttributes: &authv1.ResourceAttributes{ - Namespace: namespace, - Group: apiGroup, - Resource: resource, - Verb: verb, - }, - }, - }) - if err != nil { - return false, err - } - if !sar.Status.Allowed { - return false, nil - } - } - } - - return true, nil -} - -// GetInitialNamespaceList returns an initial list of namespaces for the current Operator install. -// This includes first obtaining any namespaces from the NAMESPACE env var, and then if the -// namespace operating mode permits, also querying the Kube cluster in order to obtain any other -// namespaces that are part of the install, but not included in the env var. 
If no namespaces are -// identified via either of these methods, then the the PGO namespaces is returned as the default -// namespace. -func GetInitialNamespaceList(clientset kubernetes.Interface, - namespaceOperatingMode NamespaceOperatingMode, - installationName, pgoNamespace string) ([]string, error) { - - // next grab the namespaces provided using the NAMESPACE env var - namespaceList := getNamespacesFromEnv() - - // make sure the namespaces obtained from the NAMESPACE env var are valid - if err := ValidateNamespaceNames(namespaceList...); err != nil { - return nil, err - } - - nsEnvMap := make(map[string]struct{}) - for _, namespace := range namespaceList { - nsEnvMap[namespace] = struct{}{} - } - - // If the Operator is in a dynamic or readOnly mode, then refresh the namespace list by - // querying the Kube cluster. This allows us to account for all namespaces owned by the - // Operator, including those not explicitly specified during the Operator install. - var namespaceListCluster []string - var err error - if namespaceOperatingMode == NamespaceOperatingModeDynamic || - namespaceOperatingMode == NamespaceOperatingModeReadOnly { - namespaceListCluster, err = GetCurrentNamespaceList(clientset, installationName, - namespaceOperatingMode) - if err != nil { - return nil, err - } - } - - for _, namespace := range namespaceListCluster { - if _, ok := nsEnvMap[namespace]; !ok { - namespaceList = append(namespaceList, namespace) - } - } - - return namespaceList, nil -} diff --git a/internal/operator/backrest/backup.go b/internal/operator/backrest/backup.go deleted file mode 100644 index 690d7e5e78..0000000000 --- a/internal/operator/backrest/backup.go +++ /dev/null @@ -1,340 +0,0 @@ -package backrest - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "os" - "regexp" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - log "github.com/sirupsen/logrus" - v1batch "k8s.io/api/batch/v1" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/fields" - "k8s.io/client-go/kubernetes" -) - -type backrestJobTemplateFields struct { - JobName string - Name string - ClusterName string - Command string - CommandOpts string - PITRTarget string - PodName string - PGOImagePrefix string - PGOImageTag string - SecurityContext string - PgbackrestStanza string - PgbackrestDBPath string - PgbackrestRepoPath string - PgbackrestRepoType string - BackrestLocalAndS3Storage bool - PgbackrestS3VerifyTLS string - PgbackrestRestoreVolumes string - PgbackrestRestoreVolumeMounts string -} - -var backrestPgHostRegex = regexp.MustCompile("--db-host|--pg1-host") -var backrestPgPathRegex = regexp.MustCompile("--db-path|--pg1-path") - -// Backrest ... -func Backrest(namespace string, clientset kubernetes.Interface, task *crv1.Pgtask) { - - //create the Job to run the backrest command - - cmd := task.Spec.Parameters[config.LABEL_BACKREST_COMMAND] - - jobFields := backrestJobTemplateFields{ - JobName: task.Spec.Parameters[config.LABEL_JOB_NAME], - ClusterName: task.Spec.Parameters[config.LABEL_PG_CLUSTER], - PodName: task.Spec.Parameters[config.LABEL_POD_NAME], - SecurityContext: "{}", - Command: cmd, - CommandOpts: task.Spec.Parameters[config.LABEL_BACKREST_OPTS], - PITRTarget: "", - PGOImagePrefix: util.GetValueOrDefault(task.Spec.Parameters[config.LABEL_IMAGE_PREFIX], operator.Pgo.Pgo.PGOImagePrefix), - PGOImageTag: operator.Pgo.Pgo.PGOImageTag, - PgbackrestStanza: task.Spec.Parameters[config.LABEL_PGBACKREST_STANZA], - PgbackrestDBPath: task.Spec.Parameters[config.LABEL_PGBACKREST_DB_PATH], - PgbackrestRepoPath: task.Spec.Parameters[config.LABEL_PGBACKREST_REPO_PATH], - PgbackrestRestoreVolumes: "", - PgbackrestRestoreVolumeMounts: "", - PgbackrestRepoType: operator.GetRepoType(task.Spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE]), - BackrestLocalAndS3Storage: operator.IsLocalAndS3Storage(task.Spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE]), - PgbackrestS3VerifyTLS: task.Spec.Parameters[config.LABEL_BACKREST_S3_VERIFY_TLS], - } - - podCommandOpts, err := getCommandOptsFromPod(clientset, task, namespace) - if err != nil { - log.Error(err.Error()) - return - } - jobFields.CommandOpts = jobFields.CommandOpts + " " + podCommandOpts - - var doc2 bytes.Buffer - if err := config.BackrestjobTemplate.Execute(&doc2, jobFields); err != nil { - log.Error(err.Error()) - return - } - - if operator.CRUNCHY_DEBUG { - config.BackrestjobTemplate.Execute(os.Stdout, jobFields) - } - - newjob := v1batch.Job{} - err = json.Unmarshal(doc2.Bytes(), &newjob) - if err != nil { - log.Error("error unmarshalling json into Job " + err.Error()) - return - } - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_PGO_BACKREST, - &newjob.Spec.Template.Spec.Containers[0]) - - 
newjob.ObjectMeta.Labels[config.LABEL_PGOUSER] = task.ObjectMeta.Labels[config.LABEL_PGOUSER] - newjob.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = task.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] - - backupType := task.Spec.Parameters[config.LABEL_PGHA_BACKUP_TYPE] - if backupType != "" { - newjob.ObjectMeta.Labels[config.LABEL_PGHA_BACKUP_TYPE] = backupType - } - clientset.BatchV1().Jobs(namespace).Create(&newjob) - - //publish backrest backup event - if cmd == "backup" { - topics := make([]string, 1) - topics[0] = events.EventTopicBackup - - f := events.EventCreateBackupFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: task.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventCreateBackup, - }, - Clustername: jobFields.ClusterName, - BackupType: "pgbackrest", - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } - } - -} - -// CreateInitialBackup creates a Pgtask in order to initiate the initial pgBackRest backup for a cluster -// as needed to support replica creation -func CreateInitialBackup(clientset pgo.Interface, namespace, clusterName, podName string) (*crv1.Pgtask, error) { - var params map[string]string - params = make(map[string]string) - params[config.LABEL_PGHA_BACKUP_TYPE] = crv1.BackupTypeBootstrap - return CreateBackup(clientset, namespace, clusterName, podName, params, "--type=full") -} - -// CreatePostFailoverBackup creates a Pgtask in order to initiate the a pgBackRest backup following a failure -// event to ensure proper replica creation and/or reinitialization -func CreatePostFailoverBackup(clientset pgo.Interface, namespace, clusterName, podName string) (*crv1.Pgtask, error) { - var params map[string]string - params = make(map[string]string) - params[config.LABEL_PGHA_BACKUP_TYPE] = crv1.BackupTypeFailover - return CreateBackup(clientset, namespace, clusterName, podName, params, "") -} - -// CreateBackup creates a Pgtask in order to initiate a pgBackRest backup -func CreateBackup(clientset pgo.Interface, namespace, clusterName, podName string, params map[string]string, - backupOpts string) (*crv1.Pgtask, error) { - - log.Debug("pgBackRest operator CreateBackup called") - - cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - return nil, err - } - - var newInstance *crv1.Pgtask - taskName := "backrest-backup-" + cluster.Name - - spec := crv1.PgtaskSpec{} - spec.Name = taskName - spec.Namespace = namespace - - spec.TaskType = crv1.PgtaskBackrest - spec.Parameters = make(map[string]string) - spec.Parameters[config.LABEL_JOB_NAME] = "backrest-" + crv1.PgtaskBackrestBackup + "-" + cluster.Name - spec.Parameters[config.LABEL_PG_CLUSTER] = cluster.Name - spec.Parameters[config.LABEL_POD_NAME] = podName - spec.Parameters[config.LABEL_CONTAINER_NAME] = "database" - // pass along the appropriate image prefix for the backup task - // this will be used by the associated backrest job - spec.Parameters[config.LABEL_IMAGE_PREFIX] = util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix) - spec.Parameters[config.LABEL_BACKREST_COMMAND] = crv1.PgtaskBackrestBackup - spec.Parameters[config.LABEL_BACKREST_OPTS] = backupOpts - spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE] - // Get 'true' or 'false' for setting the pgBackRest S3 verify TLS value - 
spec.Parameters[config.LABEL_BACKREST_S3_VERIFY_TLS] = operator.GetS3VerifyTLSSetting(cluster) - - for k, v := range params { - spec.Parameters[k] = v - } - - newInstance = &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: taskName, - }, - Spec: spec, - } - newInstance.ObjectMeta.Labels = make(map[string]string) - newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = cluster.Name - newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] = cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] - newInstance.ObjectMeta.Labels[config.LABEL_PGOUSER] = cluster.ObjectMeta.Labels[config.LABEL_PGOUSER] - - _, err = clientset.CrunchydataV1().Pgtasks(cluster.Namespace).Create(newInstance) - if err != nil { - log.Error(err) - return nil, err - } - - return newInstance, nil -} - -// CleanBackupResources is responsible for cleaning up Kubernetes resources from a previous -// pgBackRest backup. Specifically, this function deletes the pgptask and job associate with a -// previous pgBackRest backup for the cluster. -func CleanBackupResources(clientset kubeapi.Interface, namespace, clusterName string) error { - - taskName := "backrest-backup-" + clusterName - err := clientset.CrunchydataV1().Pgtasks(namespace).Delete(taskName, &metav1.DeleteOptions{}) - if err != nil && !kubeapi.IsNotFound(err) { - log.Error(err) - return err - } - - //remove previous backup job - selector := config.LABEL_BACKREST_COMMAND + "=" + crv1.PgtaskBackrestBackup + "," + - config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_BACKREST + "=true" - deletePropagation := metav1.DeletePropagationForeground - err = clientset. - BatchV1().Jobs(namespace). - DeleteCollection( - &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}, - metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - } - - timeout := time.After(30 * time.Second) - tick := time.NewTicker(1 * time.Second) - defer tick.Stop() - for { - select { - case <-timeout: - return fmt.Errorf("Timed out waiting for deletion of pgBackRest backup job for "+ - "cluster %s", clusterName) - case <-tick.C: - jobList, err := clientset. - BatchV1().Jobs(namespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - return err - } - if len(jobList.Items) == 0 { - return nil - } - } - } -} - -// getCommandOptsFromPod adds command line options from the primary pod to a backrest job. -// If not already specified in the command options provided in the pgtask, add the IP of the -// primary pod as the value for the "--db-host" parameter. This will ensure direct -// communication between the repo pod and the primary via the primary's IP, instead of going -// through the primary pod's service (which could be unreliable). 
also if not already specified -// in the command options provided in the pgtask, then lookup the primary pod for the cluster -// and add the PGDATA dir of the pod as the value for the "--db-path" parameter -func getCommandOptsFromPod(clientset kubernetes.Interface, task *crv1.Pgtask, - namespace string) (commandOpts string, err error) { - - // lookup the primary pod in order to determine the IP of the primary and the PGDATA directory for - // the current primaty - selector := fmt.Sprintf("%s=%s,%s in (%s,%s)", config.LABEL_PG_CLUSTER, - task.Spec.Parameters[config.LABEL_PG_CLUSTER], config.LABEL_PGHA_ROLE, - "promoted", config.LABEL_PGHA_ROLE_PRIMARY) - - options := metav1.ListOptions{ - FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(), - LabelSelector: selector, - } - - // only consider pods that are running - pods, err := clientset.CoreV1().Pods(namespace).List(options) - - if err != nil { - return - } else if len(pods.Items) > 1 { - err = fmt.Errorf("More than one primary found when creating backrest job %s", - task.Spec.Parameters[config.LABEL_JOB_NAME]) - return - } else if len(pods.Items) == 0 { - err = fmt.Errorf("Unable to find primary when creating backrest job %s", - task.Spec.Parameters[config.LABEL_JOB_NAME]) - return - } - pod := pods.Items[0] - - var cmdOpts []string - - if !backrestPgHostRegex.MatchString(task.Spec.Parameters[config.LABEL_BACKREST_OPTS]) { - cmdOpts = append(cmdOpts, fmt.Sprintf("--db-host=%s", pod.Status.PodIP)) - } - if !backrestPgPathRegex.MatchString(task.Spec.Parameters[config.LABEL_BACKREST_OPTS]) { - var podDbPath string - for _, envVar := range pod.Spec.Containers[0].Env { - if envVar.Name == "PGBACKREST_DB_PATH" { - podDbPath = envVar.Value - break - } - } - if podDbPath != "" { - cmdOpts = append(cmdOpts, fmt.Sprintf("--db-path=%s", podDbPath)) - } else { - log.Errorf("Unable to find PGBACKREST_DB_PATH on primary pod %s for backrest job %s", - pod.Name, task.Spec.Parameters[config.LABEL_JOB_NAME]) - return - } - } - // join options using a space - commandOpts = strings.Join(cmdOpts, " ") - return -} diff --git a/internal/operator/backrest/repo.go b/internal/operator/backrest/repo.go deleted file mode 100644 index ea2e5100b0..0000000000 --- a/internal/operator/backrest/repo.go +++ /dev/null @@ -1,336 +0,0 @@ -package backrest - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "os" - "regexp" - "strconv" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/operator/pvc" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - - log "github.com/sirupsen/logrus" - appsv1 "k8s.io/api/apps/v1" - v1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" -) - -// s3RepoTypeRegex defines a regex to detect if an S3 restore has been specified using the -// pgBackRest --repo-type option -var s3RepoTypeRegex = regexp.MustCompile(`--repo-type=["']?s3["']?`) - -type RepoDeploymentTemplateFields struct { - SecurityContext string - PGOImagePrefix string - PGOImageTag string - ContainerResources string - BackrestRepoClaimName string - SshdSecretsName string - PGbackrestDBHost string - PgbackrestRepoPath string - PgbackrestDBPath string - PgbackrestPGPort string - SshdPort int - PgbackrestStanza string - PgbackrestRepoType string - PgbackrestS3EnvVars string - Name string - ClusterName string - PodAnnotations string - PodAntiAffinity string - PodAntiAffinityLabelName string - PodAntiAffinityLabelValue string - Replicas int - BootstrapCluster string -} - -type RepoServiceTemplateFields struct { - Name string - ClusterName string - Port string -} - -// CreateRepoDeployment creates a pgBackRest repository deployment for a PostgreSQL cluster, -// while also creating the associated Service and PersistentVolumeClaim. -func CreateRepoDeployment(clientset kubernetes.Interface, cluster *crv1.Pgcluster, - createPVC, bootstrapRepo bool, replicas int) error { - - namespace := cluster.GetNamespace() - restoreClusterName := cluster.Spec.PGDataSource.RestoreFrom - - repoFields := getRepoDeploymentFields(clientset, cluster, replicas) - - var repoName, serviceName string - // if this is a bootstrap repository then we now override certain fields as needed - if bootstrapRepo { - if err := setBootstrapRepoOverrides(clientset, cluster, repoFields); err != nil { - return err - } - repoName = fmt.Sprintf(util.BackrestRepoPVCName, restoreClusterName) - serviceName = fmt.Sprintf(util.BackrestRepoServiceName, restoreClusterName) - } else { - repoName = fmt.Sprintf(util.BackrestRepoPVCName, cluster.Name) - serviceName = fmt.Sprintf(util.BackrestRepoServiceName, cluster.Name) - } - - //create backrest repo service - serviceFields := RepoServiceTemplateFields{ - Name: serviceName, - ClusterName: cluster.Name, - Port: "2022", - } - - err := createService(clientset, &serviceFields, namespace) - if err != nil { - log.Error(err) - return err - } - - // if createPVC is set to true, attempt to create the PVC - if createPVC { - // create backrest repo PVC with same name as repoName - _, err := clientset.CoreV1().PersistentVolumeClaims(namespace).Get(repoName, metav1.GetOptions{}) - if err == nil { - log.Debugf("pvc [%s] already present, will not recreate", repoName) - } else if kerrors.IsNotFound(err) { - _, err = pvc.CreatePVC(clientset, &cluster.Spec.BackrestStorage, repoName, cluster.Name, namespace) - if err != nil { - return err - } - log.Debugf("created backrest-shared-repo pvc [%s]", repoName) - } else { - return err - } - } - - var b bytes.Buffer - err = config.PgoBackrestRepoTemplate.Execute(&b, repoFields) - if err != nil { - log.Error(err.Error()) - 
return err - } - - if operator.CRUNCHY_DEBUG { - config.PgoBackrestRepoTemplate.Execute(os.Stdout, repoFields) - } - - deployment := appsv1.Deployment{} - err = json.Unmarshal(b.Bytes(), &deployment) - if err != nil { - log.Error("error unmarshalling backrest repo json into Deployment " + err.Error()) - return err - } - - operator.AddBackRestConfigVolumeAndMounts(&deployment.Spec.Template.Spec, cluster.Name, cluster.Spec.BackrestConfig) - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_PGO_BACKREST_REPO, - &deployment.Spec.Template.Spec.Containers[0]) - - if _, err := clientset.AppsV1().Deployments(namespace).Create(&deployment); err != nil && - !kerrors.IsAlreadyExists(err) { - return err - } - - return nil -} - -// setBootstrapRepoOverrides overrides certain fields used to populate the pgBackRest repository template -// as needed to support the creation of a bootstrap repository need to bootstrap a new cluster from an -// existing data source. -func setBootstrapRepoOverrides(clientset kubernetes.Interface, cluster *crv1.Pgcluster, - repoFields *RepoDeploymentTemplateFields) error { - - restoreClusterName := cluster.Spec.PGDataSource.RestoreFrom - namespace := cluster.GetNamespace() - - repoFields.ClusterName = restoreClusterName - repoFields.BootstrapCluster = cluster.GetName() - repoFields.Name = fmt.Sprintf(util.BackrestRepoServiceName, restoreClusterName) - repoFields.SshdSecretsName = fmt.Sprintf(util.BackrestRepoSecretName, restoreClusterName) - - // set the proper PVC name for the "restore from" cluster - repoFields.BackrestRepoClaimName = fmt.Sprintf(util.BackrestRepoPVCName, restoreClusterName) - - restoreFromSecret, err := clientset.CoreV1().Secrets(namespace).Get( - fmt.Sprintf("%s-%s", restoreClusterName, config.LABEL_BACKREST_REPO_SECRET), - metav1.GetOptions{}) - if err != nil { - return err - } - - repoFields.PgbackrestRepoPath = restoreFromSecret.Annotations[config.ANNOTATION_REPO_PATH] - repoFields.PgbackrestPGPort = restoreFromSecret.Annotations[config.ANNOTATION_PG_PORT] - - sshdPort, err := strconv.Atoi(restoreFromSecret.Annotations[config.ANNOTATION_SSHD_PORT]) - if err != nil { - return err - } - repoFields.SshdPort = sshdPort - - // if an s3 restore is detected, override or set the pgbackrest S3 env vars, otherwise do - // not set the s3 env vars at all - s3Restore := S3RepoTypeCLIOptionExists(cluster.Spec.PGDataSource.RestoreOpts) - if s3Restore { - // Now override any backrest S3 env vars for the bootstrap job - repoFields.PgbackrestS3EnvVars = operator.GetPgbackrestBootstrapS3EnvVars( - cluster.Spec.PGDataSource.RestoreFrom, restoreFromSecret) - } else { - repoFields.PgbackrestS3EnvVars = "" - } - - return nil -} - -// getRepoDeploymentFields returns a RepoDeploymentTemplateFields struct populated with the fields -// needed to populate the pgBackRest repository template and create a pgBackRest repository for a -// specific PostgreSQL cluster. 
-func getRepoDeploymentFields(clientset kubernetes.Interface, cluster *crv1.Pgcluster, - replicas int) *RepoDeploymentTemplateFields { - - namespace := cluster.GetNamespace() - - repoFields := RepoDeploymentTemplateFields{ - PGOImagePrefix: util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix), - PGOImageTag: operator.Pgo.Pgo.PGOImageTag, - ContainerResources: operator.GetResourcesJSON(cluster.Spec.BackrestResources, cluster.Spec.BackrestLimits), - BackrestRepoClaimName: fmt.Sprintf(util.BackrestRepoPVCName, cluster.Name), - SshdSecretsName: fmt.Sprintf(util.BackrestRepoSecretName, cluster.Name), - PGbackrestDBHost: cluster.Name, - PgbackrestRepoPath: util.GetPGBackRestRepoPath(*cluster), - PgbackrestDBPath: "/pgdata/" + cluster.Name, - PgbackrestPGPort: cluster.Spec.Port, - SshdPort: operator.Pgo.Cluster.BackrestPort, - PgbackrestStanza: "db", - PgbackrestRepoType: operator.GetRepoType(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]), - PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cluster, clientset, namespace), - Name: fmt.Sprintf(util.BackrestRepoServiceName, cluster.Name), - ClusterName: cluster.Name, - SecurityContext: operator.GetPodSecurityContext(cluster.Spec.BackrestStorage.GetSupplementalGroups()), - Replicas: replicas, - PodAnnotations: operator.GetAnnotations(cluster, crv1.ClusterAnnotationBackrest), - PodAntiAffinity: operator.GetPodAntiAffinity(cluster, - crv1.PodAntiAffinityDeploymentPgBackRest, cluster.Spec.PodAntiAffinity.PgBackRest), - PodAntiAffinityLabelName: config.LABEL_POD_ANTI_AFFINITY, - PodAntiAffinityLabelValue: string(operator.GetPodAntiAffinityType(cluster, - crv1.PodAntiAffinityDeploymentPgBackRest, cluster.Spec.PodAntiAffinity.PgBackRest)), - } - - return &repoFields -} - -// UpdateAnnotations updates the annotations in the "template" portion of a -// pgBackRest deployment -func UpdateAnnotations(clientset kubernetes.Interface, cluster *crv1.Pgcluster, - annotations map[string]string) error { - // get a list of all of the instance deployments for the cluster - deployment, err := operator.GetBackrestDeployment(clientset, cluster) - - if err != nil { - return err - } - - // now update the pgBackRest deployment - log.Debugf("update annotations on [%s]", deployment.Name) - log.Debugf("new annotations: %v", annotations) - - deployment.Spec.Template.SetAnnotations(annotations) - - // finally, update the Deployment. 
If something errors, we'll log that there - // was an error, but continue with processing the other deployments - if _, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(deployment); err != nil { - return err - } - - return nil -} - -// UpdateResources updates the pgBackRest repository Deployment to reflect any -// resource updates -func UpdateResources(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error { - // get a list of all of the instance deployments for the cluster - deployment, err := operator.GetBackrestDeployment(clientset, cluster) - - if err != nil { - return err - } - - // first, initialize the requests/limits resource to empty Resource Lists - deployment.Spec.Template.Spec.Containers[0].Resources.Requests = v1.ResourceList{} - deployment.Spec.Template.Spec.Containers[0].Resources.Limits = v1.ResourceList{} - - // now, simply deep copy the values from the CRD - if cluster.Spec.BackrestResources != nil { - deployment.Spec.Template.Spec.Containers[0].Resources.Requests = cluster.Spec.BackrestResources.DeepCopy() - } - - if cluster.Spec.BackrestLimits != nil { - deployment.Spec.Template.Spec.Containers[0].Resources.Limits = cluster.Spec.BackrestLimits.DeepCopy() - } - - // update the deployment with the new values - if _, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(deployment); err != nil { - return err - } - - return nil -} - -func createService(clientset kubernetes.Interface, fields *RepoServiceTemplateFields, namespace string) error { - var err error - - var b bytes.Buffer - - _, err = clientset.CoreV1().Services(namespace).Get(fields.Name, metav1.GetOptions{}) - if err != nil { - - err = config.PgoBackrestRepoServiceTemplate.Execute(&b, fields) - if err != nil { - log.Error(err.Error()) - return err - } - - if operator.CRUNCHY_DEBUG { - config.PgoBackrestRepoServiceTemplate.Execute(os.Stdout, fields) - } - - s := v1.Service{} - err = json.Unmarshal(b.Bytes(), &s) - if err != nil { - log.Error("error unmarshalling repo service json into repo Service " + err.Error()) - return err - } - - _, err = clientset.CoreV1().Services(namespace).Create(&s) - } - - return err -} - -// S3RepoTypeCLIOptionExists detects if a S3 restore was requested via the '--repo-type' -// command line option -func S3RepoTypeCLIOptionExists(opts string) bool { - return s3RepoTypeRegex.MatchString(opts) -} diff --git a/internal/operator/backrest/restore.go b/internal/operator/backrest/restore.go deleted file mode 100644 index 9cb0ebae1c..0000000000 --- a/internal/operator/backrest/restore.go +++ /dev/null @@ -1,269 +0,0 @@ -package backrest - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "errors" - "fmt" - "regexp" - "strconv" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - - log "github.com/sirupsen/logrus" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/apimachinery/pkg/util/wait" -) - -// restoreTargetRegex defines a regex to detect if a restore target has been specified -// for pgBackRest using the '--target' option -var restoreTargetRegex = regexp.MustCompile("--target(=| +)") - -type BackrestRestoreJobTemplateFields struct { - JobName string - ClusterName string - WorkflowID string - ToClusterPVCName string - SecurityContext string - PGOImagePrefix string - PGOImageTag string - CommandOpts string - PITRTarget string - PgbackrestStanza string - PgbackrestDBPath string - PgbackrestRepo1Path string - PgbackrestRepo1Host string - PgbackrestS3EnvVars string - NodeSelector string - Tablespaces string - TablespaceVolumes string - TablespaceVolumeMounts string -} - -// UpdatePGClusterSpecForRestore updates the spec for pgcluster resource provided as need to -// perform a restore -func UpdatePGClusterSpecForRestore(clientset kubeapi.Interface, cluster *crv1.Pgcluster, - task *crv1.Pgtask) { - - cluster.Spec.PGDataSource.RestoreFrom = cluster.GetName() - - restoreOpts := task.Spec.Parameters[config.LABEL_BACKREST_RESTORE_OPTS] - - // set the proper target for the restore job - pitrTarget := task.Spec.Parameters[config.LABEL_BACKREST_PITR_TARGET] - if pitrTarget != "" && !restoreTargetRegex.MatchString(restoreOpts) { - restoreOpts = fmt.Sprintf("%s --target=%s", restoreOpts, strconv.Quote(pitrTarget)) - } - - // set the proper backrest storage type for the restore job - storageType := task.Spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] - if storageType != "" && !strings.Contains(restoreOpts, "--repo-type") { - restoreOpts = fmt.Sprintf("%s --repo-type=%s", restoreOpts, storageType) - } - - cluster.Spec.PGDataSource.RestoreOpts = restoreOpts - - // set the proper node affinity for the restore job - cluster.Spec.UserLabels[config.LABEL_NODE_LABEL_KEY] = - task.Spec.Parameters[config.LABEL_NODE_LABEL_KEY] - cluster.Spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = - task.Spec.Parameters[config.LABEL_NODE_LABEL_VALUE] - - return -} - -// PrepareClusterForRestore prepares a PostgreSQL cluster for a restore. This includes deleting -// variousresources (Deployments, Jobs, PVCs & pgtasks) while also patching various custome -// resources (pgreplicas) as needed to perform a restore. 
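
// A minimal, standalone sketch for illustration only: it mirrors the option
// handling in UpdatePGClusterSpecForRestore above, where a PITR target is
// appended (quoted) only if the user has not already supplied --target, and a
// --repo-type is appended only if one is not already present. The parameter
// values in main are hypothetical.
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// same pattern as restoreTargetRegex: matches "--target=..." or "--target ..."
var targetOpt = regexp.MustCompile("--target(=| +)")

func buildRestoreOpts(userOpts, pitrTarget, storageType string) string {
	opts := userOpts
	if pitrTarget != "" && !targetOpt.MatchString(opts) {
		// strconv.Quote protects targets that contain spaces, e.g. timestamps
		opts = fmt.Sprintf("%s --target=%s", opts, strconv.Quote(pitrTarget))
	}
	if storageType != "" && !strings.Contains(opts, "--repo-type") {
		opts = fmt.Sprintf("%s --repo-type=%s", opts, storageType)
	}
	return opts
}

func main() {
	fmt.Println(buildRestoreOpts("--delta", "2020-01-01 00:00:00", "s3"))
	fmt.Println(buildRestoreOpts(`--target="2020-01-01"`, "2020-02-02", "local"))
}
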
-func PrepareClusterForRestore(clientset kubeapi.Interface, cluster *crv1.Pgcluster, - task *crv1.Pgtask) (*crv1.Pgcluster, error) { - - var err error - var patchedCluster *crv1.Pgcluster - namespace := cluster.Namespace - clusterName := cluster.Name - log.Debugf("restore workflow: started for cluster %s", clusterName) - - // prepare the pgcluster CR for restore - clusterPatch := fmt.Sprintf(`{"metadata":{"annotations":{"%s":"","%s":"%s"},`+ - `"labels":{"%s":"%s"}},"spec":{"status":""},"status":{"message":"%s","state":"%s"}}`, - config.ANNOTATION_BACKREST_RESTORE, config.ANNOTATION_CURRENT_PRIMARY, clusterName, - config.LABEL_DEPLOYMENT_NAME, clusterName, "Cluster is being restored", - crv1.PgclusterStateRestore) - if patchedCluster, err = clientset.CrunchydataV1().Pgclusters(namespace).Patch(clusterName, - types.MergePatchType, []byte(clusterPatch)); err != nil { - log.Errorf("pgtask Controller: " + err.Error()) - return nil, err - } - log.Debugf("restore workflow: patched pgcluster %s for restore", clusterName) - - // find all pgreplica CR's - replicas, err := clientset.CrunchydataV1().Pgreplicas(namespace).List(metav1.ListOptions{ - LabelSelector: fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, clusterName), - }) - if err != nil { - return nil, err - } - - // prepare pgreplica CR's for restore - replicaPatch := fmt.Sprintf(`{"metadata":{"annotations":{"%s":null}},"spec":{"status":""},`+ - `"status":{"message":"%s","state":"%s"}}`, config.ANNOTATION_PGHA_BOOTSTRAP_REPLICA, - "Cluster is being restored", crv1.PgclusterStateRestore) - for _, r := range replicas.Items { - if _, err := clientset.CrunchydataV1().Pgreplicas(namespace).Patch(r.GetName(), - types.MergePatchType, []byte(replicaPatch)); err != nil { - return nil, err - } - } - log.Debugf("restore workflow: patched replicas in cluster %s for restore", clusterName) - - // find all current pg deployments - pgInstances, err := clientset.AppsV1().Deployments(namespace).List(metav1.ListOptions{ - LabelSelector: fmt.Sprintf("%s=%s,%s", config.LABEL_PG_CLUSTER, clusterName, - config.LABEL_PG_DATABASE), - }) - if err != nil { - return nil, err - } - - // delete all the primary and replica deployments - if err := clientset.AppsV1().Deployments(namespace).DeleteCollection(&metav1.DeleteOptions{}, - metav1.ListOptions{ - LabelSelector: fmt.Sprintf("%s=%s,%s", config.LABEL_PG_CLUSTER, clusterName, - config.LABEL_PG_DATABASE), - }); err != nil { - return nil, err - } - log.Debugf("restore workflow: deleted primary and replicas %v", pgInstances) - - // delete all existing jobs - deletePropagation := metav1.DeletePropagationBackground - if err := clientset.BatchV1().Jobs(namespace).DeleteCollection( - &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}, - metav1.ListOptions{ - LabelSelector: fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, clusterName), - }); err != nil { - return nil, err - } - log.Debugf("restore workflow: deleted all existing jobs for cluster %s", clusterName) - - // delete all PostgreSQL PVCs (the primary and all replica PVCs) - for _, deployment := range pgInstances.Items { - err := clientset. - CoreV1().PersistentVolumeClaims(namespace). - Delete(deployment.GetName(), &metav1.DeleteOptions{}) - if err != nil && !kerrors.IsNotFound(err) { - return nil, err - } - log.Debugf("restore workflow: deleted primary or replica PVC %s", deployment.GetName()) - } - - // Wait for all PG PVCs to be removed. If unable to verify that all PVCs have been - // removed, then the restore cannot proceed the function returns. 
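
// A minimal, standalone sketch for illustration only: the restore preparation
// above assembles JSON merge patches with fmt.Sprintf; this shows the same
// shape of document built from maps and encoding/json, which avoids manual
// escaping. The annotation/label keys and status values here are placeholders,
// not the real config/crv1 constants.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	clusterName := "hippo" // hypothetical cluster name

	patch := map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]interface{}{
				"example.com/backrest-restore": "",          // stand-in for config.ANNOTATION_BACKREST_RESTORE
				"example.com/current-primary":  clusterName, // stand-in for config.ANNOTATION_CURRENT_PRIMARY
			},
			"labels": map[string]interface{}{
				"deployment-name": clusterName, // stand-in for config.LABEL_DEPLOYMENT_NAME
			},
		},
		"spec": map[string]interface{}{"status": ""},
		"status": map[string]interface{}{
			"message": "Cluster is being restored",
			"state":   "Restoring", // stand-in for crv1.PgclusterStateRestore
		},
	}

	b, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	// the resulting bytes would be submitted with types.MergePatchType, as above
	fmt.Println(string(b))
}
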
- if err := wait.Poll(time.Second/2, time.Minute*3, func() (bool, error) { - notFound := true - for _, deployment := range pgInstances.Items { - if _, err := clientset.CoreV1().PersistentVolumeClaims(namespace). - Get(deployment.GetName(), metav1.GetOptions{}); err == nil { - notFound = false - } - } - return notFound, nil - }); err != nil { - return nil, err - } - log.Debugf("restore workflow: finished waiting for PVCs for cluster %s to be removed", - clusterName) - - // Delete the DCS and leader ConfigMaps. These will be recreated during the restore. - configMaps := []string{fmt.Sprintf("%s-config", clusterName), - fmt.Sprintf("%s-leader", clusterName)} - for _, c := range configMaps { - if err := clientset.CoreV1().ConfigMaps(namespace).Delete(c, - &metav1.DeleteOptions{}); err != nil && !kerrors.IsNotFound(err) { - return nil, err - } - } - log.Debugf("restore workflow: deleted 'config' and 'leader' ConfigMaps for cluster %s", - clusterName) - - initPatch := `{"data":{"init":"true"}}` - if _, err := clientset.CoreV1().ConfigMaps(namespace).Patch(fmt.Sprintf("%s-pgha-config", - clusterName), types.MergePatchType, - []byte(initPatch)); err != nil { - return nil, err - } - log.Debugf("restore workflow: set 'init' flag to 'true' for cluster %s", - clusterName) - - return patchedCluster, nil -} - -// UpdateWorkflow is responsible for updating the workflow for a restore -func UpdateWorkflow(clientset pgo.Interface, workflowID, namespace, status string) error { - //update workflow - log.Debugf("restore workflow: update workflow %s", workflowID) - selector := crv1.PgtaskWorkflowID + "=" + workflowID - taskList, err := clientset.CrunchydataV1().Pgtasks(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Errorf("restore workflow error: could not get workflow %s", workflowID) - return err - } - if len(taskList.Items) != 1 { - log.Errorf("restore workflow error: workflow %s not found", workflowID) - return errors.New("restore workflow error: workflow not found") - } - - task := taskList.Items[0] - task.Spec.Parameters[status] = time.Now().Format(time.RFC3339) - _, err = clientset.CrunchydataV1().Pgtasks(namespace).Update(&task) - if err != nil { - log.Errorf("restore workflow error: could not update workflow %s to status %s", workflowID, status) - return err - } - return err -} - -// PublishRestore is responsible for publishing the 'RestoreCluster' event for a restore -func PublishRestore(id, clusterName, username, namespace string) { - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventRestoreClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: username, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventRestoreCluster, - }, - Clustername: clusterName, - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } - -} diff --git a/internal/operator/backrest/stanza.go b/internal/operator/backrest/stanza.go deleted file mode 100644 index d7d55615dd..0000000000 --- a/internal/operator/backrest/stanza.go +++ /dev/null @@ -1,135 +0,0 @@ -package backrest - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// CleanStanzaCreateResources deletes any existing stanza-create pgtask and job. Useful during a -// restore when an existing stanza-create pgtask or Job might still be present from initial -// creation of the cluster. -func CleanStanzaCreateResources(namespace, clusterName string, clientset kubeapi.Interface) error { - - resourceName := clusterName + "-" + crv1.PgtaskBackrestStanzaCreate - - if err := clientset.CrunchydataV1().Pgtasks(namespace).Delete(resourceName, - &metav1.DeleteOptions{}); err != nil && !kerrors.IsNotFound(err) { - return err - } - - // job name is the same as the task name - deletePropagation := metav1.DeletePropagationBackground - if err := clientset.BatchV1().Jobs(namespace).Delete(resourceName, - &metav1.DeleteOptions{ - PropagationPolicy: &deletePropagation, - }); err != nil && !kerrors.IsNotFound(err) { - return err - } - - return nil -} - -func StanzaCreate(namespace, clusterName string, clientset kubeapi.Interface) { - - taskName := clusterName + "-" + crv1.PgtaskBackrestStanzaCreate - - //look up the backrest-repo pod name - selector := config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_PGO_BACKREST_REPO + "=true" - pods, err := clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if len(pods.Items) != 1 { - log.Errorf("pods len != 1 for cluster %s", clusterName) - return - } - if err != nil { - log.Error(err) - return - } - - podName := pods.Items[0].Name - - // get the cluster to determine the proper storage type - cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - return - } - - //create the stanza-create task - spec := crv1.PgtaskSpec{} - spec.Name = taskName - - jobName := clusterName + "-" + crv1.PgtaskBackrestStanzaCreate - - spec.TaskType = crv1.PgtaskBackrest - spec.Parameters = make(map[string]string) - spec.Parameters[config.LABEL_JOB_NAME] = jobName - spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName - spec.Parameters[config.LABEL_POD_NAME] = podName - spec.Parameters[config.LABEL_CONTAINER_NAME] = "pgo-backrest-repo" - // pass along the appropriate image prefix for the backup task - // this will be used by the associated backrest job - spec.Parameters[config.LABEL_IMAGE_PREFIX] = util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix) - spec.Parameters[config.LABEL_BACKREST_COMMAND] = crv1.PgtaskBackrestStanzaCreate - - // Handle stanza creation for a standby cluster, which requires some additional consideration. 
- // This includes setting the pgBackRest storage type and command options as needed to support - // stanza creation for a standby cluster. If not a standby cluster then simply set the - // storage type and options as usual. - if cluster.Spec.Standby { - // Since this is a standby cluster, if local storage is specified then ensure stanza - // creation is for the local repo only. The stanza for the S3 repo will have already been - // created by the cluster the standby is replicating from, and therefore does not need to - // be attempted again. - if strings.Contains(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], "local") { - spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = "local" - } - // Since the primary will not be directly accessible to the standby cluster, create the - // stanza in offline mode - spec.Parameters[config.LABEL_BACKREST_OPTS] = "--no-online" - } else { - spec.Parameters[config.LABEL_BACKREST_STORAGE_TYPE] = - cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE] - spec.Parameters[config.LABEL_BACKREST_OPTS] = "" - } - - // Get 'true' or 'false' for setting the pgBackRest S3 verify TLS value - spec.Parameters[config.LABEL_BACKREST_S3_VERIFY_TLS] = operator.GetS3VerifyTLSSetting(cluster) - - newInstance := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: taskName, - }, - Spec: spec, - } - - newInstance.ObjectMeta.Labels = make(map[string]string) - newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = clusterName - - _, err = clientset.CrunchydataV1().Pgtasks(namespace).Create(newInstance) - if err != nil { - log.Error(err) - } - -} diff --git a/internal/operator/cluster/clone.go b/internal/operator/cluster/clone.go deleted file mode 100644 index ae33ffd0cb..0000000000 --- a/internal/operator/cluster/clone.go +++ /dev/null @@ -1,1054 +0,0 @@ -package cluster - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "errors" - "fmt" - "strconv" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/operator/backrest" - "github.com/crunchydata/postgres-operator/internal/operator/pvc" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - "github.com/crunchydata/postgres-operator/pkg/events" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - log "github.com/sirupsen/logrus" - batch_v1 "k8s.io/api/batch/v1" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -const ( - pgBackRestRepoSyncContainerImageName = "%s/pgo-backrest-repo-sync:%s" - pgBackRestRepoSyncJobNamePrefix = "pgo-backrest-repo-sync-%s-%s" - pgBackRestStanza = "db" // this is hardcoded throughout... - patchResource = "pgtasks" - patchURL = "/spec/status" - targetClusterPGDATAPath = "/pgdata/%s" -) - -// Clone allows for one to clone the data from an existing cluster to a new -// cluster in the Operator. It works by doing the following: -// -// 1. Create some PVCs that will be utilized by the new cluster -// 2. Syncing (i.e. using rsync) the pgBackRest repository from the old cluster -// to the new cluster -// 3. perform a pgBackRest delta restore to the new PVC -// 4. Create a new cluster by using the old cluster as a template and providing -// the specifications to the new cluster, with a few "opinionated" items (e.g. -// copying over the secrets) -func Clone(clientset kubeapi.Interface, restConfig *rest.Config, namespace string, task *crv1.Pgtask) { - // have a guard -- if the task is completed, don't proceed furter - if task.Spec.Status == crv1.CompletedStatus { - log.Warn(fmt.Sprintf("pgtask [%s] has already completed", task.Spec.Name)) - return - } - - switch task.Spec.TaskType { - // The first step is to ensure that we have PVCs available for creating the - // cluster, so then we can kick off the first job which is to copy the - // contents of the pgBackRes repo from the source cluster to a destination - // cluster - case crv1.PgtaskCloneStep1: - cloneStep1(clientset, namespace, task) - // The second step is to kick off a pgBackRest restore job to the target - // cluster PVC - case crv1.PgtaskCloneStep2: - cloneStep2(clientset, restConfig, namespace, task) - // The third step is to create the new cluster! 
- case crv1.PgtaskCloneStep3: - cloneStep3(clientset, namespace, task) - } -} - -// PublishCloneEvent lets one publish an event related to the clone process -func PublishCloneEvent(eventType string, namespace string, task *crv1.Pgtask, errorMessage string) { - // get the boilerplate identifiers - sourceClusterName, targetClusterName, workflowID := getCloneTaskIdentifiers(task) - // set up the event header - eventHeader := events.EventHeader{ - Namespace: namespace, - Username: task.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: []string{events.EventTopicCluster}, - Timestamp: time.Now(), - EventType: eventType, - } - // get the event format itself and publish it based on the event type - switch eventType { - case events.EventCloneCluster: - publishCloneClusterEvent(eventHeader, sourceClusterName, targetClusterName, workflowID) - case events.EventCloneClusterCompleted: - publishCloneClusterCompletedEvent(eventHeader, sourceClusterName, targetClusterName, workflowID) - case events.EventCloneClusterFailure: - publishCloneClusterFailureEvent(eventHeader, sourceClusterName, targetClusterName, workflowID, errorMessage) - } -} - -// UpdateCloneWorkflow updates a Workflow with the current state of the clone task -func UpdateCloneWorkflow(clientset pgo.Interface, namespace, workflowID, status string) error { - log.Debugf("clone workflow: update workflow [%s]", workflowID) - - // we have to look up the name of the workflow bt the workflow ID, which - // involves using a selector - selector := fmt.Sprintf("%s=%s", crv1.PgtaskWorkflowID, workflowID) - taskList, err := clientset.CrunchydataV1().Pgtasks(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Errorf("clone workflow: could not get workflow [%s]", workflowID) - return err - } - - // if there is not one unique result, then we should display an error here - if len(taskList.Items) != 1 { - errorMsg := fmt.Sprintf("clone workflow: workflow [%s] not found", workflowID) - log.Errorf(errorMsg) - return errors.New(errorMsg) - } - - // get the first task and update on the current status based on how it is - // progressing - task := taskList.Items[0] - task.Spec.Parameters[status] = time.Now().Format(time.RFC3339) - - if _, err := clientset.CrunchydataV1().Pgtasks(namespace).Update(&task); err != nil { - log.Errorf("clone workflow: could not update workflow [%s] to status [%s]", workflowID, status) - return err - } - - return nil -} - -// cloneStep1 covers the creation of the PVCs for the new PostgreSQL cluster, -// as well as sets up and executes a job to copy (via rsync) the PgBackRest -// repository from the source cluster to the destination cluster -func cloneStep1(clientset kubeapi.Interface, namespace string, task *crv1.Pgtask) { - sourceClusterName, targetClusterName, workflowID := getCloneTaskIdentifiers(task) - - log.Debugf("clone step 1 called: namespace:[%s] sourcecluster:[%s] targetcluster:[%s] workflowid:[%s]", - namespace, sourceClusterName, targetClusterName, workflowID) - - // before we get stared, let's ensure we publish an event that the clone - // workflow has begun - // (eventType string, namespace string, task *crv1.Pgtask, errorMessage string) - PublishCloneEvent(events.EventCloneCluster, namespace, task, "") - - // first, update the workflow to indicate that we are creating the PVCs - // update the workflow to indicate that the cluster is being created - if err := UpdateCloneWorkflow(clientset, namespace, workflowID, crv1.PgtaskWorkflowCloneCreatePVC); err != nil { - log.Error(err) - // if 
updating the workflow fails, we can continue onward - } - - // get the information about the current pgcluster by name, to ensure it - // exists - sourcePgcluster, err := getSourcePgcluster(clientset, namespace, sourceClusterName) - - // if there is an error getting the pgcluster, abort here - if err != nil { - log.Error(err) - // publish a failure event - errorMessage := fmt.Sprintf("Could not find source cluster: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - sourceClusterBackrestStorageType := sourcePgcluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE] - cloneBackrestStorageType := task.Spec.Parameters["backrestStorageType"] - // if 's3' storage was selected for the clone, ensure it is enabled in the current pg cluster. - // also, if 'local' was selected, or if no storage type was selected, ensure the cluster is using - // local storage - err = util.ValidateBackrestStorageTypeOnBackupRestore(cloneBackrestStorageType, - sourceClusterBackrestStorageType, true) - if err != nil { - log.Error(err) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, err.Error()) - return - } - - // Ensure that there does *not* already exist a Pgcluster for the target - if found := checkTargetPgCluster(clientset, namespace, targetClusterName); found { - log.Errorf("[%s] already exists", targetClusterName) - errorMessage := fmt.Sprintf("Not cloning the cluster: %s already exists", targetClusterName) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - // create PVCs for pgBackRest and PostgreSQL - if _, _, _, _, err = createPVCs(clientset, task, namespace, *sourcePgcluster, targetClusterName); err != nil { - log.Error(err) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, err.Error()) - return - } - - log.Debug("clone step 1: created pvcs") - - // awesome. 
now it's time to synchronize the source and targe cluster - // pgBackRest repositories - - // update the workflow to indicate that we are going to sync the repositories - if err := UpdateCloneWorkflow(clientset, namespace, workflowID, crv1.PgtaskWorkflowCloneSyncRepo); err != nil { - log.Error(err) - // if updating the workflow fails, we can continue onward - } - - // now, synchronize the repositories - if jobName, err := createPgBackRestRepoSyncJob(clientset, namespace, task, *sourcePgcluster); err == nil { - log.Debugf("clone step 1: created pgbackrest repo sync job: [%s]", jobName) - } - - // finally, update the pgtask to indicate that it's completed - patchPgtaskComplete(clientset, namespace, task.Spec.Name) -} - -// cloneStep2 creates a pgBackRest restore job for the new PostgreSQL cluster by -// running a restore from the new target cluster pgBackRest repository to the -// new target cluster PVC -func cloneStep2(clientset kubeapi.Interface, restConfig *rest.Config, namespace string, task *crv1.Pgtask) { - sourceClusterName, targetClusterName, workflowID := getCloneTaskIdentifiers(task) - - log.Debugf("clone step 2 called: namespace:[%s] sourcecluster:[%s] targetcluster:[%s] workflowid:[%s]", - namespace, sourceClusterName, targetClusterName, workflowID) - - // get the information about the current pgcluster by name, to ensure it - // exists, as we still need information about the PrimaryStorage - sourcePgcluster, err := getSourcePgcluster(clientset, namespace, sourceClusterName) - - // if there is an error getting the pgcluster, abort here - if err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not find source cluster: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - // interpret the storage specs again. the volumes were already created during - // a prior step. - _, dataVolume, walVolume, tablespaceVolumes, err := createPVCs( - clientset, task, namespace, *sourcePgcluster, targetClusterName) - if err != nil { - log.Error(err) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, err.Error()) - return - } - - // Retrieve current S3 key & key secret - s3Creds, err := util.GetS3CredsFromBackrestRepoSecret(clientset, namespace, sourcePgcluster.Name) - if err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Unable to get S3 key and key secret from source cluster "+ - "backrest repo secret: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - // we need to set up the secret for the pgBackRest repo. This is the place to - // do it - if err := util.CreateBackrestRepoSecrets(clientset, - util.BackrestRepoConfig{ - BackrestS3CA: s3Creds.AWSS3CA, - BackrestS3Key: s3Creds.AWSS3Key, - BackrestS3KeySecret: s3Creds.AWSS3KeySecret, - ClusterName: targetClusterName, - ClusterNamespace: namespace, - OperatorNamespace: operator.PgoNamespace, - }); err != nil { - log.Error(err) - // publish a failure event - errorMessage := fmt.Sprintf("Could not find source cluster: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - // ok, time for a little bit of grottiness. Ideally here we would attempt to - // bring up the pgBackRest repo and allow the Operator to respond to this - // event in an...evented way. 
However, for now, we're going to set a loop and - // wait for the pgBackRest deployment to come up - // to do this, we are going to mock out a targetPgcluster with the exact - // attributes we need to make this successful - targetPgcluster := crv1.Pgcluster{ - ObjectMeta: metav1.ObjectMeta{ - Name: targetClusterName, - Namespace: namespace, - Labels: map[string]string{ - config.LABEL_BACKREST: "true", - }, - }, - Spec: crv1.PgclusterSpec{ - BackrestS3Bucket: sourcePgcluster.Spec.BackrestS3Bucket, - BackrestS3Endpoint: sourcePgcluster.Spec.BackrestS3Endpoint, - BackrestS3Region: sourcePgcluster.Spec.BackrestS3Region, - Port: sourcePgcluster.Spec.Port, - PrimaryStorage: sourcePgcluster.Spec.PrimaryStorage, - CCPImagePrefix: sourcePgcluster.Spec.CCPImagePrefix, - PGOImagePrefix: sourcePgcluster.Spec.PGOImagePrefix, - UserLabels: map[string]string{ - config.LABEL_BACKREST_STORAGE_TYPE: sourcePgcluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], - }, - }, - } - - // create the deployment without creating the PVC given we've already done that - if err := backrest.CreateRepoDeployment(clientset, &targetPgcluster, false, false, - 1); err != nil { - log.Error(err) - // publish a failure event - errorMessage := fmt.Sprintf("Could not create new pgbackrest repo: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - // ok, let's wait for the deployment to come up...per above note. - backrestRepoDeploymentName := fmt.Sprintf(util.BackrestRepoDeploymentName, targetClusterName) - if err := waitForDeploymentReady(clientset, namespace, backrestRepoDeploymentName, 30, 3); err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not start pgbackrest repo: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - // set up a map of the names of the tablespaces as well as the storage classes - tablespaceStorageTypeMap := operator.GetTablespaceStorageTypeMap(sourcePgcluster.Spec.TablespaceMounts) - - // combine supplemental groups from all volumes - var supplementalGroups []int64 - supplementalGroups = append(supplementalGroups, dataVolume.SupplementalGroups...) - for _, v := range tablespaceVolumes { - supplementalGroups = append(supplementalGroups, v.SupplementalGroups...) 
- } - - backrestRestoreJobFields := backrest.BackrestRestoreJobTemplateFields{ - JobName: fmt.Sprintf("restore-%s-%s", targetClusterName, util.RandStringBytesRmndr(4)), - ClusterName: targetClusterName, - SecurityContext: operator.GetPodSecurityContext(supplementalGroups), - ToClusterPVCName: targetClusterName, // the PVC name should match that of the target cluster - WorkflowID: workflowID, - // use a delta restore in order to optimize how the restore occurs - CommandOpts: "--delta", - // PITRTarget is not supported in the first iteration of clone - PGOImagePrefix: util.GetValueOrDefault(sourcePgcluster.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix), - PGOImageTag: operator.Pgo.Pgo.PGOImageTag, - PgbackrestStanza: pgBackRestStanza, - PgbackrestDBPath: fmt.Sprintf(targetClusterPGDATAPath, targetClusterName), - PgbackrestRepo1Path: util.GetPGBackRestRepoPath(targetPgcluster), - PgbackrestRepo1Host: fmt.Sprintf(util.BackrestRepoServiceName, targetClusterName), - PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*sourcePgcluster, clientset, namespace), - - TablespaceVolumes: operator.GetTablespaceVolumesJSON(targetClusterName, tablespaceStorageTypeMap), - TablespaceVolumeMounts: operator.GetTablespaceVolumeMountsJSON(tablespaceStorageTypeMap), - } - - // If the pgBackRest repo type is set to 's3', pass in the relevant command line argument. - // This is used in place of the environment variable so that it works as expected with - // the --no-repo1-s3-verify-tls flag, added below - pgBackrestRepoType := operator.GetRepoType(task.Spec.Parameters["backrestStorageType"]) - if pgBackrestRepoType == "s3" && - !strings.Contains(backrestRestoreJobFields.CommandOpts, "--repo1-type") && - !strings.Contains(backrestRestoreJobFields.CommandOpts, "--repo-type") { - backrestRestoreJobFields.CommandOpts = strings.TrimSpace(backrestRestoreJobFields.CommandOpts + " --repo1-type=s3") - } - - // If TLS verification is disabled for this pgcluster, pass in the appropriate - // flag to the restore command. Otherwise, leave the default behavior, which will - // perform the normal certificate validation. - verifyTLS, _ := strconv.ParseBool(operator.GetS3VerifyTLSSetting(&targetPgcluster)) - if pgBackrestRepoType == "s3" && !verifyTLS && - !strings.Contains(backrestRestoreJobFields.CommandOpts, "--no-repo1-s3-verify-tls") { - backrestRestoreJobFields.CommandOpts = strings.TrimSpace(backrestRestoreJobFields.CommandOpts + " --no-repo1-s3-verify-tls") - } - - if sourcePgcluster.Spec.WALStorage.StorageType != "" { - arg, err := getLinkMap(clientset, restConfig, *sourcePgcluster, targetClusterName) - if err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not determine PostgreSQL version: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - backrestRestoreJobFields.CommandOpts += " " + arg - } - - // substitute the variables into the BackrestRestore job template - var backrestRestoreJobDoc bytes.Buffer - - if err = config.BackrestRestorejobTemplate.Execute(&backrestRestoreJobDoc, backrestRestoreJobFields); err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not create pgbackrest restore template: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - // create the pgBackRest restore job! 
- job := batch_v1.Job{} - - if err := json.Unmarshal(backrestRestoreJobDoc.Bytes(), &job); err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not turn pgbackrest restore template into JSON: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - if sourcePgcluster.Spec.WALStorage.StorageType != "" { - operator.AddWALVolumeAndMountsToBackRest(&job.Spec.Template.Spec, walVolume) - } - - operator.AddBackRestConfigVolumeAndMounts(&job.Spec.Template.Spec, sourcePgcluster.Name, sourcePgcluster.Spec.BackrestConfig) - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_PGO_BACKREST_RESTORE, - &job.Spec.Template.Spec.Containers[0]) - - // update the job annotations to include information about the source and - // target cluster - if job.ObjectMeta.Annotations == nil { - job.ObjectMeta.Annotations = map[string]string{} - } - - job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_BACKREST_PVC_SIZE] = task.Spec.Parameters[util.CloneParameterBackrestPVCSize] - job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_ENABLE_METRICS] = task.Spec.Parameters[util.CloneParameterEnableMetrics] - job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_PVC_SIZE] = task.Spec.Parameters[util.CloneParameterPVCSize] - job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_SOURCE_CLUSTER_NAME] = sourcePgcluster.Spec.ClusterName - job.ObjectMeta.Annotations[config.ANNOTATION_CLONE_TARGET_CLUSTER_NAME] = targetClusterName - // also add the label to indicate this is also part of a clone job! - if job.ObjectMeta.Labels == nil { - job.ObjectMeta.Labels = map[string]string{} - } - job.ObjectMeta.Labels[config.LABEL_PGO_CLONE_STEP_2] = "true" - job.ObjectMeta.Labels[config.LABEL_PGOUSER] = task.ObjectMeta.Labels[config.LABEL_PGOUSER] - - // create the Job in Kubernetes - if j, err := clientset.BatchV1().Jobs(namespace).Create(&job); err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not create pgbackrest restore job: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - } else { - log.Debugf("clone step 2: created restore job [%s]", j.Name) - } - - // finally, update the pgtask to indicate it's complete - patchPgtaskComplete(clientset, namespace, task.Spec.Name) -} - -// cloneStep3 creates the new cluster by creating a new Pgcluster -func cloneStep3(clientset kubeapi.Interface, namespace string, task *crv1.Pgtask) { - sourceClusterName, targetClusterName, workflowID := getCloneTaskIdentifiers(task) - - log.Debugf("clone step 3 called: namespace:[%s] sourcecluster:[%s] targetcluster:[%s] workflowid:[%s]", - namespace, sourceClusterName, targetClusterName, workflowID) - - // get the information about the current pgcluster by name, to ensure we can - // copy over some of the necessary cluster attributes - sourcePgcluster, err := getSourcePgcluster(clientset, namespace, sourceClusterName) - - // if there is an error getting the pgcluster, abort here - if err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not find source cluster: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - // first, clean up any existing pgBackRest repo deployment and services, as - // these will be recreated - backrestRepoDeploymentName := fmt.Sprintf(util.BackrestRepoDeploymentName, targetClusterName) - // ignore errors here...we can let the errors occur later on, e.g. 
if there is - // a failure to delete - deletePropagation := metav1.DeletePropagationForeground - _ = clientset.AppsV1().Deployments(namespace).Delete(backrestRepoDeploymentName, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - _ = clientset.CoreV1().Services(namespace).Delete(backrestRepoDeploymentName, &metav1.DeleteOptions{}) - - // let's actually wait to see if they are deleted - if err := waitForDeploymentDelete(clientset, namespace, backrestRepoDeploymentName, 30, 3); err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not remove temporary pgbackrest repo: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return - } - - // and go forth and create the cluster! - if err := createCluster(clientset, task, *sourcePgcluster, namespace, targetClusterName, workflowID); err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Could not create cloned cluster: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - } - - // we did all we can do with the clone! publish an event - PublishCloneEvent(events.EventCloneClusterCompleted, namespace, task, "") - - // finally, update the pgtask to indicate it's complete - patchPgtaskComplete(clientset, namespace, task.Spec.Name) -} - -// createPgBackRestRepoSyncJob prepares and creates the job that will use -// rsync to synchronize two pgBackRest repositories, i.e. it will copy the files -// from the source PostgreSQL cluster to the pgBackRest repository in the target -// cluster -func createPgBackRestRepoSyncJob(clientset kubernetes.Interface, namespace string, task *crv1.Pgtask, sourcePgcluster crv1.Pgcluster) (string, error) { - targetClusterName := task.Spec.Parameters["targetClusterName"] - workflowID := task.Spec.Parameters[crv1.PgtaskWorkflowID] - // set the name of the job, with the "entropy" that we add - jobName := fmt.Sprintf(pgBackRestRepoSyncJobNamePrefix, targetClusterName, util.RandStringBytesRmndr(4)) - - podSecurityContext := v1.PodSecurityContext{ - SupplementalGroups: sourcePgcluster.Spec.BackrestStorage.GetSupplementalGroups(), - } - - if !operator.Pgo.Cluster.DisableFSGroup { - podSecurityContext.FSGroup = &crv1.PGFSGroup - } - - // set up the job template to synchronize the pgBackRest repo - job := batch_v1.Job{ - ObjectMeta: metav1.ObjectMeta{ - Name: jobName, - Annotations: map[string]string{ - // these annotations are used for the subsequent steps to be - // able to identify how to connect these jobs - config.ANNOTATION_CLONE_BACKREST_PVC_SIZE: task.Spec.Parameters[util.CloneParameterBackrestPVCSize], - config.ANNOTATION_CLONE_ENABLE_METRICS: task.Spec.Parameters[util.CloneParameterEnableMetrics], - config.ANNOTATION_CLONE_PVC_SIZE: task.Spec.Parameters[util.CloneParameterPVCSize], - config.ANNOTATION_CLONE_SOURCE_CLUSTER_NAME: sourcePgcluster.Spec.ClusterName, - config.ANNOTATION_CLONE_TARGET_CLUSTER_NAME: targetClusterName, - }, - Labels: map[string]string{ - config.LABEL_VENDOR: config.LABEL_CRUNCHY, - config.LABEL_PGO_CLONE_STEP_1: "true", - config.LABEL_PGOUSER: task.ObjectMeta.Labels[config.LABEL_PGOUSER], - config.LABEL_PG_CLUSTER: targetClusterName, - config.LABEL_WORKFLOW_ID: workflowID, - }, - }, - Spec: batch_v1.JobSpec{ - Template: v1.PodTemplateSpec{ - ObjectMeta: metav1.ObjectMeta{ - Name: jobName, - Labels: map[string]string{ - config.LABEL_VENDOR: config.LABEL_CRUNCHY, - config.LABEL_PGO_CLONE_STEP_1: "true", - config.LABEL_PGOUSER: task.ObjectMeta.Labels[config.LABEL_PGOUSER], - 
config.LABEL_PG_CLUSTER: targetClusterName, - config.LABEL_SERVICE_NAME: targetClusterName, - }, - }, - // Spec for the pod that will run the pgo-backrest-repo-sync job - Spec: v1.PodSpec{ - Containers: []v1.Container{ - { - Name: "rsync", - Image: fmt.Sprintf(pgBackRestRepoSyncContainerImageName, - util.GetValueOrDefault(sourcePgcluster.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix), operator.Pgo.Pgo.PGOImageTag), - Env: []v1.EnvVar{ - { - Name: "PGBACKREST_REPO1_HOST", - Value: fmt.Sprintf(util.BackrestRepoServiceName, sourcePgcluster.Spec.ClusterName), - }, - { - Name: "PGBACKREST_REPO1_PATH", - Value: util.GetPGBackRestRepoPath(sourcePgcluster), - }, - // NOTE: this needs to be a name like this in order to not - // confuse pgBackRest, which does support "REPO*" name - { - Name: "NEW_PGBACKREST_REPO", - Value: util.GetPGBackRestRepoPath(crv1.Pgcluster{ - ObjectMeta: metav1.ObjectMeta{ - Name: targetClusterName, - }, - }), - }, - }, - VolumeMounts: []v1.VolumeMount{ - { - MountPath: config.VOLUME_PGBACKREST_REPO_MOUNT_PATH, - Name: config.VOLUME_PGBACKREST_REPO_NAME, - }, - { - MountPath: config.VOLUME_SSHD_MOUNT_PATH, - Name: config.VOLUME_SSHD_NAME, - ReadOnly: true, - }, - }, - }, - }, - RestartPolicy: v1.RestartPolicyNever, - SecurityContext: &podSecurityContext, - ServiceAccountName: config.LABEL_BACKREST, - Volumes: []v1.Volume{ - { - Name: config.VOLUME_PGBACKREST_REPO_NAME, - VolumeSource: v1.VolumeSource{ - PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ - ClaimName: fmt.Sprintf(util.BackrestRepoPVCName, targetClusterName), - }, - }, - }, - // the SSHD volume that contains the SSHD secrets - { - Name: config.VOLUME_SSHD_NAME, - VolumeSource: v1.VolumeSource{ - Secret: &v1.SecretVolumeSource{ - // the SSHD secret is stored under the name of the *source* - // cluster, as we have yet to create the target cluster! 
- SecretName: fmt.Sprintf("%s-backrest-repo-config", sourcePgcluster.Spec.ClusterName), - // DefaultMode: &pgBackRestRepoVolumeDefaultMode, - }, - }, - }, - }, - }, - }, - }, - } - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_PGO_BACKREST_REPO_SYNC, - &job.Spec.Template.Spec.Containers[0]) - - // Retrieve current S3 key & key secret - s3Creds, err := util.GetS3CredsFromBackrestRepoSecret(clientset, namespace, sourcePgcluster.Name) - if err != nil { - log.Error(err) - errorMessage := fmt.Sprintf("Unable to get S3 key and key secret from source cluster "+ - "backrest repo secret: %s", err.Error()) - PublishCloneEvent(events.EventCloneClusterFailure, namespace, task, errorMessage) - return "", err - } - // if using S3 for the clone, the add the S3 env vars to the env - if strings.Contains(sourcePgcluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], - "s3") { - syncEnv := job.Spec.Template.Spec.Containers[0].Env - syncEnv = append(syncEnv, []v1.EnvVar{ - { - Name: "BACKREST_STORAGE_SOURCE", - Value: task.Spec.Parameters["backrestStorageType"], - }, - { - Name: "PGBACKREST_REPO1_S3_BUCKET", - Value: getS3Param(sourcePgcluster.Spec.BackrestS3Bucket, - operator.Pgo.Cluster.BackrestS3Bucket), - }, - { - Name: "PGBACKREST_REPO1_S3_ENDPOINT", - Value: getS3Param(sourcePgcluster.Spec.BackrestS3Endpoint, - operator.Pgo.Cluster.BackrestS3Endpoint), - }, - { - Name: "PGBACKREST_REPO1_S3_REGION", - Value: getS3Param(sourcePgcluster.Spec.BackrestS3Region, - operator.Pgo.Cluster.BackrestS3Region), - }, - { - Name: "PGBACKREST_REPO1_S3_KEY", - Value: s3Creds.AWSS3Key, - }, - { - Name: "PGBACKREST_REPO1_S3_KEY_SECRET", - Value: s3Creds.AWSS3KeySecret, - }, - { - Name: "PGBACKREST_REPO1_S3_CA_FILE", - Value: "/sshd/aws-s3-ca.crt", - }, - }...) - if operator.IsLocalAndS3Storage( - sourcePgcluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]) { - syncEnv = append(syncEnv, []v1.EnvVar{ - { - Name: "PGHA_PGBACKREST_LOCAL_S3_STORAGE", - Value: "true", - }, - }...) - } - job.Spec.Template.Spec.Containers[0].Env = syncEnv - } - - // create the job! - if j, err := clientset.BatchV1().Jobs(namespace).Create(&job); err != nil { - log.Error(err) - // the error event occurs at a different level - return "", err - } else { - return j.Name, nil - } -} - -// createPVCs is the first step in cloning a PostgreSQL cluster. It creates -// several PVCs that are required for operating a PostgreSQL cluster: -// - the PVC that stores the PostgreSQL PGDATA -// - the PVC that stores the PostgreSQL WAL -// - the PVC that stores the pgBackRest repo -// -// Additionally, if there are any tablespaces on the original cluster, it will -// create those too. 
-// -// if the user spceified a different PVCSize than what is in the storage spec, -// then that gets used -func createPVCs(clientset kubernetes.Interface, - task *crv1.Pgtask, namespace string, sourcePgcluster crv1.Pgcluster, targetClusterName string, -) ( - backrestVolume, dataVolume, walVolume operator.StorageResult, - tablespaceVolumes map[string]operator.StorageResult, - err error, -) { - // first, create the PVC for the pgBackRest storage, as we will be needing - // that sooner - { - storage := sourcePgcluster.Spec.BackrestStorage - if size := task.Spec.Parameters[util.CloneParameterBackrestPVCSize]; size != "" { - storage.Size = size - } - // the PVCName for pgBackRest is derived from the target cluster name - backrestPVCName := fmt.Sprintf(util.BackrestRepoPVCName, targetClusterName) - backrestVolume, err = pvc.CreateIfNotExists(clientset, - storage, backrestPVCName, targetClusterName, namespace) - } - - // now create the PVC for the target cluster - if err == nil { - storage := sourcePgcluster.Spec.PrimaryStorage - if size := task.Spec.Parameters[util.CloneParameterPVCSize]; size != "" { - storage.Size = size - } - dataVolume, err = pvc.CreateIfNotExists(clientset, - storage, targetClusterName, targetClusterName, namespace) - } - - if err == nil { - walVolume, err = pvc.CreateIfNotExists(clientset, - sourcePgcluster.Spec.WALStorage, targetClusterName+"-wal", targetClusterName, namespace) - } - - // if there are any tablespaces, create PVCs for those - tablespaceVolumes = make(map[string]operator.StorageResult, len(sourcePgcluster.Spec.TablespaceMounts)) - for tablespaceName, storageSpec := range sourcePgcluster.Spec.TablespaceMounts { - if err == nil { - // generate the tablespace PVC name from the name of the clone cluster and - // the name of this tablespace - tablespacePVCName := operator.GetTablespacePVCName(targetClusterName, tablespaceName) - tablespaceVolumes[tablespaceName], err = pvc.CreateIfNotExists(clientset, - storageSpec, tablespacePVCName, targetClusterName, namespace) - } - } - - return -} - -func createCluster(clientset kubeapi.Interface, task *crv1.Pgtask, sourcePgcluster crv1.Pgcluster, namespace string, targetClusterName string, workflowID string) error { - // first, handle copying over the cluster secrets so they are available when - // the cluster is created - cloneClusterSecrets := util.CloneClusterSecrets{ - // ensure the pgBackRest secret is not copied over, as we will need to - // initialize a new repository - AdditionalSelectors: []string{"pgo-backrest-repo!=true"}, - ClientSet: clientset, - Namespace: namespace, - SourceClusterName: sourcePgcluster.Spec.ClusterName, - TargetClusterName: targetClusterName, - } - - if err := cloneClusterSecrets.Clone(); err != nil { - log.Error(err) - return err - } - - // set up the target cluster - targetPgcluster := &crv1.Pgcluster{ - ObjectMeta: metav1.ObjectMeta{ - Annotations: map[string]string{ - config.ANNOTATION_CURRENT_PRIMARY: targetClusterName, - }, - Name: targetClusterName, - Labels: map[string]string{ - config.LABEL_NAME: targetClusterName, - // we will be opinionated and say that HA must be enabled - config.LABEL_AUTOFAIL: "true", - // we will also be opinionated and say that pgBackRest must be enabled, - // otherwise a later step will cloning the pgBackRest repo will fail - config.LABEL_BACKREST: "true", - // carry the original user who issued the clone request to here - config.LABEL_PGOUSER: task.ObjectMeta.Labels[config.LABEL_PGOUSER], - // assign the current workflow ID - config.LABEL_WORKFLOW_ID: 
workflowID, - // want to have the vendor label here - config.LABEL_VENDOR: config.LABEL_CRUNCHY, - }, - }, - Spec: crv1.PgclusterSpec{ - ArchiveStorage: sourcePgcluster.Spec.ArchiveStorage, - BackrestConfig: sourcePgcluster.Spec.BackrestConfig, - BackrestStorage: sourcePgcluster.Spec.BackrestStorage, - BackrestS3Bucket: sourcePgcluster.Spec.BackrestS3Bucket, - BackrestS3Endpoint: sourcePgcluster.Spec.BackrestS3Endpoint, - BackrestS3Region: sourcePgcluster.Spec.BackrestS3Region, - BackrestResources: sourcePgcluster.Spec.BackrestResources, - ClusterName: targetClusterName, - CCPImage: sourcePgcluster.Spec.CCPImage, - CCPImagePrefix: sourcePgcluster.Spec.CCPImagePrefix, - CCPImageTag: sourcePgcluster.Spec.CCPImageTag, - // We're not copying over the exporter container in the clone...but we will - // maintain the secret in case one brings up the exporter container - CollectSecretName: fmt.Sprintf("%s%s", targetClusterName, crv1.ExporterSecretSuffix), - // CustomConfig is not set as in the future this will be a parameter we - // allow the user to pass in - Database: sourcePgcluster.Spec.Database, - ExporterPort: sourcePgcluster.Spec.ExporterPort, - Name: targetClusterName, - Namespace: namespace, - PGBadgerPort: sourcePgcluster.Spec.PGBadgerPort, - PGOImagePrefix: sourcePgcluster.Spec.PGOImagePrefix, - // PgBouncer will be disabled to start - PgBouncer: crv1.PgBouncerSpec{}, - PodAntiAffinity: sourcePgcluster.Spec.PodAntiAffinity, - Policies: sourcePgcluster.Spec.Policies, - Port: sourcePgcluster.Spec.Port, - PrimaryStorage: sourcePgcluster.Spec.PrimaryStorage, - PrimarySecretName: fmt.Sprintf("%s%s", targetClusterName, crv1.PrimarySecretSuffix), - // Replicas is set to "0" because we want to ensure that no replicas are - // provisioned with the clone - Replicas: "0", - ReplicaStorage: sourcePgcluster.Spec.ReplicaStorage, - Resources: sourcePgcluster.Spec.Resources, - RootSecretName: fmt.Sprintf("%s%s", targetClusterName, crv1.RootSecretSuffix), - SyncReplication: sourcePgcluster.Spec.SyncReplication, - User: sourcePgcluster.Spec.User, - UserSecretName: fmt.Sprintf("%s-%s%s", targetClusterName, sourcePgcluster.Spec.User, crv1.UserSecretSuffix), - // UserLabels can be further expanded, but for now we will just track - // which version of pgo is creating this - UserLabels: map[string]string{ - config.LABEL_PGO_VERSION: msgs.PGO_VERSION, - config.LABEL_BACKREST_STORAGE_TYPE: sourcePgcluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], - }, - TablespaceMounts: sourcePgcluster.Spec.TablespaceMounts, - WALStorage: sourcePgcluster.Spec.WALStorage, - }, - Status: crv1.PgclusterStatus{ - State: crv1.PgclusterStateCreated, - Message: "Created, not processed yet", - }, - } - - // if any of the PVC sizes are overridden, indicate this in the cluster spec - // here - // first, handle the override for the primary/replica PVC size - if task.Spec.Parameters[util.CloneParameterPVCSize] != "" { - targetPgcluster.Spec.PrimaryStorage.Size = task.Spec.Parameters[util.CloneParameterPVCSize] - targetPgcluster.Spec.ReplicaStorage.Size = task.Spec.Parameters[util.CloneParameterPVCSize] - } - - // next, for the pgBackRest PVC - if task.Spec.Parameters[util.CloneParameterBackrestPVCSize] != "" { - targetPgcluster.Spec.BackrestStorage.Size = task.Spec.Parameters[util.CloneParameterBackrestPVCSize] - } - - // check to see if the metrics collection should be performed - if task.Spec.Parameters[util.CloneParameterEnableMetrics] == "true" { - targetPgcluster.Spec.UserLabels[config.LABEL_EXPORTER] = "true" - } - - // 
update the workflow to indicate that the cluster is being created - if err := UpdateCloneWorkflow(clientset, namespace, workflowID, crv1.PgtaskWorkflowCloneClusterCreate); err != nil { - log.Error(err) - return err - } - - // create the new cluster! - if _, err := clientset.CrunchydataV1().Pgclusters(namespace).Create(targetPgcluster); err != nil { - log.Error(err) - return err - } - - return nil -} - -// checkTargetPgCluster checks to see if the target Pgcluster may already exist. -// if it does, the likely action of the caller is to abort the clone, as we do -// not want to override a PostgreSQL cluster that already exists, but we will -// let the function caller -func checkTargetPgCluster(clientset pgo.Interface, namespace, targetClusterName string) bool { - _, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(targetClusterName, metav1.GetOptions{}) - return err == nil -} - -// getCloneTaskIdentifiers returns the source and target cluster names as well -// as the workflow ID -func getCloneTaskIdentifiers(task *crv1.Pgtask) (string, string, string) { - return task.Spec.Parameters["sourceClusterName"], - task.Spec.Parameters["targetClusterName"], - task.Spec.Parameters[crv1.PgtaskWorkflowID] -} - -// getLinkMap returns the pgBackRest argument to support a WAL volume. -func getLinkMap(clientset kubernetes.Interface, restConfig *rest.Config, cluster crv1.Pgcluster, targetClusterName string) (string, error) { - pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(metav1.ListOptions{LabelSelector: "pgo-pg-database=true,pg-cluster=" + cluster.Name}) - if err != nil { - return "", err - } - if len(pods.Items) < 1 { - return "", errors.New("found no cluster pods") - } - - // PGVERSION environment variable is available in our PostgreSQL containers. - // The following is the same logic we use in shell scripts there. 
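The comment above refers to the shell logic in the exec call that follows: `printf '10\n'${PGVERSION} | sort -VC` succeeds only when the running major version is 10 or newer, which decides whether the WAL directory is pg_wal (10+) or pg_xlog (9.x). As a rough, standalone illustration of that same check in Go (the helper name and parsing are mine, not part of this package):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// walLinkMapFlag mirrors the shell check used below: PostgreSQL 10 and later
// name the WAL directory "pg_wal", while 9.x clusters use "pg_xlog". The
// version string is whatever PGVERSION holds in the container, e.g. "9.6" or "12".
func walLinkMapFlag(pgVersion string) string {
	major, err := strconv.Atoi(strings.SplitN(pgVersion, ".", 2)[0])
	if err == nil && major >= 10 {
		return "--link-map=pg_wal="
	}
	return "--link-map=pg_xlog="
}

func main() {
	fmt.Println(walLinkMapFlag("9.6")) // --link-map=pg_xlog=
	fmt.Println(walLinkMapFlag("12"))  // --link-map=pg_wal=
}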
- stdout, _, err := kubeapi.ExecToPodThroughAPI(restConfig, clientset, - []string{"bash", "-c", ` - if printf '10\n'${PGVERSION} | sort -VC - then - echo -n '--link-map=pg_wal=' - else - echo -n '--link-map=pg_xlog=' - fi`}, - pods.Items[0].Spec.Containers[0].Name, - pods.Items[0].Name, - pods.Items[0].Namespace, - nil) - - return stdout + config.PostgreSQLWALPath(targetClusterName), err -} - -// getS3Param returns either the value provided by 'sourceClusterS3param' if not en empty string, -// otherwise return the equivlant value from the pgo.yaml global configuration filer -func getS3Param(sourceClusterS3param, pgoConfigParam string) string { - if sourceClusterS3param != "" { - return sourceClusterS3param - } - - return pgoConfigParam -} - -// getSourcePgcluster attempts to find the Pgcluster CRD for the source cluster -// used for the clone -func getSourcePgcluster(clientset pgo.Interface, namespace, sourceClusterName string) (*crv1.Pgcluster, error) { - return clientset.CrunchydataV1().Pgclusters(namespace).Get(sourceClusterName, metav1.GetOptions{}) -} - -// patchPgtaskComplete updates the pgtask CRD to indicate that the task is now -// complete -func patchPgtaskComplete(clientset kubeapi.Interface, namespace, taskName string) { - if err := util.Patch(clientset.CrunchydataV1().RESTClient(), patchURL, crv1.CompletedStatus, patchResource, taskName, namespace); err != nil { - log.Error("error in status patch " + err.Error()) - } -} - -// publishCloneClusterEvent publishes the event when the cluster clone process -// has started -func publishCloneClusterEvent(eventHeader events.EventHeader, sourceClusterName, targetClusterName, workflowID string) { - // set up the event - event := events.EventCloneClusterFormat{ - EventHeader: eventHeader, - SourceClusterName: sourceClusterName, - TargetClusterName: targetClusterName, - WorkflowID: workflowID, - } - // attempt to publish the event; if it fails, log the error, but keep moving - // on - if err := events.Publish(event); err != nil { - log.Error(err) - } -} - -// publishCloneClusterCompleted publishes the event when the cluster clone process -// has successfully completed -func publishCloneClusterCompletedEvent(eventHeader events.EventHeader, sourceClusterName, targetClusterName, workflowID string) { - // set up the event - event := events.EventCloneClusterCompletedFormat{ - EventHeader: eventHeader, - SourceClusterName: sourceClusterName, - TargetClusterName: targetClusterName, - WorkflowID: workflowID, - } - // attempt to publish the event; if it fails, log the error, but keep moving - // on - if err := events.Publish(event); err != nil { - log.Error(err) - } -} - -// publishCloneClusterCompleted publishes the event when the cluster clone process -// has successfully completed, including the error message -func publishCloneClusterFailureEvent(eventHeader events.EventHeader, sourceClusterName, targetClusterName, workflowID, errorMessage string) { - // set up the event - event := events.EventCloneClusterFailureFormat{ - EventHeader: eventHeader, - ErrorMessage: errorMessage, - SourceClusterName: sourceClusterName, - TargetClusterName: targetClusterName, - WorkflowID: workflowID, - } - // attempt to publish the event; if it fails, log the error, but keep moving - // on - if err := events.Publish(event); err != nil { - log.Error(err) - } -} - -// waitForDeploymentDelete waits until a deployment and its associated service -// are deleted -func waitForDeploymentDelete(clientset kubernetes.Interface, namespace, deploymentName string, timeoutSecs, 
periodSecs time.Duration) error {
-	timeout := time.After(timeoutSecs * time.Second)
-	tick := time.NewTicker(periodSecs * time.Second)
-	defer tick.Stop()
-
-	for {
-		select {
-		case <-timeout:
-			return errors.New(fmt.Sprintf("Timed out waiting for deployment to be deleted: [%s]", deploymentName))
-		case <-tick.C:
-			_, deploymentErr := clientset.AppsV1().Deployments(namespace).Get(deploymentName, metav1.GetOptions{})
-			_, serviceErr := clientset.CoreV1().Services(namespace).Get(deploymentName, metav1.GetOptions{})
-			deploymentFound := deploymentErr == nil
-			serviceFound := serviceErr == nil
-			if !(deploymentFound || serviceFound) {
-				return nil
-			}
-			log.Debugf("deployment deleted: %t, service deleted: %t", !deploymentFound, !serviceFound)
-		}
-	}
-}
-
-// waitForDeploymentReady waits for a deployment to be ready, or times out
-func waitForDeploymentReady(clientset kubernetes.Interface, namespace, deploymentName string, timeoutSecs, periodSecs time.Duration) error {
-	timeout := time.After(timeoutSecs * time.Second)
-	tick := time.NewTicker(periodSecs * time.Second)
-	defer tick.Stop()
-
-	// loop until the timeout is met, or until all the replicas are ready
-	for {
-		select {
-		case <-timeout:
-			return errors.New(fmt.Sprintf("Timed out waiting for deployment to become ready: [%s]", deploymentName))
-		case <-tick.C:
-			if deployment, err := clientset.AppsV1().Deployments(namespace).Get(deploymentName, metav1.GetOptions{}); err != nil {
-				// if there is an error, log it but continue through the loop
-				log.Error(err)
-			} else {
-				// check to see if the deployment status has succeeded...if so, break out
-				// of the loop
-				if deployment.Status.ReadyReplicas == *deployment.Spec.Replicas {
-					return nil
-				}
-			}
-		}
-	}
-}
diff --git a/internal/operator/cluster/cluster.go b/internal/operator/cluster/cluster.go
deleted file mode 100644
index eef521107f..0000000000
--- a/internal/operator/cluster/cluster.go
+++ /dev/null
@@ -1,788 +0,0 @@
-// Package cluster holds the cluster CRD logic and definitions
-// A cluster is comprised of a primary service, replica service,
-// primary deployment, and replica deployment
-package cluster
-
-/*
- Copyright 2017 - 2020 Crunchy Data Solutions, Inc.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
-*/ - -import ( - "encoding/json" - "fmt" - "strconv" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/operator/backrest" - "github.com/crunchydata/postgres-operator/internal/operator/pvc" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - - log "github.com/sirupsen/logrus" - apps_v1 "k8s.io/api/apps/v1" - v1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -// ServiceTemplateFields ... -type ServiceTemplateFields struct { - Name string - ServiceName string - ClusterName string - Port string - PGBadgerPort string - ExporterPort string - ServiceType string -} - -// ReplicaSuffix ... -const ReplicaSuffix = "-replica" - -func AddClusterBase(clientset kubeapi.Interface, cl *crv1.Pgcluster, namespace string) { - var err error - - dataVolume, walVolume, tablespaceVolumes, err := pvc.CreateMissingPostgreSQLVolumes( - clientset, cl, namespace, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], cl.Spec.PrimaryStorage) - if err != nil { - log.Error(err) - publishClusterCreateFailure(cl, err.Error()) - return - } - - if err = addClusterCreateMissingService(clientset, cl, namespace); err != nil { - log.Error("error in creating primary service " + err.Error()) - publishClusterCreateFailure(cl, err.Error()) - return - } - - // Create a configMap for the cluster that will be utilized to configure whether or not - // initialization logic should be executed when the postgres-ha container is run. This - // ensures that the original primary in a PG cluster does not attempt to run any initialization - // logic following a restart of the container. - // If the configmap already exists, the cluster creation will continue as this is required - // for certain pgcluster upgrades. - if err = operator.CreatePGHAConfigMap(clientset, cl, namespace); err != nil && - !kerrors.IsAlreadyExists(err) { - log.Error(err) - publishClusterCreateFailure(cl, err.Error()) - return - } - - if err := annotateBackrestSecret(clientset, cl); err != nil { - log.Error(err) - publishClusterCreateFailure(cl, err.Error()) - } - - if err := addClusterDeployments(clientset, cl, namespace, - dataVolume, walVolume, tablespaceVolumes); err != nil { - log.Error(err) - publishClusterCreateFailure(cl, err.Error()) - return - } - - // Now scale the repo deployment only to ensure it is initialized prior to the primary DB. - // Once the repo is ready, the primary database deployment will then also be scaled to 1. 
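The ordering spelled out in the comment above (bring the pgBackRest repository deployment up first, then the primary once the repository is ready) amounts to a scale-then-wait step. A minimal sketch of that idea, assuming the pre-context-argument client-go signatures used throughout this file; the helper name, patch shape, and polling interval are illustrative, not the operator's actual implementation, which goes through ScaleClusterDeployments.

package example

import (
	"encoding/json"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// scaleAndWait patches a Deployment's replica count and then polls until the
// requested number of replicas report ready or the timeout elapses.
func scaleAndWait(clientset kubernetes.Interface, namespace, name string, replicas int32, timeout time.Duration) error {
	patch, err := json.Marshal(map[string]interface{}{
		"spec": map[string]interface{}{"replicas": replicas},
	})
	if err != nil {
		return err
	}
	if _, err := clientset.AppsV1().Deployments(namespace).Patch(name, types.MergePatchType, patch); err != nil {
		return err
	}

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if d, err := clientset.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{}); err == nil &&
			d.Status.ReadyReplicas == replicas {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for deployment %s to reach %d ready replicas", name, replicas)
}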
- clusterInfo, err := ScaleClusterDeployments(clientset, *cl, 1, false, false, true, false) - if err != nil { - log.Error(err) - publishClusterCreateFailure(cl, err.Error()) - } - log.Debugf("Scaled pgBackRest repo deployment %s to 1 to proceed with initializing "+ - "cluster %s", clusterInfo.PrimaryDeployment, cl.GetName()) - - err = util.Patch(clientset.CrunchydataV1().RESTClient(), "/spec/status", crv1.CompletedStatus, crv1.PgclusterResourcePlural, cl.Spec.Name, namespace) - if err != nil { - log.Error("error in status patch " + err.Error()) - } - - // patch in the correct current primary value to the CRD spec, as well as - // any updated user labels. This will handle both new and updated clusters. - // Note: in previous operator versions, this was stored in a user label - if err := util.PatchClusterCRD(clientset, cl.Spec.UserLabels, cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], namespace); err != nil { - log.Error("could not patch primary crv1 with labels") - publishClusterCreateFailure(cl, err.Error()) - return - } - - err = util.Patch(clientset.CrunchydataV1().RESTClient(), "/spec/PrimaryStorage/name", dataVolume.PersistentVolumeClaimName, crv1.PgclusterResourcePlural, cl.Spec.Name, namespace) - - if err != nil { - log.Error("error in pvcname patch " + err.Error()) - } - - //publish create cluster event - //capture the cluster creation event - pgouser := cl.ObjectMeta.Labels[config.LABEL_PGOUSER] - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventCreateClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: cl.ObjectMeta.Namespace, - Username: pgouser, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventCreateCluster, - }, - Clustername: cl.ObjectMeta.Name, - WorkflowID: cl.ObjectMeta.Labels[config.LABEL_WORKFLOW_ID], - } - - err = events.Publish(f) - if err != nil { - log.Error(err.Error()) - } - - // determine if a restore - _, restore := cl.GetAnnotations()[config.ANNOTATION_BACKREST_RESTORE] - - // add replicas if requested, and if not a restore - if cl.Spec.Replicas != "" && !restore { - replicaCount, err := strconv.Atoi(cl.Spec.Replicas) - if err != nil { - log.Error("error in replicas value " + err.Error()) - publishClusterCreateFailure(cl, err.Error()) - return - } - //create a CRD for each replica - for i := 0; i < replicaCount; i++ { - spec := crv1.PgreplicaSpec{} - //get the storage config - spec.ReplicaStorage = cl.Spec.ReplicaStorage - - spec.UserLabels = cl.Spec.UserLabels - - //the replica should not use the same node labels as the primary - spec.UserLabels[config.LABEL_NODE_LABEL_KEY] = "" - spec.UserLabels[config.LABEL_NODE_LABEL_VALUE] = "" - - labels := make(map[string]string) - labels[config.LABEL_PG_CLUSTER] = cl.Spec.Name - - spec.ClusterName = cl.Spec.Name - uniqueName := util.RandStringBytesRmndr(4) - labels[config.LABEL_NAME] = cl.Spec.Name + "-" + uniqueName - spec.Name = labels[config.LABEL_NAME] - newInstance := &crv1.Pgreplica{ - ObjectMeta: metav1.ObjectMeta{ - Name: labels[config.LABEL_NAME], - Labels: labels, - }, - Spec: spec, - Status: crv1.PgreplicaStatus{ - State: crv1.PgreplicaStateCreated, - Message: "Created, not processed yet", - }, - } - - _, err = clientset.CrunchydataV1().Pgreplicas(namespace).Create(newInstance) - if err != nil { - log.Error(" in creating Pgreplica instance" + err.Error()) - publishClusterCreateFailure(cl, err.Error()) - } - - } - } -} - -// AddClusterBootstrap creates the resources needed to bootstrap a new cluster from an existing -// data source. 
Specifically, this function creates the bootstrap job that will be run to -// bootstrap the cluster, along with supporting resources (e.g. ConfigMaps and volumes). -func AddClusterBootstrap(clientset kubeapi.Interface, cluster *crv1.Pgcluster) error { - - namespace := cluster.GetNamespace() - - if err := operator.CreatePGHAConfigMap(clientset, cluster, namespace); err != nil && - !kerrors.IsAlreadyExists(err) { - publishClusterCreateFailure(cluster, err.Error()) - return err - } - - dataVolume, walVolume, tablespaceVolumes, err := pvc.CreateMissingPostgreSQLVolumes( - clientset, cluster, namespace, - cluster.Annotations[config.ANNOTATION_CURRENT_PRIMARY], cluster.Spec.PrimaryStorage) - if err != nil { - publishClusterCreateFailure(cluster, err.Error()) - return err - } - - if err := addClusterBootstrapJob(clientset, cluster, namespace, dataVolume, - walVolume, tablespaceVolumes); err != nil && !kerrors.IsAlreadyExists(err) { - publishClusterCreateFailure(cluster, err.Error()) - return err - } - - patch, err := json.Marshal(map[string]interface{}{ - "status": crv1.PgclusterStatus{ - State: crv1.PgclusterStateBootstrapping, - Message: "Bootstapping cluster from an existing data source", - }, - }) - if err == nil { - _, err = clientset.CrunchydataV1().Pgclusters(namespace).Patch(cluster.Name, types.MergePatchType, patch) - } - if err != nil { - return err - } - - return nil -} - -// AddBootstrapRepo creates a pgBackRest repository and associated service to use when -// bootstrapping a cluster from an existing data source. If an existing repo is detected -// and is being used to bootstrap another cluster, then an error is returned. If an existing -// repo is detected and is not associated with a bootstrap job (but rather an active cluster), -// then no action is taken and the function resturns. Also, in addition to returning an error -// in the event an error is encountered, the function also returns a 'repoCreated' bool that -// specifies whether or not a repo was actually created. -func AddBootstrapRepo(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (repoCreated bool, err error) { - - restoreClusterName := cluster.Spec.PGDataSource.RestoreFrom - repoName := fmt.Sprintf(util.BackrestRepoServiceName, restoreClusterName) - - found := true - repoDeployment, err := clientset.AppsV1().Deployments(cluster.GetNamespace()).Get( - repoName, metav1.GetOptions{}) - if err != nil { - if !kerrors.IsNotFound(err) { - return - } - found = false - } - - if !found { - if err = backrest.CreateRepoDeployment(clientset, cluster, false, true, 1); err != nil { - return - } - repoCreated = true - } else if _, ok := repoDeployment.GetLabels()[config.LABEL_PGHA_BOOTSTRAP]; ok { - err = fmt.Errorf("Unable to create bootstrap repo %s to bootstrap cluster %s "+ - "(namespace %s) because it is already running to bootstrap another cluster", - repoName, cluster.GetName(), cluster.GetNamespace()) - return - } - - return -} - -// DeleteClusterBase ... -func DeleteClusterBase(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string) { - - DeleteCluster(clientset, cl, namespace) - - //delete any existing configmaps - if err := deleteConfigMaps(clientset, cl.Spec.Name, namespace); err != nil { - log.Error(err) - } - - //delete any existing pgtasks ??? 
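AddBootstrapRepo above follows a look-up/create/refuse pattern: a missing repository deployment is created, an existing one owned by an active cluster is left alone, and one that is already bootstrapping another cluster is treated as an error. A condensed sketch of that flow, with a placeholder label key and a caller-supplied create callback standing in for CreateRepoDeployment:

package example

import (
	"fmt"

	kerrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureRepoDeployment condenses the AddBootstrapRepo flow: create the repo
// deployment only when it is genuinely absent, and refuse to reuse one that is
// already bootstrapping another cluster. The label key and create callback are
// placeholders for illustration.
func ensureRepoDeployment(clientset kubernetes.Interface, namespace, name string, create func() error) (created bool, err error) {
	existing, err := clientset.AppsV1().Deployments(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		if !kerrors.IsNotFound(err) {
			return false, err // a real lookup failure, not just "missing"
		}
		// not found: stand up a fresh repository deployment
		if err := create(); err != nil {
			return false, err
		}
		return true, nil
	}

	if _, busy := existing.GetLabels()["example-bootstrap"]; busy {
		return false, fmt.Errorf("repo deployment %s is already bootstrapping another cluster", name)
	}
	// an active cluster already owns this repository; leave it alone
	return false, nil
}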
- - //publish delete cluster event - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventDeleteClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: cl.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventDeleteCluster, - }, - Clustername: cl.Spec.Name, - } - - if err := events.Publish(f); err != nil { - log.Error(err) - } -} - -// ScaleBase ... -func ScaleBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespace string) { - - if replica.Spec.Status == crv1.CompletedStatus { - log.Warn("crv1 pgreplica " + replica.Spec.Name + " is already marked complete, will not recreate") - return - } - - //get the pgcluster CRD to base the replica off of - cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(replica.Spec.ClusterName, metav1.GetOptions{}) - if err != nil { - return - } - - dataVolume, walVolume, tablespaceVolumes, err := pvc.CreateMissingPostgreSQLVolumes( - clientset, cluster, namespace, replica.Spec.Name, replica.Spec.ReplicaStorage) - if err != nil { - log.Error(err) - publishScaleError(namespace, replica.ObjectMeta.Labels[config.LABEL_PGOUSER], cluster) - return - } - - //update the replica CRD pvcname - err = util.Patch(clientset.CrunchydataV1().RESTClient(), "/spec/replicastorage/name", dataVolume.PersistentVolumeClaimName, crv1.PgreplicaResourcePlural, replica.Spec.Name, namespace) - if err != nil { - log.Error("error in pvcname patch " + err.Error()) - } - - //create the replica service if it doesnt exist - if err = scaleReplicaCreateMissingService(clientset, replica, cluster, namespace); err != nil { - log.Error(err) - publishScaleError(namespace, replica.ObjectMeta.Labels[config.LABEL_PGOUSER], cluster) - return - } - - //instantiate the replica - if err = scaleReplicaCreateDeployment(clientset, replica, cluster, namespace, dataVolume, walVolume, tablespaceVolumes); err != nil { - publishScaleError(namespace, replica.ObjectMeta.Labels[config.LABEL_PGOUSER], cluster) - return - } - - //update the replica CRD status - err = util.Patch(clientset.CrunchydataV1().RESTClient(), "/spec/status", crv1.CompletedStatus, crv1.PgreplicaResourcePlural, replica.Spec.Name, namespace) - if err != nil { - log.Error("error in status patch " + err.Error()) - } - - //publish event for replica creation - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventScaleClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: replica.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventScaleCluster, - }, - Clustername: cluster.Spec.UserLabels[config.LABEL_REPLICA_NAME], - Replicaname: cluster.Spec.UserLabels[config.LABEL_PG_CLUSTER], - } - - if err = events.Publish(f); err != nil { - log.Error(err.Error()) - } -} - -// ScaleDownBase ... 
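ScaleBase above records progress by patching single fields of the pgreplica (for example /spec/replicastorage/name and /spec/status) through the util.Patch helper, whose implementation is not shown in this diff. One common way to express that kind of one-field update is an RFC 6902 JSON patch; the sketch below applies a single "replace" operation to a Deployment purely to illustrate the shape, and is an assumption rather than what util.Patch literally sends.

package example

import (
	"encoding/json"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// replaceSingleField applies an RFC 6902 JSON patch with one "replace"
// operation, the general "set exactly one field" shape. Here it rewrites a
// Deployment's replica count; the target path is only an example.
func replaceSingleField(clientset kubernetes.Interface, namespace, deploymentName string, replicas int32) error {
	ops := []map[string]interface{}{{
		"op":    "replace",
		"path":  "/spec/replicas",
		"value": replicas,
	}}
	patch, err := json.Marshal(ops)
	if err != nil {
		return err
	}
	_, err = clientset.AppsV1().Deployments(namespace).Patch(deploymentName, types.JSONPatchType, patch)
	return err
}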
-func ScaleDownBase(clientset kubeapi.Interface, replica *crv1.Pgreplica, namespace string) { - - //get the pgcluster CRD for this replica - _, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(replica.Spec.ClusterName, metav1.GetOptions{}) - if err != nil { - return - } - - DeleteReplica(clientset, replica, namespace) - - //publish event for scale down - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventScaleDownClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: replica.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventScaleDownCluster, - }, - Clustername: replica.Spec.ClusterName, - } - - err = events.Publish(f) - if err != nil { - log.Error(err.Error()) - return - } - -} - -// UpdateAnnotations updates the annotations in the "template" portion of a -// PostgreSQL deployment -func UpdateAnnotations(clientset kubernetes.Interface, restConfig *rest.Config, - cluster *crv1.Pgcluster, annotations map[string]string) error { - var updateError error - - // first, get a list of all of the instance deployments for the cluster - deployments, err := operator.GetInstanceDeployments(clientset, cluster) - - if err != nil { - return err - } - - // now update each deployment with the new annotations - for _, deployment := range deployments.Items { - log.Debugf("update annotations on [%s]", deployment.Name) - log.Debugf("new annotations: %v", annotations) - - deployment.Spec.Template.ObjectMeta.SetAnnotations(annotations) - - // Before applying the update, we want to explicitly stop PostgreSQL on each - // instance. This prevents PostgreSQL from having to boot up in crash - // recovery mode. - // - // If an error is returned, we only issue a warning - if err := stopPostgreSQLInstance(clientset, restConfig, deployment); err != nil { - log.Warn(err) - } - - // finally, update the Deployment. If something errors, we'll log that there - // was an error, but continue with processing the other deployments - if _, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(&deployment); err != nil { - log.Error(err) - updateError = err - } - } - - return updateError -} - -// UpdateResources updates the PostgreSQL instance Deployments to reflect the -// update resources (i.e. 
CPU, memory) -func UpdateResources(clientset kubernetes.Interface, restConfig *rest.Config, cluster *crv1.Pgcluster) error { - // get a list of all of the instance deployments for the cluster - deployments, err := operator.GetInstanceDeployments(clientset, cluster) - - if err != nil { - return err - } - - // iterate through each PostgreSQL instance deployment and update the - // resource values for the database or exporter containers - // - // NOTE: a future version (near future) will first try to detect the primary - // so that all the replicas are updated first, and then the primary gets the - // update - for _, deployment := range deployments.Items { - // now, iterate through each container within that deployment - for index, container := range deployment.Spec.Template.Spec.Containers { - // first check for the database container - if container.Name == "database" { - // first, initialize the requests/limits resource to empty Resource Lists - deployment.Spec.Template.Spec.Containers[index].Resources.Requests = v1.ResourceList{} - deployment.Spec.Template.Spec.Containers[index].Resources.Limits = v1.ResourceList{} - - // now, simply deep copy the values from the CRD - if cluster.Spec.Resources != nil { - deployment.Spec.Template.Spec.Containers[index].Resources.Requests = cluster.Spec.Resources.DeepCopy() - } - - if cluster.Spec.Limits != nil { - deployment.Spec.Template.Spec.Containers[index].Resources.Limits = cluster.Spec.Limits.DeepCopy() - } - // next, check for the exporter container - } else if container.Name == "exporter" { - // first, initialize the requests/limits resource to empty Resource Lists - deployment.Spec.Template.Spec.Containers[index].Resources.Requests = v1.ResourceList{} - deployment.Spec.Template.Spec.Containers[index].Resources.Limits = v1.ResourceList{} - - // now, simply deep copy the values from the CRD - if cluster.Spec.ExporterResources != nil { - deployment.Spec.Template.Spec.Containers[index].Resources.Requests = cluster.Spec.ExporterResources.DeepCopy() - } - - if cluster.Spec.ExporterLimits != nil { - deployment.Spec.Template.Spec.Containers[index].Resources.Limits = cluster.Spec.ExporterLimits.DeepCopy() - } - - } - } - // Before applying the update, we want to explicitly stop PostgreSQL on each - // instance. This prevents PostgreSQL from having to boot up in crash - // recovery mode. - // - // If an error is returned, we only issue a warning - if err := stopPostgreSQLInstance(clientset, restConfig, deployment); err != nil { - log.Warn(err) - } - // update the deployment with the new values - if _, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(&deployment); err != nil { - return err - } - } - - return nil -} - -// UpdateTablespaces updates the PostgreSQL instance Deployments to update -// what tablespaces are mounted. -// Though any new tablespaces are present in the CRD, to attempt to do less work -// this function takes a map of the new tablespaces that are being added, so we -// only have to check and create the PVCs that are being mounted at this time -// -// To do this, iterate through the tablespace mount map that is present in the -// new cluster. 
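UpdateResources above clears each matching container's requests and limits and then copies in whatever the pgcluster spec defines. The fragment below shows that shape in isolation against the "database" container; the quantities are arbitrary examples, not operator defaults.

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// applyDatabaseResources rewrites the "database" container's requests and
// limits in place, the same shape as the loop in UpdateResources. The
// quantities are arbitrary examples.
func applyDatabaseResources(deployment *appsv1.Deployment) {
	containers := deployment.Spec.Template.Spec.Containers
	for i := range containers {
		if containers[i].Name != "database" {
			continue
		}
		containers[i].Resources.Requests = v1.ResourceList{
			v1.ResourceCPU:    resource.MustParse("500m"),
			v1.ResourceMemory: resource.MustParse("512Mi"),
		}
		containers[i].Resources.Limits = v1.ResourceList{
			v1.ResourceMemory: resource.MustParse("1Gi"),
		}
	}
}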
-func UpdateTablespaces(clientset kubernetes.Interface, restConfig *rest.Config, - cluster *crv1.Pgcluster, newTablespaces map[string]crv1.PgStorageSpec) error { - // first, get a list of all of the instance deployments for the cluster - deployments, err := operator.GetInstanceDeployments(clientset, cluster) - - if err != nil { - return err - } - - tablespaceVolumes := make([]map[string]operator.StorageResult, len(deployments.Items)) - - // now we can start creating the new tablespaces! First, create the new - // PVCs. The PVCs are created for each **instance** in the cluster, as every - // instance needs to have a distinct PVC for each tablespace - for i, deployment := range deployments.Items { - tablespaceVolumes[i] = make(map[string]operator.StorageResult) - - for tablespaceName, storageSpec := range newTablespaces { - // get the name of the tablespace PVC for that instance - tablespacePVCName := operator.GetTablespacePVCName(deployment.Name, tablespaceName) - - log.Debugf("creating tablespace PVC [%s] for [%s]", tablespacePVCName, deployment.Name) - - // and now create it! If it errors, we just need to return, which - // potentially leaves things in an inconsistent state, but at this point - // only PVC objects have been created - tablespaceVolumes[i][tablespaceName], err = pvc.CreateIfNotExists(clientset, - storageSpec, tablespacePVCName, cluster.Name, cluster.Namespace) - if err != nil { - return err - } - } - } - - // now the fun step: update each deployment with the new volumes - for i, deployment := range deployments.Items { - log.Debugf("attach tablespace volumes to [%s]", deployment.Name) - - // iterate through each table space and prepare the Volume and - // VolumeMount clause for each instance - for tablespaceName := range newTablespaces { - // this is the volume to be added for the tablespace - volume := v1.Volume{ - Name: operator.GetTablespaceVolumeName(tablespaceName), - VolumeSource: tablespaceVolumes[i][tablespaceName].VolumeSource(), - } - - // add the volume to the list of volumes - deployment.Spec.Template.Spec.Volumes = append(deployment.Spec.Template.Spec.Volumes, volume) - - // now add the volume mount point to that of the database container - volumeMount := v1.VolumeMount{ - MountPath: fmt.Sprintf("%s%s", config.VOLUME_TABLESPACE_PATH_PREFIX, tablespaceName), - Name: operator.GetTablespaceVolumeName(tablespaceName), - } - - // we can do this as we always know that the "database" container is the - // first container in the list - deployment.Spec.Template.Spec.Containers[0].VolumeMounts = append( - deployment.Spec.Template.Spec.Containers[0].VolumeMounts, volumeMount) - - // add any supplemental groups specified in storage configuration. - // SecurityContext is always initialized because we use fsGroup. - deployment.Spec.Template.Spec.SecurityContext.SupplementalGroups = append( - deployment.Spec.Template.Spec.SecurityContext.SupplementalGroups, - tablespaceVolumes[i][tablespaceName].SupplementalGroups...) 
- } - - // find the "PGHA_TABLESPACES" value and update it with the new tablespace - // name list - ok := false - for i, envVar := range deployment.Spec.Template.Spec.Containers[0].Env { - // yup, it's an old fashioned linear time lookup - if envVar.Name == "PGHA_TABLESPACES" { - deployment.Spec.Template.Spec.Containers[0].Env[i].Value = operator.GetTablespaceNames( - cluster.Spec.TablespaceMounts) - ok = true - } - } - - // if its not found, we need to add it to the env - if !ok { - envVar := v1.EnvVar{ - Name: "PGHA_TABLESPACES", - Value: operator.GetTablespaceNames(cluster.Spec.TablespaceMounts), - } - deployment.Spec.Template.Spec.Containers[0].Env = append(deployment.Spec.Template.Spec.Containers[0].Env, envVar) - } - - // Before applying the update, we want to explicitly stop PostgreSQL on each - // instance. This prevents PostgreSQL from having to boot up in crash - // recovery mode. - // - // If an error is returned, we only issue a warning - if err := stopPostgreSQLInstance(clientset, restConfig, deployment); err != nil { - log.Warn(err) - } - - // finally, update the Deployment. Potential to put things into an - // inconsistent state if any of these updates fail - if _, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(&deployment); err != nil { - return err - } - } - - return nil -} - -// annotateBackrestSecret annotates the pgBackRest repository secret with relevant cluster -// configuration as needed to support bootstrapping from the repository after the cluster -// has been deleted -func annotateBackrestSecret(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error { - - clusterName := cluster.GetName() - namespace := cluster.GetNamespace() - - // simple helper that takes two config options, returning the first if populated, and - // if not the returning the second (which also might now be populated) - cfg := func(cl, op string) string { - if cl != "" { - return cl - } - return op - } - cl := cluster.Spec - op := operator.Pgo.Cluster - values := map[string]string{ - config.ANNOTATION_PG_PORT: cluster.Spec.Port, - config.ANNOTATION_REPO_PATH: util.GetPGBackRestRepoPath(*cluster), - config.ANNOTATION_S3_BUCKET: cfg(cl.BackrestS3Bucket, op.BackrestS3Bucket), - config.ANNOTATION_S3_ENDPOINT: cfg(cl.BackrestS3Endpoint, op.BackrestS3Endpoint), - config.ANNOTATION_S3_REGION: cfg(cl.BackrestS3Region, op.BackrestS3Region), - config.ANNOTATION_SSHD_PORT: strconv.Itoa(operator.Pgo.Cluster.BackrestPort), - config.ANNOTATION_SUPPLEMENTAL_GROUPS: cluster.Spec.BackrestStorage.SupplementalGroups, - config.ANNOTATION_S3_URI_STYLE: cfg(cl.BackrestS3URIStyle, op.BackrestS3URIStyle), - config.ANNOTATION_S3_VERIFY_TLS: cfg(cl.BackrestS3VerifyTLS, op.BackrestS3VerifyTLS), - } - valuesJSON, err := json.Marshal(values) - if err != nil { - return err - } - - secretName := fmt.Sprintf(util.BackrestRepoSecretName, clusterName) - patchString := fmt.Sprintf(`{"metadata":{"annotations":%s}}`, string(valuesJSON)) - - log.Debugf("About to patch secret %s (namespace %s) using:\n%s", secretName, namespace, - patchString) - if _, err := clientset.CoreV1().Secrets(namespace).Patch(secretName, types.MergePatchType, - []byte(patchString)); err != nil { - return err - } - - return nil -} - -func deleteConfigMaps(clientset kubernetes.Interface, clusterName, ns string) error { - label := fmt.Sprintf("pg-cluster=%s", clusterName) - list, err := clientset.CoreV1().ConfigMaps(ns).List(metav1.ListOptions{LabelSelector: label}) - if err != nil { - return fmt.Errorf("No configMaps found for 
selector: %s", label) - } - - for _, configmap := range list.Items { - err := clientset.CoreV1().ConfigMaps(ns).Delete(configmap.Name, &metav1.DeleteOptions{}) - if err != nil { - return err - } - } - return nil -} - -func publishClusterCreateFailure(cl *crv1.Pgcluster, errorMsg string) { - pgouser := cl.ObjectMeta.Labels[config.LABEL_PGOUSER] - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventCreateClusterFailureFormat{ - EventHeader: events.EventHeader{ - Namespace: cl.ObjectMeta.Namespace, - Username: pgouser, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventCreateClusterFailure, - }, - Clustername: cl.ObjectMeta.Name, - ErrorMessage: errorMsg, - WorkflowID: cl.ObjectMeta.Labels[config.LABEL_WORKFLOW_ID], - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } -} - -func publishClusterShutdown(cluster crv1.Pgcluster) error { - - clusterName := cluster.Name - - //capture the cluster creation event - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventShutdownClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: cluster.Namespace, - Username: cluster.Spec.UserLabels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventShutdownCluster, - }, - Clustername: clusterName, - } - - if err := events.Publish(f); err != nil { - log.Error(err.Error()) - return err - } - - return nil -} - -// stopPostgreSQLInstance is a proxy function for the main -// StopPostgreSQLInstance function, as it preps a Deployment to have its -// PostgreSQL instance shut down. This helps to ensure that a PostgreSQL -// instance will launch and not be in crash recovery mode -func stopPostgreSQLInstance(clientset kubernetes.Interface, restConfig *rest.Config, deployment apps_v1.Deployment) error { - // First, attempt to get the PostgreSQL instance Pod attachd to this - // particular deployment - selector := fmt.Sprintf("%s=%s", config.LABEL_DEPLOYMENT_NAME, deployment.Name) - pods, err := clientset.CoreV1().Pods(deployment.Namespace).List(metav1.ListOptions{LabelSelector: selector}) - - // if there is a bona fide error, return. - // However, if no Pods are found, issue a warning, but do not return an error - // This likely means that PostgreSQL is already shutdown, but hey, it's the - // cloud - if err != nil { - return err - } else if len(pods.Items) == 0 { - log.Infof("not shutting down PostgreSQL instance [%s] as the Pod cannot be found", deployment.Name) - return nil - } - - // get the first pod off the items list - pod := pods.Items[0] - - // now we can shut down the cluster - if err := util.StopPostgreSQLInstance(clientset, restConfig, &pod, deployment.Name); err != nil { - return err - } - - return nil -} diff --git a/internal/operator/cluster/clusterlogic.go b/internal/operator/cluster/clusterlogic.go deleted file mode 100644 index a953e07213..0000000000 --- a/internal/operator/cluster/clusterlogic.go +++ /dev/null @@ -1,778 +0,0 @@ -// Package cluster holds the cluster CRD logic and definitions -// A cluster is comprised of a primary service, replica service, -// primary deployment, and replica deployment -package cluster - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "os" - "strconv" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/operator/backrest" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - log "github.com/sirupsen/logrus" - appsv1 "k8s.io/api/apps/v1" - batchv1 "k8s.io/api/batch/v1" - v1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/fields" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/kubernetes" -) - -// addClusterCreateMissingService creates a service for the cluster primary if -// it does not yet exist. -func addClusterCreateMissingService(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string) error { - st := operator.Pgo.Cluster.ServiceType - if cl.Spec.UserLabels[config.LABEL_SERVICE_TYPE] != "" { - st = cl.Spec.UserLabels[config.LABEL_SERVICE_TYPE] - } - - // create the primary service - serviceFields := ServiceTemplateFields{ - Name: cl.Spec.Name, - ServiceName: cl.Spec.Name, - ClusterName: cl.Spec.Name, - Port: cl.Spec.Port, - ServiceType: st, - } - - // only add references to the exporter / pgBadger ports - clusterLabels := cl.ObjectMeta.GetLabels() - - if val, ok := clusterLabels[config.LABEL_BADGER]; ok && val == config.LABEL_TRUE { - serviceFields.PGBadgerPort = cl.Spec.PGBadgerPort - } - - // ...due to legacy reasons, the exporter label may not be available yet in the - // main labels. 
so we will check here first, and then check the user labels - if val, ok := clusterLabels[config.LABEL_EXPORTER]; ok && val == config.LABEL_TRUE { - serviceFields.ExporterPort = cl.Spec.ExporterPort - } - - // ...this condition should be targeted for removal in the future - if cl.Spec.UserLabels != nil { - if val, ok := cl.Spec.UserLabels[config.LABEL_EXPORTER]; ok && val == config.LABEL_TRUE { - serviceFields.ExporterPort = cl.Spec.ExporterPort - } - } - - return CreateService(clientset, &serviceFields, namespace) -} - -// addClusterBootstrapJob creates a job that will be used to bootstrap a PostgreSQL cluster from an -// existing data source -func addClusterBootstrapJob(clientset kubeapi.Interface, - cl *crv1.Pgcluster, namespace string, dataVolume, walVolume operator.StorageResult, - tablespaceVolumes map[string]operator.StorageResult) error { - - bootstrapFields, err := getBootstrapJobFields(clientset, cl, dataVolume, walVolume, - tablespaceVolumes) - if err != nil { - return err - } - - var bootstrapSpec bytes.Buffer - if err := config.BootstrapTemplate.Execute(&bootstrapSpec, bootstrapFields); err != nil { - return err - } - - if operator.CRUNCHY_DEBUG { - config.DeploymentTemplate.Execute(os.Stdout, bootstrapFields) - } - - job := &batchv1.Job{} - if err := json.Unmarshal(bootstrapSpec.Bytes(), job); err != nil { - return err - } - - if cl.Spec.WALStorage.StorageType != "" { - operator.AddWALVolumeAndMountsToPostgreSQL(&job.Spec.Template.Spec, walVolume, - cl.Spec.Name) - } - - operator.AddBackRestConfigVolumeAndMounts(&job.Spec.Template.Spec, cl.Name, cl.Spec.BackrestConfig) - - // determine if any of the container images need to be overridden - operator.OverrideClusterContainerImages(job.Spec.Template.Spec.Containers) - - if _, err := clientset.BatchV1().Jobs(namespace).Create(job); err != nil { - return err - } - - return nil -} - -// addClusterDeployments creates deployments for pgBackRest and PostgreSQL. 
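addClusterBootstrapJob above, and addClusterDeployments below, both render a Go text/template into a buffer and unmarshal the resulting JSON into the typed Kubernetes object. A stripped-down sketch of that round trip, using a deliberately tiny made-up template rather than the operator's real ones:

package example

import (
	"bytes"
	"encoding/json"
	"text/template"

	batchv1 "k8s.io/api/batch/v1"
)

// jobTemplate is a deliberately tiny stand-in; the operator's real templates
// render a full pod spec.
var jobTemplate = template.Must(template.New("job").Parse(`{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "metadata": {"name": "{{.Name}}", "namespace": "{{.Namespace}}"}
}`))

// renderJob executes the template into a buffer and decodes the JSON into a
// typed Job, the same template-then-unmarshal round trip used above.
func renderJob(name, namespace string) (*batchv1.Job, error) {
	var doc bytes.Buffer
	fields := struct{ Name, Namespace string }{Name: name, Namespace: namespace}
	if err := jobTemplate.Execute(&doc, fields); err != nil {
		return nil, err
	}
	job := &batchv1.Job{}
	if err := json.Unmarshal(doc.Bytes(), job); err != nil {
		return nil, err
	}
	return job, nil
}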
-func addClusterDeployments(clientset kubeapi.Interface, - cl *crv1.Pgcluster, namespace string, dataVolume, walVolume operator.StorageResult, - tablespaceVolumes map[string]operator.StorageResult) error { - - if err := backrest.CreateRepoDeployment(clientset, cl, true, false, 0); err != nil { - return err - } - - deploymentFields := getClusterDeploymentFields(clientset, cl, - dataVolume, walVolume, tablespaceVolumes) - - var primaryDoc bytes.Buffer - if err := config.DeploymentTemplate.Execute(&primaryDoc, deploymentFields); err != nil { - return err - } - - if operator.CRUNCHY_DEBUG { - config.DeploymentTemplate.Execute(os.Stdout, deploymentFields) - } - - deployment := &appsv1.Deployment{} - if err := json.Unmarshal(primaryDoc.Bytes(), deployment); err != nil { - return err - } - - if cl.Spec.WALStorage.StorageType != "" { - operator.AddWALVolumeAndMountsToPostgreSQL(&deployment.Spec.Template.Spec, walVolume, - cl.Spec.Name) - } - - operator.AddBackRestConfigVolumeAndMounts(&deployment.Spec.Template.Spec, cl.Name, cl.Spec.BackrestConfig) - - // determine if any of the container images need to be overridden - operator.OverrideClusterContainerImages(deployment.Spec.Template.Spec.Containers) - - if _, err := clientset.AppsV1().Deployments(namespace).Create(deployment); err != nil && - !kerrors.IsAlreadyExists(err) { - return err - } - - return nil -} - -// getBootstrapJobFields obtains the fields needed to populate the cluster bootstrap job template -func getBootstrapJobFields(clientset kubeapi.Interface, - cluster *crv1.Pgcluster, dataVolume, walVolume operator.StorageResult, - tablespaceVolumes map[string]operator.StorageResult) (operator.BootstrapJobTemplateFields, error) { - - restoreClusterName := cluster.Spec.PGDataSource.RestoreFrom - restoreOpts := strconv.Quote(cluster.Spec.PGDataSource.RestoreOpts) - - bootstrapFields := operator.BootstrapJobTemplateFields{ - DeploymentTemplateFields: getClusterDeploymentFields(clientset, cluster, dataVolume, - walVolume, tablespaceVolumes), - RestoreFrom: cluster.Spec.PGDataSource.RestoreFrom, - RestoreOpts: restoreOpts[1 : len(restoreOpts)-1], - } - - // A recovery target should also have a recovery target action. The PostgreSQL - // and pgBackRest defaults are `pause` which requires the user to execute SQL - // before the cluster will accept any writes. If no action has been specified, - // use `promote` which accepts writes as soon as recovery succeeds. 
- // - // - https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET - // - https://pgbackrest.org/command.html#command-restore/category-command/option-target-action - // - if strings.Contains(restoreOpts, "--target") && - !strings.Contains(restoreOpts, "--target-action") { - bootstrapFields.RestoreOpts = - strings.TrimSpace(bootstrapFields.RestoreOpts + " --target-action=promote") - } - - // Grab the pgBackRest secret from the "restore from" cluster to obtain the annotations - // containing the additional configuration details needed to bootstrap from the clusters - // pgBackRest repository - restoreFromSecret, err := clientset.CoreV1().Secrets(cluster.GetNamespace()).Get( - fmt.Sprintf(util.BackrestRepoSecretName, restoreClusterName), metav1.GetOptions{}) - if err != nil { - return bootstrapFields, err - } - - // Grab the cluster to restore from to see if it still exists - restoreCluster, err := clientset.CrunchydataV1().Pgclusters(cluster.GetNamespace()).Get(restoreClusterName, metav1.GetOptions{}) - found := true - if err != nil { - if !kerrors.IsNotFound(err) { - return bootstrapFields, err - } - found = false - } - - // If the cluster exists, only proceed if it isnt shutdown - if found && (restoreCluster.Status.State == crv1.PgclusterStateShutdown) { - return bootstrapFields, fmt.Errorf("Unable to bootstrap cluster %s from cluster %s "+ - "(namespace %s) because it has a %s status", cluster.GetName(), - restoreClusterName, cluster.GetNamespace(), - string(restoreCluster.Status.State)) - } - - // Now override any backrest env vars for the bootstrap job - bootstrapBackrestVars, err := operator.GetPgbackrestBootstrapEnvVars(restoreClusterName, - cluster.GetName(), restoreFromSecret) - if err != nil { - return bootstrapFields, err - } - bootstrapFields.PgbackrestEnvVars = bootstrapBackrestVars - - // if an s3 restore is detected, override or set the pgbackrest S3 env vars, otherwise do - // not set the s3 env vars at all - s3Restore := backrest.S3RepoTypeCLIOptionExists(cluster.Spec.PGDataSource.RestoreOpts) - if s3Restore { - // Now override any backrest S3 env vars for the bootstrap job - bootstrapFields.PgbackrestS3EnvVars = operator.GetPgbackrestBootstrapS3EnvVars( - cluster.Spec.PGDataSource.RestoreFrom, restoreFromSecret) - } else { - bootstrapFields.PgbackrestS3EnvVars = "" - } - - return bootstrapFields, nil -} - -// getClusterDeploymentFields obtains the fields needed to populate the cluster deployment template -func getClusterDeploymentFields(clientset kubernetes.Interface, - cl *crv1.Pgcluster, dataVolume, walVolume operator.StorageResult, - tablespaceVolumes map[string]operator.StorageResult) operator.DeploymentTemplateFields { - - namespace := cl.GetNamespace() - - log.Infof("creating Pgcluster %s in namespace %s", cl.Name, namespace) - - cl.Spec.UserLabels["name"] = cl.Spec.Name - cl.Spec.UserLabels[config.LABEL_PG_CLUSTER] = cl.Spec.ClusterName - - // if the current deployment label value does not match current primary name - // update the label so that the new deployment will match the existing PVC - // as determined previously - // Note that the use of this value brings the initial deployment creation in line with - // the paradigm used during cluster restoration, as in operator/backrest/restore.go - if cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY] != cl.Spec.UserLabels[config.LABEL_DEPLOYMENT_NAME] { - cl.Spec.UserLabels[config.LABEL_DEPLOYMENT_NAME] = cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY] - } - - 
cl.Spec.UserLabels[config.LABEL_PGOUSER] = cl.ObjectMeta.Labels[config.LABEL_PGOUSER] - cl.Spec.UserLabels[config.LABEL_PG_CLUSTER_IDENTIFIER] = cl.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER] - - // Set the Patroni scope to the name of the primary deployment. Replicas will get scope using the - // 'crunchy-pgha-scope' label on the pgcluster - cl.Spec.UserLabels[config.LABEL_PGHA_SCOPE] = cl.Spec.Name - - // set up a map of the names of the tablespaces as well as the storage classes - tablespaceStorageTypeMap := operator.GetTablespaceStorageTypeMap(cl.Spec.TablespaceMounts) - - // combine supplemental groups from all volumes - var supplementalGroups []int64 - supplementalGroups = append(supplementalGroups, dataVolume.SupplementalGroups...) - for _, v := range tablespaceVolumes { - supplementalGroups = append(supplementalGroups, v.SupplementalGroups...) - } - - //create the primary deployment - deploymentFields := operator.DeploymentTemplateFields{ - Name: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], - IsInit: true, - Replicas: "0", - ClusterName: cl.Spec.Name, - Port: cl.Spec.Port, - CCPImagePrefix: util.GetValueOrDefault(cl.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix), - CCPImage: cl.Spec.CCPImage, - CCPImageTag: cl.Spec.CCPImageTag, - PVCName: dataVolume.InlineVolumeSource(), - DeploymentLabels: operator.GetLabelsFromMap(cl.Spec.UserLabels), - PodAnnotations: operator.GetAnnotations(cl, crv1.ClusterAnnotationPostgres), - PodLabels: operator.GetLabelsFromMap(cl.Spec.UserLabels), - DataPathOverride: cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], - Database: cl.Spec.Database, - SecurityContext: operator.GetPodSecurityContext(supplementalGroups), - RootSecretName: cl.Spec.RootSecretName, - PrimarySecretName: cl.Spec.PrimarySecretName, - UserSecretName: cl.Spec.UserSecretName, - NodeSelector: operator.GetAffinity(cl.Spec.UserLabels["NodeLabelKey"], cl.Spec.UserLabels["NodeLabelValue"], "In"), - PodAntiAffinity: operator.GetPodAntiAffinity(cl, crv1.PodAntiAffinityDeploymentDefault, cl.Spec.PodAntiAffinity.Default), - ContainerResources: operator.GetResourcesJSON(cl.Spec.Resources, cl.Spec.Limits), - ConfVolume: operator.GetConfVolume(clientset, cl, namespace), - ExporterAddon: operator.GetExporterAddon(clientset, namespace, &cl.Spec), - BadgerAddon: operator.GetBadgerAddon(clientset, namespace, cl, cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY]), - PgmonitorEnvVars: operator.GetPgmonitorEnvVars(cl.Spec.UserLabels[config.LABEL_EXPORTER], cl.Spec.CollectSecretName), - ScopeLabel: config.LABEL_PGHA_SCOPE, - PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cl, cl.Labels[config.LABEL_BACKREST], cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], - cl.Spec.Port, cl.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]), - PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cl, clientset, namespace), - EnableCrunchyadm: operator.Pgo.Cluster.EnableCrunchyadm, - ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit, - SyncReplication: operator.GetSyncReplication(cl.Spec.SyncReplication), - Tablespaces: operator.GetTablespaceNames(cl.Spec.TablespaceMounts), - TablespaceVolumes: operator.GetTablespaceVolumesJSON(cl.Annotations[config.ANNOTATION_CURRENT_PRIMARY], tablespaceStorageTypeMap), - TablespaceVolumeMounts: operator.GetTablespaceVolumeMountsJSON(tablespaceStorageTypeMap), - TLSEnabled: cl.Spec.TLS.IsTLSEnabled(), - TLSOnly: cl.Spec.TLSOnly, - TLSSecret: cl.Spec.TLS.TLSSecret, - ReplicationTLSSecret: cl.Spec.TLS.ReplicationTLSSecret, - 
CASecret: cl.Spec.TLS.CASecret, - Standby: cl.Spec.Standby, - } - - return deploymentFields -} - -// DeleteCluster ... -func DeleteCluster(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string) error { - - var err error - log.Info("deleting Pgcluster object" + " in namespace " + namespace) - log.Info("deleting with Name=" + cl.Spec.Name + " in namespace " + namespace) - - //create rmdata job - isReplica := false - isBackup := false - removeData := true - removeBackup := false - err = CreateRmdataJob(clientset, cl, namespace, removeData, removeBackup, isReplica, isBackup) - if err != nil { - log.Error(err) - return err - } else { - publishDeleteCluster(namespace, cl.ObjectMeta.Labels[config.LABEL_PGOUSER], cl.Spec.Name, cl.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER]) - } - - return err - -} - -// scaleReplicaCreateMissingService creates a service for cluster replicas if -// it does not yet exist. -func scaleReplicaCreateMissingService(clientset kubernetes.Interface, replica *crv1.Pgreplica, cluster *crv1.Pgcluster, namespace string) error { - st := operator.Pgo.Cluster.ServiceType - if replica.Spec.UserLabels[config.LABEL_SERVICE_TYPE] != "" { - st = replica.Spec.UserLabels[config.LABEL_SERVICE_TYPE] - } else if cluster.Spec.UserLabels[config.LABEL_SERVICE_TYPE] != "" { - st = cluster.Spec.UserLabels[config.LABEL_SERVICE_TYPE] - } - - serviceName := fmt.Sprintf("%s-replica", replica.Spec.ClusterName) - serviceFields := ServiceTemplateFields{ - Name: serviceName, - ServiceName: serviceName, - ClusterName: replica.Spec.ClusterName, - Port: cluster.Spec.Port, - ServiceType: st, - } - - // only add references to the exporter / pgBadger ports - clusterLabels := cluster.ObjectMeta.GetLabels() - - if val, ok := clusterLabels[config.LABEL_EXPORTER]; ok && val == config.LABEL_TRUE { - serviceFields.ExporterPort = cluster.Spec.ExporterPort - } - - if val, ok := clusterLabels[config.LABEL_BADGER]; ok && val == config.LABEL_TRUE { - serviceFields.PGBadgerPort = cluster.Spec.PGBadgerPort - } - - return CreateService(clientset, &serviceFields, namespace) -} - -// scaleReplicaCreateDeployment creates a deployment for the cluster replica. 
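scaleReplicaCreateMissingService above resolves the Kubernetes service type by preferring a value set on the replica, then one set on the cluster, then the operator-wide default. That precedence chain reduces to a "first non-empty value wins" helper; the sketch below is illustrative only.

package example

// firstNonEmpty returns the first non-empty value, the precedence rule used
// when picking the service type: replica label, then cluster label, then the
// operator default from pgo.yaml.
func firstNonEmpty(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

// Illustrative usage:
//   serviceType := firstNonEmpty(replicaLabel, clusterLabel, "ClusterIP")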
-func scaleReplicaCreateDeployment(clientset kubernetes.Interface, - replica *crv1.Pgreplica, cluster *crv1.Pgcluster, namespace string, - dataVolume, walVolume operator.StorageResult, - tablespaceVolumes map[string]operator.StorageResult, -) error { - var err error - log.Debugf("Scale called for %s in %s", replica.Name, namespace) - - var replicaDoc bytes.Buffer - - serviceName := replica.Spec.ClusterName + "-replica" - //replicaFlag := true - - // replicaLabels := operator.GetPrimaryLabels(serviceName, replica.Spec.ClusterName, replicaFlag, cluster.Spec.UserLabels) - cluster.Spec.UserLabels[config.LABEL_REPLICA_NAME] = replica.Spec.Name - cluster.Spec.UserLabels["name"] = serviceName - cluster.Spec.UserLabels[config.LABEL_PG_CLUSTER] = replica.Spec.ClusterName - - archiveMode := "off" - if cluster.Spec.UserLabels[config.LABEL_ARCHIVE] == "true" { - archiveMode = "on" - } - if cluster.Labels[config.LABEL_BACKREST] == "true" { - //backrest requires archive mode be set to on - archiveMode = "on" - } - - image := cluster.Spec.CCPImage - - //check for --ccp-image-tag at the command line - imageTag := cluster.Spec.CCPImageTag - if replica.Spec.UserLabels[config.LABEL_CCP_IMAGE_TAG_KEY] != "" { - imageTag = replica.Spec.UserLabels[config.LABEL_CCP_IMAGE_TAG_KEY] - } - - cluster.Spec.UserLabels[config.LABEL_DEPLOYMENT_NAME] = replica.Spec.Name - - // set up a map of the names of the tablespaces as well as the storage classes - tablespaceStorageTypeMap := operator.GetTablespaceStorageTypeMap(cluster.Spec.TablespaceMounts) - - // combine supplemental groups from all volumes - var supplementalGroups []int64 - supplementalGroups = append(supplementalGroups, dataVolume.SupplementalGroups...) - for _, v := range tablespaceVolumes { - supplementalGroups = append(supplementalGroups, v.SupplementalGroups...) 
- } - - //create the replica deployment - replicaDeploymentFields := operator.DeploymentTemplateFields{ - Name: replica.Spec.Name, - ClusterName: replica.Spec.ClusterName, - Port: cluster.Spec.Port, - CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix), - CCPImageTag: imageTag, - CCPImage: image, - PVCName: dataVolume.InlineVolumeSource(), - Database: cluster.Spec.Database, - DataPathOverride: replica.Spec.Name, - ArchiveMode: archiveMode, - Replicas: "1", - ConfVolume: operator.GetConfVolume(clientset, cluster, namespace), - DeploymentLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels), - PodAnnotations: operator.GetAnnotations(cluster, crv1.ClusterAnnotationPostgres), - PodLabels: operator.GetLabelsFromMap(cluster.Spec.UserLabels), - SecurityContext: operator.GetPodSecurityContext(supplementalGroups), - RootSecretName: cluster.Spec.RootSecretName, - PrimarySecretName: cluster.Spec.PrimarySecretName, - UserSecretName: cluster.Spec.UserSecretName, - ContainerResources: operator.GetResourcesJSON(cluster.Spec.Resources, cluster.Spec.Limits), - NodeSelector: operator.GetAffinity(replica.Spec.UserLabels["NodeLabelKey"], replica.Spec.UserLabels["NodeLabelValue"], "In"), - PodAntiAffinity: operator.GetPodAntiAffinity(cluster, crv1.PodAntiAffinityDeploymentDefault, cluster.Spec.PodAntiAffinity.Default), - ExporterAddon: operator.GetExporterAddon(clientset, namespace, &cluster.Spec), - BadgerAddon: operator.GetBadgerAddon(clientset, namespace, cluster, replica.Spec.Name), - ScopeLabel: config.LABEL_PGHA_SCOPE, - PgbackrestEnvVars: operator.GetPgbackrestEnvVars(cluster, cluster.Labels[config.LABEL_BACKREST], replica.Spec.Name, - cluster.Spec.Port, cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE]), - PgbackrestS3EnvVars: operator.GetPgbackrestS3EnvVars(*cluster, clientset, namespace), - EnableCrunchyadm: operator.Pgo.Cluster.EnableCrunchyadm, - ReplicaReinitOnStartFail: !operator.Pgo.Cluster.DisableReplicaStartFailReinit, - SyncReplication: operator.GetSyncReplication(cluster.Spec.SyncReplication), - Tablespaces: operator.GetTablespaceNames(cluster.Spec.TablespaceMounts), - TablespaceVolumes: operator.GetTablespaceVolumesJSON(replica.Spec.Name, tablespaceStorageTypeMap), - TablespaceVolumeMounts: operator.GetTablespaceVolumeMountsJSON(tablespaceStorageTypeMap), - TLSEnabled: cluster.Spec.TLS.IsTLSEnabled(), - TLSOnly: cluster.Spec.TLSOnly, - TLSSecret: cluster.Spec.TLS.TLSSecret, - ReplicationTLSSecret: cluster.Spec.TLS.ReplicationTLSSecret, - CASecret: cluster.Spec.TLS.CASecret, - } - - switch replica.Spec.ReplicaStorage.StorageType { - case "", "emptydir": - log.Debug("PrimaryStorage.StorageType is emptydir") - err = config.DeploymentTemplate.Execute(&replicaDoc, replicaDeploymentFields) - case "existing", "create", "dynamic": - log.Debug("using the shared replica template ") - err = config.DeploymentTemplate.Execute(&replicaDoc, replicaDeploymentFields) - } - - if err != nil { - log.Error(err.Error()) - return err - } - - if operator.CRUNCHY_DEBUG { - config.DeploymentTemplate.Execute(os.Stdout, replicaDeploymentFields) - } - - replicaDeployment := appsv1.Deployment{} - err = json.Unmarshal(replicaDoc.Bytes(), &replicaDeployment) - if err != nil { - log.Error("error unmarshalling replica json into Deployment " + err.Error()) - return err - } - - if cluster.Spec.WALStorage.StorageType != "" { - operator.AddWALVolumeAndMountsToPostgreSQL(&replicaDeployment.Spec.Template.Spec, walVolume, replica.Spec.Name) - } - - 
operator.AddBackRestConfigVolumeAndMounts(&replicaDeployment.Spec.Template.Spec, cluster.Name, cluster.Spec.BackrestConfig) - - // determine if any of the container images need to be overridden - operator.OverrideClusterContainerImages(replicaDeployment.Spec.Template.Spec.Containers) - - // set the replica scope to the same scope as the primary, i.e. the scope defined using label - // 'crunchy-pgha-scope' - replicaDeployment.Labels[config.LABEL_PGHA_SCOPE] = cluster.Labels[config.LABEL_PGHA_SCOPE] - replicaDeployment.Spec.Template.Labels[config.LABEL_PGHA_SCOPE] = cluster.Labels[config.LABEL_PGHA_SCOPE] - - _, err = clientset.AppsV1().Deployments(namespace).Create(&replicaDeployment) - return err -} - -// DeleteReplica ... -func DeleteReplica(clientset kubernetes.Interface, cl *crv1.Pgreplica, namespace string) error { - - var err error - log.Info("deleting Pgreplica object" + " in namespace " + namespace) - log.Info("deleting with Name=" + cl.Spec.Name + " in namespace " + namespace) - deletePropagation := metav1.DeletePropagationForeground - err = clientset. - AppsV1().Deployments(namespace). - Delete(cl.Spec.Name, &metav1.DeleteOptions{ - PropagationPolicy: &deletePropagation, - }) - - return err - -} - -func publishScaleError(namespace string, username string, cluster *crv1.Pgcluster) { - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventScaleClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: username, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventScaleCluster, - }, - Clustername: cluster.Spec.UserLabels[config.LABEL_REPLICA_NAME], - Replicaname: cluster.Spec.UserLabels[config.LABEL_PG_CLUSTER], - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } -} - -func publishDeleteCluster(namespace, username, clusterName, identifier string) { - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventDeleteClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: username, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventDeleteCluster, - }, - Clustername: clusterName, - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } -} - -// ScaleClusterInfo contains information about a cluster obtained when scaling the various -// deployments for a cluster. This includes the name of the primary deployment, all replica -// deployments, along with the names of the services enabled for the cluster. -type ScaleClusterInfo struct { - PrimaryDeployment string - ReplicaDeployments []string - PGBackRestRepoDeployment string - PGBouncerDeployment string -} - -// ShutdownCluster is responsible for shutting down a cluster that is currently running. This -// includes changing the replica count for all clusters to 0, and then updating the pgcluster -// with a shutdown status. -func ShutdownCluster(clientset kubeapi.Interface, cluster crv1.Pgcluster) error { - - // first ensure the current primary deployment is properly recorded in the pg - // cluster. Only consider primaries that are running, as there could be - // evicted, etc. 
pods hanging around - selector := fmt.Sprintf("%s=%s,%s=%s", config.LABEL_PG_CLUSTER, cluster.Name, - config.LABEL_PGHA_ROLE, config.LABEL_PGHA_ROLE_PRIMARY) - - options := metav1.ListOptions{ - FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(), - LabelSelector: selector, - } - - // only consider pods that are running - pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(options) - - if err != nil { - return err - } - - if len(pods.Items) > 1 { - return fmt.Errorf("Cluster Operator: Invalid number of primary pods (%d) found when "+ - "shutting down cluster %s", len(pods.Items), cluster.Name) - } - - primaryPod := pods.Items[0] - if cluster.Annotations == nil { - cluster.Annotations = make(map[string]string) - } - cluster.Annotations[config.ANNOTATION_PRIMARY_DEPLOYMENT] = - primaryPod.Labels[config.LABEL_DEPLOYMENT_NAME] - - if _, err := clientset.CrunchydataV1().Pgclusters(cluster.Namespace).Update(&cluster); err != nil { - return fmt.Errorf("Cluster Operator: Unable to update the current primary deployment "+ - "in the pgcluster when shutting down cluster %s", cluster.Name) - } - - // disable autofailover to prevent failovers while shutting down deployments - if err := util.ToggleAutoFailover(clientset, false, cluster.Labels[config.LABEL_PGHA_SCOPE], - cluster.Namespace); err != nil { - return fmt.Errorf("Cluster Operator: Unable to toggle autofailover when shutting "+ - "down cluster %s", cluster.Name) - } - - clusterInfo, err := ScaleClusterDeployments(clientset, cluster, 0, true, true, true, true) - if err != nil { - return err - } - patch, err := json.Marshal(map[string]interface{}{ - "status": crv1.PgclusterStatus{ - State: crv1.PgclusterStateShutdown, - Message: fmt.Sprintf("Database shutdown along with the following services: %v", []string{ - clusterInfo.PGBackRestRepoDeployment, - clusterInfo.PGBouncerDeployment, - }), - }, - }) - if err == nil { - _, err = clientset.CrunchydataV1().Pgclusters(cluster.Namespace).Patch(cluster.Name, types.MergePatchType, patch) - } - if err != nil { - return err - } - - if err := clientset.CoreV1().ConfigMaps(cluster.Namespace).Delete(fmt.Sprintf("%s-leader", - cluster.Labels[config.LABEL_PGHA_SCOPE]), &metav1.DeleteOptions{}); err != nil { - return err - } - - publishClusterShutdown(cluster) - - return nil -} - -// StartupCluster is responsible for starting a cluster that was previsouly shutdown. This -// includes changing the replica count for all clusters to 1, and then updating the pgcluster -// with a shutdown status. -func StartupCluster(clientset kubernetes.Interface, cluster crv1.Pgcluster) error { - - log.Debugf("Cluster Operator: starting cluster %s", cluster.Name) - - // ensure autofailover is enabled to ensure proper startup of the cluster - if err := util.ToggleAutoFailover(clientset, true, cluster.Labels[config.LABEL_PGHA_SCOPE], - cluster.Namespace); err != nil { - return fmt.Errorf("Cluster Operator: Unable to toggle autofailover when starting "+ - "cluster %s", cluster.Name) - } - - // Scale up the primary and supporting services, but not the replicas. Replicas will be - // scaled up after the primary is ready. This ensures the primary at the time of shutdown - // is the primary when the cluster comes back online. - clusterInfo, err := ScaleClusterDeployments(clientset, cluster, 1, true, false, true, true) - if err != nil { - return err - } - - log.Debugf("Cluster Operator: primary deployment %s started for cluster %s along with "+ - "services %v. 
The following replicas will be started once the primary has initialized: "+ - "%v", clusterInfo.PrimaryDeployment, cluster.Name, append(make([]string, 0), - clusterInfo.PGBackRestRepoDeployment, clusterInfo.PGBouncerDeployment), - clusterInfo.ReplicaDeployments) - - return nil -} - -// ScaleClusterDeployments scales all deployments for a cluster to the number of replicas -// specified using the 'replicas' parameter. This is typically used to scale-up or down the -// primary deployment and any supporting services (pgBackRest and pgBouncer) when shutting down -// or starting up the cluster due to a scale or scale-down request. -func ScaleClusterDeployments(clientset kubernetes.Interface, cluster crv1.Pgcluster, replicas int, - scalePrimary, scaleReplicas, scaleBackRestRepo, - scalePGBouncer bool) (clusterInfo ScaleClusterInfo, err error) { - - clusterName := cluster.Name - namespace := cluster.Namespace - // Get *all* remaining deployments for the cluster. This includes the deployment for the - // primary, any replicas, the pgBackRest repo and any optional services (e.g. pgBouncer) - var deploymentList *appsv1.DeploymentList - selector := fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, clusterName) - deploymentList, err = clientset. - AppsV1().Deployments(namespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return - } - - for _, deployment := range deploymentList.Items { - - // determine if the deployment is a primary, replica, or supporting service (pgBackRest, - // pgBouncer, etc.) - switch { - case deployment.Name == cluster.Annotations[config.ANNOTATION_CURRENT_PRIMARY]: - clusterInfo.PrimaryDeployment = deployment.Name - // if not scaling the primary simply move on to the next deployment - if !scalePrimary { - continue - } - case deployment.Labels[config.LABEL_PGBOUNCER] == "true": - clusterInfo.PGBouncerDeployment = deployment.Name - // if not scaling services simply move on to the next deployment - if !scalePGBouncer { - continue - } - // if the replica total is greater than 0, set number of pgBouncer - // replicas to the number that is specified in the cluster entry - if replicas > 0 { - replicas = int(cluster.Spec.PgBouncer.Replicas) - } - case deployment.Labels[config.LABEL_PGO_BACKREST_REPO] == "true": - clusterInfo.PGBackRestRepoDeployment = deployment.Name - // if not scaling services simply move on to the next deployment - if !scaleBackRestRepo { - continue - } - default: - clusterInfo.ReplicaDeployments = append(clusterInfo.ReplicaDeployments, - deployment.Name) - // if not scaling replicas simply move on to the next deployment - if !scaleReplicas { - continue - } - } - - log.Debugf("scaling deployment %s to %d for cluster %s", deployment.Name, replicas, - clusterName) - - // Scale the deployment according to the number of replicas specified. 
If an error is - // encountered, log it and move on to scaling the next deployment - patchString := fmt.Sprintf(`{"spec":{"replicas":%d}}`, replicas) - if _, err := clientset.AppsV1().Deployments(namespace).Patch(deployment.GetName(), - types.MergePatchType, []byte(patchString)); err != nil { - log.Errorf("Error scaling deployment %s to %d: %v", deployment.Name, replicas, err) - } - } - return -} diff --git a/internal/operator/cluster/failover.go b/internal/operator/cluster/failover.go deleted file mode 100644 index 45870d9a75..0000000000 --- a/internal/operator/cluster/failover.go +++ /dev/null @@ -1,138 +0,0 @@ -// Package cluster holds the cluster CRD logic and definitions -// A cluster is comprised of a primary service, replica service, -// primary deployment, and replica deployment -package cluster - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/rest" -) - -// FailoverBase ... 
-// gets called first on a failover -func FailoverBase(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask, restconfig *rest.Config) { - var err error - - //look up the pgcluster for this task - //in the case, the clustername is passed as a key in the - //parameters map - var clusterName string - for k := range task.Spec.Parameters { - clusterName = k - } - - cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - return - } - - //create marker (clustername, namespace) - err = PatchpgtaskFailoverStatus(clientset, task, namespace) - if err != nil { - log.Errorf("could not set failover started marker for task %s cluster %s", task.Spec.Name, clusterName) - return - } - - //get initial count of replicas --selector=pg-cluster=clusterName - selector := config.LABEL_PG_CLUSTER + "=" + clusterName - replicaList, err := clientset.CrunchydataV1().Pgreplicas(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - return - } - log.Debugf("replica count before failover is %d", len(replicaList.Items)) - - //publish event for failover - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventFailoverClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: task.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventFailoverCluster, - }, - Clustername: clusterName, - Target: task.ObjectMeta.Labels[config.LABEL_TARGET], - } - - err = events.Publish(f) - if err != nil { - log.Error(err) - } - - Failover(cluster.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER], clientset, clusterName, task, namespace, restconfig) - - //publish event for failover completed - topics = make([]string, 1) - topics[0] = events.EventTopicCluster - - g := events.EventFailoverClusterCompletedFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: task.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventFailoverClusterCompleted, - }, - Clustername: clusterName, - Target: task.ObjectMeta.Labels[config.LABEL_TARGET], - } - - err = events.Publish(g) - if err != nil { - log.Error(err) - } - - //remove marker - -} - -func PatchpgtaskFailoverStatus(clientset pgo.Interface, oldCrd *crv1.Pgtask, namespace string) error { - - //change it - oldCrd.Spec.Parameters[config.LABEL_FAILOVER_STARTED] = time.Now().Format(time.RFC3339) - - //create the patch - patchBytes, err := json.Marshal(map[string]interface{}{ - "spec": map[string]interface{}{ - "parameters": oldCrd.Spec.Parameters, - }, - }) - if err != nil { - return err - } - - //apply patch - _, err6 := clientset.CrunchydataV1().Pgtasks(namespace).Patch(oldCrd.Name, types.MergePatchType, patchBytes) - - return err6 - -} diff --git a/internal/operator/cluster/failoverlogic.go b/internal/operator/cluster/failoverlogic.go deleted file mode 100644 index 3dee469682..0000000000 --- a/internal/operator/cluster/failoverlogic.go +++ /dev/null @@ -1,232 +0,0 @@ -// Package cluster holds the cluster CRD logic and definitions -// A cluster is comprised of a primary service, replica service, -// primary deployment, and replica deployment -package cluster - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/fields" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -func Failover(identifier string, clientset kubeapi.Interface, clusterName string, task *crv1.Pgtask, namespace string, restconfig *rest.Config) error { - - var pod *v1.Pod - var err error - target := task.ObjectMeta.Labels[config.LABEL_TARGET] - - log.Infof("Failover called on [%s] target [%s]", clusterName, target) - - pod, err = util.GetPod(clientset, target, namespace) - if err != nil { - log.Error(err) - return err - } - log.Debugf("pod selected to failover to is %s", pod.Name) - - updateFailoverStatus(clientset, task, namespace, clusterName, "deleted primary deployment "+clusterName) - - //trigger the failover to the selected replica - if err := promote(pod, clientset, namespace, restconfig); err != nil { - log.Warn(err) - } - - publishPromoteEvent(identifier, namespace, task.ObjectMeta.Labels[config.LABEL_PGOUSER], clusterName, target) - - updateFailoverStatus(clientset, task, namespace, clusterName, "promoting pod "+pod.Name+" target "+target) - - //relabel the deployment with primary labels - //by setting service-name=clustername - upod, err := clientset.CoreV1().Pods(namespace).Get(pod.Name, metav1.GetOptions{}) - if err != nil { - log.Error(err) - log.Error("error in getting pod during failover relabel") - return err - } - - //set the service-name label to the cluster name to match - //the primary service selector - log.Debugf("setting label on pod %s=%s", config.LABEL_SERVICE_NAME, clusterName) - - err = kubeapi.AddLabelToPod(clientset, upod, config.LABEL_SERVICE_NAME, clusterName, namespace) - if err != nil { - log.Error(err) - log.Error("error in updating pod during failover relabel") - return err - } - - targetDepName := upod.ObjectMeta.Labels[config.LABEL_DEPLOYMENT_NAME] - log.Debugf("targetDepName %s", targetDepName) - targetDep, err := clientset.AppsV1().Deployments(namespace).Get(targetDepName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - log.Errorf("not found error in getting Deployment during failover relabel %s", targetDepName) - return err - } - - err = kubeapi.AddLabelToDeployment(clientset, targetDep, config.LABEL_SERVICE_NAME, clusterName, namespace) - if err != nil { - log.Error(err) - log.Error("error in updating deployment during failover relabel") - return err - } - - updateFailoverStatus(clientset, task, namespace, clusterName, "updating label deployment...pod "+pod.Name+"was the failover target...failover completed") - - //update the pgcluster current-primary to new deployment name - cluster, err := 
clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - log.Errorf("could not find pgcluster %s with labels", clusterName) - return err - } - - // update the CRD with the new current primary. If there is an error, log it - // here, otherwise return - if err := util.CurrentPrimaryUpdate(clientset, cluster, target, namespace); err != nil { - log.Error(err) - return err - } - - return nil - -} - -func updateFailoverStatus(clientset pgo.Interface, task *crv1.Pgtask, namespace, clusterName, message string) { - - log.Debugf("updateFailoverStatus namespace=[%s] taskName=[%s] message=[%s]", namespace, task.Name, message) - - //update the task - t, err := clientset.CrunchydataV1().Pgtasks(task.Namespace).Get(task.Name, metav1.GetOptions{}) - if err != nil { - return - } - *task = *t - - task.Status.Message = message - - t, err = clientset.CrunchydataV1().Pgtasks(task.Namespace).Update(task) - if err != nil { - return - } - *task = *t - -} - -func promote( - pod *v1.Pod, - clientset kubernetes.Interface, - namespace string, restconfig *rest.Config) error { - - // generate the curl command that will be run on the pod selected for the failover in order - // to trigger the failover and promote that specific pod to primary - command := make([]string, 3) - command[0] = "/bin/bash" - command[1] = "-c" - command[2] = fmt.Sprintf("curl -s http://127.0.0.1:%s/failover -XPOST "+ - "-d '{\"candidate\":\"%s\"}'", config.DEFAULT_PATRONI_PORT, pod.Name) - - log.Debugf("running Exec with namespace=[%s] podname=[%s] container name=[%s]", namespace, pod.Name, pod.Spec.Containers[0].Name) - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, command, pod.Spec.Containers[0].Name, pod.Name, namespace, nil) - log.Debugf("stdout=[%s] stderr=[%s]", stdout, stderr) - if err != nil { - log.Error(err) - } - - return err -} - -func publishPromoteEvent(identifier, namespace, username, clusterName, target string) { - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventFailoverClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: username, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventFailoverCluster, - }, - Clustername: clusterName, - Target: target, - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } - -} - -// RemovePrimaryOnRoleChangeTag sets the 'primary_on_role_change' tag to null in the -// Patroni DCS, effectively removing the tag. This is accomplished by exec'ing into -// the primary PG pod, and sending a patch request to update the appropriate data (i.e. -// the 'primary_on_role_change' tag) in the DCS. 
-func RemovePrimaryOnRoleChangeTag(clientset kubernetes.Interface, restconfig *rest.Config, - clusterName, namespace string) error { - - selector := config.LABEL_PG_CLUSTER + "=" + clusterName + - "," + config.LABEL_PGHA_ROLE + "=" + config.LABEL_PGHA_ROLE_PRIMARY - - // only consider pods that are running - options := metav1.ListOptions{ - FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(), - LabelSelector: selector, - } - - pods, err := clientset.CoreV1().Pods(namespace).List(options) - - if err != nil { - log.Error(err) - return err - } else if len(pods.Items) > 1 { - log.Error("More than one primary found after completing the post-failover backup") - } - pod := pods.Items[0] - - // generate the curl command that will be run on the pod selected for the failover in order - // to trigger the failover and promote that specific pod to primary - command := make([]string, 3) - command[0] = "/bin/bash" - command[1] = "-c" - command[2] = fmt.Sprintf("curl -s 127.0.0.1:%s/config -XPATCH -d "+ - "'{\"tags\":{\"primary_on_role_change\":null}}'", config.DEFAULT_PATRONI_PORT) - - log.Debugf("running Exec command '%s' with namespace=[%s] podname=[%s] container name=[%s]", - command, namespace, pod.Name, pod.Spec.Containers[0].Name) - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, command, - pod.Spec.Containers[0].Name, pod.Name, namespace, nil) - log.Debugf("stdout=[%s] stderr=[%s]", stdout, stderr) - if err != nil { - log.Error(err) - return err - } - return nil -} diff --git a/internal/operator/cluster/pgadmin.go b/internal/operator/cluster/pgadmin.go deleted file mode 100644 index 86ea0129cc..0000000000 --- a/internal/operator/cluster/pgadmin.go +++ /dev/null @@ -1,462 +0,0 @@ -package cluster - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/base64" - "encoding/json" - "fmt" - weakrand "math/rand" - "os" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/operator/pvc" - "github.com/crunchydata/postgres-operator/internal/pgadmin" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - - log "github.com/sirupsen/logrus" - appsv1 "k8s.io/api/apps/v1" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -const ( - defPgAdminPort = config.DEFAULT_PGADMIN_PORT - defSetupUsername = "pgadminsetup" -) - -type pgAdminTemplateFields struct { - Name string - ClusterName string - CCPImagePrefix string - CCPImageTag string - DisableFSGroup bool - Port string - ServicePort string - InitUser string - InitPass string - PVCName string -} - -// pgAdminDeploymentFormat is the name of the Kubernetes Deployment that -// manages pgAdmin, and follows the format "-pgadmin" -const pgAdminDeploymentFormat = "%s-pgadmin" - -// initPassLen is the length of the one-time setup password for pgadmin -const initPassLen = 20 - -const ( - deployTimeout = 60 - pollInterval = 3 -) - -// AddPgAdmin contains the various functions that are used to add a pgAdmin -// Deployment to a PostgreSQL cluster -// -// Any returned error is logged in the calling function -func AddPgAdmin( - clientset kubeapi.Interface, - restconfig *rest.Config, - cluster *crv1.Pgcluster, - storageClass *crv1.PgStorageSpec) error { - log.Debugf("adding pgAdmin") - - // first, ensure that the Cluster CR is updated to know that there is now - // a pgAdmin associated with it. This may also include other CR updates too, - // such as if the pgAdmin is being added via a pgtask, and as such the - // values for memory/CPU may be set as well. 
- // - // if we cannot update this we abort - cluster.Labels[config.LABEL_PGADMIN] = "true" - - ns := cluster.Namespace - - if _, err := clientset.CrunchydataV1().Pgclusters(ns).Update(cluster); err != nil { - return err - } - - // Using deployment/service name for PVC also - pvcName := fmt.Sprintf(pgAdminDeploymentFormat, cluster.Name) - - // create the pgAdmin storage volume - if _, err := pvc.CreateIfNotExists(clientset, *storageClass, pvcName, cluster.Name, ns); err != nil { - log.Errorf("Error creating PVC: %s", err.Error()) - return err - } else { - log.Info("created pgadmin PVC =" + pvcName + " in namespace " + ns) - } - - // create the pgAdmin deployment - if err := createPgAdminDeployment(clientset, cluster, pvcName); err != nil { - return err - } - - // create the pgAdmin service - if err := createPgAdminService(clientset, cluster); err != nil { - return err - } - - log.Debugf("added pgAdmin to cluster [%s]", cluster.Name) - - return nil -} - -// AddPgAdminFromPgTask is a method that helps to bring up -// the pgAdmin deployment that sits alongside a PostgreSQL cluster -func AddPgAdminFromPgTask(clientset kubeapi.Interface, restconfig *rest.Config, task *crv1.Pgtask) { - clusterName := task.Spec.Parameters[config.LABEL_PGADMIN_TASK_CLUSTER] - namespace := task.Spec.Namespace - storage := task.Spec.StorageSpec - - log.Debugf("add pgAdmin from task called for cluster [%s] in namespace [%s]", - clusterName, namespace) - - // first, check to ensure that the cluster still exists - cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - return - } - - // bring up the pgAdmin deployment - if err := AddPgAdmin(clientset, restconfig, cluster, &storage); err != nil { - log.Error(err) - return - } - - // publish an event - publishPgAdminEvent(events.EventCreatePgAdmin, task) - - // at this point, the pgtask is successful, so we can safely remove it - // we can fallthrough in the event of an error, because we're returning anyway - if err := clientset.CrunchydataV1().Pgtasks(namespace).Delete(task.Name, &metav1.DeleteOptions{}); err != nil { - log.Error(err) - } - - deployName := fmt.Sprintf(pgAdminDeploymentFormat, clusterName) - if err := waitForDeploymentReady(clientset, namespace, deployName, deployTimeout, pollInterval); err != nil { - log.Error(err) - } - - // Lock down setup user and prepopulate connections for managed users - if err := BootstrapPgAdminUsers(clientset, restconfig, cluster); err != nil { - log.Error(err) - } - - return -} - -func BootstrapPgAdminUsers( - clientset kubernetes.Interface, - restconfig *rest.Config, - cluster *crv1.Pgcluster) error { - - qr, err := pgadmin.GetPgAdminQueryRunner(clientset, restconfig, cluster) - if err != nil { - return err - } else if qr == nil { - // Cluster doesn't claim to have pgAdmin setup, we're done here - return nil - } - - // Disables setup user and breaks the password hash value - err = qr.Exec("UPDATE user SET active = 0, password = substr(password,1,50) WHERE id=1;") - if err != nil { - log.Errorf("failed to lock down pgadmin db [%v], deleting instance", err) - return err - } - - // Get service details and prep connection metadata - service, err := clientset.CoreV1().Services(cluster.Namespace).Get(cluster.Name, metav1.GetOptions{}) - if err != nil { - return err - } - - dbService := pgadmin.ServerEntryFromPgService(service, cluster.Name) - - // Get current users of cluster and add them to pgadmin's db if they - // have kubernetes-stored
passwords, using the connection info above - // - - // Get the secrets managed by Kubernetes - any users existing only in - // Postgres don't have their passwords available - sel := fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, cluster.Name) - secretList, err := clientset. - CoreV1().Secrets(cluster.Namespace). - List(metav1.ListOptions{LabelSelector: sel}) - if err != nil { - return err - } - for _, secret := range secretList.Items { - dbService.Password = "" - - uname, ok := secret.Data["username"] - if !ok { - continue - } - user := string(uname[:]) - if secret.Name != fmt.Sprintf("%s-%s-secret", cluster.Name, user) { - // Doesn't look like the secrets we seek - continue - } - if util.IsPostgreSQLUserSystemAccount(user) { - continue - } - rawpass, ok := secret.Data["password"] - if !ok { - // password not stored in secret, can't use this one - continue - } - - dbService.Password = string(rawpass[:]) - err = pgadmin.SetLoginPassword(qr, user, dbService.Password) - if err != nil { - return err - } - - if dbService.Name != "" { - err = pgadmin.SetClusterConnection(qr, user, dbService) - if err != nil { - return err - } - } - } - // - // Initial autobinding complete - - return nil -} - -// DeletePgAdmin contains the various functions that are used to delete a -// pgAdmin Deployment for a PostgreSQL cluster -// -// Any errors that are returned should be logged in the calling function, though -// some logging occurs in this function as well -func DeletePgAdmin(clientset kubeapi.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error { - clusterName := cluster.Name - namespace := cluster.Namespace - - log.Debugf("delete pgAdmin from cluster [%s] in namespace [%s]", clusterName, namespace) - - // first, ensure that the Cluster CR is updated to know that there is no - // longer a pgAdmin associated with it - // if we cannot update this we abort - cluster.Labels[config.LABEL_PGADMIN] = "false" - - if _, err := clientset.CrunchydataV1().Pgclusters(namespace).Update(cluster); err != nil { - return err - } - - // delete the various Kubernetes objects associated with the pgAdmin - // these include the Service, Deployment, and the pgAdmin data PVC - // If these fail, we'll just pass through - // - // Delete the PVC, Service and Deployment, which share the same naem - pgAdminDeploymentName := fmt.Sprintf(pgAdminDeploymentFormat, clusterName) - - deletePropagation := metav1.DeletePropagationForeground - if err := clientset.CoreV1().PersistentVolumeClaims(namespace).Delete(pgAdminDeploymentName, &metav1.DeleteOptions{ - PropagationPolicy: &deletePropagation, - }); err != nil { - log.Warn(err) - } - - if err := clientset.CoreV1().Services(namespace).Delete(pgAdminDeploymentName, &metav1.DeleteOptions{}); err != nil { - log.Warn(err) - } - - if err := clientset.AppsV1().Deployments(namespace).Delete(pgAdminDeploymentName, &metav1.DeleteOptions{ - PropagationPolicy: &deletePropagation, - }); err != nil { - log.Warn(err) - } - - return nil -} - -// DeletePgAdminFromPgTask is effectively a legacy method that helps to delete -// the pgAdmin deployment that sits alongside a PostgreSQL cluster -func DeletePgAdminFromPgTask(clientset kubeapi.Interface, restconfig *rest.Config, task *crv1.Pgtask) { - clusterName := task.Spec.Parameters[config.LABEL_PGADMIN_TASK_CLUSTER] - namespace := task.Spec.Namespace - - log.Debugf("delete pgAdmin from task called for cluster [%s] in namespace [%s]", - clusterName, namespace) - - // find the pgcluster that is associated with this task - cluster, err := 
clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - return - } - - // attempt to delete the pgAdmin! - if err := DeletePgAdmin(clientset, restconfig, cluster); err != nil { - log.Error(err) - return - } - - // publish an event - publishPgAdminEvent(events.EventDeletePgAdmin, task) - - // lastly, remove the task - if err := clientset.CrunchydataV1().Pgtasks(namespace).Delete(task.Name, &metav1.DeleteOptions{}); err != nil { - log.Warn(err) - } -} - -// createPgAdminDeployment creates the Kubernetes Deployment for pgAdmin -func createPgAdminDeployment(clientset kubernetes.Interface, cluster *crv1.Pgcluster, pvcName string) error { - log.Debugf("creating pgAdmin deployment: %s", cluster.Name) - - // derive the name of the Deployment...which is also used as the name of the - // service - pgAdminDeploymentName := fmt.Sprintf(pgAdminDeploymentFormat, cluster.Name) - - // Password provided to initialize pgadmin setup (admin) - credentials - // not given to users (account gets disabled) - // - // This password is throwaway so low entropy genreation method is fine - randBytes := make([]byte, initPassLen) - // weakrand Read is always nil error - weakrand.Read(randBytes) - throwawayPass := base64.RawStdEncoding.EncodeToString(randBytes) - - // get the fields that will be substituted in the pgAdmin template - fields := pgAdminTemplateFields{ - Name: pgAdminDeploymentName, - ClusterName: cluster.Name, - CCPImagePrefix: operator.Pgo.Cluster.CCPImagePrefix, - CCPImageTag: cluster.Spec.CCPImageTag, - DisableFSGroup: operator.Pgo.Cluster.DisableFSGroup, - Port: defPgAdminPort, - InitUser: defSetupUsername, - InitPass: throwawayPass, - PVCName: pvcName, - } - - // For debugging purposes, put the template substitution in stdout - if operator.CRUNCHY_DEBUG { - config.PgAdminTemplate.Execute(os.Stdout, fields) - } - - // perform the actual template substitution - doc := bytes.Buffer{} - - if err := config.PgAdminTemplate.Execute(&doc, fields); err != nil { - return err - } - - // Set up the Kubernetes deployment for pgAdmin - deployment := appsv1.Deployment{} - - if err := json.Unmarshal(doc.Bytes(), &deployment); err != nil { - return err - } - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_CRUNCHY_PGADMIN, - &deployment.Spec.Template.Spec.Containers[0]) - - if _, err := clientset.AppsV1().Deployments(cluster.Namespace).Create(&deployment); err != nil { - return err - } - - return nil -} - -// createPgAdminService creates the Kubernetes Service for pgAdmin -func createPgAdminService(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error { - // pgAdminServiceName is the name of the Service of the pgAdmin, which - // matches that for the Deploymnt - pgAdminSvcName := fmt.Sprintf(pgAdminDeploymentFormat, cluster.Name) - - // get the fields that will be substituted in the pgAdmin template - fields := pgAdminTemplateFields{ - Name: pgAdminSvcName, - ClusterName: cluster.Name, - Port: defPgAdminPort, - } - - // For debugging purposes, put the template substitution in stdout - if operator.CRUNCHY_DEBUG { - config.PgAdminServiceTemplate.Execute(os.Stdout, fields) - } - - // perform the actual template substitution - doc := bytes.Buffer{} - - if err := config.PgAdminServiceTemplate.Execute(&doc, fields); err != nil { - return err - } - - // Set up the Kubernetes service for pgAdmin - service := v1.Service{} - - if err := json.Unmarshal(doc.Bytes(), &service); err 
!= nil { - return err - } - - if _, err := clientset.CoreV1().Services(cluster.Namespace).Create(&service); err != nil { - return err - } - - return nil -} - -// publishPgAdminEvent publishes one of the events on the event stream -func publishPgAdminEvent(eventType string, task *crv1.Pgtask) { - var event events.EventInterface - - // prepare the topics to publish to - topics := []string{events.EventTopicPgAdmin} - // set up the event header - eventHeader := events.EventHeader{ - Namespace: task.Spec.Namespace, - Username: task.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: eventType, - } - clusterName := task.Spec.Parameters[config.LABEL_PGADMIN_TASK_CLUSTER] - - // now determine which event format to use! - switch eventType { - case events.EventCreatePgAdmin: - event = events.EventCreatePgAdminFormat{ - EventHeader: eventHeader, - Clustername: clusterName, - } - case events.EventDeletePgAdmin: - event = events.EventDeletePgAdminFormat{ - EventHeader: eventHeader, - Clustername: clusterName, - } - } - - // publish the event; if there is an error, log it, but we don't care - if err := events.Publish(event); err != nil { - log.Error(err.Error()) - } -} diff --git a/internal/operator/cluster/pgbouncer.go b/internal/operator/cluster/pgbouncer.go deleted file mode 100644 index 7737198480..0000000000 --- a/internal/operator/cluster/pgbouncer.go +++ /dev/null @@ -1,991 +0,0 @@ -package cluster - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bufio" - "bytes" - "encoding/json" - "fmt" - "os" - "reflect" - "strconv" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - - log "github.com/sirupsen/logrus" - appsv1 "k8s.io/api/apps/v1" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -type PgbouncerPasswdFields struct { - Username string - Password string -} - -type PgbouncerConfFields struct { - PG_PRIMARY_SERVICE_NAME string - PG_PORT string -} - -type pgBouncerTemplateFields struct { - Name string - ClusterName string - CCPImagePrefix string - CCPImageTag string - Port string - PrimaryServiceName string - ContainerResources string - PGBouncerConfigMap string - PGBouncerSecret string - PodAnnotations string - PodAntiAffinity string - PodAntiAffinityLabelName string - PodAntiAffinityLabelValue string - Replicas int32 `json:",string"` -} - -// pgBouncerDeploymentFormat is the name of the Kubernetes Deployment that -// manages pgBouncer, and follows the format "-pgbouncer" -const pgBouncerDeploymentFormat = "%s-pgbouncer" - -// ...the default PostgreSQL port -const pgPort = "5432" - -const ( - // the path to the pgbouncer uninstallation script script - pgBouncerUninstallScript = "/opt/cpm/bin/sql/pgbouncer/pgbouncer-uninstall.sql" - - // the path to the pgbouncer installation script - pgBouncerInstallScript = "/opt/cpm/bin/sql/pgbouncer/pgbouncer-install.sql" -) - -const ( - // pgBouncerSecretPropagationPeriod is the number of seconds between each - // check of when the secret is propogated - pgBouncerSecretPropagationPeriod = 5 - // pgBouncerSecretPropagationTimeout is the maximum amount of time in seconds - // to wait for the secret to propagate - pgBouncerSecretPropagationTimeout = 60 -) - -const ( - // a string to check to see if the pgbouncer machinery is installed in the - // PostgreSQL cluster - sqlCheckPgBouncerInstall = `SELECT EXISTS (SELECT 1 FROM pg_catalog.pg_roles WHERE rolname = 'pgbouncer' LIMIT 1);` - - // disable the pgbouncer user from logging in. 
This is safe from SQL injection - // as the string that is being interpolated is the util.PgBouncerUser constant - // - // This had the "PASSWORD NULL" feature, but this is only found in - // PostgreSQL 11+, and given we don't want to check for the PG version before - // running the command, we will not use it - sqlDisableLogin = `ALTER ROLE "%s" NOLOGIN;` - - // sqlEnableLogin is the SQL to update the password - // NOTE: this is safe from SQL injection as we explicitly add the inerpolated - // string as a MD5 hash and we are using the crv1.PGUserPgBouncer constant - // However, the escaping is handled in the util.SetPostgreSQLPassword function - sqlEnableLogin = `ALTER ROLE %s PASSWORD %s LOGIN;` - - // sqlGetDatabasesForPgBouncer gets all the databases where pgBouncer can be - // installed or uninstalled - sqlGetDatabasesForPgBouncer = `SELECT datname FROM pg_catalog.pg_database WHERE datname NOT IN ('template0') AND datallowconn;` -) - -var ( - // this command allows one to view the users.txt file secret to determine if - // it has propagated - cmdViewPgBouncerUsersSecret = []string{"cat", "/pgconf/users.txt"} - // sqlUninstallPgBouncer provides the final piece of SQL to uninstall - // pgbouncer, which is to remove the user - sqlUninstallPgBouncer = fmt.Sprintf(`DROP ROLE "%s";`, crv1.PGUserPgBouncer) -) - -// AddPgbouncer contains the various functions that are used to add a pgBouncer -// Deployment to a PostgreSQL cluster -// -// Any returned error is logged in the calling function -func AddPgbouncer(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error { - log.Debugf("adding a pgbouncer") - - // get the primary pod, which is needed to update the password for the - // pgBouncer administrative user - pod, err := util.GetPrimaryPod(clientset, cluster) - - if err != nil { - return err - } - - // check to see if pgBoncer is "installed" in the PostgreSQL cluster. 
This - // means checking to see if there is a pgbouncer user, effectively - if installed, err := checkPgBouncerInstall(clientset, restconfig, pod, cluster.Spec.Port); err != nil { - return err - } else if !installed { - // this can't be installed if this is a standby, so abort if that's the case - if cluster.Spec.Standby { - return ErrStandbyNotAllowed - } - - if err := installPgBouncer(clientset, restconfig, pod, cluster.Spec.Port); err != nil { - return err - } - } - - // set the password that will be used for the "pgbouncer" PostgreSQL account - pgBouncerPassword, err := generatePassword() - - if err != nil { - return err - } - - // only attempt to set the password if the cluster is not in standby mode - if !cluster.Spec.Standby { - // attempt to update the password in PostgreSQL, as this is how pgBouncer - // will properly interface with PostgreSQL - if err := setPostgreSQLPassword(clientset, restconfig, pod, cluster.Spec.Port, pgBouncerPassword); err != nil { - return err - } - } - - // next, create the pgBouncer config map that will allow pgBouncer to be - // properly configured - if err := createPgbouncerConfigMap(clientset, cluster); err != nil { - return err - } - - // next, create the secret for pgbouncer that will include the pgBouncer - // credentials - if err := createPgbouncerSecret(clientset, cluster, pgBouncerPassword); err != nil { - return err - } - - // next, create the pgBouncer deployment - if err := createPgBouncerDeployment(clientset, cluster); err != nil { - return err - } - - // finally, try to create the pgBouncer service - if err := createPgBouncerService(clientset, cluster); err != nil { - return err - } - - log.Debugf("added pgbouncer to cluster [%s]", cluster.Name) - - // publish an event - publishPgBouncerEvent(events.EventCreatePgbouncer, cluster) - - return nil -} - -// DeletePgbouncer contains the various functions that are used to delete a -// pgBouncer Deployment for a PostgreSQL cluster -// -// Note that "uninstall" deletes all of the objects that are added to the -// PostgreSQL database, such as the "pgbouncer" user. This is not normally -// needed to be done as pgbouncer user is disabled, but if the user wishes to be -// thorough they can do this -// -// Any errors that are returned should be logged in the calling function, though -// some logging occurs in this function as well -func DeletePgbouncer(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error { - clusterName := cluster.Name - namespace := cluster.Namespace - - log.Debugf("delete pgbouncer from cluster [%s] in namespace [%s]", clusterName, namespace) - - // if this is a standby cluster, we cannot execute any of the SQL on the - // PostgreSQL server, but we can still remove the Deployment and Service.
- if !cluster.Spec.Standby { - if err := disablePgBouncer(clientset, restconfig, cluster); err != nil { - return err - } - } - - // next, delete the various Kubernetes objects associated with the pgbouncer - // these include the Service, Deployment, and the pgBouncer secret - // If these fail, we'll just pass through - // - // First, delete the Service and Deployment, which share the same name - pgbouncerDeploymentName := fmt.Sprintf(pgBouncerDeploymentFormat, clusterName) - - if err := clientset.CoreV1().Services(namespace).Delete(pgbouncerDeploymentName, &metav1.DeleteOptions{}); err != nil { - log.Warn(err) - } - - deletePropagation := metav1.DeletePropagationForeground - if err := clientset.AppsV1().Deployments(namespace).Delete(pgbouncerDeploymentName, &metav1.DeleteOptions{ - PropagationPolicy: &deletePropagation, - }); err != nil { - log.Warn(err) - } - - // remove the config map. again, if this fails, just log the error and pass - // through - configMapName := util.GeneratePgBouncerConfigMapName(clusterName) - - if err := clientset.CoreV1().ConfigMaps(namespace).Delete(configMapName, &metav1.DeleteOptions{}); err != nil { - log.Warn(err) - } - - // remove the secret. again, if this fails, just log the error and pass - // through - secretName := util.GeneratePgBouncerSecretName(clusterName) - - if err := clientset.CoreV1().Secrets(namespace).Delete(secretName, &metav1.DeleteOptions{}); err != nil { - log.Warn(err) - } - - // publish an event - publishPgBouncerEvent(events.EventDeletePgbouncer, cluster) - - return nil -} - -// RotatePgBouncerPassword rotates the password for a pgBouncer PostgreSQL user, -// which involves updating the password in the PostgreSQL cluster as well as -// the users secret that is available in the pgbouncer Pod -func RotatePgBouncerPassword(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error { - // determine if we are able to access the primary Pod - primaryPod, err := util.GetPrimaryPod(clientset, cluster) - - if err != nil { - return err - } - - // let's also go ahead and get the secret that contains the pgBouncer - // information. If we can't find the secret, we're basically done here - secretName := util.GeneratePgBouncerSecretName(cluster.Name) - secret, err := clientset.CoreV1().Secrets(cluster.Namespace).Get(secretName, metav1.GetOptions{}) - - if err != nil { - return err - } - - // there are a few steps that must occur in order for the password to be - // successfully rotated: - // - // 1. The PostgreSQL cluster must have the pgbouncer user's password updated - // 2. The secret that contains the values of "users.txt" must be updated - // 3. The pgBouncer pods must be bounced and have the new password loaded, but - // we must first ensure the password propagates to them - // - // ...wouldn't it be nice if we could run this in a transaction? rolling back - // is hard :( - - // first, generate a new password - password, err := generatePassword() - - if err != nil { - return err - } - - // next, update the PostgreSQL primary with the new password. If this fails - // we definitely return an error - if err := setPostgreSQLPassword(clientset, restconfig, primaryPod, cluster.Spec.Port, password); err != nil { - return err - } - - // next, update the users.txt and password fields of the secret.
the important - // one to update is the users.txt, as that is used by pgbouncer to connect to - // PostgreSQL to perform its authentication - secret.Data["password"] = []byte(password) - secret.Data["users.txt"] = util.GeneratePgBouncerUsersFileBytes( - makePostgresPassword(pgpassword.MD5, password)) - - // update the secret - if _, err := clientset.CoreV1().Secrets(cluster.Namespace).Update(secret); err != nil { - return err - } - - // force the password to propagate to all of the pgbouncer pods in - // the deployment - selector := fmt.Sprintf("%s=%s,%s=true", config.LABEL_PG_CLUSTER, cluster.Name, - config.LABEL_PGBOUNCER) - - // query the pods - pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return err - } - - for _, pod := range pods.Items { - if err := clientset.CoreV1().Pods(pod.Namespace).Delete(pod.Name, &metav1.DeleteOptions{}); err != nil { - log.Warn(err) - } - } - - return nil -} - -// UninstallPgBouncer uninstalls the "pgbouncer" user and other management -// objects from the PostgreSQL cluster -func UninstallPgBouncer(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error { - // if this is a standby cluster, exit and return an error - if cluster.Spec.Standby { - return ErrStandbyNotAllowed - } - - // determine if we are able to access the primary Pod. If not, then the - // journey ends right here - pod, err := util.GetPrimaryPod(clientset, cluster) - - if err != nil { - return err - } - - // get the list of databases that we need to scan through - databases, err := getPgBouncerDatabases(clientset, restconfig, pod, cluster.Spec.Port) - - if err != nil { - return err - } - - // iterate through the list of databases that are returned, and execute the - // uninstallation script - for databases.Scan() { - databaseName := strings.TrimSpace(databases.Text()) - execPgBouncerScript(clientset, restconfig, pod, cluster.Spec.Port, databaseName, pgBouncerUninstallScript) - } - - // lastly, delete the "pgbouncer" role from the PostgreSQL database - // This is safe from SQL injection as we are using constants and a well defined - // string - sql := strings.NewReader(sqlUninstallPgBouncer) - cmd := []string{"psql", "-p", cluster.Spec.Port} - - // exec into the pod to run the query - _, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, - cmd, "database", pod.Name, pod.ObjectMeta.Namespace, sql) - - // if there is an error executing the command, log the error message from - // stderr and return the error - if err != nil { - log.Error(stderr) - return err - } - - return nil -} - -// UpdatePgbouncer contains the various functions that are used to perform -// updates to the pgBouncer deployment for a cluster, such as rotating a -// password -// -// Any errors that are returned should be logged in the calling function, though -// some logging occurs in this function as well -func UpdatePgbouncer(clientset kubernetes.Interface, oldCluster, newCluster *crv1.Pgcluster) error { - clusterName := newCluster.Name - namespace := newCluster.Namespace - - log.Debugf("update pgbouncer from cluster [%s] in namespace [%s]", clusterName, namespace) - - // we need to detect what has changed. presently, two "groups" of things could - // have changed - // 1. The # of replicas to maintain - // 2. 
The pgBouncer container resources - // - // As #2 is a bit more destructive, we'll do that last - - // check if the replicas differ - if oldCluster.Spec.PgBouncer.Replicas != newCluster.Spec.PgBouncer.Replicas { - if err := updatePgBouncerReplicas(clientset, newCluster); err != nil { - return err - } - } - - // check if the resources differ - if !reflect.DeepEqual(oldCluster.Spec.PgBouncer.Resources, newCluster.Spec.PgBouncer.Resources) || - !reflect.DeepEqual(oldCluster.Spec.PgBouncer.Limits, newCluster.Spec.PgBouncer.Limits) { - if err := updatePgBouncerResources(clientset, newCluster); err != nil { - return err - } - } - - // publish an event - publishPgBouncerEvent(events.EventUpdatePgbouncer, newCluster) - - // and that's it! - return nil -} - -// UpdatePgBouncerAnnotations updates the annotations in the "template" portion -// of a pgBouncer deployment -func UpdatePgBouncerAnnotations(clientset kubernetes.Interface, cluster *crv1.Pgcluster, - annotations map[string]string) error { - // get a list of all of the instance deployments for the cluster - deployment, err := getPgBouncerDeployment(clientset, cluster) - - if err != nil { - return err - } - - // now update the pgBackRest deployment - log.Debugf("update annotations on [%s]", deployment.Name) - log.Debugf("new annotations: %v", annotations) - - deployment.Spec.Template.ObjectMeta.SetAnnotations(annotations) - - // finally, update the Deployment. If something errors, we'll log that there - // was an error, but continue with processing the other deployments - if _, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(deployment); err != nil { - return err - } - - return nil -} - -// checkPgBouncerInstall checks to see if pgBouncer is installed in the -// PostgreSQL custer, which involves check to see if the pgBouncer role is -// present in the PostgreSQL cluster -func checkPgBouncerInstall(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, port string) (bool, error) { - // set up the SQL - sql := strings.NewReader(sqlCheckPgBouncerInstall) - - // have the command return an unaligned string of just the "t" or "f" - cmd := []string{"psql", "-A", "-t", "-p", port} - - // exec into the pod to run the query - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, - cmd, "database", pod.Name, pod.ObjectMeta.Namespace, sql) - - // if there is an error executing the command, log the error message from - // stderr and return the error - if err != nil { - log.Error(stderr) - return false, err - } - - // next, parse the boolean value and determine if the pgbouncer user is - // present - if installed, err := strconv.ParseBool(strings.TrimSpace(stdout)); err != nil { - return false, err - } else { - return installed, nil - } -} - -// createPgbouncerConfigMap create a config map used by pgbouncer, specifically -// containing the pgbouncer.ini configuration file. 
returns an error if it fails -func createPgbouncerConfigMap(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error { - // get the name of the configmap - configMapName := util.GeneratePgBouncerConfigMapName(cluster.Name) - - // see if this config map already exists...if it does, then take an early exit - if _, err := clientset.CoreV1().ConfigMaps(cluster.Namespace).Get(configMapName, metav1.GetOptions{}); err == nil { - log.Infof("pgbouncer configmap %q already present, will reuse", configMapName) - return nil - } - - // generate the pgbouncer.ini information - pgBouncerConf, err := generatePgBouncerConf(cluster) - - if err != nil { - log.Error(err) - return err - } - - // generate the pgbouncer HBA file - pgbouncerHBA, err := generatePgBouncerHBA() - - if err != nil { - log.Error(err) - return err - } - - // now, we can do what we came here to do, which is create the config map - cm := v1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{ - Name: configMapName, - Labels: map[string]string{ - config.LABEL_PG_CLUSTER: cluster.Name, - config.LABEL_PGBOUNCER: "true", - config.LABEL_VENDOR: config.LABEL_CRUNCHY, - }, - }, - Data: map[string]string{ - "pgbouncer.ini": pgBouncerConf, - "pg_hba.conf": pgbouncerHBA, - }, - } - - if _, err := clientset.CoreV1().ConfigMaps(cluster.Namespace).Create(&cm); err != nil { - log.Error(err) - return err - } - - return nil -} - -// createPgBouncerDeployment creates the Kubernetes Deployment for pgBouncer -func createPgBouncerDeployment(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error { - log.Debugf("creating pgbouncer deployment: %s", cluster.Name) - - // derive the name of the Deployment...which is also used as the name of the - // service - pgbouncerDeploymentName := fmt.Sprintf(pgBouncerDeploymentFormat, cluster.Name) - - // get the fields that will be substituted in the pgBouncer template - fields := pgBouncerTemplateFields{ - Name: pgbouncerDeploymentName, - ClusterName: cluster.Name, - CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix), - CCPImageTag: cluster.Spec.CCPImageTag, - Port: cluster.Spec.Port, - PGBouncerConfigMap: util.GeneratePgBouncerConfigMapName(cluster.Name), - PGBouncerSecret: util.GeneratePgBouncerSecretName(cluster.Name), - ContainerResources: operator.GetResourcesJSON(cluster.Spec.PgBouncer.Resources, - cluster.Spec.PgBouncer.Limits), - PodAnnotations: operator.GetAnnotations(cluster, crv1.ClusterAnnotationPgBouncer), - PodAntiAffinity: operator.GetPodAntiAffinity(cluster, - crv1.PodAntiAffinityDeploymentPgBouncer, cluster.Spec.PodAntiAffinity.PgBouncer), - PodAntiAffinityLabelName: config.LABEL_POD_ANTI_AFFINITY, - PodAntiAffinityLabelValue: string(operator.GetPodAntiAffinityType(cluster, - crv1.PodAntiAffinityDeploymentPgBouncer, cluster.Spec.PodAntiAffinity.PgBouncer)), - Replicas: cluster.Spec.PgBouncer.Replicas, - } - - // For debugging purposes, put the template substitution in stdout - if operator.CRUNCHY_DEBUG { - config.PgbouncerTemplate.Execute(os.Stdout, fields) - } - - // perform the actual template substitution - doc := bytes.Buffer{} - - if err := config.PgbouncerTemplate.Execute(&doc, fields); err != nil { - return err - } - - // Set up the Kubernetes deployment for pgBouncer - deployment := appsv1.Deployment{} - - if err := json.Unmarshal(doc.Bytes(), &deployment); err != nil { - return err - } - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_CRUNCHY_PGBOUNCER, - 
&deployment.Spec.Template.Spec.Containers[0]) - - if _, err := clientset.AppsV1().Deployments(cluster.Namespace).Create(&deployment); err != nil { - return err - } - - return nil -} - -// createPgbouncerSecret create a secret used by pgbouncer. Returns the -// plaintext password and/or an error -func createPgbouncerSecret(clientset kubernetes.Interface, cluster *crv1.Pgcluster, password string) error { - secretName := util.GeneratePgBouncerSecretName(cluster.Name) - - // see if this secret already exists...if it does, then take an early exit - if _, err := util.GetPasswordFromSecret(clientset, cluster.Namespace, secretName); err == nil { - log.Infof("pgbouncer secret %s already present, will reuse", secretName) - return nil - } - - // the remainder of this is generating the various entries in the pgbouncer - // secret, i.e. substituting values into templates files that contain: - // - the pgbouncer user password - // - the pgbouncer "users.txt" file that contains the credentials for the - // "pgbouncer" user - - // now, we can do what we came here to do, which is create the secret - secret := v1.Secret{ - ObjectMeta: metav1.ObjectMeta{ - Name: secretName, - Labels: map[string]string{ - config.LABEL_PG_CLUSTER: cluster.Name, - config.LABEL_PGBOUNCER: "true", - config.LABEL_VENDOR: config.LABEL_CRUNCHY, - }, - }, - Data: map[string][]byte{ - "password": []byte(password), - "users.txt": util.GeneratePgBouncerUsersFileBytes( - makePostgresPassword(pgpassword.MD5, password)), - }, - } - - if _, err := clientset.CoreV1().Secrets(cluster.Namespace).Create(&secret); err != nil { - log.Error(err) - return err - } - - return nil -} - -// createPgBouncerService creates the Kubernetes Service for pgBouncer -func createPgBouncerService(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error { - // pgBouncerServiceName is the name of the Service of the pgBouncer, which - // matches that for the Deploymnt - pgBouncerServiceName := fmt.Sprintf(pgBouncerDeploymentFormat, cluster.Name) - - // set up the service template fields - fields := ServiceTemplateFields{ - Name: pgBouncerServiceName, - ServiceName: pgBouncerServiceName, - ClusterName: cluster.Name, - // TODO: I think "port" needs to be evaluated, but I think for now using - // the standard PostgreSQL port works - Port: operator.Pgo.Cluster.Port, - } - - if err := CreateService(clientset, &fields, cluster.Namespace); err != nil { - return err - } - - return nil -} - -// disablePgBouncer executes codes on the primary PostgreSQL pod in order to -// disable the "pgbouncer" role from being able to log in. It keeps the -// artificats that were created during normal pgBouncer operation -func disablePgBouncer(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) error { - log.Debugf("disable pgbouncer user on cluster [%s]", cluster.Name) - // disable the pgbouncer user in the PostgreSQL cluster. - // first, get the primary pod. 
If we cannot do this, let's consider it an - // error and abort - pod, err := util.GetPrimaryPod(clientset, cluster) - - if err != nil { - return err - } - - // This is safe from SQL injection as we are using constants and a well defined - // string - sql := strings.NewReader(fmt.Sprintf(sqlDisableLogin, crv1.PGUserPgBouncer)) - cmd := []string{"psql"} - - // exec into the pod to run the query - _, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, - cmd, "database", pod.Name, pod.ObjectMeta.Namespace, sql) - - // if there is an error, log the error from the stderr and return the error - if err != nil { - log.Error(stderr) - return err - } - - return nil -} - -// execPgBouncerScript runs a script pertaining to the management of pgBouncer -// on the PostgreSQL pod -func execPgBouncerScript(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, port, databaseName, script string) { - cmd := []string{"psql", "-p", port, databaseName, "-f", script} - - // exec into the pod to run the query - _, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, - cmd, "database", pod.Name, pod.ObjectMeta.Namespace, nil) - - // if there is an error executing the command, log the error as a warning - // that it failed, and continue. It's hard to rollback from this one :\ - if err != nil { - log.Warn(stderr) - log.Warnf("You can attempt to rerun the script [%s] on [%s]", - script, databaseName) - } -} - -// generatePassword generates a password that is used for the "pgbouncer" -// PostgreSQL user that provides the associated pgBouncer functionality -func generatePassword() (string, error) { - // first, get the length of what the password should be - generatedPasswordLength := util.GeneratedPasswordLength(operator.Pgo.Cluster.PasswordLength) - // from there, the password can be generated! - return util.GeneratePassword(generatedPasswordLength) -} - -// generatePgBouncerConf generates the content that is stored in the secret -// for the "pgbouncer.ini" file -func generatePgBouncerConf(cluster *crv1.Pgcluster) (string, error) { - // first, get the port - port := cluster.Spec.Port - // if the "port" value is not set, default to the PostgreSQL port. - if port == "" { - port = pgPort - } - - // set up the substitution fields for the pgbouncer.ini file - fields := PgbouncerConfFields{ - PG_PRIMARY_SERVICE_NAME: cluster.Name, - PG_PORT: port, - } - - // perform the substitution - doc := bytes.Buffer{} - - // if there is an error, return an empty byte slice - if err := config.PgbouncerConfTemplate.Execute(&doc, fields); err != nil { - log.Error(err) - - return "", err - } - - log.Debug(doc.String()) - - // and if not, return the full string - return doc.String(), nil -} - -// generatePgBouncerHBA generates the pgBouncer host-based authentication file -// using the template that is vailable -func generatePgBouncerHBA() (string, error) { - // ...apparently this is overkill, but this is here from the legacy method - // and it seems like it's "ok" to leave it like this for now... 
- doc := bytes.Buffer{} - - if err := config.PgbouncerHBATemplate.Execute(&doc, struct{}{}); err != nil { - log.Error(err) - - return "", err - } - - log.Debug(doc.String()) - - return doc.String(), nil -} - -// generatePgtaskForPgBouncer generates a pgtask specific to a pgbouncer -// deployment -func generatePgtaskForPgBouncer(cluster *crv1.Pgcluster, pgouser, taskType, taskLabel string, parameters map[string]string) *crv1.Pgtask { - // create the specfile with the required parameters for creating a pgtask - spec := crv1.PgtaskSpec{ - Namespace: cluster.Namespace, - Name: fmt.Sprintf("%s-%s", taskLabel, cluster.Name), - TaskType: taskType, - Parameters: parameters, - } - - // create the pgtask object - task := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: spec.Name, - Labels: map[string]string{ - config.LABEL_PG_CLUSTER: cluster.Name, - taskLabel: "true", - config.LABEL_PGOUSER: pgouser, - }, - }, - Spec: spec, - } - - return task -} - -// getPgBouncerDatabases gets the databases in a PostgreSQL cluster that have -// the pgBouncer objects, etc. -func getPgBouncerDatabases(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, port string) (*bufio.Scanner, error) { - // so the way this works is that there needs to be a special SQL installation - // script that is executed on every database EXCEPT for postgres and template0 - // but is executed on template1 - sql := strings.NewReader(sqlGetDatabasesForPgBouncer) - - // have the command return an unaligned string of just the "t" or "f" - cmd := []string{"psql", "-A", "-t", "-p", port} - - // exec into the pod to run the query - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, - cmd, "database", pod.Name, pod.ObjectMeta.Namespace, sql) - - // if there is an error executing the command, log the error message from - // stderr and return the error - if err != nil { - log.Error(stderr) - return nil, err - } - - // return the list of databases, that will be in a multi-line string - return bufio.NewScanner(strings.NewReader(stdout)), nil -} - -// getPgBouncerDeployment finds the pgBouncer deployment for a PostgreSQL -// cluster -func getPgBouncerDeployment(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (*appsv1.Deployment, error) { - log.Debugf("find pgbouncer for: %s", cluster.Name) - - // derive the name of the Deployment...which is also used as the name of the - // service - pgbouncerDeploymentName := fmt.Sprintf(pgBouncerDeploymentFormat, cluster.Name) - - deployment, err := clientset.AppsV1().Deployments(cluster.Namespace).Get(pgbouncerDeploymentName, metav1.GetOptions{}) - - if err != nil { - return nil, err - } - - return deployment, nil -} - -// installPgBouncer installs the "pgbouncer" user and other management objects -// into the PostgreSQL pod -func installPgBouncer(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, port string) error { - // get the list of databases that we need to scan through - databases, err := getPgBouncerDatabases(clientset, restconfig, pod, port) - - if err != nil { - return err - } - - // iterate through the list of databases that are returned, and execute the - // installation script - for databases.Scan() { - databaseName := strings.TrimSpace(databases.Text()) - - execPgBouncerScript(clientset, restconfig, pod, port, databaseName, pgBouncerInstallScript) - } - - return nil -} - -// makePostgresPassword creates the expected hash for a password type for a -// PostgreSQL password -func makePostgresPassword(passwordType 
pgpassword.PasswordType, password string) string { - // get the PostgreSQL password generate based on the password type - // as all of these values are valid, this not not error - postgresPassword, _ := pgpassword.NewPostgresPassword(passwordType, crv1.PGUserPgBouncer, password) - - // create the PostgreSQL style hashed password and return - hashedPassword, _ := postgresPassword.Build() - - return hashedPassword -} - -// publishPgBouncerEvent publishes one of the events on the event stream -func publishPgBouncerEvent(eventType string, cluster *crv1.Pgcluster) { - var event events.EventInterface - - // prepare the topics to publish to - topics := []string{events.EventTopicPgbouncer} - // set up the event header - eventHeader := events.EventHeader{ - Namespace: cluster.Namespace, - Topic: topics, - Timestamp: time.Now(), - EventType: eventType, - } - clusterName := cluster.Name - - // now determine which event format to use! - switch eventType { - case events.EventCreatePgbouncer: - event = events.EventCreatePgbouncerFormat{ - EventHeader: eventHeader, - Clustername: clusterName, - } - case events.EventUpdatePgbouncer: - event = events.EventUpdatePgbouncerFormat{ - EventHeader: eventHeader, - Clustername: clusterName, - } - case events.EventDeletePgbouncer: - event = events.EventDeletePgbouncerFormat{ - EventHeader: eventHeader, - Clustername: clusterName, - } - } - - // publish the event; if there is an error, log it, but we don't care - if err := events.Publish(event); err != nil { - log.Error(err.Error()) - } -} - -// setPostgreSQLPassword updates the pgBouncer password in the PostgreSQL -// cluster by executing into the primary Pod and changing it -func setPostgreSQLPassword(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, port, password string) error { - log.Debug("set pgbouncer password in PostgreSQL") - - // we use the PostgreSQL "md5" hashing mechanism here to pre-hash the - // password. This is semi-hard coded but is now prepped for SCRAM as a - // password type can be passed in. Almost to SCRAM! - sqlpgBouncerPassword := makePostgresPassword(pgpassword.MD5, password) - - if err := util.SetPostgreSQLPassword(clientset, restconfig, pod, - port, crv1.PGUserPgBouncer, sqlpgBouncerPassword, sqlEnableLogin); err != nil { - log.Error(err) - return err - } - - // and that's all! - return nil -} - -// updatePgBouncerReplicas updates the pgBouncer Deployment with the number -// of replicas (Pods) that it should run. 
Presently, this is fairly naive, but -// as pgBouncer is "semi-stateful" we may want to improve upon this in the -// future -func updatePgBouncerReplicas(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error { - log.Debugf("scale pgbouncer replicas to [%d]", cluster.Spec.PgBouncer.Replicas) - - // get the pgBouncer deployment so the resources can be updated - deployment, err := getPgBouncerDeployment(clientset, cluster) - - if err != nil { - return err - } - - // update the number of replicas - deployment.Spec.Replicas = &cluster.Spec.PgBouncer.Replicas - - // and update the deployment - // update the deployment with the new values - if _, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(deployment); err != nil { - return err - } - - return nil -} - -// updatePgBouncerResources updates the pgBouncer Deployment with the container -// resource request values that are desired -func updatePgBouncerResources(clientset kubernetes.Interface, cluster *crv1.Pgcluster) error { - log.Debugf("update pgbouncer resources to [%+v]", cluster.Spec.PgBouncer.Resources) - - // get the pgBouncer deployment so the resources can be updated - deployment, err := getPgBouncerDeployment(clientset, cluster) - - if err != nil { - return err - } - - // the pgBouncer container is the first one, the resources can be updated - // from it - deployment.Spec.Template.Spec.Containers[0].Resources.Requests = cluster.Spec.PgBouncer.Resources.DeepCopy() - deployment.Spec.Template.Spec.Containers[0].Resources.Limits = cluster.Spec.PgBouncer.Limits.DeepCopy() - - // and update the deployment - // update the deployment with the new values - if _, err := clientset.AppsV1().Deployments(deployment.Namespace).Update(deployment); err != nil { - return err - } - - return nil -} diff --git a/internal/operator/cluster/pgbouncer_test.go b/internal/operator/cluster/pgbouncer_test.go deleted file mode 100644 index 06ff30d8b6..0000000000 --- a/internal/operator/cluster/pgbouncer_test.go +++ /dev/null @@ -1,40 +0,0 @@ -package cluster - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "testing" - - pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password" -) - -func TestMakePostgresPassword(t *testing.T) { - - t.Run("md5", func(t *testing.T) { - t.Run("valid", func(t *testing.T) { - passwordType := pgpassword.MD5 - password := "datalake" - expected := "md56294153764d389dc6830b6ce4f923cdb" - - actual := makePostgresPassword(passwordType, password) - - if actual != expected { - t.Errorf("expected: %q actual: %q", expected, actual) - } - }) - - }) -} diff --git a/internal/operator/cluster/rmdata.go b/internal/operator/cluster/rmdata.go deleted file mode 100644 index b25d260ca4..0000000000 --- a/internal/operator/cluster/rmdata.go +++ /dev/null @@ -1,88 +0,0 @@ -// Package cluster holds the cluster CRD logic and definitions -// A cluster is comprised of a primary service, replica service, -// primary deployment, and replica deployment -package cluster - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "os" - "strconv" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - v1batch "k8s.io/api/batch/v1" - "k8s.io/client-go/kubernetes" -) - -type RmdataJob struct { - JobName string - ClusterName string - PGOImagePrefix string - PGOImageTag string - // SecurityContext string - RemoveData string - RemoveBackup string - IsBackup string - IsReplica string -} - -func CreateRmdataJob(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string, removeData, removeBackup, isReplica, isBackup bool) error { - var err error - - jobName := cl.Spec.Name + "-rmdata-" + util.RandStringBytesRmndr(4) - - jobFields := RmdataJob{ - JobName: jobName, - ClusterName: cl.Spec.Name, - PGOImagePrefix: util.GetValueOrDefault(cl.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix), - PGOImageTag: operator.Pgo.Pgo.PGOImageTag, - RemoveData: strconv.FormatBool(removeData), - RemoveBackup: strconv.FormatBool(removeBackup), - IsBackup: strconv.FormatBool(isReplica), - IsReplica: strconv.FormatBool(isBackup), - } - - doc := bytes.Buffer{} - - if err := config.RmdatajobTemplate.Execute(&doc, jobFields); err != nil { - log.Error(err.Error()) - return err - } - - if operator.CRUNCHY_DEBUG { - config.RmdatajobTemplate.Execute(os.Stdout, jobFields) - } - - newjob := v1batch.Job{} - - if err := json.Unmarshal(doc.Bytes(), &newjob); err != nil { - log.Error("error unmarshalling json into Job " + err.Error()) - return err - } - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_PGO_RMDATA, - &newjob.Spec.Template.Spec.Containers[0]) - - _, err = clientset.BatchV1().Jobs(namespace).Create(&newjob) - return err -} diff --git a/internal/operator/cluster/service.go 
b/internal/operator/cluster/service.go deleted file mode 100644 index 2812a88389..0000000000 --- a/internal/operator/cluster/service.go +++ /dev/null @@ -1,64 +0,0 @@ -// Package cluster holds the cluster CRD logic and definitions -// A cluster is comprised of a primary service, replica service, -// primary deployment, and replica deployment -package cluster - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "os" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/operator" - log "github.com/sirupsen/logrus" - corev1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" -) - -// CreateService ... -func CreateService(clientset kubernetes.Interface, fields *ServiceTemplateFields, namespace string) error { - var serviceDoc bytes.Buffer - - //create the service if it doesn't exist - _, err := clientset.CoreV1().Services(namespace).Get(fields.Name, metav1.GetOptions{}) - if err != nil { - - err = config.ServiceTemplate.Execute(&serviceDoc, fields) - if err != nil { - log.Error(err.Error()) - return err - } - - if operator.CRUNCHY_DEBUG { - config.ServiceTemplate.Execute(os.Stdout, fields) - } - - service := corev1.Service{} - err = json.Unmarshal(serviceDoc.Bytes(), &service) - if err != nil { - log.Error("error unmarshalling json into Service " + err.Error()) - return err - } - - _, err = clientset.CoreV1().Services(namespace).Create(&service) - } - - return err - -} diff --git a/internal/operator/cluster/standby.go b/internal/operator/cluster/standby.go deleted file mode 100644 index 154a2dd3f6..0000000000 --- a/internal/operator/cluster/standby.go +++ /dev/null @@ -1,300 +0,0 @@ -package cluster - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "encoding/json" - "errors" - "fmt" - "time" - - kerrors "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/apimachinery/pkg/types" - "k8s.io/apimachinery/pkg/util/wait" - - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/operator/pvc" - "github.com/crunchydata/postgres-operator/internal/util" - "github.com/crunchydata/postgres-operator/pkg/events" - log "github.com/sirupsen/logrus" - - "github.com/crunchydata/postgres-operator/internal/config" - cfg "github.com/crunchydata/postgres-operator/internal/operator/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" -) - -var ( - // ErrStandbyNotAllowed contains the error message returned when an API call is not - // permitted because it involves a cluster that is in standby mode - ErrStandbyNotAllowed = errors.New("Action not permitted because standby mode is enabled") - // ErrStandbyNotEnabled defines the error that is thrown when - // standby mode is not enabled but a standby action was attempted - ErrStandbyNotEnabled = errors.New("Standby mode not enabled") - // ErrClusterNotShutdown defines the error that is thrown when an action cannot - // proceed because the cluster is not in standby mode - ErrClusterNotShutdown = errors.New("Cluster not in shutdown status") -) - -const ( - standbyClusterConfigJSON = ` -{ - "create_replica_methods": [ - "pgbackrest_standby" - ], - "restore_command": "source /opt/cpm/bin/pgbackrest/pgbackrest-set-env.sh && pgbackrest archive-get %f \"%p\"" -}` -) - -// DisableStandby disables standby mode for the cluster -func DisableStandby(clientset kubernetes.Interface, cluster crv1.Pgcluster) error { - - clusterName := cluster.Name - namespace := cluster.Namespace - - log.Debugf("Disable standby: disabling standby for cluster %s", clusterName) - - configMapName := fmt.Sprintf("%s-pgha-config", cluster.Labels[config.LABEL_PGHA_SCOPE]) - configMap, err := clientset.CoreV1().ConfigMaps(namespace).Get(configMapName, - metav1.GetOptions{}) - if err != nil { - return err - } - dcs := cfg.NewDCS(configMap, clientset, - cluster.GetObjectMeta().GetLabels()[config.LABEL_PGHA_SCOPE]) - dcsConfig, _, err := dcs.GetDCSConfig() - if err != nil { - return err - } - dcsConfig.StandbyCluster = nil - if err := dcs.Update(dcsConfig); err != nil { - return err - } - - // ensure any repo override is removed - pghaConfigMapName := fmt.Sprintf("%s-pgha-config", cluster.Labels[config.LABEL_PGHA_SCOPE]) - jsonOp := []util.JSONPatchOperation{{ - Op: "remove", - Path: fmt.Sprintf("/data/%s", operator.PGHAConfigReplicaBootstrapRepoType), - }} - - jsonOpBytes, err := json.Marshal(jsonOp) - if err != nil { - return err - } - - if _, err := clientset.CoreV1().ConfigMaps(namespace).Patch(pghaConfigMapName, - types.JSONPatchType, jsonOpBytes); err != nil { - return err - } - - if err := publishStandbyEnabled(&cluster); err != nil { - log.Error(err) - } - - log.Debugf("Disable standby: finished disabling standby mode for cluster %s", clusterName) - - return nil -} - -// EnableStandby enables standby mode for the cluster -func EnableStandby(clientset kubernetes.Interface, cluster crv1.Pgcluster) error { - - clusterName := cluster.Name - namespace := cluster.Namespace - - log.Debugf("Enable standby: attempting to enable standby for cluster %s", clusterName) - - // First verify that the cluster is in a shut down status. 
If not then return an - // error - if cluster.Status.State != crv1.PgclusterStateShutdown { - return fmt.Errorf("Unable to enable standby mode: %w", ErrClusterNotShutdown) - } - - // Now find the existing PVCs for the primary and backrest repo and delete them. - // These should be the only remaining PVCs for the cluster since all replica PVCs - // were deleted when scaling down the cluster in order to shut down the database. - remainingPVCSelector := fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, clusterName) - remainingPVC, err := clientset. - CoreV1().PersistentVolumeClaims(namespace). - List(metav1.ListOptions{LabelSelector: remainingPVCSelector}) - if err != nil { - log.Error(err) - return fmt.Errorf("Unable to get remaining PVCs while enabling standby mode: %w", err) - } - - for _, currPVC := range remainingPVC.Items { - - // delete the original PVC and wait for it to be removed - deletePropagation := metav1.DeletePropagationForeground - err := clientset. - CoreV1().PersistentVolumeClaims(namespace). - Delete(currPVC.Name, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err == nil { - err = wait.Poll(time.Second/2, time.Minute, func() (bool, error) { - _, err := clientset.CoreV1().PersistentVolumeClaims(namespace).Get(currPVC.Name, metav1.GetOptions{}) - return false, err - }) - } - if !kerrors.IsNotFound(err) { - log.Error(err) - return err - } - - // determine whether the PVC is a backrest repo, primary or replica, and then re-create - // using the proper storage spec as defined in pgo.yaml - storageSpec := crv1.PgStorageSpec{} - if currPVC.Name == cluster.Labels[config.ANNOTATION_PRIMARY_DEPLOYMENT] { - storageSpec = cluster.Spec.PrimaryStorage - } else if currPVC.Name == fmt.Sprintf(util.BackrestRepoPVCName, clusterName) { - storageSpec = cluster.Spec.BackrestStorage - } else { - storageSpec = cluster.Spec.ReplicaStorage - } - if err := pvc.Create(clientset, currPVC.Name, clusterName, &storageSpec, - namespace); err != nil { - log.Error(err) - return fmt.Errorf("Unable to create primary PVC while enabling standby mode: %w", err) - } - } - - log.Debugf("Enable standby: re-created PVC's %v for cluster %s", remainingPVC.Items, - clusterName) - - // find the "config" configMap created by Patroni - dcsConfigMapName := cluster.Labels[config.LABEL_PGHA_SCOPE] + "-config" - dcsConfigMap, err := clientset.CoreV1().ConfigMaps(namespace).Get(dcsConfigMapName, metav1.GetOptions{}) - if err != nil { - return fmt.Errorf("Unable to find configMap %s when attempting to enable standby", - dcsConfigMapName) - } - - // return ErrMissingConfigAnnotation error if configMap is missing the "config" annotation - if _, ok := dcsConfigMap.ObjectMeta.Annotations["config"]; !ok { - return util.ErrMissingConfigAnnotation - } - - // grab the json stored in the config annotation - configJSONStr := dcsConfigMap.ObjectMeta.Annotations["config"] - var configJSON map[string]interface{} - json.Unmarshal([]byte(configJSONStr), &configJSON) - - var standbyJSON map[string]interface{} - json.Unmarshal([]byte(standbyClusterConfigJSON), &standbyJSON) - - // set standby_cluster to default config unless already set - if _, ok := configJSON["standby_cluster"]; !ok { - configJSON["standby_cluster"] = standbyJSON - } - - configJSONFinalStr, err := json.Marshal(configJSON) - if err != nil { - return err - } - dcsConfigMap.ObjectMeta.Annotations["config"] = string(configJSONFinalStr) - _, err = clientset.CoreV1().ConfigMaps(namespace).Update(dcsConfigMap) - if err != nil { - return err - } - - 
leaderConfigMapName := cluster.Labels[config.LABEL_PGHA_SCOPE] + "-leader" - // Delete the "leader" configMap - if err = clientset.CoreV1().ConfigMaps(namespace).Delete(leaderConfigMapName, &metav1.DeleteOptions{}); err != nil && - !kerrors.IsNotFound(err) { - log.Error("Unable to delete configMap %s while enabling standby mode for cluster "+ - "%s: %v", leaderConfigMapName, clusterName, err) - return err - } - - // override to the repo type to ensure s3 is utilized for standby creation - pghaConfigMapName := cluster.Labels[config.LABEL_PGHA_SCOPE] + "-pgha-config" - pghaConfigMap, err := clientset.CoreV1().ConfigMaps(namespace).Get(pghaConfigMapName, metav1.GetOptions{}) - if err != nil { - return fmt.Errorf("Unable to find configMap %s when attempting to enable standby", - pghaConfigMapName) - } - pghaConfigMap.Data[operator.PGHAConfigReplicaBootstrapRepoType] = "s3" - - // delete the DCS config so that it will refresh with the included standby settings - delete(pghaConfigMap.Data, fmt.Sprintf(cfg.PGHADCSConfigName, clusterName)) - - if _, err := clientset.CoreV1().ConfigMaps(namespace).Update(pghaConfigMap); err != nil { - return err - } - - if err := publishStandbyEnabled(&cluster); err != nil { - log.Error(err) - } - - log.Debugf("Enable standby: finished enabling standby mode for cluster %s", clusterName) - - return nil -} - -func publishStandbyEnabled(cluster *crv1.Pgcluster) error { - - clusterName := cluster.Name - - //capture the cluster creation event - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventStandbyEnabledFormat{ - EventHeader: events.EventHeader{ - Namespace: cluster.Namespace, - Username: cluster.Spec.UserLabels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventStandbyEnabled, - }, - Clustername: clusterName, - } - - if err := events.Publish(f); err != nil { - log.Error(err.Error()) - return err - } - - return nil -} - -func publishStandbyDisabled(cluster *crv1.Pgcluster) error { - - clusterName := cluster.Name - - //capture the cluster creation event - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventStandbyDisabledFormat{ - EventHeader: events.EventHeader{ - Namespace: cluster.Namespace, - Username: cluster.Spec.UserLabels[config.LABEL_PGOUSER], - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventStandbyDisabled, - }, - Clustername: clusterName, - } - - if err := events.Publish(f); err != nil { - log.Error(err.Error()) - return err - } - - return nil -} diff --git a/internal/operator/cluster/upgrade.go b/internal/operator/cluster/upgrade.go deleted file mode 100644 index e2d58e7dc4..0000000000 --- a/internal/operator/cluster/upgrade.go +++ /dev/null @@ -1,720 +0,0 @@ -package cluster - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "errors" - "fmt" - "io/ioutil" - "strconv" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - - log "github.com/sirupsen/logrus" - - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/fields" - "k8s.io/client-go/kubernetes" - "sigs.k8s.io/yaml" -) - -// Store image names as constants to use later -const ( - postgresImage = "crunchy-postgres" - postgresHAImage = "crunchy-postgres-ha" - postgresGISImage = "crunchy-postgres-gis" - postgresGISHAImage = "crunchy-postgres-gis-ha" -) - -// store the replica postfix string -const replicaServicePostfix = "-replica" - -// AddUpgrade implements the upgrade workflow in accordance with the received pgtask -// the general process is outlined below: -// 1) get the existing pgcluster CRD instance that matches the name provided in the pgtask -// 2) Patch the existing services -// 3) Determine the current Primary PVC -// 4) Scale down existing replicas and store the number for recreation -// 5) Delete the various resources that will need to be recreated -// 6) Recreate the BackrestRepo secret, since the key encryption algorithm has been updated -// 7) Update the existing pgcluster CRD instance to match the current version -// 8) Submit the pgcluster CRD for recreation -func AddUpgrade(clientset kubeapi.Interface, upgrade *crv1.Pgtask, namespace string) { - - upgradeTargetClusterName := upgrade.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] - - log.Debugf("started upgrade of cluster: %s", upgradeTargetClusterName) - - // publish our upgrade event - PublishUpgradeEvent(events.EventUpgradeCluster, namespace, upgrade, "") - - pgcluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(upgradeTargetClusterName, metav1.GetOptions{}) - if err != nil { - errormessage := "cound not find pgcluster for pgcluster upgrade" - log.Errorf("%s Error: %s", errormessage, err) - PublishUpgradeEvent(events.EventUpgradeClusterFailure, namespace, upgrade, errormessage) - return - } - - // update the workflow status to 'in progress' while the upgrade takes place - updateUpgradeWorkflow(clientset, namespace, upgrade.ObjectMeta.Labels[crv1.PgtaskWorkflowID], crv1.PgtaskUpgradeInProgress) - - // grab the existing pgo version - oldpgoversion := pgcluster.ObjectMeta.Labels[config.LABEL_PGO_VERSION] - - // grab the current primary value. In differnet versions of the Operator, this was stored in multiple - // ways depending on version. If the 'primary' role is available on a particular pod (starting in 4.2.0), - // this is the most authoritative option. Next, in the current version, the current primary value is stored - // in an annotation on the pgcluster CRD and should be used if available and the primary pod cannot be identified. - // Next, if the current primary label is present (used by previous Operator versions), we will use that. 
- // Finally, if none of the above is available, we will set the default pgcluster name as the current primary value - currentPrimaryFromPod := getPrimaryPodDeploymentName(clientset, pgcluster) - currentPrimaryFromAnnotation := pgcluster.Annotations[config.ANNOTATION_CURRENT_PRIMARY] - currentPrimaryFromLabel := pgcluster.ObjectMeta.Labels[config.LABEL_CURRENT_PRIMARY] - - // compare the three values, and return the correct current primary value - currentPrimary := getCurrentPrimary(pgcluster.Name, currentPrimaryFromPod, currentPrimaryFromAnnotation, currentPrimaryFromLabel) - - // remove and count the existing replicas - replicas := handleReplicas(clientset, pgcluster.Name, currentPrimary, namespace) - SetReplicaNumber(pgcluster, replicas) - - // create the 'pgha-config' configmap while taking the init value from any existing 'pgha-default-config' configmap - createUpgradePGHAConfigMap(clientset, pgcluster, namespace) - - // delete the existing pgcluster CRDs and other resources that will be recreated - deleteBeforeUpgrade(clientset, pgcluster.Name, currentPrimary, namespace, pgcluster.Spec.Standby) - - // recreate new Backrest Repo secret that was just deleted - recreateBackrestRepoSecret(clientset, upgradeTargetClusterName, namespace, operator.PgoNamespace) - - // set proper values for the pgcluster that are updated between CR versions - preparePgclusterForUpgrade(pgcluster, upgrade.Spec.Parameters, oldpgoversion, currentPrimary) - - // create a new workflow for this recreated cluster - workflowid, err := createClusterRecreateWorkflowTask(clientset, pgcluster.Name, namespace, upgrade.Spec.Parameters[config.LABEL_PGOUSER]) - if err != nil { - // we will log any errors here, but will attempt to continue to submit the cluster for recreation regardless - log.Errorf("error generating a new workflow task for the recreation of the upgraded cluster %s, Error: %s", pgcluster.Name, err) - } - - // update pgcluster CRD workflow ID - pgcluster.Spec.UserLabels[config.LABEL_WORKFLOW_ID] = workflowid - - _, err = clientset.CrunchydataV1().Pgclusters(namespace).Create(pgcluster) - if err != nil { - log.Errorf("error submitting upgraded pgcluster CRD for cluster recreation of cluster %s, Error: %v", pgcluster.Name, err) - } else { - log.Debugf("upgraded cluster %s submitted for recreation", pgcluster.Name) - } - - // submit an event now that the new pgcluster has been submitted to the cluster creation process - PublishUpgradeEvent(events.EventUpgradeClusterCreateSubmitted, namespace, upgrade, "") - - log.Debugf("finished main upgrade workflow for cluster: %s", upgradeTargetClusterName) - -} - -// getPrimaryPodDeploymentName searches through the pods associated with this pgcluster for the 'primary' pod, -// if set. This will not be applicable to releases before the Operator 4.2.0 HA features were -// added. If this label does not exist or is otherwise not set as expected, return an empty -// string value and call an alternate function to determine the current primary pod. 
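The primary-pod lookup described above reduces to a single label-plus-field-selector pod query. Below is a minimal sketch of that pattern, not part of the removed source; it uses the pre-context client-go call signatures seen elsewhere in this diff, and plain literal label keys (`pg-cluster`, `role`, `deployment-name`) as illustrative stand-ins for the operator's config constants:

```go
package example

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
)

// primaryDeploymentName finds the single running pod labeled as the primary
// for a cluster and returns the deployment name recorded on that pod.
// Label keys here are illustrative stand-ins for the operator's constants.
func primaryDeploymentName(clientset kubernetes.Interface, namespace, clusterName string) (string, error) {
	pods, err := clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{
		// only consider running pods carrying the cluster and primary-role labels
		LabelSelector: fmt.Sprintf("pg-cluster=%s,role=primary", clusterName),
		FieldSelector: fields.OneTermEqualSelector("status.phase", "Running").String(),
	})
	if err != nil {
		return "", err
	}
	// anything other than exactly one match means the primary cannot be
	// established this way and the caller must fall back to another source
	if len(pods.Items) != 1 {
		return "", fmt.Errorf("expected exactly 1 primary pod, found %d", len(pods.Items))
	}
	return pods.Items[0].Labels["deployment-name"], nil
}
```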
-func getPrimaryPodDeploymentName(clientset kubernetes.Interface, cluster *crv1.Pgcluster) string { - // first look for a 'primary' role label on the current primary deployment - selector := fmt.Sprintf("%s=%s,%s=%s", config.LABEL_PG_CLUSTER, cluster.Name, - config.LABEL_PGHA_ROLE, config.LABEL_PGHA_ROLE_PRIMARY) - - options := metav1.ListOptions{ - FieldSelector: fields.OneTermEqualSelector("status.phase", string(v1.PodRunning)).String(), - LabelSelector: selector, - } - - // only consider pods that are running - pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(options) - - if err != nil { - log.Errorf("no pod with the primary role label was found for cluster %s. Error: %s", cluster.Name, err.Error()) - return "" - } - - // if no pod with that role is found, return an empty string since the current primary pod - // cannot be established - if len(pods.Items) < 1 { - log.Debugf("no pod with the primary role label was found for cluster %s", cluster.Name) - return "" - } - // similarly, if more than one pod with that role is found, return an empty string since the - // true primary pod cannot be determined - if len(pods.Items) > 1 { - log.Errorf("%v pods with the primary role label were found for cluster %s. There should only be one.", - len(pods.Items), cluster.Name) - return "" - } - // if only one pod was returned, this is the proper primary pod - primaryPod := pods.Items[0] - // now return the primary pod's deployment name - return primaryPod.Labels[config.LABEL_DEPLOYMENT_NAME] -} - -// getCurrentPrimary returns the correct current primary value to use for the upgrade. -// the deployment name of the pod with the 'primary' role is considered the most authoritative, -// followed by the CRD's 'current-primary' annotation, followed then by the current primary -// label. If none of these values are set, return the default name. -func getCurrentPrimary(clusterName, podPrimary, crPrimary, labelPrimary string) string { - // the primary pod is the preferred source of truth, as it will be correct - // for 4.2 pgclusters and beyond, regardless of failover method - if podPrimary != "" { - return podPrimary - } - - // the CRD annotation is the next preferred value - if crPrimary != "" { - return crPrimary - } - - // the current primary label should be used if the spec value and primary pod - // values are missing - if labelPrimary != "" { - return labelPrimary - } - - // if none of these are set, return the pgcluster name as the default - return clusterName -} - -// handleReplicas deletes all pgreplicas related to the pgcluster to be upgraded, then returns the number -// of pgreplicas that were found. This will delete any PVCs that match the existing pgreplica CRs, but -// will leave any other PVCs, whether they are from the current primary, previous primaries that are now -// unassociated because of a failover or the backrest-shared-repo PVC. The total number of current replicas -// will also be captured during this process so that the correct number of replicas can be recreated. -func handleReplicas(clientset kubeapi.Interface, clusterName, currentPrimaryPVC, namespace string) string { - log.Debugf("deleting pgreplicas and noting the number found for cluster %s", clusterName) - // Save the number of found replicas for this cluster - numReps := 0 - replicaList, err := clientset.CrunchydataV1().Pgreplicas(namespace).List(metav1.ListOptions{}) - if err != nil { - log.Errorf("unable to get pgreplicas. 
Error: %s", err) - } - - // go through the list of found replicas - for index := range replicaList.Items { - if replicaList.Items[index].Spec.ClusterName == clusterName { - log.Debugf("scaling down pgreplica: %s", replicaList.Items[index].Name) - ScaleDownBase(clientset, &replicaList.Items[index], namespace) - log.Debugf("deleting pgreplica CRD: %s", replicaList.Items[index].Name) - clientset.CrunchydataV1().Pgreplicas(namespace).Delete(replicaList.Items[index].Name, &metav1.DeleteOptions{}) - // if the existing replica PVC is not being used as the primary PVC, delete - // note this will not remove any leftover PVCs from previous failovers, - // those will require manual deletion so as to avoid any accidental - // deletion of valid PVCs. - if replicaList.Items[index].Name != currentPrimaryPVC { - deletePropagation := metav1.DeletePropagationForeground - clientset. - CoreV1().PersistentVolumeClaims(namespace). - Delete(replicaList.Items[index].Name, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - log.Debugf("deleting replica pvc: %s", replicaList.Items[index].Name) - } - - // regardless of whether the pgreplica PVC is being used as the primary or not, we still - // want to count it toward the number of replicas to create - numReps++ - } - } - // return the number of pgreplicas as a string - return strconv.Itoa(numReps) -} - -// SetReplicaNumber sets the pgcluster's replica value based off of the number of pgreplicas -// discovered during the deletion process. This is necessary because the pgcluser will only -// include the number of replicas created when the pgcluster was first generated -// (e.g. pgo create cluster hippo --replica-count=2) but will not included any replicas -// created using the 'pgo scale' command -func SetReplicaNumber(pgcluster *crv1.Pgcluster, numReplicas string) { - - pgcluster.Spec.Replicas = numReplicas -} - -// deleteBeforeUpgrade deletes the deployments, services, pgcluster, jobs, tasks and default configmaps before attempting -// to upgrade the pgcluster deployment. This preserves existing secrets, non-standard configmaps and service definitions -// for use in the newly upgraded cluster. -func deleteBeforeUpgrade(clientset kubeapi.Interface, clusterName, currentPrimary, namespace string, isStandby bool) { - - // first, get all deployments for the pgcluster in question - deployments, err := clientset. - AppsV1().Deployments(namespace). - List(metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + clusterName}) - if err != nil { - log.Errorf("unable to get deployments. Error: %s", err) - } - - // next, delete those deployments - for index := range deployments.Items { - deletePropagation := metav1.DeletePropagationForeground - _ = clientset. - AppsV1().Deployments(namespace). - Delete(deployments.Items[index].Name, &metav1.DeleteOptions{ - PropagationPolicy: &deletePropagation, - }) - } - - // wait until the backrest shared repo pod deployment has been deleted before continuing - waitStatus := deploymentWait(clientset, namespace, clusterName+"-backrest-shared-repo", 180, 10) - log.Debug(waitStatus) - // wait until the primary pod deployment has been deleted before continuing - waitStatus = deploymentWait(clientset, namespace, currentPrimary, 180, 10) - log.Debug(waitStatus) - - // delete the pgcluster - clientset.CrunchydataV1().Pgclusters(namespace).Delete(clusterName, &metav1.DeleteOptions{}) - - // delete all existing job references - deletePropagation := metav1.DeletePropagationForeground - clientset. - BatchV1().Jobs(namespace). 
- DeleteCollection( - &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}, - metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + clusterName}) - - // delete all existing pgtask references except for the upgrade task - // Note: this will be deleted by the existing pgcluster creation process once the - // updated pgcluster created and processed by the cluster controller - if err = deleteNonupgradePgtasks(clientset, config.LABEL_PG_CLUSTER+"="+clusterName, namespace); err != nil { - log.Errorf("error while deleting pgtasks for cluster %s, Error: %v", clusterName, err) - } - - // delete the leader configmap used by the Postgres Operator since this information may change after - // the upgrade is complete - // Note: deletion is required for cluster recreation - clientset.CoreV1().ConfigMaps(namespace).Delete(clusterName+"-leader", &metav1.DeleteOptions{}) - - // delete the '-pgha-default-config' configmap, if it exists so the config syncer - // will not try to use it instead of '-pgha-config' - clientset.CoreV1().ConfigMaps(namespace).Delete(clusterName+"-pgha-default-config", &metav1.DeleteOptions{}) -} - -// deploymentWait is modified from cluster.waitForDeploymentDelete. It simply waits for the current primary deployment -// deletion to complete before proceeding with the rest of the pgcluster upgrade. -func deploymentWait(clientset kubernetes.Interface, namespace, deploymentName string, timeoutSecs, periodSecs time.Duration) string { - timeout := time.After(timeoutSecs * time.Second) - tick := time.NewTicker(periodSecs * time.Second) - defer tick.Stop() - - for { - select { - case <-timeout: - return fmt.Sprintf("Timed out waiting for deployment to be deleted: [%s]", deploymentName) - case <-tick.C: - _, err := clientset.AppsV1().Deployments(namespace).Get(deploymentName, metav1.GetOptions{}) - if err != nil { - return fmt.Sprintf("Deployment %s has been deleted.", deploymentName) - } - } - } -} - -// deleteNonupgradePgtasks deletes all existing pgtasks by selector with the exception of the -// upgrade task itself -func deleteNonupgradePgtasks(clientset pgo.Interface, selector, namespace string) error { - taskList, err := clientset.CrunchydataV1().Pgtasks(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return err - } - - // get the pgtask list - for _, v := range taskList.Items { - // if the pgtask is not for the upgrade, delete it - if v.ObjectMeta.Name != v.Name+"-"+config.LABEL_UPGRADE { - err = clientset.CrunchydataV1().Pgtasks(namespace).Delete(v.Name, &metav1.DeleteOptions{}) - if err != nil { - return err - } - } - } - return err -} - -// createUpgradePGHAConfigMap is a modified copy of CreatePGHAConfigMap from operator/clusterutilities.go -// It also creates a configMap that will be utilized to store configuration settings for a PostgreSQL, -// cluster, but with the added step of looking for an existing configmap, -// "-pgha-default-config". If that configmap exists, it will get the init value, as this is -// needed for the proper reinitialziation of Patroni. 
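The fallback behaviour described here — reuse the init flag from an existing `<cluster>-pgha-default-config` ConfigMap, and default to `"true"` otherwise so a fresh cluster initializes normally — can be summarized in a few lines. A minimal sketch (not part of the removed source), assuming the pre-context client-go call signatures used throughout this diff and the `init` key defined by `PGHAConfigInitSetting`:

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// initSetting returns the "init" value to place in the new PGHA ConfigMap,
// reusing the value from an existing "<cluster>-pgha-default-config" ConfigMap
// when one is present so Patroni is not re-initialized during the upgrade.
func initSetting(clientset kubernetes.Interface, namespace, clusterName string) string {
	cm, err := clientset.CoreV1().ConfigMaps(namespace).Get(clusterName+"-pgha-default-config", metav1.GetOptions{})
	if err != nil {
		// no prior ConfigMap found: treat this as a fresh initialization
		return "true"
	}
	// key name "init" mirrors the PGHAConfigInitSetting constant in this diff
	return cm.Data["init"]
}
```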
-func createUpgradePGHAConfigMap(clientset kubernetes.Interface, cluster *crv1.Pgcluster, - namespace string) error { - - labels := make(map[string]string) - labels[config.LABEL_VENDOR] = config.LABEL_CRUNCHY - labels[config.LABEL_PG_CLUSTER] = cluster.Name - labels[config.LABEL_PGHA_CONFIGMAP] = "true" - - data := make(map[string]string) - - // if the "pgha-default-config" config map exists, this cluster is being upgraded - // and should use the initialization value from this existing configmap - defaultConfigmap, err := clientset.CoreV1().ConfigMaps(namespace).Get(cluster.Name+"-pgha-default-config", metav1.GetOptions{}) - if err == nil { - data[operator.PGHAConfigInitSetting] = defaultConfigmap.Data[operator.PGHAConfigInitSetting] - } else { - // set "init" to true in the postgres-ha configMap - data[operator.PGHAConfigInitSetting] = "true" - } - - // if a standby cluster then we want to create replicas using the S3 pgBackRest repository - // (and not the local in-cluster pgBackRest repository) - if cluster.Spec.Standby { - data[operator.PGHAConfigReplicaBootstrapRepoType] = "s3" - } - - configmap := &v1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{ - Name: cluster.Name + "-" + operator.PGHAConfigMapSuffix, - Labels: labels, - }, - Data: data, - } - - if _, err := clientset.CoreV1().ConfigMaps(namespace).Create(configmap); err != nil { - return err - } - - return nil -} - -// recreateBackrestRepoSecret overwrites the secret for the pgBackRest repo. This is needed -// because the key encryption algorithm has been updated from RSA to EdDSA -func recreateBackrestRepoSecret(clientset kubernetes.Interface, clustername, namespace, operatorNamespace string) { - config := util.BackrestRepoConfig{ - ClusterName: clustername, - ClusterNamespace: namespace, - OperatorNamespace: operatorNamespace, - } - - secretName := clustername + "-backrest-repo-config" - secret, err := clientset.CoreV1().Secrets(namespace).Get(secretName, metav1.GetOptions{}) - - // 4.1, 4.2 - if err == nil { - if b, ok := secret.Data["aws-s3-ca.crt"]; ok { - config.BackrestS3CA = b - } - if b, ok := secret.Data["aws-s3-credentials.yaml"]; ok { - var parsed struct { - Key string `yaml:"aws-s3-key"` - KeySecret string `yaml:"aws-s3-key-secret"` - } - if err = yaml.Unmarshal(b, &parsed); err == nil { - config.BackrestS3Key = parsed.Key - config.BackrestS3KeySecret = parsed.KeySecret - } - } - } - - // >= 4.3 - if err == nil { - if b, ok := secret.Data["aws-s3-ca.crt"]; ok { - config.BackrestS3CA = b - } - if b, ok := secret.Data["aws-s3-key"]; ok { - config.BackrestS3Key = string(b) - } - if b, ok := secret.Data["aws-s3-key-secret"]; ok { - config.BackrestS3KeySecret = string(b) - } - } - - if err == nil { - err = util.CreateBackrestRepoSecrets(clientset, config) - } - if err != nil { - log.Errorf("error generating new backrest repo secrets during pgcluster upgrade: %v", err) - } -} - -// preparePgclusterForUpgrade specifically updates the existing CRD instance to set correct values -// for the current Postgres Operator version, updating or deleting values where appropriate, and sets -// an expected status so that the CRD object can be recreated. 
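The "set an expected status so that the CRD object can be recreated" step mentioned above boils down to clearing the fields the API server owns and resetting the status markers before the pgcluster is submitted again. A minimal sketch of that reset (not part of the removed source), using the field names visible in the removed code below:

```go
package example

import (
	crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1"
)

// resetForRecreate strips server-managed metadata and resets the status fields
// so a previously fetched pgcluster can be resubmitted with Create.
func resetForRecreate(pgcluster *crv1.Pgcluster) {
	// the old ResourceVersion belongs to the deleted object and must be empty on Create
	pgcluster.ObjectMeta.ResourceVersion = ""
	// clear the status markers so the cluster controller treats this as a new cluster
	pgcluster.Spec.Status = ""
	pgcluster.Status.State = crv1.PgclusterStateCreated
	pgcluster.Status.Message = "Created, not processed yet"
}
```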
-func preparePgclusterForUpgrade(pgcluster *crv1.Pgcluster, parameters map[string]string, oldpgoversion, currentPrimary string) { - - // first, update the PGO version references to the current Postgres Operator version - pgcluster.ObjectMeta.Labels[config.LABEL_PGO_VERSION] = parameters[config.LABEL_PGO_VERSION] - pgcluster.Spec.UserLabels[config.LABEL_PGO_VERSION] = parameters[config.LABEL_PGO_VERSION] - - // next, capture the existing Crunchy Postgres Exporter configuration settings (previous to version - // 4.5.0 referred to as Crunchy Collect), if they exist, and store them in the current labels - if value, ok := pgcluster.ObjectMeta.Labels["crunchy_collect"]; ok { - pgcluster.ObjectMeta.Labels[config.LABEL_EXPORTER] = value - delete(pgcluster.ObjectMeta.Labels, "crunchy_collect") - } - - if value, ok := pgcluster.Spec.UserLabels["crunchy_collect"]; ok { - pgcluster.Spec.UserLabels[config.LABEL_EXPORTER] = value - delete(pgcluster.Spec.UserLabels, "crunchy_collect") - } - - // since the current primary label is not used in this version of the Postgres Operator, - // delete it before moving on to other upgrade tasks - delete(pgcluster.ObjectMeta.Labels, config.LABEL_CURRENT_PRIMARY) - - // next, update the image name to the appropriate image - if pgcluster.Spec.CCPImage == postgresImage { - pgcluster.Spec.CCPImage = postgresHAImage - } - - if pgcluster.Spec.CCPImage == postgresGISImage { - pgcluster.Spec.CCPImage = postgresGISHAImage - } - - // if there are not any annotations on the current pgcluster (which may be the case depending on - // which version we are upgrading from), create a new map to hold them - if pgcluster.Annotations == nil { - pgcluster.Annotations = make(map[string]string) - } - // update our pgcluster annotation with the correct current primary value - pgcluster.Annotations[config.ANNOTATION_CURRENT_PRIMARY] = currentPrimary - - // if the current primary value is set to a different value than the default deployment label, a failover has occurred. - // update the deployment label to match this updated value so that the deployment will match the underlying PVC name. - // since we cannot assume the state of the original primary's PVC is valid after the upgrade, this ensures the new - // base primary name will match the deployment name. Please note, going forward, failovers to other replicas will - // result in a new currentprimary value in the CRD annotations, but the deployment label will stay the same, in keeping with - // the current deployment naming method. In simpler terms, this deployment value is the 'primary deployment' name - // for this cluster. - pgcluster.ObjectMeta.Labels[config.LABEL_DEPLOYMENT_NAME] = currentPrimary - - // update the image tag to the value provided with the upgrade task. This will either be - // the standard value set in the Postgres Operator's main configuration (which will have already - // been verified to match the MAJOR PostgreSQL version) or the value provided by the user for - // use with PostGIS enabled pgclusters - pgcluster.Spec.CCPImageTag = parameters[config.LABEL_CCP_IMAGE_KEY] - - // set a default autofail value of "true" to enable Patroni's replication. If left to an existing - // value of "false," Patroni will be in a paused state and unable to sync all replicas to the - // current timeline - pgcluster.ObjectMeta.Labels[config.LABEL_AUTOFAIL] = "true" - - // Don't think we'll need to do this, but leaving the comment for now.... 
- // pgcluster.ObjectMeta.Labels[config.LABEL_POD_ANTI_AFFINITY] = "" - - // set pgouser to match the default configuration currently in use after the Operator upgrade - pgcluster.ObjectMeta.Labels[config.LABEL_PGOUSER] = parameters[config.LABEL_PGOUSER] - - // if the exporter port is not set, set to the configuration value for the default configuration - if pgcluster.Spec.ExporterPort == "" { - pgcluster.Spec.ExporterPort = operator.Pgo.Cluster.ExporterPort - } - - // if the pgbadger port is not set, set to the configuration value for the default configuration - if pgcluster.Spec.PGBadgerPort == "" { - pgcluster.Spec.PGBadgerPort = operator.Pgo.Cluster.PGBadgerPort - } - - // ensure that the pgo-backrest label is set to 'true' since pgbackrest is required for normal - // cluster operations in this version of the Postgres Operator - pgcluster.ObjectMeta.Labels[config.LABEL_BACKREST] = "true" - - // added in 4.2 and copied from configuration in 4.4 - if pgcluster.Spec.BackrestS3Bucket == "" { - pgcluster.Spec.BackrestS3Bucket = operator.Pgo.Cluster.BackrestS3Bucket - } - if pgcluster.Spec.BackrestS3Endpoint == "" { - pgcluster.Spec.BackrestS3Endpoint = operator.Pgo.Cluster.BackrestS3Endpoint - } - if pgcluster.Spec.BackrestS3Region == "" { - pgcluster.Spec.BackrestS3Region = operator.Pgo.Cluster.BackrestS3Region - } - - // added in 4.4 - if pgcluster.Spec.BackrestS3VerifyTLS == "" { - pgcluster.Spec.BackrestS3VerifyTLS = operator.Pgo.Cluster.BackrestS3VerifyTLS - } - - // add a label with the PGO version upgraded from and to - pgcluster.Annotations[config.ANNOTATION_UPGRADE_INFO] = "From_" + oldpgoversion + "_to_" + parameters[config.LABEL_PGO_VERSION] - // update the "is upgraded" label to indicate cluster has been upgraded - pgcluster.Annotations[config.ANNOTATION_IS_UPGRADED] = "true" - - // set the default CCPImagePrefix, if empty - if pgcluster.Spec.CCPImagePrefix == "" { - pgcluster.Spec.CCPImagePrefix = operator.Pgo.Cluster.CCPImagePrefix - } - - // set the default PGOImagePrefix, if empty - if pgcluster.Spec.PGOImagePrefix == "" { - pgcluster.Spec.PGOImagePrefix = operator.Pgo.Pgo.PGOImagePrefix - } - - // finally, clear the resource version and status messages, and set to the appropriate - // state for use by the pgcluster controller - pgcluster.ObjectMeta.ResourceVersion = "" - pgcluster.Spec.Status = "" - pgcluster.Status.State = crv1.PgclusterStateCreated - pgcluster.Status.Message = "Created, not processed yet" -} - -// createClusterRecreateWorkflowTask creates a cluster creation task for the upgraded cluster's recreation -// to maintain the expected workflow and tasking -func createClusterRecreateWorkflowTask(clientset pgo.Interface, clusterName, ns, pgouser string) (string, error) { - // create pgtask CRD - spec := crv1.PgtaskSpec{} - spec.Namespace = ns - spec.Name = clusterName + "-" + crv1.PgtaskWorkflowCreateClusterType - spec.TaskType = crv1.PgtaskWorkflow - - spec.Parameters = make(map[string]string) - spec.Parameters[crv1.PgtaskWorkflowSubmittedStatus] = time.Now().Format(time.RFC3339) - spec.Parameters[config.LABEL_PG_CLUSTER] = clusterName - - u, err := ioutil.ReadFile("/proc/sys/kernel/random/uuid") - if err != nil { - log.Error(err) - return "", err - } - spec.Parameters[crv1.PgtaskWorkflowID] = string(u[:len(u)-1]) - - newInstance := &crv1.Pgtask{ - ObjectMeta: metav1.ObjectMeta{ - Name: spec.Name, - }, - Spec: spec, - } - newInstance.ObjectMeta.Labels = make(map[string]string) - newInstance.ObjectMeta.Labels[config.LABEL_PGOUSER] = pgouser - 
newInstance.ObjectMeta.Labels[config.LABEL_PG_CLUSTER] = clusterName - newInstance.ObjectMeta.Labels[crv1.PgtaskWorkflowID] = spec.Parameters[crv1.PgtaskWorkflowID] - - _, err = clientset.CrunchydataV1().Pgtasks(ns).Create(newInstance) - if err != nil { - log.Error(err) - return "", err - } - return spec.Parameters[crv1.PgtaskWorkflowID], err -} - -// updateUpgradeWorkflow updates a Workflow with the current state of the pgcluster upgrade task -// modified from the cluster.UpdateCloneWorkflow function -func updateUpgradeWorkflow(clientset pgo.Interface, namespace, workflowID, status string) error { - log.Debugf("pgcluster upgrade workflow: update workflow [%s]", workflowID) - - // we have to look up the name of the workflow bt the workflow ID, which - // involves using a selector - selector := fmt.Sprintf("%s=%s", crv1.PgtaskWorkflowID, workflowID) - taskList, err := clientset.CrunchydataV1().Pgtasks(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Errorf("pgcluster upgrade workflow: could not get workflow [%s]", workflowID) - return err - } - - // if there is not one unique result, then we should display an error here - if len(taskList.Items) != 1 { - errorMsg := fmt.Sprintf("pgcluster upgrade workflow: workflow [%s] not found", workflowID) - log.Errorf(errorMsg) - return errors.New(errorMsg) - } - - // get the first task and update on the current status based on how it is - // progressing - task := taskList.Items[0] - task.Spec.Parameters[status] = time.Now().Format(time.RFC3339) - - if _, err := clientset.CrunchydataV1().Pgtasks(namespace).Update(&task); err != nil { - log.Errorf("pgcluster upgrade workflow: could not update workflow [%s] to status [%s]", workflowID, status) - return err - } - - return nil -} - -// PublishUpgradeEvent lets one publish an event related to the upgrade process -func PublishUpgradeEvent(eventType string, namespace string, task *crv1.Pgtask, errorMessage string) { - // get the boilerplate identifiers - clusterName, workflowID := getUpgradeTaskIdentifiers(task) - // set up the event header - eventHeader := events.EventHeader{ - Namespace: namespace, - Username: task.ObjectMeta.Labels[config.LABEL_PGOUSER], - Topic: []string{events.EventTopicCluster, events.EventTopicUpgrade}, - Timestamp: time.Now(), - EventType: eventType, - } - // get the event format itself and publish it based on the event type - switch eventType { - case events.EventUpgradeCluster: - publishUpgradeClusterEvent(eventHeader, clusterName, workflowID) - case events.EventUpgradeClusterCreateSubmitted: - publishUpgradeClusterCreateEvent(eventHeader, clusterName, workflowID) - case events.EventUpgradeClusterFailure: - publishUpgradeClusterFailureEvent(eventHeader, clusterName, workflowID, errorMessage) - } -} - -// getUpgradeTaskIdentifiers returns the cluster name and the workflow ID -func getUpgradeTaskIdentifiers(task *crv1.Pgtask) (string, string) { - return task.Spec.Parameters[config.LABEL_PG_CLUSTER], - task.Spec.Parameters[crv1.PgtaskWorkflowID] -} - -// publishUpgradeClusterEvent publishes the event when the cluster Upgrade process -// has started -func publishUpgradeClusterEvent(eventHeader events.EventHeader, clustername, workflowID string) { - // set up the event - event := events.EventUpgradeClusterFormat{ - EventHeader: eventHeader, - Clustername: clustername, - WorkflowID: workflowID, - } - // attempt to publish the event; if it fails, log the error, but keep moving on - if err := events.Publish(event); err != nil { - log.Errorf("error 
publishing event. Error: %s", err.Error()) - } -} - -// publishUpgradeClusterCreateEvent publishes the event when the cluster Upgrade process -// has reached the point where the upgrade pgcluster CRD is submitted for cluster recreation -func publishUpgradeClusterCreateEvent(eventHeader events.EventHeader, clustername, workflowID string) { - // set up the event - event := events.EventUpgradeClusterCreateFormat{ - EventHeader: eventHeader, - Clustername: clustername, - WorkflowID: workflowID, - } - // attempt to publish the event; if it fails, log the error, but keep moving on - if err := events.Publish(event); err != nil { - log.Errorf("error publishing event. Error: %s", err.Error()) - } -} - -// publishUpgradeClusterFailureEvent publishes the event when the cluster upgrade process -// has failed, including the error message -func publishUpgradeClusterFailureEvent(eventHeader events.EventHeader, clustername, workflowID, errorMessage string) { - // set up the event - event := events.EventUpgradeClusterFailureFormat{ - EventHeader: eventHeader, - ErrorMessage: errorMessage, - Clustername: clustername, - WorkflowID: workflowID, - } - // attempt to publish the event; if it fails, log the error, but keep moving on - if err := events.Publish(event); err != nil { - log.Errorf("error publishing event. Error: %s", err.Error()) - } -} diff --git a/internal/operator/clusterutilities.go b/internal/operator/clusterutilities.go deleted file mode 100644 index 42d97e336d..0000000000 --- a/internal/operator/clusterutilities.go +++ /dev/null @@ -1,1013 +0,0 @@ -package operator - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "os" - "strconv" - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - - log "github.com/sirupsen/logrus" - apps_v1 "k8s.io/api/apps/v1" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/util/validation" - "k8s.io/client-go/kubernetes" -) - -// consolidate with cluster.affinityTemplateFields -const AffinityInOperator = "In" -const AFFINITY_NOTINOperator = "NotIn" - -// PGHAConfigMapSuffix defines the suffix for the name of the PGHA configMap created for each PG -// cluster -const PGHAConfigMapSuffix = "pgha-config" - -// the following constants define the settings in the PGHA configMap that is created for each PG -// cluster -const ( - // PGHAConfigInitSetting determines whether or not initialization logic should be run in the - // crunchy-postgres-ha (or GIS equivilaent) container - PGHAConfigInitSetting = "init" - // PGHAConfigReplicaBootstrapRepoType defines an override for the type of repo (local, S3, etc.) - // that should be utilized when bootstrapping a replica (i.e. it override the - // PGBACKREST_REPO_TYPE env var in the environment). 
Allows for dynamic changing of the - // backrest repo type without requiring container restarts (as would be required to update - // PGBACKREST_REPO_TYPE). - PGHAConfigReplicaBootstrapRepoType = "replica-bootstrap-repo-type" -) - -// defaultPGBackRestS3URIStyle is the default pgBackRest S3 URI style to use if a specific style is -// not provided -const defaultPGBackRestS3URIStyle = "host" - -// affinityType represents the two affinity types provided by Kubernetes, specifically -// either preferredDuringSchedulingIgnoredDuringExecution or -// requiredDuringSchedulingIgnoredDuringExecution -type affinityType string - -const ( - requireScheduleIgnoreExec affinityType = "requiredDuringSchedulingIgnoredDuringExecution" - preferScheduleIgnoreExec affinityType = "preferredDuringSchedulingIgnoredDuringExecution" -) - -type affinityTemplateFields struct { - NodeLabelKey string - NodeLabelValue string - OperatorValue string -} - -type podAntiAffinityTemplateFields struct { - AffinityType affinityType - ClusterName string - PodAntiAffinityLabelKey string - VendorLabelKey string - VendorLabelValue string -} - -// consolidate -type exporterTemplateFields struct { - Name string - JobName string - PGOImageTag string - PGOImagePrefix string - PgPort string - ExporterPort string - CollectSecretName string - ContainerResources string - TLSOnly bool -} - -//consolidate -type badgerTemplateFields struct { - CCPImageTag string - CCPImagePrefix string - BadgerTarget string - PGBadgerPort string -} - -type PgbackrestEnvVarsTemplateFields struct { - PgbackrestStanza string - PgbackrestDBPath string - PgbackrestRepo1Path string - PgbackrestRepo1Host string - PgbackrestRepo1Type string - PgbackrestLocalAndS3Storage bool - PgbackrestPGPort string -} - -type PgbackrestS3EnvVarsTemplateFields struct { - PgbackrestS3Bucket string - PgbackrestS3Endpoint string - PgbackrestS3Region string - PgbackrestS3Key string - PgbackrestS3KeySecret string - PgbackrestS3SecretName string - PgbackrestS3URIStyle string - PgbackrestS3VerifyTLS string -} - -type PgmonitorEnvVarsTemplateFields struct { - ExporterSecret string -} - -// BootstrapJobTemplateFields defines the fields needed to populate the cluster bootstrap job -// template -type BootstrapJobTemplateFields struct { - DeploymentTemplateFields - // RestoreFrom defines the name of a cluster to restore from when bootstrapping from an - // existing data source - RestoreFrom string - // RestoreOpts defines the command line options that should be passed to the restore utility - // (e.g. pgBackRest) when bootstrapping the cluster from an existing data source - RestoreOpts string -} - -// DeploymentTemplateFields ... -type DeploymentTemplateFields struct { - Name string - ClusterName string - Port string - CCPImagePrefix string - CCPImageTag string - CCPImage string - Database string - DeploymentLabels string - // PodAnnotations are user-specified annotations that can be applied to a - // Pod, e.g. 
annotations specific to a PostgreSQL instance - PodAnnotations string - PodLabels string - DataPathOverride string - ArchiveMode string - PVCName string - RootSecretName string - UserSecretName string - PrimarySecretName string - SecurityContext string - ContainerResources string - NodeSelector string - ConfVolume string - ExporterAddon string - BadgerAddon string - PgbackrestEnvVars string - PgbackrestS3EnvVars string - PgmonitorEnvVars string - ScopeLabel string - Replicas string - IsInit bool - EnableCrunchyadm bool - ReplicaReinitOnStartFail bool - PodAntiAffinity string - SyncReplication bool - Standby bool - // A comma-separated list of tablespace names...this could be an array, but - // given how this would ultimately be interpreted in a shell script somewhere - // down the line, it's easier for the time being to do it this way. In the - // future, we should consider having an array - Tablespaces string - TablespaceVolumes string - TablespaceVolumeMounts string - // The following fields set the TLS requirements as well as provide - // information on how to configure TLS in a PostgreSQL cluster - // TLSEnabled enables TLS in a cluster if set to true. Only works in actuality - // if CASecret and TLSSecret are set - TLSEnabled bool - // TLSOnly is set to true if the PostgreSQL cluster should only accept TLS - // connections - TLSOnly bool - // TLSSecret is the name of the Secret that has the PostgreSQL server's TLS - // keypair - TLSSecret string - // ReplicationTLSSecret is the name of the Secret that has the TLS keypair - // for performing certificate-based authentication between instances - ReplicationTLSSecret string - // CASecret is the name of the Secret that has the trusted CA that the - // PostgreSQL server is using - CASecret string -} - -// tablespaceVolumeFields are the fields used to create the volumes in a -// Deployment template spec or the like. These are turned into JSON. -type tablespaceVolumeFields struct { - Name string `json:"name"` - PVC tablespaceVolumePVCFields `json:"persistentVolumeClaim"` -} - -// tablespaceVolumePVCFields used for specifying the PVC that should be attached -// to the volume. These are turned into JSON -type tablespaceVolumePVCFields struct { - PVCName string `json:"claimName"` -} - -// tablespaceVolumeMountFields are the field used to create the volume mounts -// in a Deployment template spec. These are turned into JSON. -type tablespaceVolumeMountFields struct { - Name string `json:"name"` - MountPath string `json:"mountPath"` -} - -// GetAnnotations returns the annotations in a JSON format can be used by the -// template. 
If no annotations are found, returns an empty string -func GetAnnotations(cluster *crv1.Pgcluster, annotationType crv1.ClusterAnnotationType) string { - annotations := map[string]string{} - - // no matter what, grab any of the global annotations and put into the - // annotations list - for k, v := range cluster.Spec.Annotations.Global { - annotations[k] = v - } - - // determine if we need to add any additional annotations to the list that may - // be pod specific - switch annotationType { - case crv1.ClusterAnnotationBackrest: - for k, v := range cluster.Spec.Annotations.Backrest { - annotations[k] = v - } - case crv1.ClusterAnnotationPgBouncer: - for k, v := range cluster.Spec.Annotations.PgBouncer { - annotations[k] = v - } - case crv1.ClusterAnnotationPostgres: - for k, v := range cluster.Spec.Annotations.Postgres { - annotations[k] = v - } - } - - // if the map is empty, return an empty string - if len(annotations) == 0 { - return "" - } - - // let's try to create a JSON document out of the above - doc, err := json.Marshal(annotations) - - // if there is an error, warn in our logs and return an empty string - if err != nil { - log.Errorf("could not set custom annotations: %q", err) - return "" - } - - return string(doc) -} - -//consolidate with cluster.GetPgbackrestEnvVars -func GetPgbackrestEnvVars(cluster *crv1.Pgcluster, backrestEnabled, depName, port, storageType string) string { - if backrestEnabled == "true" { - fields := PgbackrestEnvVarsTemplateFields{ - PgbackrestStanza: "db", - PgbackrestRepo1Host: cluster.Name + "-backrest-shared-repo", - PgbackrestRepo1Path: util.GetPGBackRestRepoPath(*cluster), - PgbackrestDBPath: "/pgdata/" + depName, - PgbackrestPGPort: port, - PgbackrestRepo1Type: GetRepoType(storageType), - PgbackrestLocalAndS3Storage: IsLocalAndS3Storage(storageType), - } - - var doc bytes.Buffer - err := config.PgbackrestEnvVarsTemplate.Execute(&doc, fields) - if err != nil { - log.Error(err.Error()) - return "" - } - return doc.String() - } - return "" - -} - -// GetPgbackrestBootstrapEnvVars returns a string containing the pgBackRest environment variables -// for a bootstrap job -func GetPgbackrestBootstrapEnvVars(restoreClusterName, depName string, - restoreFromSecret *v1.Secret) (string, error) { - - fields := PgbackrestEnvVarsTemplateFields{ - PgbackrestStanza: "db", - PgbackrestDBPath: fmt.Sprintf("/pgdata/%s", depName), - PgbackrestRepo1Path: restoreFromSecret.Annotations[config.ANNOTATION_REPO_PATH], - PgbackrestPGPort: restoreFromSecret.Annotations[config.ANNOTATION_PG_PORT], - PgbackrestRepo1Host: fmt.Sprintf(util.BackrestRepoDeploymentName, restoreClusterName), - PgbackrestRepo1Type: "posix", // just set to the default, can be overridden via CLI args - } - - var doc bytes.Buffer - if err := config.PgbackrestEnvVarsTemplate.Execute(&doc, fields); err != nil { - log.Error(err.Error()) - return "", err - } - return doc.String(), nil -} - -// GetBackrestDeployment finds the pgBackRest repository Deployments for a -// PostgreQL cluster -func GetBackrestDeployment(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (*apps_v1.Deployment, error) { - // find the pgBackRest repository Deployment, which follows a known pattern - deploymentName := fmt.Sprintf(util.BackrestRepoDeploymentName, cluster.Name) - deployment, err := clientset.AppsV1().Deployments(cluster.Namespace).Get(deploymentName, metav1.GetOptions{}) - - return deployment, err -} - -func GetBadgerAddon(clientset kubernetes.Interface, namespace string, cluster *crv1.Pgcluster, pgbadger_target 
string) string { - - spec := cluster.Spec - - if cluster.Labels[config.LABEL_BADGER] == "true" { - log.Debug("crunchy_badger was found as a label on cluster create") - badgerTemplateFields := badgerTemplateFields{} - badgerTemplateFields.CCPImageTag = spec.CCPImageTag - badgerTemplateFields.BadgerTarget = pgbadger_target - badgerTemplateFields.PGBadgerPort = spec.PGBadgerPort - badgerTemplateFields.CCPImagePrefix = util.GetValueOrDefault(spec.CCPImagePrefix, Pgo.Cluster.CCPImagePrefix) - - var badgerDoc bytes.Buffer - err := config.BadgerTemplate.Execute(&badgerDoc, badgerTemplateFields) - if err != nil { - log.Error(err.Error()) - return "" - } - - if CRUNCHY_DEBUG { - config.BadgerTemplate.Execute(os.Stdout, badgerTemplateFields) - } - return badgerDoc.String() - } - return "" -} - -func GetExporterAddon(clientset kubernetes.Interface, namespace string, spec *crv1.PgclusterSpec) string { - - if spec.UserLabels[config.LABEL_EXPORTER] == "true" { - log.Debug("crunchy-postgres-exporter was found as a label on cluster create") - - log.Debugf("creating exporter secret for cluster %s", spec.Name) - err := util.CreateSecret(clientset, spec.Name, spec.CollectSecretName, config.LABEL_EXPORTER_PG_USER, - Pgo.Cluster.PgmonitorPassword, namespace) - - exporterTemplateFields := exporterTemplateFields{} - exporterTemplateFields.Name = spec.Name - exporterTemplateFields.JobName = spec.Name - exporterTemplateFields.PGOImageTag = Pgo.Pgo.PGOImageTag - exporterTemplateFields.ExporterPort = spec.ExporterPort - exporterTemplateFields.PGOImagePrefix = util.GetValueOrDefault(spec.PGOImagePrefix, Pgo.Pgo.PGOImagePrefix) - exporterTemplateFields.PgPort = spec.Port - exporterTemplateFields.CollectSecretName = spec.CollectSecretName - exporterTemplateFields.ContainerResources = GetResourcesJSON(spec.ExporterResources, spec.ExporterLimits) - // see if TLS only is set. however, this also requires checking to see if - // TLS is enabled in this case. The reason is that even if TLS is only just - // enabled, because the connection is over an internal interface, we do not - // need to have the overhead of a TLS connection - exporterTemplateFields.TLSOnly = spec.TLS.IsTLSEnabled() && spec.TLSOnly - - var exporterDoc bytes.Buffer - err = config.ExporterTemplate.Execute(&exporterDoc, exporterTemplateFields) - if err != nil { - log.Error(err.Error()) - return "" - } - - if CRUNCHY_DEBUG { - config.ExporterTemplate.Execute(os.Stdout, exporterTemplateFields) - } - return exporterDoc.String() - } - return "" -} - -//consolidate with cluster.GetConfVolume -func GetConfVolume(clientset kubernetes.Interface, cl *crv1.Pgcluster, namespace string) string { - var configMapStr string - - //check for user provided configmap - if cl.Spec.CustomConfig != "" { - _, err := clientset.CoreV1().ConfigMaps(namespace).Get(cl.Spec.CustomConfig, metav1.GetOptions{}) - if err != nil { - //you should NOT get this error because of apiserver validation of this value! 
- log.Errorf("%s was not found, error, skipping user provided configMap", cl.Spec.CustomConfig) - } else { - log.Debugf("user provided configmap %s was used for this cluster", cl.Spec.CustomConfig) - return "\"" + cl.Spec.CustomConfig + "\"" - } - } - - //check for global custom configmap "pgo-custom-pg-config" - _, err := clientset.CoreV1().ConfigMaps(namespace).Get(config.GLOBAL_CUSTOM_CONFIGMAP, metav1.GetOptions{}) - if err == nil { - return `"pgo-custom-pg-config"` - } - log.Debug(config.GLOBAL_CUSTOM_CONFIGMAP + " was not found, skipping global configMap") - - return configMapStr -} - -// CreatePGHAConfigMap creates a configMap that will be utilized to store configuration settings -// for a PostgreSQL cluster. Currently this configMap simply defines an "init" setting, which is -// utilized by the crunchy-postgres-ha container (or GIS equivalent) to determine whether or not -// initialization logic should be executed when the container is run. This ensures that the -// original primary in a PostgreSQL cluster does not attempt to run any initialization logic more -// than once, such as following a restart of the container. In the future this configMap can also -// be leveraged to manage other configuration settings for the PostgreSQL cluster and its -// associated containers. -func CreatePGHAConfigMap(clientset kubernetes.Interface, cluster *crv1.Pgcluster, - namespace string) error { - - labels := make(map[string]string) - labels[config.LABEL_VENDOR] = config.LABEL_CRUNCHY - labels[config.LABEL_PG_CLUSTER] = cluster.Name - labels[config.LABEL_PGHA_CONFIGMAP] = "true" - - data := make(map[string]string) - // set "init" to true in the postgres-ha configMap - data[PGHAConfigInitSetting] = "true" - - // if a standby cluster then we want to create replicas using the S3 pgBackRest repository - // (and not the local in-cluster pgBackRest repository) - if cluster.Spec.Standby { - data[PGHAConfigReplicaBootstrapRepoType] = "s3" - } - - configmap := &v1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{ - Name: cluster.Name + "-" + PGHAConfigMapSuffix, - Labels: labels, - }, - Data: data, - } - - if _, err := clientset.CoreV1().ConfigMaps(namespace).Create(configmap); err != nil { - return err - } - - return nil -} - -// GetTablespaceNamePVCMap returns a map of the tablespace name to the PVC name -func GetTablespaceNamePVCMap(clusterName string, tablespaceStorageTypeMap map[string]string) map[string]string { - tablespacePVCMap := map[string]string{} - - // iterate through all of the tablespace mounts and match the name of the - // tablespace to its PVC - for tablespaceName := range tablespaceStorageTypeMap { - tablespacePVCMap[tablespaceName] = GetTablespacePVCName(clusterName, tablespaceName) - } - - return tablespacePVCMap -} - -// GetInstanceDeployments finds the Deployments that represent PostgreSQL -// instances -func GetInstanceDeployments(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (*apps_v1.DeploymentList, error) { - // first, get a list of all of the available deployments so we can properly - // mount the tablespace PVCs after we create them - // NOTE: this will also get the pgBackRest deployments, but we will filter - // these out later - selector := fmt.Sprintf("%s=%s,%s=%s", config.LABEL_VENDOR, config.LABEL_CRUNCHY, - config.LABEL_PG_CLUSTER, cluster.Name) - - // get the deployments for this specific PostgreSQL luster - clusterDeployments, err := clientset. - AppsV1().Deployments(cluster.Namespace). 
- List(metav1.ListOptions{LabelSelector: selector}) - - if err != nil { - return nil, err - } - - // start prepping the instance deployments - instanceDeployments := apps_v1.DeploymentList{} - - // iterate through the list of deployments -- if it matches the definition of - // a PostgreSQL instance deployment, then add it to the slice - for _, deployment := range clusterDeployments.Items { - labels := deployment.ObjectMeta.GetLabels() - - // get the name of the PostgreSQL instance. If the "deployment-name" - // label is not present, then we know it's not a PostgreSQL cluster. - // Otherwise, the "deployment-name" label doubles as the name of the - // instance - if instanceName, ok := labels[config.LABEL_DEPLOYMENT_NAME]; ok { - log.Debugf("instance found [%s]", instanceName) - - instanceDeployments.Items = append(instanceDeployments.Items, deployment) - } - } - - return &instanceDeployments, nil -} - -// GetTablespaceNames generates a comma-separated list of the format -// "tablespaceName1,tablespceName2" so that the PVC containing a tablespace -// can be properly mounted in the container, and the tablespace can be -// referenced by the specified human readable name. We use a comma-separated -// list to make it "easier" to work with the shell scripts that currently setup -// the container -func GetTablespaceNames(tablespaceMounts map[string]crv1.PgStorageSpec) string { - tablespaces := []string{} - - // iterate through the list of tablespace mounts and extract the tablespace - // name - for tablespaceName := range tablespaceMounts { - tablespaces = append(tablespaces, tablespaceName) - } - - // return the string that joins the list with the comma - return strings.Join(tablespaces, ",") -} - -// GetTablespaceStorageTypeMap returns a map of "tablespaceName => storageType" -func GetTablespaceStorageTypeMap(tablespaceMounts map[string]crv1.PgStorageSpec) map[string]string { - tablespaceStorageTypeMap := map[string]string{} - - // iterate through all of the tablespaceMounts and extract the storage type - for tablespaceName, storageSpec := range tablespaceMounts { - tablespaceStorageTypeMap[tablespaceName] = storageSpec.StorageType - } - - return tablespaceStorageTypeMap -} - -// GetTablespacePVCName returns the formatted name that is used for a PVC for -// a tablespace -func GetTablespacePVCName(clusterName string, tablespaceName string) string { - return fmt.Sprintf(config.VOLUME_TABLESPACE_PVC_NAME_FORMAT, clusterName, tablespaceName) -} - -// GetTablespaceVolumeMountsJSON Creates an appendable list for the volumeMounts -// that are used to mount table spacs and returns them in a JSON-ish string -func GetTablespaceVolumeMountsJSON(tablespaceStorageTypeMap map[string]string) string { - volumeMounts := bytes.Buffer{} - - // iterate over each table space and generate the JSON snippet that is loaded - // into a Kubernetes Deployment template (or equivalent structure) - for tablespaceName := range tablespaceStorageTypeMap { - log.Debugf("generating tablespace volume mount json for %s", tablespaceName) - - volumeMountFields := tablespaceVolumeMountFields{ - Name: GetTablespaceVolumeName(tablespaceName), - MountPath: fmt.Sprintf("%s%s", config.VOLUME_TABLESPACE_PATH_PREFIX, tablespaceName), - } - - // write the generated JSON into a buffer. 
if there is an error, log the - // error and continue - if err := writeTablespaceJSON(&volumeMounts, volumeMountFields); err != nil { - log.Error(err) - continue - } - } - - return volumeMounts.String() -} - -// GetTablespaceVolumes Creates an appendable list for the volumes section of a -// Kubernetes pod -func GetTablespaceVolumesJSON(clusterName string, tablespaceStorageTypeMap map[string]string) string { - volumes := bytes.Buffer{} - - // iterate over each table space and generate the JSON snippet that is loaded - // into a Kubernetes Deployment template (or equivalent structure) - for tablespaceName := range tablespaceStorageTypeMap { - log.Debugf("generating tablespace volume json for %s", tablespaceName) - - volumeFields := tablespaceVolumeFields{ - Name: GetTablespaceVolumeName(tablespaceName), - PVC: tablespaceVolumePVCFields{ - PVCName: GetTablespacePVCName(clusterName, tablespaceName), - }, - } - - // write the generated JSON into a buffer. if there is an error, log the - // error and continue - if err := writeTablespaceJSON(&volumes, volumeFields); err != nil { - log.Error(err) - continue - } - } - - return volumes.String() -} - -// GetTableSpaceVolumeName returns the name that is used to identify the volume -// that is used to mount the tablespace -func GetTablespaceVolumeName(tablespaceName string) string { - return fmt.Sprintf("%s%s", config.VOLUME_TABLESPACE_NAME_PREFIX, tablespaceName) -} - -// needs to be consolidated with cluster.GetLabelsFromMap -// GetLabelsFromMap ... -func GetLabelsFromMap(labels map[string]string) string { - var output string - - for key, value := range labels { - if len(validation.IsQualifiedName(key)) == 0 && len(validation.IsValidLabelValue(value)) == 0 { - output += fmt.Sprintf("\"%s\": \"%s\",", key, value) - } - } - // removing the trailing comma from the final label - return strings.TrimSuffix(output, ",") -} - -// GetAffinity ... -func GetAffinity(nodeLabelKey, nodeLabelValue string, affoperator string) string { - log.Debugf("GetAffinity with nodeLabelKey=[%s] nodeLabelKey=[%s] and operator=[%s]\n", nodeLabelKey, nodeLabelValue, affoperator) - output := "" - if nodeLabelKey == "" { - return output - } - - affinityTemplateFields := affinityTemplateFields{} - affinityTemplateFields.NodeLabelKey = nodeLabelKey - affinityTemplateFields.NodeLabelValue = nodeLabelValue - affinityTemplateFields.OperatorValue = affoperator - - var affinityDoc bytes.Buffer - err := config.AffinityTemplate.Execute(&affinityDoc, affinityTemplateFields) - if err != nil { - log.Error(err.Error()) - return output - } - - if CRUNCHY_DEBUG { - config.AffinityTemplate.Execute(os.Stdout, affinityTemplateFields) - } - - return affinityDoc.String() -} - -// GetPodAntiAffinity returns the populated pod anti-affinity json that should be attached to -// the various pods comprising the pg cluster -func GetPodAntiAffinity(cluster *crv1.Pgcluster, deploymentType crv1.PodAntiAffinityDeployment, podAntiAffinityType crv1.PodAntiAffinityType) string { - - log.Debugf("GetPodAnitAffinity with clusterName=[%s]", cluster.Spec.Name) - - // run through the checks on the pod anti-affinity type to see if it is not - // provided by the user, it's set by one of many defaults - podAntiAffinityType = GetPodAntiAffinityType(cluster, deploymentType, podAntiAffinityType) - - // verify that the affinity type provided is valid (i.e. 
'required' or 'preferred'), and - // log an error and return an empty string if not - if err := podAntiAffinityType.Validate(); err != nil { - log.Error(fmt.Sprintf("Invalid affinity type '%s' specified when attempting to set "+ - "default pod anti-affinity for cluster %s. Pod anti-affinity will not be applied.", - podAntiAffinityType, cluster.Spec.Name)) - return "" - } - - // set requiredDuringSchedulingIgnoredDuringExecution or - // prefferedDuringSchedulingIgnoredDuringExecution depending on the pod anti-affinity type - // specified in the pgcluster CR. Defaults to preffered if not explicitly specified - // in the CR or in the pgo.yaml configuration file - templateAffinityType := preferScheduleIgnoreExec - switch podAntiAffinityType { - case crv1.PodAntiAffinityDisabled: // if disabled return an empty string - log.Debugf("Default pod anti-affinity disabled for clusterName=[%s]", cluster.Spec.Name) - return "" - case crv1.PodAntiAffinityRequired: - templateAffinityType = requireScheduleIgnoreExec - } - - podAntiAffinityTemplateFields := podAntiAffinityTemplateFields{ - AffinityType: templateAffinityType, - ClusterName: cluster.Spec.Name, - VendorLabelKey: config.LABEL_VENDOR, - VendorLabelValue: config.LABEL_CRUNCHY, - PodAntiAffinityLabelKey: config.LABEL_POD_ANTI_AFFINITY, - } - - var podAntiAffinityDoc bytes.Buffer - err := config.PodAntiAffinityTemplate.Execute(&podAntiAffinityDoc, - podAntiAffinityTemplateFields) - if err != nil { - log.Error(err.Error()) - return "" - } - - if CRUNCHY_DEBUG { - config.PodAntiAffinityTemplate.Execute(os.Stdout, podAntiAffinityTemplateFields) - } - - return podAntiAffinityDoc.String() -} - -// GetPodAntiAffinityType returns the type of pod anti-affinity to use. This is -// based on the deployment type (cluster, pgBackRest, pgBouncer), the value -// in the cluster spec, and the defaults available in pgo.yaml. -// -// In other words, the pod anti-affinity is determined by this heuristic, in -// priority order: -// -// 1. If it's pgBackRest/pgBouncer the value set by the user (available in the -// cluster spec) -// 2. If it's pgBackRest/pgBouncer the value set in pgo.yaml -// 3. The value set in "Default" in the cluster spec -// 4. The value set for PodAntiAffinity in pgo.yaml -func GetPodAntiAffinityType(cluster *crv1.Pgcluster, deploymentType crv1.PodAntiAffinityDeployment, podAntiAffinityType crv1.PodAntiAffinityType) crv1.PodAntiAffinityType { - // early exit: if podAntiAffinityType is already set, return - if podAntiAffinityType != "" { - return podAntiAffinityType - } - - // if this is a pgBouncer or pgBackRest deployment, see if there is a value - // set in the configuration. If there is, return that - switch deploymentType { - case crv1.PodAntiAffinityDeploymentPgBackRest: - if Pgo.Cluster.PodAntiAffinityPgBackRest != "" { - podAntiAffinityType = crv1.PodAntiAffinityType(Pgo.Cluster.PodAntiAffinityPgBackRest) - - if podAntiAffinityType != "" { - return podAntiAffinityType - } - } - case crv1.PodAntiAffinityDeploymentPgBouncer: - if Pgo.Cluster.PodAntiAffinityPgBouncer != "" { - podAntiAffinityType = crv1.PodAntiAffinityType(Pgo.Cluster.PodAntiAffinityPgBouncer) - - if podAntiAffinityType != "" { - return podAntiAffinityType - } - } - } - - // check to see if the value for the cluster anti-affinity is set. If so, use - // this value - if cluster.Spec.PodAntiAffinity.Default != "" { - return cluster.Spec.PodAntiAffinity.Default - } - - // At this point, check the value in the configuration that is used for pod - // anti-affinity. 
Ensure it is cast to be of PodAntiAffinityType - return crv1.PodAntiAffinityType(Pgo.Cluster.PodAntiAffinity) -} - -// GetPgmonitorEnvVars populates the pgmonitor env var template, which contains any -// pgmonitor env vars that need to be included in the Deployment spec for a PG cluster. -func GetPgmonitorEnvVars(metricsEnabled, exporterSecret string) string { - if metricsEnabled == "true" { - fields := PgmonitorEnvVarsTemplateFields{ - ExporterSecret: exporterSecret, - } - - var doc bytes.Buffer - err := config.PgmonitorEnvVarsTemplate.Execute(&doc, fields) - if err != nil { - log.Error(err.Error()) - return "" - } - return doc.String() - } - return "" -} - -// GetPgbackrestS3EnvVars retrieves the values for the various configuration settings require to -// configure pgBackRest for AWS S3, including a bucket, endpoint, region, key and key secret. -// The bucket, endpoint & region are obtained from the associated parameters in the pgcluster -// CR, while the key and key secret are obtained from the backrest repository secret. Once these -// values have been obtained, they are used to populate a template containing the various -// pgBackRest environment variables required to enable S3 support. After the template has been -// executed with the proper values, the result is then returned a string for inclusion in the PG -// and pgBackRest deployments. -func GetPgbackrestS3EnvVars(cluster crv1.Pgcluster, clientset kubernetes.Interface, - ns string) string { - - if !strings.Contains(cluster.Spec.UserLabels[config.LABEL_BACKREST_STORAGE_TYPE], "s3") { - return "" - } - - // determine the secret for getting the credentials for using S3 as a - // pgBackRest repository. If we can't do that, then we can't move on - if _, err := util.GetS3CredsFromBackrestRepoSecret(clientset, cluster.Namespace, cluster.Name); err != nil { - return "" - } - - // populate the S3 bucket, endpoint and region using either the values in the pgcluster - // spec (if present), otherwise populate using the values from the pgo.yaml config file - s3EnvVars := PgbackrestS3EnvVarsTemplateFields{ - PgbackrestS3Key: util.BackRestRepoSecretKeyAWSS3KeyAWSS3Key, - PgbackrestS3KeySecret: util.BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret, - PgbackrestS3SecretName: fmt.Sprintf("%s-%s", cluster.Name, config.LABEL_BACKREST_REPO_SECRET), - } - - if cluster.Spec.BackrestS3Bucket != "" { - s3EnvVars.PgbackrestS3Bucket = cluster.Spec.BackrestS3Bucket - } else { - s3EnvVars.PgbackrestS3Bucket = Pgo.Cluster.BackrestS3Bucket - } - - if cluster.Spec.BackrestS3Endpoint != "" { - s3EnvVars.PgbackrestS3Endpoint = cluster.Spec.BackrestS3Endpoint - } else { - s3EnvVars.PgbackrestS3Endpoint = Pgo.Cluster.BackrestS3Endpoint - } - - if cluster.Spec.BackrestS3Region != "" { - s3EnvVars.PgbackrestS3Region = cluster.Spec.BackrestS3Region - } else { - s3EnvVars.PgbackrestS3Region = Pgo.Cluster.BackrestS3Region - } - if cluster.Spec.BackrestS3URIStyle != "" { - s3EnvVars.PgbackrestS3URIStyle = cluster.Spec.BackrestS3URIStyle - } else { - s3EnvVars.PgbackrestS3URIStyle = Pgo.Cluster.BackrestS3URIStyle - } - - // if the URI style is not configured, set to the default value - if s3EnvVars.PgbackrestS3URIStyle == "" { - s3EnvVars.PgbackrestS3URIStyle = defaultPGBackRestS3URIStyle - } - // if set, pgBackRest URI style must be set to either 'path' or 'host'. If it is neither, - // log an error and stop the cluster from being created. 
- if s3EnvVars.PgbackrestS3URIStyle != "path" && s3EnvVars.PgbackrestS3URIStyle != "host" { - log.Error("pgBackRest S3 URI style must be set to either \"path\" or \"host\".") - return "" - } - - // get the verify TLS boolean value as a string - s3EnvVars.PgbackrestS3VerifyTLS = GetS3VerifyTLSSetting(&cluster) - - doc := bytes.Buffer{} - - if err := config.PgbackrestS3EnvVarsTemplate.Execute(&doc, s3EnvVars); err != nil { - log.Error(err.Error()) - return "" - } - - return doc.String() -} - -// GetS3VerifyTLSSetting parses the configured value as a boolean to ensure a valid -// option is used, then returns the pgBackRest S3 configuration value to either enable -// or disable TLS verification as the expected string value. -func GetS3VerifyTLSSetting(cluster *crv1.Pgcluster) string { - - // If the pgcluster has already been set, either by the PGO client or from the - // CRD definition, parse the boolean value given. - // If this value is not set, then parse the value stored in the default - // configuration and set the value accordingly - verifyTLS, _ := strconv.ParseBool(Pgo.Cluster.BackrestS3VerifyTLS) - - if cluster.Spec.BackrestS3VerifyTLS != "" { - verifyTLS, _ = strconv.ParseBool(cluster.Spec.BackrestS3VerifyTLS) - } - - return strconv.FormatBool(verifyTLS) -} - -// GetPgbackrestBootstrapS3EnvVars retrieves the values for the various configuration settings -// required to configure pgBackRest for AWS S3, specifically for a bootstrap job (includes a -// bucket, endpoint, region, key and key secret. The bucket, endpoint & region are obtained from -// annotations in the pgbackrest secret from the cluster being restored from during the bootstrap -// job, while the key and key secret are then obtained from the data in this same secret. Once -// these values have been obtained, they are used to populate a template containing the various -// pgBackRest environment variables required to enable S3 support for the boostrap job. After -// the template has been executed with the proper values, the result is then returned a string -// for inclusion in the PG and pgBackRest deployments. 
-func GetPgbackrestBootstrapS3EnvVars(pgDataSourceRestoreFrom string, - restoreFromSecret *v1.Secret) string { - - s3EnvVars := PgbackrestS3EnvVarsTemplateFields{ - PgbackrestS3Key: util.BackRestRepoSecretKeyAWSS3KeyAWSS3Key, - PgbackrestS3KeySecret: util.BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret, - PgbackrestS3Bucket: restoreFromSecret.Annotations[config.ANNOTATION_S3_BUCKET], - PgbackrestS3Endpoint: restoreFromSecret.Annotations[config.ANNOTATION_S3_ENDPOINT], - PgbackrestS3Region: restoreFromSecret.Annotations[config.ANNOTATION_S3_REGION], - PgbackrestS3SecretName: fmt.Sprintf(util.BackrestRepoSecretName, pgDataSourceRestoreFrom), - } - - // if the URI style annotation is empty then set the proper default - if restoreFromSecret.Annotations[config.ANNOTATION_S3_URI_STYLE] != "" { - s3EnvVars.PgbackrestS3URIStyle = restoreFromSecret.Annotations[config.ANNOTATION_S3_URI_STYLE] - } else { - s3EnvVars.PgbackrestS3URIStyle = defaultPGBackRestS3URIStyle - } - - verifyTLS := restoreFromSecret.Annotations[config.ANNOTATION_S3_VERIFY_TLS] - if verifyTLS != "" { - s3EnvVars.PgbackrestS3VerifyTLS = verifyTLS - } else { - s3EnvVars.PgbackrestS3VerifyTLS = "true" - } - - doc := bytes.Buffer{} - - if err := config.PgbackrestS3EnvVarsTemplate.Execute(&doc, s3EnvVars); err != nil { - log.Error(err.Error()) - return "" - } - - return doc.String() -} - -// UpdatePGHAConfigInitFlag sets the value for the "init" setting in the PGHA configMap for the -// PG cluster to the value specified via the "initVal" parameter. For instance, following the -// initialization of a PG cluster this function will be utilized to set the "init" value to false -// to ensure the primary does not attempt to run initialization logic in the event that it is -// restarted. -func UpdatePGHAConfigInitFlag(clientset kubernetes.Interface, initVal bool, clusterName, - namespace string) error { - - log.Debugf("updating init value to %t in the pgha configMap for cluster %s", initVal, clusterName) - - selector := config.LABEL_PG_CLUSTER + "=" + clusterName + "," + config.LABEL_PGHA_CONFIGMAP + "=true" - configMapList, err := clientset.CoreV1().ConfigMaps(namespace).List(metav1.ListOptions{LabelSelector: selector}) - switch { - case err != nil: - return fmt.Errorf("unable to find the default pgha configMap found for cluster %s using selector %s, unable to set "+ - "init value to false", clusterName, selector) - case len(configMapList.Items) > 1: - return fmt.Errorf("more than one default pgha configMap found for cluster %s using selector %s, unable to set "+ - "init value to false", clusterName, selector) - } - - configMap := &configMapList.Items[0] - configMap.Data[PGHAConfigInitSetting] = strconv.FormatBool(initVal) - - if _, err := clientset.CoreV1().ConfigMaps(namespace).Update(configMap); err != nil { - return err - } - - return nil -} - -// GetSyncReplication returns true if synchronous replication has been enabled using either the -// pgcluster CR specification or the pgo.yaml configuration file. Otherwise, if synchronous -// mode has not been enabled, it returns false. 
-func GetSyncReplication(specSyncReplication *bool) bool { - // alawys use the value from the CR if explicitly provided - if specSyncReplication != nil { - return *specSyncReplication - } else if Pgo.Cluster.SyncReplication { - return true - } - return false -} - -// OverrideClusterContainerImages is a helper function that provides the -// appropriate hooks to override any of the container images that might be -// deployed with a PostgreSQL cluster -func OverrideClusterContainerImages(containers []v1.Container) { - // set the container image to an override value, if one exists, which involves - // looping through the containers array - for i, container := range containers { - var containerImageName string - // there are a few images we need to check for: - // 1. "database" image, which is PostgreSQL or some flavor of it - // 2. "crunchyadm" image, which helps with administration - // 3. "exporter" image, which helps with monitoring - // 4. "pgbadger" image, which helps with...pgbadger - switch container.Name { - - case "exporter": - containerImageName = config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER - case "crunchyadm": - containerImageName = config.CONTAINER_IMAGE_CRUNCHY_ADMIN - case "database": - containerImageName = config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA - // one more step here...determine if this is GIS enabled - // ...yes, this is not ideal - if strings.Contains(container.Image, "gis-ha") { - containerImageName = config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_GIS_HA - } - case "pgbadger": - containerImageName = config.CONTAINER_IMAGE_CRUNCHY_PGBADGER - } - - SetContainerImageOverride(containerImageName, &containers[i]) - } -} - -// writeTablespaceJSON is a convenience function to write the tablespace JSON -// into the current buffer -func writeTablespaceJSON(w *bytes.Buffer, jsonFields interface{}) error { - json, err := json.Marshal(jsonFields) - - // if there is an error, log the error and continue - if err != nil { - return err - } - - // We are appending to the end of a list so we can always assume this comma - // ...at least for now - w.WriteString(",") - w.Write(json) - - return nil -} diff --git a/internal/operator/clusterutilities_test.go b/internal/operator/clusterutilities_test.go deleted file mode 100644 index 72de4844ff..0000000000 --- a/internal/operator/clusterutilities_test.go +++ /dev/null @@ -1,365 +0,0 @@ -package operator - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "fmt" - "reflect" - "strings" - "testing" - - "github.com/crunchydata/postgres-operator/internal/config" - fakekubeapi "github.com/crunchydata/postgres-operator/internal/kubeapi/fake" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - - corev1 "k8s.io/api/core/v1" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -func mockSetupContainers(values map[string]struct { - name string - image string -}) []v1.Container { - containers := []v1.Container{} - - for _, value := range values { - container := v1.Container{ - Name: value.name, - Image: value.image, - } - - containers = append(containers, container) - } - - return containers -} - -func TestGetAnnotations(t *testing.T) { - cluster := &crv1.Pgcluster{} - cluster.Spec.Annotations.Global = map[string]string{"global": "yes", "hey": "there"} - cluster.Spec.Annotations.Postgres = map[string]string{"postgres": "yup", "elephant": "yay"} - cluster.Spec.Annotations.Backrest = map[string]string{"backrest": "woo"} - cluster.Spec.Annotations.PgBouncer = map[string]string{"pgbouncer": "yas", "hippo": "awesome"} - - t.Run("annotations empty", func(t *testing.T) { - cluster := &crv1.Pgcluster{} - ats := []crv1.ClusterAnnotationType{ - crv1.ClusterAnnotationGlobal, - crv1.ClusterAnnotationPostgres, - crv1.ClusterAnnotationBackrest, - crv1.ClusterAnnotationPgBouncer, - } - - for _, at := range ats { - result := GetAnnotations(cluster, at) - - if result != "" { - t.Errorf("expected empty string, got %q", result) - } - } - }) - - tests := []struct { - testName string - expected string - arg crv1.ClusterAnnotationType - }{ - { - testName: "global", - expected: `{"global":"yes","hey":"there"}`, - arg: crv1.ClusterAnnotationGlobal, - }, - { - testName: "postgres", - expected: `{"global":"yes", "hey":"there", "postgres": "yup", "elephant": "yay"}`, - arg: crv1.ClusterAnnotationPostgres, - }, - { - testName: "pgbackrest", - expected: `{"global":"yes", "hey":"there", "backrest": "woo"}`, - arg: crv1.ClusterAnnotationBackrest, - }, - { - testName: "pgbouncer", - expected: `{"global":"yes", "hey":"there", "pgbouncer": "yas", "hippo": "awesome"}`, - arg: crv1.ClusterAnnotationPgBouncer, - }, - } - - for _, test := range tests { - t.Run(test.testName, func(t *testing.T) { - var expected, actual interface{} - - if err := json.Unmarshal([]byte(test.expected), &expected); err != nil { - t.Fatalf("could not unmarshal expected json: %q", err.Error()) - } - - result := GetAnnotations(cluster, test.arg) - - if err := json.Unmarshal([]byte(result), &actual); err != nil { - t.Fatalf("could not unmarshal actual json: %q", err.Error()) - } - - if !reflect.DeepEqual(expected, actual) { - t.Errorf("expected %v, got %v", expected, actual) - } - }) - } -} - -func TestOverrideClusterContainerImages(t *testing.T) { - - containerDefaults := map[string]struct { - name string - image string - }{ - "database": {name: "database", image: config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_HA}, - "crunchyadm": {name: "crunchyadm", image: config.CONTAINER_IMAGE_CRUNCHY_ADMIN}, - "exporter": {name: "exporter", image: config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_EXPORTER}, - "pgbadger": {name: "pgbadger", image: config.CONTAINER_IMAGE_CRUNCHY_PGBADGER}, - "future": {name: "future", image: "crunchy-future"}, - } - - t.Run("no override", func(t *testing.T) { - containers := mockSetupContainers(containerDefaults) - - OverrideClusterContainerImages(containers) - - 
for _, container := range containers { - containerDefault, ok := containerDefaults[container.Name] - - if !ok { - t.Errorf("could not find container %q", container.Name) - return - } - - if containerDefault.image != container.Image { - t.Errorf("image overwritten when it should not have been. expected %q actual %q", - containerDefault.image, container.Image) - } - } - }) - - // test overriding each container and ensure that it takes in the container - // slice. Skip the "future" container, that will be in an upcoming test - for name, defaults := range containerDefaults { - if name == "future" { - continue - } - - t.Run(fmt.Sprintf("override %s", name), func(t *testing.T) { - // override the struct that contains the value - ContainerImageOverrides[defaults.image] = "overridden" - containers := mockSetupContainers(containerDefaults) - - OverrideClusterContainerImages(containers) - - // determine if this container is overridden - for _, container := range containers { - containerDefault, ok := containerDefaults[container.Name] - - if !ok { - t.Errorf("could not find container %q", container.Name) - return - } - - if containerDefault.name == name && containerDefault.image == container.Image { - t.Errorf("container %q not overwritten. image name is %q", - containerDefault.name, container.Image) - } - } - // unoverride at the end of the test - delete(ContainerImageOverrides, defaults.image) - }) - } - - // test that future does not get overridden - t.Run("do not override unmanaged container", func(t *testing.T) { - ContainerImageOverrides["crunchy-future"] = "overridden" - containers := mockSetupContainers(containerDefaults) - - OverrideClusterContainerImages(containers) - - // determine if this container is overridden - for _, container := range containers { - containerDefault, ok := containerDefaults[container.Name] - - if !ok { - t.Errorf("could not find container %q", container.Name) - return - } - - if containerDefault.name == "future" && containerDefault.image != container.Image { - t.Errorf("image overwritten when it should not have been. expected %q actual %q", - containerDefault.image, container.Image) - } - } - - delete(ContainerImageOverrides, "crunchy-future") - }) - - // test that gis can be overridden - t.Run("override postgis", func(t *testing.T) { - defaults := containerDefaults - - defaults["database"] = struct { - name string - image string - }{ - name: "database", - image: config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_GIS_HA, - } - containers := mockSetupContainers(defaults) - - ContainerImageOverrides[config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_GIS_HA] = "overridden" - - OverrideClusterContainerImages(containers) - - // determine if this container is overridden - for _, container := range containers { - containerDefault, ok := containerDefaults[container.Name] - - if !ok { - t.Errorf("could not find container %q", container.Name) - return - } - - if containerDefault.name == "database" && containerDefault.image == container.Image { - t.Errorf("container %q not overwritten. image name is %q", - containerDefault.name, container.Image) - } - } - - delete(ContainerImageOverrides, config.CONTAINER_IMAGE_CRUNCHY_POSTGRES_GIS_HA) - }) -} - -func TestGetPgbackrestBootstrapS3EnvVars(t *testing.T) { - - // create a fake client that will be used to "fake" the initialization of the operator for - // this test - fakePGOClient, err := fakekubeapi.NewFakePGOClient() - if err != nil { - t.Fatal(err) - } - // now initialize the operator using the fake client. 
This loads various configs, templates, - // global vars, etc. as needed to run the tests below - Initialize(fakePGOClient) - - // create a mock backrest repo secret with default values populated for the various S3 - // annotations - mockBackRestRepoSecret := v1.Secret{ - ObjectMeta: metav1.ObjectMeta{ - Annotations: map[string]string{ - config.ANNOTATION_S3_BUCKET: "bucket", - config.ANNOTATION_S3_ENDPOINT: "endpoint", - config.ANNOTATION_S3_REGION: "region", - config.ANNOTATION_S3_URI_STYLE: "path", - config.ANNOTATION_S3_VERIFY_TLS: "false", - }, - }, - } - defaultRestoreFromCluster := "restoreFromCluster" - - type Env struct { - EnvVars []corev1.EnvVar - } - - // test all env vars are properly set according the contents of an existing pgBackRest - // repo secret - t.Run("populate from secret", func(t *testing.T) { - - backRestRepoSecret := mockBackRestRepoSecret.DeepCopy() - s3EnvVars := GetPgbackrestBootstrapS3EnvVars(defaultRestoreFromCluster, backRestRepoSecret) - // massage the results a bit so that we can parse as proper JSON to validate contents - s3EnvVarsJSON := strings.TrimSuffix(`{"EnvVars": [`+s3EnvVars, ",\n") + "]}" - - s3Env := &Env{} - if err := json.Unmarshal([]byte(s3EnvVarsJSON), s3Env); err != nil { - t.Fatal(err) - } - - for _, envVar := range s3Env.EnvVars { - validValue := true - switch envVar.Name { - case "PGBACKREST_REPO1_S3_BUCKET": - validValue = (envVar.Value == mockBackRestRepoSecret. - GetAnnotations()[config.ANNOTATION_S3_BUCKET]) - case "PGBACKREST_REPO1_S3_ENDPOINT": - validValue = (envVar.Value == mockBackRestRepoSecret. - GetAnnotations()[config.ANNOTATION_S3_ENDPOINT]) - case "PGBACKREST_REPO1_S3_REGION": - validValue = (envVar.Value == mockBackRestRepoSecret. - GetAnnotations()[config.ANNOTATION_S3_REGION]) - case "PGBACKREST_REPO1_S3_URI_STYLE": - validValue = (envVar.Value == mockBackRestRepoSecret. - GetAnnotations()[config.ANNOTATION_S3_URI_STYLE]) - case "PGHA_PGBACKREST_S3_VERIFY_TLS": - validValue = (envVar.Value == mockBackRestRepoSecret. 
- GetAnnotations()[config.ANNOTATION_S3_VERIFY_TLS]) - case "PGBACKREST_REPO1_S3_KEY": - validValue = (envVar.ValueFrom.SecretKeyRef.Name == - fmt.Sprintf(util.BackrestRepoSecretName, defaultRestoreFromCluster)) && - (envVar.ValueFrom.SecretKeyRef.Key == - util.BackRestRepoSecretKeyAWSS3KeyAWSS3Key) - case "PGBACKREST_REPO1_S3_KEY_SECRET": - validValue = (envVar.ValueFrom.SecretKeyRef.Name == - fmt.Sprintf(util.BackrestRepoSecretName, defaultRestoreFromCluster)) && - (envVar.ValueFrom.SecretKeyRef.Key == - util.BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret) - } - if !validValue { - t.Errorf("Invalid value for env var %s", envVar.Name) - } - } - }) - - // test that the proper default S3 URI style is set for the bootstrap S3 env vars when the - // S3 URI style annotation is an empty string in a pgBackRest repo secret - t.Run("default URI style", func(t *testing.T) { - - // the expected default for the pgBackRest URI style - defaultURIStyle := "host" - - backRestRepoSecret := mockBackRestRepoSecret.DeepCopy() - // set the URI style annotation to an empty string so that we can ensure the proper - // default is set when no URI style annotation value is present - backRestRepoSecret.GetAnnotations()[config.ANNOTATION_S3_URI_STYLE] = "" - - s3EnvVars := GetPgbackrestBootstrapS3EnvVars("restoreFromCluster", backRestRepoSecret) - // massage the results a bit so that we can parse as proper JSON to validate contents - s3EnvVarsJSON := strings.TrimSuffix(`{"EnvVars": [`+s3EnvVars, ",\n") + "]}" - - s3Env := &Env{} - if err := json.Unmarshal([]byte(s3EnvVarsJSON), s3Env); err != nil { - t.Error(err) - } - - validValue := false - for _, envVar := range s3Env.EnvVars { - if envVar.Name == "PGBACKREST_REPO1_S3_URI_STYLE" && - envVar.Value == defaultURIStyle { - validValue = true - } - } - if !validValue { - t.Errorf("Invalid default URI style, it should be '%s'", defaultURIStyle) - } - }) -} diff --git a/internal/operator/common.go b/internal/operator/common.go deleted file mode 100644 index 2d4360deb7..0000000000 --- a/internal/operator/common.go +++ /dev/null @@ -1,410 +0,0 @@ -package operator - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "os" - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/ns" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - - v1 "k8s.io/api/core/v1" - "k8s.io/client-go/kubernetes" -) - -const ( - // defaultRegistry is the default registry to pull the container images from - defaultRegistry = "registry.developers.crunchydata.com/crunchydata" -) - -var CRUNCHY_DEBUG bool -var NAMESPACE string - -var InstallationName string -var PgoNamespace string -var EventTCPAddress = "localhost:4150" - -var Pgo config.PgoConfig - -// ContainerImageOverrides contains a list of container images that are -// overridden by the RELATED_IMAGE_* environmental variables that can be set by -// people deploying the Operator -var ContainerImageOverrides = map[string]string{} - -// NamespaceOperatingMode defines the namespace operating mode for the cluster, -// e.g. "dynamic", "readonly" or "disabled". See type NamespaceOperatingMode -// for detailed explanations of each mode available. -var namespaceOperatingMode ns.NamespaceOperatingMode - -type containerResourcesTemplateFields struct { - // LimitsMemory and LimitsCPU detemrine the memory/CPU limits - LimitsMemory, LimitsCPU string - // RequestsMemory and RequestsCPU determine how much memory/CPU resources to - // request - RequestsMemory, RequestsCPU string -} - -func Initialize(clientset kubernetes.Interface) { - - tmp := os.Getenv("CRUNCHY_DEBUG") - if tmp == "true" { - CRUNCHY_DEBUG = true - log.Debug("CRUNCHY_DEBUG flag set to true") - } else { - CRUNCHY_DEBUG = false - log.Info("CRUNCHY_DEBUG flag set to false") - } - - NAMESPACE = os.Getenv("NAMESPACE") - log.Infof("NAMESPACE %s", NAMESPACE) - - InstallationName = os.Getenv("PGO_INSTALLATION_NAME") - log.Infof("InstallationName %s", InstallationName) - if InstallationName == "" { - log.Error("PGO_INSTALLATION_NAME env var is required") - os.Exit(2) - } - - PgoNamespace = os.Getenv("PGO_OPERATOR_NAMESPACE") - if PgoNamespace == "" { - log.Error("PGO_OPERATOR_NAMESPACE environment variable is not set and is required, this is the namespace that the Operator is to run within.") - os.Exit(2) - } - - var err error - - err = Pgo.GetConfig(clientset, PgoNamespace) - if err != nil { - log.Error(err) - log.Error("pgo-config files and templates did not load") - os.Exit(2) - } - - log.Printf("PrimaryStorage=%v\n", Pgo.Storage["storage1"]) - - if Pgo.Cluster.CCPImagePrefix == "" { - log.Debugf("pgo.yaml CCPImagePrefix not set, using default %q", defaultRegistry) - Pgo.Cluster.CCPImagePrefix = defaultRegistry - } else { - log.Debugf("pgo.yaml CCPImagePrefix set, using %s", Pgo.Cluster.CCPImagePrefix) - } - if Pgo.Pgo.PGOImagePrefix == "" { - log.Debugf("pgo.yaml PGOImagePrefix not set, using default %q", defaultRegistry) - Pgo.Pgo.PGOImagePrefix = defaultRegistry - } else { - log.Debugf("PGOImagePrefix set, using %s", Pgo.Pgo.PGOImagePrefix) - } - - if Pgo.Cluster.PgmonitorPassword == "" { - log.Debug("pgo.yaml PgmonitorPassword not set, using default") - Pgo.Cluster.PgmonitorPassword = "password" - } - - // In a RELATED_IMAGE_* world, this does not _need_ to be set, but our - // installer does set it up so we could be ok... 
- if Pgo.Pgo.PGOImageTag == "" { - log.Error("pgo.yaml PGOImageTag not set, required ") - os.Exit(2) - } - - // initialize any container image overrides that are set by the "RELATED_*" - // variables - initializeContainerImageOverrides() - - tmp = os.Getenv("EVENT_TCP_ADDRESS") - if tmp != "" { - EventTCPAddress = tmp - } - log.Info("EventTCPAddress set to " + EventTCPAddress) - - // set controller refresh intervals and worker counts - initializeControllerRefreshIntervals() - initializeControllerWorkerCounts() -} - -// GetPodSecurityContext will generate the security context required for a -// Deployment by incorporating the standard fsGroup for the user that runs the -// container (typically the "postgres" user), and adds any supplemental groups -// that may need to be added, e.g. for NFS storage. -// -// Following the legacy method, this returns a JSON string, which will be -// modified in the future. Mainly this is transitioning from the legacy function -// by adding the expected types -func GetPodSecurityContext(supplementalGroups []int64) string { - // set up the security context struct - securityContext := v1.PodSecurityContext{ - // add any supplemental groups that the user passed in - SupplementalGroups: supplementalGroups, - } - - // determine if we should use the PostgreSQL FSGroup. - if !Pgo.Cluster.DisableFSGroup { - // we store the PostgreSQL FSGroup in this constant as an int64, so it's - // just carried over - securityContext.FSGroup = &crv1.PGFSGroup - } - - // ...convert to JSON. Errors are ignored - doc, err := json.Marshal(securityContext) - - // if there happens to be an error, warn about it - if err != nil { - log.Warn(err) - } - - // for debug purposes, we can look at the document - log.Debug(doc) - - // return a string of the security context - return string(doc) -} - -// GetResourcesJSON is a pseudo-legacy method that creates JSON that applies the -// CPU and Memory settings. 
The settings are only included if: -// a) they exist -// b) they are nonzero -func GetResourcesJSON(resources, limits v1.ResourceList) string { - fields := containerResourcesTemplateFields{} - - // first, if the contents of the resources list happen to be nil, exit out - if resources == nil && limits == nil { - return "" - } - - if resources != nil { - if resources.Cpu() != nil && !resources.Cpu().IsZero() { - fields.RequestsCPU = resources.Cpu().String() - } - - if resources.Memory() != nil && !resources.Memory().IsZero() { - fields.RequestsMemory = resources.Memory().String() - } - } - - if limits != nil { - if limits.Cpu() != nil && !limits.Cpu().IsZero() { - fields.LimitsCPU = limits.Cpu().String() - } - - if limits.Memory() != nil && !limits.Memory().IsZero() { - fields.LimitsMemory = limits.Memory().String() - } - } - - doc := bytes.Buffer{} - - if err := config.ContainerResourcesTemplate.Execute(&doc, fields); err != nil { - log.Error(err) - return "" - } - - if log.GetLevel() == log.DebugLevel { - config.ContainerResourcesTemplate.Execute(os.Stdout, fields) - } - - return doc.String() -} - -// GetRepoType returns the proper repo type to set in container based on the -// backrest storage type provided -func GetRepoType(backrestStorageType string) string { - if backrestStorageType != "" && backrestStorageType == "s3" { - return "s3" - } else { - return "posix" - } -} - -// IsLocalAndS3Storage a boolean indicating whether or not local and s3 storage should -// be enabled for pgBackRest based on the backrestStorageType string provided -func IsLocalAndS3Storage(backrestStorageType string) bool { - if backrestStorageType != "" && strings.Contains(backrestStorageType, "s3") && - strings.Contains(backrestStorageType, "local") { - return true - } - return false -} - -// SetContainerImageOverride determines if there is an override available for -// a container image, and sets said value on the Kubernetes Container image -// definition -func SetContainerImageOverride(containerImageName string, container *v1.Container) { - // if a container image name override is available, set it! - overrideImageName := ContainerImageOverrides[containerImageName] - - if overrideImageName != "" { - log.Debugf("overriding image %s with %s", containerImageName, overrideImageName) - - container.Image = overrideImageName - } -} - -// initializeContainerImageOverrides initializes the container image overrides -// that could be set if there are any `RELATED_IMAGE_*` environmental variables -func initializeContainerImageOverrides() { - // the easiest way to handle this is to iterate over the RelatedImageMap, - // check if said image exist in the environmental variable, and if it does - // load it in as an override. Otherwise, ignore. - for relatedImageEnvVar, imageName := range config.RelatedImageMap { - // see if the envirionmental variable overrides the image name or not - overrideImageName := os.Getenv(relatedImageEnvVar) - - // if it is overridden, set the image name the map - if overrideImageName != "" { - ContainerImageOverrides[imageName] = overrideImageName - log.Infof("image %s overridden by: %s", imageName, overrideImageName) - } - } -} - -// initControllerRefreshIntervals initializes the refresh intervals for any informers -// created by the Operator requiring a refresh interval. This includes first attempting -// to utilize the refresh interval(s) defined in the pgo.yaml config file, and if not -// present then falling back to a default value. 
-func initializeControllerRefreshIntervals() { - // set the namespace controller refresh interval if not provided in the pgo.yaml - if Pgo.Pgo.NamespaceRefreshInterval == nil { - log.Debugf("NamespaceRefreshInterval not set, defaulting to %d seconds", - config.DefaultNamespaceRefreshInterval) - defaultVal := int(config.DefaultNamespaceRefreshInterval) - Pgo.Pgo.NamespaceRefreshInterval = &defaultVal - } else { - log.Debugf("NamespaceRefreshInterval is set, using %d seconds", - *Pgo.Pgo.NamespaceRefreshInterval) - } - - // set the default controller group refresh interval if not provided in the pgo.yaml - if Pgo.Pgo.ControllerGroupRefreshInterval == nil { - log.Debugf("ControllerGroupRefreshInterval not set, defaulting to %d seconds", - config.DefaultControllerGroupRefreshInterval) - defaultVal := int(config.DefaultControllerGroupRefreshInterval) - Pgo.Pgo.ControllerGroupRefreshInterval = &defaultVal - } else { - log.Debugf("ControllerGroupRefreshInterval is set, using %d seconds", - *Pgo.Pgo.ControllerGroupRefreshInterval) - } -} - -// initControllerWorkerCounts sets the number of workers that will be created for any worker -// queues created within the various controllers created by the Operator. This includes first -// attempting to utilize the worker counts defined in the pgo.yaml config file, and if not -// present then falling back to a default value. -func initializeControllerWorkerCounts() { - - if Pgo.Pgo.ConfigMapWorkerCount == nil { - log.Debugf("ConfigMapWorkerCount not set, defaulting to %d worker(s)", - config.DefaultConfigMapWorkerCount) - defaultVal := int(config.DefaultConfigMapWorkerCount) - Pgo.Pgo.ConfigMapWorkerCount = &defaultVal - } else { - log.Debugf("ConfigMapWorkerCount is set, using %d worker(s)", - *Pgo.Pgo.ConfigMapWorkerCount) - } - - if Pgo.Pgo.NamespaceWorkerCount == nil { - log.Debugf("NamespaceWorkerCount not set, defaulting to %d worker(s)", - config.DefaultNamespaceWorkerCount) - defaultVal := int(config.DefaultNamespaceWorkerCount) - Pgo.Pgo.NamespaceWorkerCount = &defaultVal - } else { - log.Debugf("NamespaceWorkerCount is set, using %d worker(s)", - *Pgo.Pgo.NamespaceWorkerCount) - } - - if Pgo.Pgo.PGClusterWorkerCount == nil { - log.Debugf("PGClusterWorkerCount not set, defaulting to %d worker(s)", - config.DefaultPGClusterWorkerCount) - defaultVal := int(config.DefaultPGClusterWorkerCount) - Pgo.Pgo.PGClusterWorkerCount = &defaultVal - } else { - log.Debugf("PGClusterWorkerCount is set, using %d worker(s)", - *Pgo.Pgo.PGClusterWorkerCount) - } - - if Pgo.Pgo.PGReplicaWorkerCount == nil { - log.Debugf("PGReplicaWorkerCount not set, defaulting to %d worker(s)", - config.DefaultPGReplicaWorkerCount) - defaultVal := int(config.DefaultPGReplicaWorkerCount) - Pgo.Pgo.PGReplicaWorkerCount = &defaultVal - } else { - log.Debugf("PGReplicaWorkerCount is set, using %d worker(s)", - *Pgo.Pgo.PGReplicaWorkerCount) - } - - if Pgo.Pgo.PGTaskWorkerCount == nil { - log.Debugf("PGTaskWorkerCount not set, defaulting to %d worker(s)", - config.DefaultPGTaskWorkerCount) - defaultVal := int(config.DefaultPGTaskWorkerCount) - Pgo.Pgo.PGTaskWorkerCount = &defaultVal - } else { - log.Debugf("PGTaskWorkerCount is set, using %d worker(s)", - *Pgo.Pgo.PGTaskWorkerCount) - } -} - -// SetupNamespaces is responsible for the initial namespace configuration for the Operator -// install. 
This includes setting the proper namespace operating mode, creating and/or updating -// namespaces as needed (or as permitted by the current operator mode), and returning a valid list -// of namespaces for the current Operator install. -func SetupNamespaces(clientset kubernetes.Interface) ([]string, error) { - - // First set the proper namespace operating mode for the Operator install. The mode identified - // determines whether or not certain namespace capabilities are enabled. - if err := setNamespaceOperatingMode(clientset); err != nil { - log.Errorf("Error detecting namespace operating mode: %v", err) - return nil, err - } - log.Debugf("Namespace operating mode is '%s'", NamespaceOperatingMode()) - - namespaceList, err := ns.GetInitialNamespaceList(clientset, NamespaceOperatingMode(), - InstallationName, PgoNamespace) - if err != nil { - return nil, err - } - - // proceed with creating and/or updating any namespaces provided for the installation - if err := ns.ConfigureInstallNamespaces(clientset, InstallationName, - PgoNamespace, namespaceList, NamespaceOperatingMode()); err != nil { - log.Errorf("Unable to setup namespaces: %v", err) - return nil, err - } - - return namespaceList, nil -} - -// setNamespaceOperatingMode set the namespace operating mode for the Operator by calling the -// proper utility function to determine which mode is applicable based on the current -// permissions assigned to the Operator Service Account. -func setNamespaceOperatingMode(clientset kubernetes.Interface) error { - nsOpMode, err := ns.GetNamespaceOperatingMode(clientset) - if err != nil { - return err - } - namespaceOperatingMode = nsOpMode - - return nil -} - -// NamespaceOperatingMode returns the namespace operating mode for the current Operator -// installation, which is stored in the "namespaceOperatingMode" variable -func NamespaceOperatingMode() ns.NamespaceOperatingMode { - return namespaceOperatingMode -} diff --git a/internal/operator/config/configutil.go b/internal/operator/config/configutil.go deleted file mode 100644 index af6bc62f35..0000000000 --- a/internal/operator/config/configutil.go +++ /dev/null @@ -1,76 +0,0 @@ -package config - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "errors" - "fmt" - - "github.com/crunchydata/postgres-operator/internal/util" - corev1 "k8s.io/api/core/v1" - "k8s.io/client-go/kubernetes" - - "k8s.io/apimachinery/pkg/types" -) - -const ( - // pghaConfigMapName represents the name of the PGHA configMap created for each cluster, which - // has the name "-pgha-config" - // pghaConfigMapName = "%s-pgha-config" - // pghaDCSConfigName represents the name of the DCS configuration stored in the - // "-pgha-config" configMap, which is "-dcs-config" - // PGHADCSConfigName = "%s-dcs-config" - // pghaLocalConfigName represents the name of the local configuration stored for each database - // server in the "-pgha-config" configMap, which is "-local-config" - // pghaLocalConfigName = "%s-local-config" - // - pghLocalConfigSuffix = "-local-config" -) - -var ( - // ErrMissingClusterConfig is the error thrown when configuration is missing from a configMap - ErrMissingClusterConfig error = errors.New("Configuration is missing from configMap") -) - -// Syncer defines a resource that is able to sync its configuration stored configuration with a -// service, application, etc. -type Syncer interface { - Sync() error -} - -// patchConfigMapData replaces the configuration stored the configuration specified with the -// provided content -func patchConfigMapData(kubeclientset kubernetes.Interface, configMap *corev1.ConfigMap, - configName string, content []byte) error { - - jsonOp := []util.JSONPatchOperation{{ - Op: "replace", - Path: fmt.Sprintf("/data/%s", configName), - Value: string(content), - }} - jsonOpBytes, err := json.Marshal(jsonOp) - if err != nil { - return err - } - - if _, err := kubeclientset.CoreV1().ConfigMaps(configMap.GetNamespace()).Patch(configMap.GetName(), - types.JSONPatchType, jsonOpBytes); err != nil { - return err - } - - return nil -} diff --git a/internal/operator/config/dcs.go b/internal/operator/config/dcs.go deleted file mode 100644 index 7b5c721c5f..0000000000 --- a/internal/operator/config/dcs.go +++ /dev/null @@ -1,324 +0,0 @@ -package config - -/* - Copyright 2020 Crunchy Data Solutions, Ind. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "errors" - "fmt" - "reflect" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/util" - log "github.com/sirupsen/logrus" - corev1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/kubernetes" - "sigs.k8s.io/yaml" -) - -const ( - // PGHADCSConfigName represents the name of the DCS configuration stored in the - // "-pgha-config" configMap, which is "-dcs-config" - PGHADCSConfigName = "%s-dcs-config" - // DCSConfigMapName represents the name of the DCS configMap created for each cluster, which - // has the name "-config" - dcsConfigMapName = "%s-config" - // dcsConfigAnnotation represents that name of the annotation used to store the cluster's DCS - // configuration - dcsConfigAnnotation = "config" -) - -// DCS configures the DCS configuration settings for a specific PG cluster. -type DCS struct { - kubeclientset kubernetes.Interface - configMap *corev1.ConfigMap - configName string - clusterScope string -} - -// DCSConfig represents the cluster-wide configuration that is stored in the Distributed -// Configuration Store (DCS). -type DCSConfig struct { - LoopWait int `json:"loop_wait,omitempty"` - TTL int `json:"ttl,omitempty"` - RetryTimeout int `json:"retry_timeout,omitempty"` - MaximumLagOnFailover int `json:"maximum_lag_on_failover,omitempty"` - MasterStartTimeout int `json:"master_start_timeout,omitempty"` - SynchronousMode bool `json:"synchronous_mode,omitempty"` - SynchronousModeStrict bool `json:"synchronous_mode_strict,omitempty"` - PostgreSQL *PostgresDCS `json:"postgresql,omitempty"` - StandbyCluster *StandbyDCS `json:"standby_cluster,omitempty"` - Slots map[string]SlotDCS `json:"slots,omitempty"` -} - -// PostgresDCS represents the PostgreSQL settings that can be applied cluster-wide to a -// PostgreSQL cluster via the DCS. -type PostgresDCS struct { - UsePGRewind bool `json:"use_pg_rewind,omitempty"` - UseSlots bool `json:"use_slots,omitempty"` - RecoveryConf map[string]interface{} `json:"recovery_conf,omitempty"` - Parameters map[string]interface{} `json:"parameters,omitempty"` -} - -// StandbyDCS represents standby cluster settings that can be applied cluster-wide via the DCS. -type StandbyDCS struct { - Host string `json:"host,omitempty"` - Port int `json:"port,omitempty"` - PrimarySlotName map[string]interface{} `json:"primary_slot_name,omitempty"` - CreateReplicaMethods []string `json:"create_replica_methods,omitempty"` - RestoreCommand string `json:"restore_command,omitempty"` - ArchiveCleanupCommand string `json:"archive_cleanup_command,omitempty"` - RecoveryMinApplyDelay int `json:"recovery_min_apply_delay,omitempty"` -} - -// SlotDCS represents slot settings that can be applied cluster-wide via the DCS. -type SlotDCS struct { - Type string `json:"type,omitempty"` - Database string `json:"database,omitempty"` - Plugin string `json:"plugin,omitempty"` -} - -// NewDCS creates a new DCS config struct using the configMap provided. The DCSConfig will -// include a configMap that will be used to configure the DCS for a specific cluster. 
-func NewDCS(configMap *corev1.ConfigMap, kubeclientset kubernetes.Interface, - clusterScope string) *DCS { - - clusterName := configMap.GetLabels()[config.LABEL_PG_CLUSTER] - - return &DCS{ - kubeclientset: kubeclientset, - configMap: configMap, - configName: fmt.Sprintf(PGHADCSConfigName, clusterName), - clusterScope: clusterScope, - } -} - -// Sync attempts to apply all configuration in the the DCSConfig's configMap. If the DCS -// configuration is missing from the configMap, then and attempt is made to add it by refreshing -// the DCS configuration. -func (d *DCS) Sync() error { - - clusterName := d.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER] - namespace := d.configMap.GetObjectMeta().GetNamespace() - - log.Debugf("Cluster Config: syncing DCS config for cluster %s (namespace %s)", clusterName, - namespace) - - if err := d.apply(); err != nil && - errors.Is(err, ErrMissingClusterConfig) { - - if err := d.refresh(); err != nil { - return err - } - } else if err != nil { - return err - } - - log.Debugf("Cluster Config: finished syncing DCS config for cluster %s (namespace %s)", - clusterName, namespace) - - return nil -} - -// Update updates the contents of the DCS configuration stored within the configMap included -// in the DCS. -func (d *DCS) Update(dcsConfig *DCSConfig) error { - - clusterName := d.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER] - namespace := d.configMap.GetObjectMeta().GetNamespace() - - log.Debugf("Cluster Config: updating DCS config for cluster %s (namespace %s)", clusterName, - namespace) - - content, err := yaml.Marshal(dcsConfig) - if err != nil { - return err - } - - if err := patchConfigMapData(d.kubeclientset, d.configMap, d.configName, content); err != nil { - return err - } - - log.Debugf("Cluster Config: successfully updated DCS config for cluster %s (namespace %s)", - clusterName, namespace) - - return nil -} - -// apply applies the DCS configuration stored in the ClusterConfig's configMap to the cluster's -// DCS. Specicially, it updates the cluster's DCS, i.e. the the "config" annotation of the -// "-config" configMap, with the contents of the "" -// configuration included in the DCS's configMap. 
-func (d *DCS) apply() error { - - clusterName := d.configMap.GetLabels()[config.LABEL_PG_CLUSTER] - namespace := d.configMap.GetObjectMeta().GetNamespace() - - log.Debugf("Cluster Config: applying DCS config to cluster %s in namespace %s", clusterName, - namespace) - - // first grab the DCS config from the PGHA config map - dcsConfig, rawDCS, err := d.GetDCSConfig() - if err != nil { - return err - } - - // next grab the current/live DCS from the "config" annotation of the Patroni configMap - clusterDCS, rawClusterDCS, err := d.getClusterDCSConfig() - if err != nil { - return err - } - - // if the DCS contents are equal then no further action is needed - if reflect.DeepEqual(dcsConfig, clusterDCS) { - log.Debugf("Cluster Config: DCS config for cluster %s in namespace %s is up-to-date, "+ - "nothing to apply", clusterName, namespace) - return nil - } - - // ensure the current "pause" setting is not overridden if currently set for the cluster - if _, ok := rawClusterDCS["pause"]; ok { - rawDCS["pause"] = rawClusterDCS["pause"] - } - - // proceed with updating the DCS with the contents of the configMap - dcsConfigJSON, err := json.Marshal(rawDCS) - if err != nil { - return err - } - - if err := d.patchDCSAnnotation(string(dcsConfigJSON)); err != nil { - return err - } - - log.Debugf("Cluster Config: successfully applied DCS to cluster %s in namespace %s", - clusterName, namespace) - - return nil -} - -// getClusterDCSConfig obtains the configuration that is currently stored in the cluster's DCS. -// Specifically, it obtains the configuration stored in the "config" annotation of the -// "-config" configMap. -func (d *DCS) getClusterDCSConfig() (*DCSConfig, map[string]json.RawMessage, error) { - - clusterDCS := &DCSConfig{} - - namespace := d.configMap.GetObjectMeta().GetNamespace() - - dcsCM, err := d.kubeclientset.CoreV1().ConfigMaps(namespace). - Get(fmt.Sprintf(dcsConfigMapName, d.clusterScope), metav1.GetOptions{}) - if err != nil { - return nil, nil, err - } - - config, ok := dcsCM.GetObjectMeta().GetAnnotations()[dcsConfigAnnotation] - if !ok { - return nil, nil, util.ErrMissingConfigAnnotation - } - - if err := json.Unmarshal([]byte(config), clusterDCS); err != nil { - return nil, nil, err - } - - var rawJSON map[string]json.RawMessage - if err := json.Unmarshal([]byte(config), &rawJSON); err != nil { - return nil, nil, err - } - - return clusterDCS, rawJSON, nil -} - -// GetDCSConfig returns the current DCS configuration included in the ClusterConfig's -// configMap, i.e. the contents of the "" configuration unmarshalled -// into a DCSConfig struct. -func (d *DCS) GetDCSConfig() (*DCSConfig, map[string]json.RawMessage, error) { - - dcsYAML, ok := d.configMap.Data[d.configName] - if !ok { - return nil, nil, ErrMissingClusterConfig - } - - dcsConfig := &DCSConfig{} - - if err := yaml.Unmarshal([]byte(dcsYAML), dcsConfig); err != nil { - return nil, nil, err - } - - var rawJSON map[string]json.RawMessage - if err := yaml.Unmarshal([]byte(dcsYAML), &rawJSON); err != nil { - return nil, nil, err - } - - return dcsConfig, rawJSON, nil -} - -// patchDCSAnnotation patches the "config" annotation within the DCS configMap with the -// content provided. 
-func (d *DCS) patchDCSAnnotation(content string) error { - - jsonOp := []util.JSONPatchOperation{{ - Op: "replace", - Path: fmt.Sprintf("/metadata/annotations/%s", dcsConfigAnnotation), - Value: content, - }} - jsonOpBytes, err := json.Marshal(jsonOp) - if err != nil { - return err - } - - if _, err := d.kubeclientset.CoreV1().ConfigMaps(d.configMap.GetNamespace()).Patch( - fmt.Sprintf(dcsConfigMapName, d.clusterScope), types.JSONPatchType, - jsonOpBytes); err != nil { - return err - } - - return nil -} - -// refresh updates the DCS configuration stored in the "-pgha-config" -// configMap with the current DCS configuration for the cluster. Specifically, it is updated with -// the configuration stored in the "config" annotation of the "-config" configMap. -func (d *DCS) refresh() error { - - clusterName := d.configMap.Labels[config.LABEL_PG_CLUSTER] - namespace := d.configMap.GetObjectMeta().GetNamespace() - - log.Debugf("Cluster Config: refreshing DCS config for cluster %s (namespace %s)", clusterName, - namespace) - - clusterDCS, _, err := d.getClusterDCSConfig() - if err != nil { - return err - } - - clusterDCSBytes, err := yaml.Marshal(clusterDCS) - if err != nil { - return err - } - - if err := patchConfigMapData(d.kubeclientset, d.configMap, d.configName, - clusterDCSBytes); err != nil { - return err - } - - log.Debugf("Cluster Config: successfully refreshed DCS config for cluster %s (namespace %s)", - clusterName, namespace) - - return nil -} diff --git a/internal/operator/config/localdb.go b/internal/operator/config/localdb.go deleted file mode 100644 index 0e3c182e17..0000000000 --- a/internal/operator/config/localdb.go +++ /dev/null @@ -1,437 +0,0 @@ -package config - -/* - Copyright 2020 Crunchy Data Solutions, Inl. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "errors" - "fmt" - "strings" - "sync" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/util" - - log "github.com/sirupsen/logrus" - corev1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" - "sigs.k8s.io/yaml" -) - -var ( - // readConfigCMD is the command used to read local cluster configuration in a database - // container - readConfigCMD []string = []string{"bash", "-c", - "/opt/cpm/bin/yq r /tmp/postgres-ha-bootstrap.yaml postgresql | " + - "/opt/cpm/bin/yq p - postgresql", - } - // applyAndReloadConfigCMD is the command for calling the script to apply and reload the local - // configuration for a database container. The required arguments are appended to this command - // when the script is called. 
- applyAndReloadConfigCMD []string = []string{"/opt/cpm/bin/common/pgha-reload-local.sh"} - - // pghaLocalConfigName represents the name of the local configuration stored for each database - // server in the "-pgha-config" configMap, which is "-local-config" - pghaLocalConfigName = "%s-local-config" - // pghaLocalConfigSuffix is the suffix for a local server configuration - pghaLocalConfigSuffix = "-local-config" -) - -// LocalDB configures the local configuration settings for a specific database server within a -// PG cluster. -type LocalDB struct { - kubeclientset kubernetes.Interface - configMap *corev1.ConfigMap - configNames []string - restConfig *rest.Config -} - -// LocalDBConfig represents the local configuration for a specific PostgreSQL database server -// within a PostgreSQL cluster. Only user-facing configuration is exposed via this struct, -// and not any configuration that is controlled/managed by the Operator itself. -type LocalDBConfig struct { - PostgreSQL PostgresLocalDB `json:"postgresql,omitempty"` -} - -// PostgresLocalDB represents the PostgreSQL settings that can be applied to an individual -// PostgreSQL server within a PostgreSQL cluster. -type PostgresLocalDB struct { - // Authentication is the block for managing the Patroni managed accounts - // (superuser, replication, rewind). While the PostgreSQL Operator manages - // these overall, one may want to override them. We allow for this, but the - // deployer should take care when overriding this value - Authentication map[string]interface{} `json:"authentication,omitempty"` - Callbacks *Callbacks `json:"callbacks,omitempty"` - CreateReplicaMethods []string `json:"create_replica_methods,omitempty"` - ConfigDir string `json:"config_dir,omitempty"` - UseUnixSocket bool `json:"use_unix_socket,omitempty"` - PGPass string `json:"pgpass,omitempty"` - RecoveryConf map[string]interface{} `json:"recovery_conf,omitempty"` - CustomConf map[string]interface{} `json:"custom_conf,omitempty"` - Parameters map[string]interface{} `json:"parameters,omitempty"` - PGHBA []string `json:"pg_hba,omitempty"` - PGIdent []string `json:"pg_ident,omitempty"` - PGCTLTimeout int `json:"pg_ctl_timeout,omitempty"` - UsePGRewind bool `json:"use_pg_rewind,omitempty"` - RemoveDataDirectoryOnRewindFailure bool `json:"remove_data_directory_on_rewind_failure,omitempty"` - RemoveDataDirectoryOnDivergedTimelines bool `json:"remove_data_directory_on_diverged_timelines,omitempty"` - PGBackRest *CreateReplicaMethod `json:"pgbackrest,omitempty"` - PGBackRestStandby *CreateReplicaMethod `json:"pgbackrest_standby,omitempty"` -} - -// Callbacks defines the various Patroni callbacks -type Callbacks struct { - OnReload string `json:"on_reload,omitempty"` - OnRestart string `json:"on_restart,omitempty"` - OnRoleChange string `json:"on_role_change,omitempty"` - OnStart string `json:"on_start,omitempty"` - OnStop string `json:"on_stop,omitempty"` -} - -// CreateReplicaMethod represents a Patroni replica creation method -type CreateReplicaMethod struct { - Command string `json:"command,omitempty"` - KeepData bool `json:"keep_data,omitempty"` - NoParams bool `json:"no_params,omitempty"` - NoMaster int `json:"no_master,omitempty"` -} - -// NewLocalDB creates a new LocalDB, which includes a configMap that contains the local -// configuration settings for the database servers within a specific PG cluster. 
Additionally -// the LocalDB includes the client(s) and other applicable resources needed to access and modify -// various resources within the Kubernetes cluster in support of configuring the included database -// servers. -func NewLocalDB(configMap *corev1.ConfigMap, restConfig *rest.Config, - kubeclientset kubernetes.Interface) (*LocalDB, error) { - - clusterName := configMap.GetLabels()[config.LABEL_PG_CLUSTER] - namespace := configMap.GetObjectMeta().GetNamespace() - - configNames, err := GetLocalDBConfigNames(kubeclientset, clusterName, namespace) - if err != nil { - return nil, err - } - - return &LocalDB{ - kubeclientset: kubeclientset, - restConfig: restConfig, - configMap: configMap, - configNames: configNames, - }, nil -} - -// Sync attempts to apply all local database server configuration settings in the the LocalDB's configMap -// to the various servers included in the LocalDB. If the configuration for a server is missing from the -// configMap, then and attempt is made to add it by refreshing that specific configuration. Also, any -// configurations within the configMap associated with servers that no longer exist are removed. -func (l *LocalDB) Sync() error { - - clusterName := l.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER] - namespace := l.configMap.GetObjectMeta().GetNamespace() - - log.Debugf("Cluster Config: syncing local config for cluster %s (namespace %s)", clusterName, - namespace) - - var wg sync.WaitGroup - - wg.Add(1) - - // delete any configs that are in the configMap but don't have an associated DB server in the - // cluster - go func() { - l.clean() - wg.Done() - }() - - // attempt to apply local config - for _, configName := range l.configNames { - - wg.Add(1) - - go func(config string) { - - // attempt to apply DCS config - if err := l.apply(config); err != nil && - errors.Is(err, ErrMissingClusterConfig) { - - if err := l.refresh(config); err != nil { - // log the error and move on - log.Error(err) - } - } else if err != nil { - // log the error and move on - log.Error(err) - } - - wg.Done() - }(configName) - } - - wg.Wait() - - log.Debugf("Cluster Config: finished syncing config for cluster %s (namespace %s)", - clusterName, namespace) - - return nil -} - -// Update updates the contents of the configuration for a specific database server in -// the PG cluster, specifically within the configMap included in the LocalDB. -func (l *LocalDB) Update(configName string, localDBConfig LocalDBConfig) error { - - clusterName := l.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER] - namespace := l.configMap.GetObjectMeta().GetNamespace() - - log.Debugf("Cluster Config: updating local config %s in cluster %s "+ - "(namespace %s)", configName, clusterName, namespace) - - content, err := yaml.Marshal(localDBConfig) - if err != nil { - return err - } - - if err := patchConfigMapData(l.kubeclientset, l.configMap, configName, content); err != nil { - return err - } - - log.Debugf("Cluster Config: successfully updated local config %s in cluster %s "+ - "(namespace %s)", configName, clusterName, namespace) - - return nil -} - -// apply applies the configuration stored in the cluster ConfigMap for a specific database server -// to that server. This is done by updating the contents of that database server's local -// configuration with the configuration for that cluster stored in the LocalDB's configMap, and -// then issuing a Patroni "reload" for that specific server. 
-func (l *LocalDB) apply(configName string) error { - - clusterName := l.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER] - namespace := l.configMap.GetObjectMeta().GetNamespace() - - log.Debugf("Cluster Config: applying local config %s in cluster %s "+ - "(namespace %s)", configName, clusterName, namespace) - - localConfig, err := l.getLocalConfig(configName) - if err != nil { - return err - } - - // selector in the format "pg-cluster=,deployment-name=" - selector := fmt.Sprintf("%s=%s,%s=%s", config.LABEL_PG_CLUSTER, clusterName, - config.LABEL_DEPLOYMENT_NAME, strings.TrimSuffix(configName, pghLocalConfigSuffix)) - dbPodList, err := l.kubeclientset.CoreV1().Pods(namespace).List(metav1.ListOptions{ - LabelSelector: selector, - }) - if err != nil { - return err - } - // if the pod list is empty, also return an error - if len(dbPodList.Items) == 0 { - return fmt.Errorf("no pod found for %q", clusterName) - } - - dbPod := &dbPodList.Items[0] - - // add the config name and patroni port as params for the call to the apply & reload script - applyCommand := append(applyAndReloadConfigCMD, localConfig, config.DEFAULT_PATRONI_PORT) - - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(l.restConfig, l.kubeclientset, applyCommand, - dbPod.Spec.Containers[0].Name, dbPod.GetName(), namespace, nil) - - if err != nil { - log.Error(stderr, stdout) - return err - } - - log.Debugf("Cluster Config: successfully applied local config %s in cluster %s "+ - "(namespace %s)", configName, clusterName, namespace) - - return nil -} - -// clean removes any local database server configurations from the configMap included in the -// LocalDB if the database server they are associated with no longer exists -func (l *LocalDB) clean() error { - - var jsonPatch = []util.JSONPatchOperation{} - var cmlocalConfigs []string - - // first grab all current local configs from the configMap - for configName := range l.configMap.Data { - if strings.HasSuffix(configName, pghaLocalConfigSuffix) { - cmlocalConfigs = append(cmlocalConfigs, configName) - } - } - - // now see if any need to be deleted - for _, cmLocalConfig := range cmlocalConfigs { - deleteConfig := true - for _, managedConfigName := range l.configNames { - if cmLocalConfig == managedConfigName { - deleteConfig = false - break - } - } - if deleteConfig { - jsonPatch = append(jsonPatch, util.JSONPatchOperation{ - Op: "remove", - Path: fmt.Sprintf("/data/%s", cmLocalConfig), - }) - - } - } - - jsonOpBytes, err := json.Marshal(jsonPatch) - if err != nil { - return err - } - - if _, err := l.kubeclientset.CoreV1().ConfigMaps(l.configMap.GetNamespace()).Patch( - l.configMap.GetName(), types.JSONPatchType, jsonOpBytes); err != nil { - return err - } - - return nil -} - -// getLocalConfigFromCluster obtains the local configuration for a specific database server in the -// cluster. It also returns the Pod that is currently running that specific server. 
-func (l *LocalDB) getLocalConfigFromCluster(configName string) (*LocalDBConfig, error) { - - clusterName := l.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER] - namespace := l.configMap.GetObjectMeta().GetNamespace() - - // selector in the format "pg-cluster=,deployment-name=" - selector := fmt.Sprintf("%s=%s,%s=%s", config.LABEL_PG_CLUSTER, clusterName, - config.LABEL_DEPLOYMENT_NAME, strings.TrimSuffix(configName, pghLocalConfigSuffix)) - dbPodList, err := l.kubeclientset.CoreV1().Pods(namespace).List(metav1.ListOptions{ - LabelSelector: selector, - }) - - if err != nil { - return nil, err - } - - // if the pod list is empty, also return an error - if len(dbPodList.Items) == 0 { - return nil, fmt.Errorf("no pod found for %q", clusterName) - } - - dbPod := &dbPodList.Items[0] - - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(l.restConfig, l.kubeclientset, readConfigCMD, - dbPod.Spec.Containers[0].Name, dbPod.GetName(), namespace, nil) - if err != nil { - log.Errorf(stderr) - return nil, err - } - - // we unmarshall to ensure the configMap only contains the settings that we want to expose - // to the end-user - localDBConfig := &LocalDBConfig{} - if err := yaml.Unmarshal([]byte(stdout), localDBConfig); err != nil { - return nil, err - } - - return localDBConfig, nil -} - -// getLocalConfig returns the current local configuration included in the ClusterConfig's -// configMap for a specific database server, i.e. the contents of the "" -// configuration unmarshalled into a LocalConfig struct. -func (l *LocalDB) getLocalConfig(configName string) (string, error) { - - localYAML, ok := l.configMap.Data[configName] - if !ok { - return "", ErrMissingClusterConfig - } - - jsonConfig, err := yaml.YAMLToJSON([]byte(localYAML)) - if err != nil { - return "", err - } - - // decode just to ensure no disallowed fields in the config - dec := json.NewDecoder(strings.NewReader(string(jsonConfig))) - dec.DisallowUnknownFields() - if err := dec.Decode(&LocalDBConfig{}); err != nil { - return "", err - } - - return localYAML, nil -} - -// refresh updates the local configuration for a specific database server in the Refresh's -// configMap with the current local configuration for that server. Specifically, it is updated -// with the contents of the Patroni YAML configuration file stored in the container running the -// server. -func (l *LocalDB) refresh(configName string) error { - - clusterName := l.configMap.GetObjectMeta().GetLabels()[config.LABEL_PG_CLUSTER] - namespace := l.configMap.GetObjectMeta().GetNamespace() - - log.Debugf("Cluster Config: refreshing local config %s in cluster %s "+ - "(namespace %s)", configName, clusterName, namespace) - - localConfig, err := l.getLocalConfigFromCluster(configName) - if err != nil { - return err - } - - localConfigYAML, err := yaml.Marshal(localConfig) - if err != nil { - return err - } - - if err := patchConfigMapData(l.kubeclientset, l.configMap, configName, - localConfigYAML); err != nil { - return err - } - - log.Debugf("Cluster Config: successfully refreshed local %s in cluster %s "+ - "(namespace %s)", configName, clusterName, namespace) - - return nil -} - -// GetLocalDBConfigNames returns the names of the local configuration for each database server in -// the cluster as stored in the -pgha-config configMap per naming conventions. 
-func GetLocalDBConfigNames(kubeclientset kubernetes.Interface, clusterName, - namespace string) ([]string, error) { - - // selector in the format "pg-cluster=,pgo-pg-database" - // to get all db Deployments - selector := fmt.Sprintf("%s=%s,%s", config.LABEL_PG_CLUSTER, clusterName, - config.LABEL_PG_DATABASE) - dbDeploymentList, err := kubeclientset.AppsV1().Deployments(namespace).List(metav1.ListOptions{ - LabelSelector: selector, - }) - if err != nil { - return nil, err - } - - localConfigNames := make([]string, len(dbDeploymentList.Items)) - for i, deployment := range dbDeploymentList.Items { - localConfigNames[i] = fmt.Sprintf(pghaLocalConfigName, deployment.GetName()) - } - - return localConfigNames, nil -} diff --git a/internal/operator/operatorupgrade/version-check.go b/internal/operator/operatorupgrade/version-check.go deleted file mode 100644 index 81461417cb..0000000000 --- a/internal/operator/operatorupgrade/version-check.go +++ /dev/null @@ -1,85 +0,0 @@ -package operatorupgrade - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http:// www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - - "github.com/crunchydata/postgres-operator/internal/config" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -const ( - // ErrUnsuccessfulVersionCheck defines the error string that is displayed when a pgcluster - // version check in a target namespace is unsuccessful - ErrUnsuccessfulVersionCheck = "unsuccessful pgcluster version check" -) - -// CheckVersion looks at the Postgres Operator version information for existing pgclusters and replicas -// if the Operator version listed does not match the current Operator version, create an annotation indicating -// it has not been upgraded -func CheckVersion(clientset pgo.Interface, ns string) error { - // get all pgclusters - clusterList, err := clientset.CrunchydataV1().Pgclusters(ns).List(metav1.ListOptions{}) - if err != nil { - return fmt.Errorf("%s: %w", ErrUnsuccessfulVersionCheck, err) - } - - // where the Operator versions do not match, label the pgclusters accordingly - for _, cluster := range clusterList.Items { - if msgs.PGO_VERSION != cluster.Spec.UserLabels[config.LABEL_PGO_VERSION] { - log.Infof("operator version check - pgcluster %s version is currently %s, current version is %s", cluster.Name, cluster.Spec.UserLabels[config.LABEL_PGO_VERSION], msgs.PGO_VERSION) - // check if the annotations map has been created - if cluster.Annotations == nil { - // if not, create the map - cluster.Annotations = map[string]string{} - } - cluster.Annotations[config.ANNOTATION_IS_UPGRADED] = config.ANNOTATIONS_FALSE - _, err = clientset.CrunchydataV1().Pgclusters(ns).Update(&cluster) - if err != nil { - return fmt.Errorf("%s: %w", ErrUnsuccessfulVersionCheck, err) - } - } - } - - // update pgreplica CRD userlabels["pgo-version"] to current version - 
replicaList, err := clientset.CrunchydataV1().Pgreplicas(ns).List(metav1.ListOptions{}) - if err != nil { - log.Error(err) - return fmt.Errorf("%s: %w", ErrUnsuccessfulVersionCheck, err) - } - - // where the Operator versions do not match, label the replicas accordingly - for _, replica := range replicaList.Items { - if msgs.PGO_VERSION != replica.Spec.UserLabels[config.LABEL_PGO_VERSION] { - log.Infof("operator version check - pgcluster replica %s version is currently %s, current version is %s", replica.Name, replica.Spec.UserLabels[config.LABEL_PGO_VERSION], msgs.PGO_VERSION) - // check if the annotations map has been created - if replica.Annotations == nil { - // if not, create the map - replica.Annotations = map[string]string{} - } - replica.Annotations[config.ANNOTATION_IS_UPGRADED] = config.ANNOTATIONS_FALSE - _, err = clientset.CrunchydataV1().Pgreplicas(ns).Update(&replica) - if err != nil { - return fmt.Errorf("%s: %w", ErrUnsuccessfulVersionCheck, err) - } - } - } - return err -} diff --git a/internal/operator/pgbackrest.go b/internal/operator/pgbackrest.go deleted file mode 100644 index 42a8f645d1..0000000000 --- a/internal/operator/pgbackrest.go +++ /dev/null @@ -1,90 +0,0 @@ -package operator - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - v1 "k8s.io/api/core/v1" - "k8s.io/apimachinery/pkg/util/sets" - - "github.com/crunchydata/postgres-operator/internal/kubeapi" -) - -func addBackRestConfigDirectoryVolume(podSpec *v1.PodSpec, volumeName string, projections []v1.VolumeProjection) { - // v1.PodSpec.Volumes is keyed on Name. - volume := kubeapi.FindOrAppendVolume(&podSpec.Volumes, volumeName) - if volume.Projected == nil { - volume.Projected = &v1.ProjectedVolumeSource{} - } - volume.Projected.Sources = append(volume.Projected.Sources, projections...) -} - -func addBackRestConfigDirectoryVolumeMount(container *v1.Container, volumeName string) { - // v1.Container.VolumeMounts is keyed on MountPoint, *not* Name. - mount := kubeapi.FindOrAppendVolumeMount(&container.VolumeMounts, volumeName) - mount.MountPath = "/etc/pgbackrest/conf.d" -} - -func addBackRestConfigDirectoryVolumeAndMounts(podSpec *v1.PodSpec, volumeName string, projections []v1.VolumeProjection, containerNames ...string) { - names := sets.NewString(containerNames...) - - for i := range podSpec.InitContainers { - if names.Has(podSpec.InitContainers[i].Name) { - addBackRestConfigDirectoryVolumeMount(&podSpec.InitContainers[i], volumeName) - } - } - - for i := range podSpec.Containers { - if names.Has(podSpec.Containers[i].Name) { - addBackRestConfigDirectoryVolumeMount(&podSpec.Containers[i], volumeName) - } - } - - addBackRestConfigDirectoryVolume(podSpec, volumeName, projections) -} - -// AddBackRestConfigVolumeAndMounts modifies podSpec to include pgBackRest configuration. -// Any projections are included as custom pgBackRest configuration. 
-func AddBackRestConfigVolumeAndMounts(podSpec *v1.PodSpec, clusterName string, projections []v1.VolumeProjection) { - var combined []v1.VolumeProjection - var defaultConfigNames = clusterName + "-config-backrest" - var varTrue = true - - // Start with custom configurations from the CRD. - combined = append(combined, projections...) - - // Followed by built-in configurations. Items later in the list take precedence - // over earlier items (that is, last write wins). - // - // - https://docs.openshift.com/container-platform/4.5/nodes/containers/nodes-containers-projected-volumes.html - // - https://kubernetes.io/docs/concepts/storage/volumes/#projected - // - configmap := v1.ConfigMapProjection{} - configmap.Name = defaultConfigNames - configmap.Optional = &varTrue - combined = append(combined, v1.VolumeProjection{ConfigMap: &configmap}) - - secret := v1.SecretProjection{} - secret.Name = defaultConfigNames - secret.Optional = &varTrue - combined = append(combined, v1.VolumeProjection{Secret: &secret}) - - // The built-in configurations above also happen to bypass a bug in Kubernetes. - // Kubernetes 1.15 through 1.19 store an empty list of sources as `null` which - // breaks some clients, notably the Python client used by Patroni 1.6.5. - // - https://issue.k8s.io/93903 - - addBackRestConfigDirectoryVolumeAndMounts(podSpec, "pgbackrest-config", combined, "backrest", "database") -} diff --git a/internal/operator/pgbackrest_test.go b/internal/operator/pgbackrest_test.go deleted file mode 100644 index 046d2be770..0000000000 --- a/internal/operator/pgbackrest_test.go +++ /dev/null @@ -1,103 +0,0 @@ -package operator - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "strings" - "testing" - - v1 "k8s.io/api/core/v1" -) - -func TestAddBackRestConfigVolumeAndMounts(t *testing.T) { - t.Parallel() - - { - // Don't rely on anything in particular from operator templates. 
- var spec v1.PodSpec - AddBackRestConfigVolumeAndMounts(&spec, "cname", nil) - - if expected, actual := 1, len(spec.Volumes); expected != actual { - t.Fatalf("expected a new volume, got %v", actual) - } - if spec.Volumes[0].Projected == nil || len(spec.Volumes[0].Projected.Sources) == 0 { - t.Fatalf("expected a non-empty projected volume, got %#v", spec.Volumes[0]) - } - - spec = v1.PodSpec{} - AddBackRestConfigVolumeAndMounts(&spec, "cname", []v1.VolumeProjection{ - {ConfigMap: &v1.ConfigMapProjection{ - LocalObjectReference: v1.LocalObjectReference{Name: "somesuch"}, - }}, - }) - - if expected, actual := 1, len(spec.Volumes); expected != actual { - t.Fatalf("expected a new volume, got %v", actual) - } - if spec.Volumes[0].Projected == nil || len(spec.Volumes[0].Projected.Sources) == 0 { - t.Fatalf("expected a non-empty projected volume, got %#v", spec.Volumes[0]) - } - if spec.Volumes[0].Projected.Sources[0].ConfigMap == nil || - spec.Volumes[0].Projected.Sources[0].ConfigMap.Name != "somesuch" { - t.Fatalf("expected custom config first, got %v", spec.Volumes[0].Projected.Sources) - } - } - - { - // Mount into existing containers, with or without existing mounts. - spec := v1.PodSpec{ - Containers: []v1.Container{ - {Name: "database"}, - {Name: "database", VolumeMounts: []v1.VolumeMount{ - {Name: "already"}, - }}, - {Name: "database", VolumeMounts: []v1.VolumeMount{ - {Name: "pgbackrest-config"}, - }}, - }, - } - - AddBackRestConfigVolumeAndMounts(&spec, "cname", nil) - - if expected, actual := 3, len(spec.Containers); expected != actual { - t.Fatalf("expected no new containers, got %v", actual) - } - if expected, actual := 1, len(spec.Volumes); expected != actual { - t.Fatalf("expected a new volume, got %v", actual) - } - - if expected, actual := 1, len(spec.Containers[0].VolumeMounts); expected != actual { - t.Fatalf("expected a new mount, got %v", actual) - } - if !strings.Contains(spec.Containers[0].VolumeMounts[0].MountPath, "pgbackrest") { - t.Fatalf("expected new mount to be for pgbackrest, got %#v", spec.Containers[0].VolumeMounts[0]) - } - - if expected, actual := 2, len(spec.Containers[1].VolumeMounts); expected != actual { - t.Fatalf("expected a new mount, got %v", actual) - } - if !strings.Contains(spec.Containers[1].VolumeMounts[1].MountPath, "pgbackrest") { - t.Fatalf("expected new mount to be for pgbackrest, got %#v", spec.Containers[1].VolumeMounts[0]) - } - - if expected, actual := 1, len(spec.Containers[2].VolumeMounts); expected != actual { - t.Fatalf("expected no new mounts, got %v", actual) - } - if !strings.Contains(spec.Containers[2].VolumeMounts[0].MountPath, "pgbackrest") { - t.Fatalf("expected existing mount to be updated, got %#v", spec.Containers[2].VolumeMounts[0]) - } - } -} diff --git a/internal/operator/pgdump/dump.go b/internal/operator/pgdump/dump.go deleted file mode 100644 index 78d51a0a38..0000000000 --- a/internal/operator/pgdump/dump.go +++ /dev/null @@ -1,155 +0,0 @@ -package pgdump - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "os" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/operator/pvc" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - v1batch "k8s.io/api/batch/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -type pgDumpJobTemplateFields struct { - JobName string - TaskName string - Name string // ?? - ClusterName string - Command string // ?? - CommandOpts string - PvcName string - PodName string // ?? - CCPImagePrefix string - CCPImageTag string - SecurityContext string - PgDumpHost string - PgDumpUserSecret string - PgDumpDB string - PgDumpPort string - PgDumpOpts string - PgDumpFilename string - PgDumpAll string - PgDumpPVC string -} - -// Dump ... -func Dump(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) { - - var err error - //create the Job to run the pgdump command - - cmd := task.Spec.Parameters[config.LABEL_PGDUMP_COMMAND] - - pvcName := task.Spec.Parameters[config.LABEL_PVC_NAME] - - // create the PVC if name is empty or it doesn't exist - if !(len(pvcName) > 0) || !pvc.Exists(clientset, pvcName, namespace) { - - // set pvcName if empty - should not be empty as apiserver code should have specified. - if !(len(pvcName) > 0) { - pvcName = task.Spec.Name + "-pvc" - } - - pvcName, err = pvc.CreatePVC(clientset, &task.Spec.StorageSpec, pvcName, - task.Spec.Parameters[config.LABEL_PGDUMP_HOST], namespace) - if err != nil { - log.Error(err.Error()) - } else { - log.Info("created backup PVC =" + pvcName + " in namespace " + namespace) - } - } - - // make sure the provided clustername is not empty - clusterName := task.Spec.Parameters[config.LABEL_PG_CLUSTER] - if clusterName == "" { - log.Error("unable to create pgdump job, clustername is empty.") - return - } - - // get the pgcluster CRD for cases where a CCPImagePrefix is specified - cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - return - } - - // this task name should match - taskName := task.Name - jobName := taskName + "-" + util.RandStringBytesRmndr(4) - - jobFields := pgDumpJobTemplateFields{ - JobName: jobName, - TaskName: taskName, - ClusterName: task.Spec.Parameters[config.LABEL_PG_CLUSTER], - PodName: task.Spec.Parameters[config.LABEL_POD_NAME], - SecurityContext: operator.GetPodSecurityContext(task.Spec.StorageSpec.GetSupplementalGroups()), - Command: cmd, //?? 
- CommandOpts: task.Spec.Parameters[config.LABEL_PGDUMP_OPTS], - CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix), - CCPImageTag: operator.Pgo.Cluster.CCPImageTag, - PgDumpHost: task.Spec.Parameters[config.LABEL_PGDUMP_HOST], - PgDumpUserSecret: task.Spec.Parameters[config.LABEL_PGDUMP_USER], - PgDumpDB: task.Spec.Parameters[config.LABEL_PGDUMP_DB], - PgDumpPort: task.Spec.Parameters[config.LABEL_PGDUMP_PORT], - PgDumpOpts: task.Spec.Parameters[config.LABEL_PGDUMP_OPTS], - PgDumpAll: task.Spec.Parameters[config.LABEL_PGDUMP_ALL], - PgDumpPVC: pvcName, - } - - var doc2 bytes.Buffer - err = config.PgDumpBackupJobTemplate.Execute(&doc2, jobFields) - if err != nil { - log.Error(err.Error()) - return - } - - if operator.CRUNCHY_DEBUG { - config.PgDumpBackupJobTemplate.Execute(os.Stdout, jobFields) - } - - newjob := v1batch.Job{} - err = json.Unmarshal(doc2.Bytes(), &newjob) - if err != nil { - log.Error("error unmarshalling json into Job " + err.Error()) - return - } - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_CRUNCHY_PGDUMP, - &newjob.Spec.Template.Spec.Containers[0]) - - _, err = clientset.BatchV1().Jobs(namespace).Create(&newjob) - - if err != nil { - return - } - - //update the pgdump task status to submitted - updates task, not the job. - err = util.Patch(clientset.CrunchydataV1().RESTClient(), "/spec/status", crv1.PgBackupJobSubmitted, "pgtasks", task.Spec.Name, namespace) - - if err != nil { - log.Error(err.Error()) - } - -} diff --git a/internal/operator/pgdump/restore.go b/internal/operator/pgdump/restore.go deleted file mode 100644 index d813331dd5..0000000000 --- a/internal/operator/pgdump/restore.go +++ /dev/null @@ -1,127 +0,0 @@ -package pgdump - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/operator/pvc" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - v1batch "k8s.io/api/batch/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -type restorejobTemplateFields struct { - JobName string - TaskName string - ClusterName string - SecurityContext string - FromClusterPVCName string - PgRestoreHost string - PgRestoreDB string - PgRestoreUserSecret string - PgPrimaryPort string - PGRestoreOpts string - PITRTarget string - CCPImagePrefix string - CCPImageTag string - PgPort string - NodeSelector string -} - -// Restore ... 
-func Restore(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) { - - log.Infof(" PgDump Restore not implemented %s, %s", namespace, task.Name) - - clusterName := task.Spec.Parameters[config.LABEL_PGRESTORE_FROM_CLUSTER] - - fromPvcName := task.Spec.Parameters[config.LABEL_PGRESTORE_FROM_PVC] - - if !(len(fromPvcName) > 0) || !pvc.Exists(clientset, fromPvcName, namespace) { - log.Errorf("pgrestore: could not find source pvc required for restore: %s", fromPvcName) - return - } - - cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - log.Errorf("pgrestore: could not find a pgcluster in Restore Workflow for %s", clusterName) - return - } - - //use the storage config from the primary PostgreSQL cluster - storage := cluster.Spec.PrimaryStorage - - taskName := task.Name - - jobFields := restorejobTemplateFields{ - JobName: fmt.Sprintf("pgrestore-%s-%s", task.Spec.Parameters[config.LABEL_PGRESTORE_FROM_CLUSTER], - util.RandStringBytesRmndr(4)), - TaskName: taskName, - ClusterName: clusterName, - SecurityContext: operator.GetPodSecurityContext(storage.GetSupplementalGroups()), - FromClusterPVCName: fromPvcName, - PgRestoreHost: task.Spec.Parameters[config.LABEL_PGRESTORE_HOST], - PgRestoreDB: task.Spec.Parameters[config.LABEL_PGRESTORE_DB], - PgRestoreUserSecret: task.Spec.Parameters[config.LABEL_PGRESTORE_USER], - PgPrimaryPort: operator.Pgo.Cluster.Port, - PGRestoreOpts: task.Spec.Parameters[config.LABEL_PGRESTORE_OPTS], - PITRTarget: task.Spec.Parameters[config.LABEL_PGRESTORE_PITR_TARGET], - CCPImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, operator.Pgo.Cluster.CCPImagePrefix), - CCPImageTag: operator.Pgo.Cluster.CCPImageTag, - NodeSelector: operator.GetAffinity(task.Spec.Parameters["NodeLabelKey"], task.Spec.Parameters["NodeLabelValue"], "In"), - } - - var doc2 bytes.Buffer - err = config.PgRestoreJobTemplate.Execute(&doc2, jobFields) - if err != nil { - log.Error(err.Error()) - log.Error("restore workflow: error executing job template") - return - } - - if operator.CRUNCHY_DEBUG { - config.PgRestoreJobTemplate.Execute(os.Stdout, jobFields) - } - - newjob := v1batch.Job{} - err = json.Unmarshal(doc2.Bytes(), &newjob) - if err != nil { - log.Error("restore workflow: error unmarshalling json into Job " + err.Error()) - return - } - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_CRUNCHY_PGRESTORE, - &newjob.Spec.Template.Spec.Containers[0]) - - j, err := clientset.BatchV1().Jobs(namespace).Create(&newjob) - if err != nil { - log.Error(err) - log.Error("restore workflow: error in creating restore job") - return - } - log.Debugf("pgrestore job %s created", j.Name) - -} diff --git a/internal/operator/pvc/pvc.go b/internal/operator/pvc/pvc.go deleted file mode 100644 index 0c37e8d27c..0000000000 --- a/internal/operator/pvc/pvc.go +++ /dev/null @@ -1,229 +0,0 @@ -package pvc - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "errors" - "os" - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/operator" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" -) - -type matchLabelsTemplateFields struct { - Key string - Value string -} - -// TemplateFields ... -type TemplateFields struct { - Name string - AccessMode string - ClusterName string - Size string - StorageClass string - MatchLabels string -} - -// CreateMissingPostgreSQLVolumes converts the storage specifications of cluster -// related to PostgreSQL into StorageResults. When a specification calls for a -// PVC to be created, the PVC is created unless it already exists. -func CreateMissingPostgreSQLVolumes(clientset kubernetes.Interface, - cluster *crv1.Pgcluster, namespace string, - pvcNamePrefix string, dataStorageSpec crv1.PgStorageSpec, -) ( - dataVolume, walVolume operator.StorageResult, - tablespaceVolumes map[string]operator.StorageResult, - err error, -) { - dataVolume, err = CreateIfNotExists(clientset, - dataStorageSpec, pvcNamePrefix, cluster.Spec.Name, namespace) - - if err == nil { - walVolume, err = CreateIfNotExists(clientset, - cluster.Spec.WALStorage, pvcNamePrefix+"-wal", cluster.Spec.Name, namespace) - } - - tablespaceVolumes = make(map[string]operator.StorageResult, len(cluster.Spec.TablespaceMounts)) - for tablespaceName, storageSpec := range cluster.Spec.TablespaceMounts { - if err == nil { - tablespacePVCName := operator.GetTablespacePVCName(pvcNamePrefix, tablespaceName) - tablespaceVolumes[tablespaceName], err = CreateIfNotExists(clientset, - storageSpec, tablespacePVCName, cluster.Spec.Name, namespace) - } - } - - return -} - -// CreateIfNotExists converts a storage specification into a StorageResult. If -// spec calls for a PVC to be created and pvcName does not exist, it will be created. 
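// An illustrative call (values are made up); the behavior follows the switch
// on spec.StorageType in the body:
//
//	spec := crv1.PgStorageSpec{
//		StorageType:  "dynamic",
//		AccessMode:   "ReadWriteOnce",
//		Size:         "1Gi",
//		StorageClass: "standard",
//	}
//	result, err := CreateIfNotExists(clientset, spec, "hippo-pgdata", "hippo", "pgo")
//	// result.PersistentVolumeClaimName == "hippo-pgdata"; the PVC is created
//	// when it does not already exist. "existing" reuses spec.Name instead,
//	// and "" or "emptydir" leaves the name blank so the volume resolves to
//	// an emptyDir (see StorageResult.VolumeSource).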
-func CreateIfNotExists(clientset kubernetes.Interface, spec crv1.PgStorageSpec, pvcName, clusterName, namespace string) (operator.StorageResult, error) { - result := operator.StorageResult{ - SupplementalGroups: spec.GetSupplementalGroups(), - } - - switch spec.StorageType { - case "", "emptydir": - // no-op - - case "existing": - result.PersistentVolumeClaimName = spec.Name - - case "create", "dynamic": - result.PersistentVolumeClaimName = pvcName - err := Create(clientset, pvcName, clusterName, &spec, namespace) - if err != nil && !kerrors.IsAlreadyExists(err) { - log.Errorf("error in pvc create: %v", err) - return result, err - } - } - - return result, nil -} - -// CreatePVC create a pvc -func CreatePVC(clientset kubernetes.Interface, storageSpec *crv1.PgStorageSpec, pvcName, clusterName, namespace string) (string, error) { - var err error - - switch storageSpec.StorageType { - case "": - log.Debug("StorageType is empty") - case "emptydir": - log.Debug("StorageType is emptydir") - case "existing": - log.Debug("StorageType is existing") - pvcName = storageSpec.Name - case "create", "dynamic": - log.Debug("StorageType is create") - log.Debugf("pvcname=%s storagespec=%v", pvcName, storageSpec) - err = Create(clientset, pvcName, clusterName, storageSpec, namespace) - if err != nil { - log.Error("error in pvc create " + err.Error()) - return pvcName, err - } - log.Info("created PVC =" + pvcName + " in namespace " + namespace) - } - - return pvcName, err -} - -// Create a pvc -func Create(clientset kubernetes.Interface, name, clusterName string, storageSpec *crv1.PgStorageSpec, namespace string) error { - log.Debug("in createPVC") - var doc2 bytes.Buffer - var err error - - pvcFields := TemplateFields{ - Name: name, - AccessMode: storageSpec.AccessMode, - StorageClass: storageSpec.StorageClass, - ClusterName: clusterName, - Size: storageSpec.Size, - MatchLabels: storageSpec.MatchLabels, - } - - if storageSpec.StorageType == "dynamic" { - log.Debug("using dynamic PVC template") - err = config.PVCStorageClassTemplate.Execute(&doc2, pvcFields) - if operator.CRUNCHY_DEBUG { - config.PVCStorageClassTemplate.Execute(os.Stdout, pvcFields) - } - } else { - log.Debugf("matchlabels from spec is [%s]", storageSpec.MatchLabels) - if storageSpec.MatchLabels != "" { - arr := strings.Split(storageSpec.MatchLabels, "=") - if len(arr) != 2 { - log.Errorf("%s MatchLabels is not formatted correctly", storageSpec.MatchLabels) - return errors.New("match labels is not formatted correctly") - } - pvcFields.MatchLabels = getMatchLabels(arr[0], arr[1]) - log.Debugf("matchlabels constructed is %s", pvcFields.MatchLabels) - } - - err = config.PVCTemplate.Execute(&doc2, pvcFields) - if operator.CRUNCHY_DEBUG { - config.PVCTemplate.Execute(os.Stdout, pvcFields) - } - } - if err != nil { - log.Error("error in pvc create exec" + err.Error()) - return err - } - - newpvc := v1.PersistentVolumeClaim{} - err = json.Unmarshal(doc2.Bytes(), &newpvc) - if err != nil { - log.Error("error unmarshalling json into PVC " + err.Error()) - return err - } - - _, err = clientset.CoreV1().PersistentVolumeClaims(namespace).Create(&newpvc) - return err -} - -// Delete a pvc -func DeleteIfExists(clientset kubernetes.Interface, name string, namespace string) error { - pvc, err := clientset.CoreV1().PersistentVolumeClaims(namespace).Get(name, metav1.GetOptions{}) - if kerrors.IsNotFound(err) { - return nil - } else if err != nil { - return err - } - - log.Debugf("PVC %s is found", pvc.Name) - - if pvc.ObjectMeta.Labels[config.LABEL_PGREMOVE] 
== "true" { - log.Debugf("delete PVC %s in namespace %s", name, namespace) - deletePropagation := metav1.DeletePropagationForeground - err = clientset. - CoreV1().PersistentVolumeClaims(namespace). - Delete(name, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - } - return err -} - -// Exists test to see if pvc exists -func Exists(clientset kubernetes.Interface, name string, namespace string) bool { - _, err := clientset.CoreV1().PersistentVolumeClaims(namespace).Get(name, metav1.GetOptions{}) - return err == nil -} - -func getMatchLabels(key, value string) string { - - matchLabelsTemplateFields := matchLabelsTemplateFields{} - matchLabelsTemplateFields.Key = key - matchLabelsTemplateFields.Value = value - - var doc bytes.Buffer - err := config.PVCMatchLabelsTemplate.Execute(&doc, matchLabelsTemplateFields) - if err != nil { - log.Error(err.Error()) - return "" - } - - return doc.String() - -} diff --git a/internal/operator/storage.go b/internal/operator/storage.go deleted file mode 100644 index da06087deb..0000000000 --- a/internal/operator/storage.go +++ /dev/null @@ -1,53 +0,0 @@ -package operator - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - - v1 "k8s.io/api/core/v1" -) - -// StorageResult is a resolved PgStorageSpec. The zero value is an emptyDir. -type StorageResult struct { - PersistentVolumeClaimName string - SupplementalGroups []int64 -} - -// InlineVolumeSource returns the key and value of a k8s.io/api/core/v1.VolumeSource. -func (s StorageResult) InlineVolumeSource() string { - b := new(bytes.Buffer) - e := json.NewEncoder(b) - e.SetEscapeHTML(false) - e.Encode(s.VolumeSource()) - - // remove trailing newline and surrounding brackets - return b.String()[1 : b.Len()-2] -} - -// VolumeSource returns the VolumeSource equivalent of s. -func (s StorageResult) VolumeSource() v1.VolumeSource { - if s.PersistentVolumeClaimName != "" { - return v1.VolumeSource{ - PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ - ClaimName: s.PersistentVolumeClaimName, - }, - } - } - - return v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}} -} diff --git a/internal/operator/storage_test.go b/internal/operator/storage_test.go deleted file mode 100644 index 280b1c6cd0..0000000000 --- a/internal/operator/storage_test.go +++ /dev/null @@ -1,44 +0,0 @@ -package operator - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "testing" - - v1 "k8s.io/api/core/v1" -) - -func TestStorageResultInlineVolumeSource(t *testing.T) { - if b, _ := json.Marshal(v1.VolumeSource{}); string(b) != "{}" { - t.Logf("expected VolumeSource to always marshal with brackets, got %q", b) - } - - for _, tt := range []struct { - value StorageResult - expected string - }{ - {StorageResult{}, `"emptyDir":{}`}, - {StorageResult{PersistentVolumeClaimName: "<\x00"}, - `"persistentVolumeClaim":{"claimName":"<\u0000"}`}, - {StorageResult{PersistentVolumeClaimName: "some-name"}, - `"persistentVolumeClaim":{"claimName":"some-name"}`}, - } { - if actual := tt.value.InlineVolumeSource(); actual != tt.expected { - t.Errorf("expected %q for %v, got %q", tt.expected, tt.value, actual) - } - } -} diff --git a/internal/operator/task/applypolicies.go b/internal/operator/task/applypolicies.go deleted file mode 100644 index 8d3b540927..0000000000 --- a/internal/operator/task/applypolicies.go +++ /dev/null @@ -1,105 +0,0 @@ -package task - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "strings" - - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - jsonpatch "github.com/evanphx/json-patch" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/client-go/rest" -) - -// RemoveBackups ... 
-func ApplyPolicies(clusterName string, clientset kubeapi.Interface, RESTConfig *rest.Config, ns string) { - - taskName := clusterName + "-policies" - - task, err := clientset.CrunchydataV1().Pgtasks(ns).Get(taskName, metav1.GetOptions{}) - if err == nil { - //apply those policies - for k := range task.Spec.Parameters { - log.Debugf("applying policy %s to %s", k, clusterName) - applyPolicy(clientset, RESTConfig, k, clusterName, ns) - } - //delete the pgtask to not redo this again - clientset.CrunchydataV1().Pgtasks(ns).Delete(taskName, &metav1.DeleteOptions{}) - } -} - -func applyPolicy(clientset kubeapi.Interface, restconfig *rest.Config, policyName, clusterName, ns string) { - - cl, err := clientset.CrunchydataV1().Pgclusters(ns).Get(clusterName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - return - } - - if err := util.ExecPolicy(clientset, restconfig, ns, policyName, clusterName, cl.Spec.Port); err != nil { - log.Error(err) - return - } - - labels := make(map[string]string) - labels[policyName] = "pgpolicy" - - if err := util.UpdatePolicyLabels(clientset, clusterName, ns, labels); err != nil { - log.Error(err) - } - - //update the pgcluster crd labels with the new policy - if err := PatchPgcluster(clientset, policyName+"=pgpolicy", *cl, ns); err != nil { - log.Error(err) - } - -} - -func PatchPgcluster(clientset pgo.Interface, newLabel string, oldCRD crv1.Pgcluster, ns string) error { - - fields := strings.Split(newLabel, "=") - labelKey := fields[0] - labelValue := fields[1] - oldData, err := json.Marshal(oldCRD) - if err != nil { - return err - } - if oldCRD.ObjectMeta.Labels == nil { - oldCRD.ObjectMeta.Labels = make(map[string]string) - } - oldCRD.ObjectMeta.Labels[labelKey] = labelValue - var newData, patchBytes []byte - newData, err = json.Marshal(oldCRD) - if err != nil { - return err - } - patchBytes, err = jsonpatch.CreateMergePatch(oldData, newData) - if err != nil { - return err - } - - log.Debug(string(patchBytes)) - _, err6 := clientset.CrunchydataV1().Pgclusters(ns).Patch(oldCRD.Spec.Name, types.MergePatchType, patchBytes) - - return err6 - -} diff --git a/internal/operator/task/rmbackups.go b/internal/operator/task/rmbackups.go deleted file mode 100644 index 928a8cb977..0000000000 --- a/internal/operator/task/rmbackups.go +++ /dev/null @@ -1,39 +0,0 @@ -package task - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" -) - -// RemoveBackups ... 
-func RemoveBackups(namespace string, clientset kubernetes.Interface, task *crv1.Pgtask) { - - //delete any backup jobs for this cluster - //kubectl delete job --selector=pg-cluster=clustername - - log.Debugf("deleting backup jobs with selector=%s=%s", config.LABEL_PG_CLUSTER, task.Spec.Parameters[config.LABEL_PG_CLUSTER]) - deletePropagation := metav1.DeletePropagationForeground - clientset. - BatchV1().Jobs(namespace). - DeleteCollection( - &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}, - metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + task.Spec.Parameters[config.LABEL_PG_CLUSTER]}) -} diff --git a/internal/operator/task/rmdata.go b/internal/operator/task/rmdata.go deleted file mode 100644 index d4e62a5775..0000000000 --- a/internal/operator/task/rmdata.go +++ /dev/null @@ -1,185 +0,0 @@ -package task - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "os" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/events" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - jsonpatch "github.com/evanphx/json-patch" - log "github.com/sirupsen/logrus" - v1batch "k8s.io/api/batch/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" -) - -type rmdatajobTemplateFields struct { - JobName string - Name string - ClusterName string - ClusterPGHAScope string - ReplicaName string - PGOImagePrefix string - PGOImageTag string - SecurityContext string - RemoveData string - RemoveBackup string - IsBackup string - IsReplica string -} - -// RemoveData ... 
-func RemoveData(namespace string, clientset kubeapi.Interface, task *crv1.Pgtask) { - - //create marker (clustername, namespace) - err := PatchpgtaskDeleteDataStatus(clientset, task, namespace) - if err != nil { - log.Errorf("could not set delete data started marker for task %s cluster %s", task.Spec.Name, task.Spec.Parameters[config.LABEL_PG_CLUSTER]) - return - } - - //create the Job to remove the data - //pvcName := task.Spec.Parameters[config.LABEL_PVC_NAME] - clusterName := task.Spec.Parameters[config.LABEL_PG_CLUSTER] - clusterPGHAScope := task.Spec.Parameters[config.LABEL_PGHA_SCOPE] - replicaName := task.Spec.Parameters[config.LABEL_REPLICA_NAME] - isReplica := task.Spec.Parameters[config.LABEL_IS_REPLICA] - isBackup := task.Spec.Parameters[config.LABEL_IS_BACKUP] - removeData := task.Spec.Parameters[config.LABEL_DELETE_DATA] - removeBackup := task.Spec.Parameters[config.LABEL_DELETE_BACKUPS] - - // make sure the provided clustername is not empty - if clusterName == "" { - log.Error("unable to create pgdump job, clustername is empty.") - return - } - - // if the clustername is not empty, get the pgcluster - cluster, err := clientset.CrunchydataV1().Pgclusters(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - return - } - - jobName := clusterName + "-rmdata-" + util.RandStringBytesRmndr(4) - - jobFields := rmdatajobTemplateFields{ - JobName: jobName, - Name: task.Spec.Name, - ClusterName: clusterName, - ClusterPGHAScope: clusterPGHAScope, - ReplicaName: replicaName, - RemoveData: removeData, - RemoveBackup: removeBackup, - IsReplica: isReplica, - IsBackup: isBackup, - PGOImagePrefix: util.GetValueOrDefault(cluster.Spec.PGOImagePrefix, operator.Pgo.Pgo.PGOImagePrefix), - PGOImageTag: operator.Pgo.Pgo.PGOImageTag, - SecurityContext: operator.GetPodSecurityContext(task.Spec.StorageSpec.GetSupplementalGroups()), - } - log.Debugf("creating rmdata job %s for cluster %s ", jobName, task.Spec.Name) - - var doc2 bytes.Buffer - err = config.RmdatajobTemplate.Execute(&doc2, jobFields) - if err != nil { - log.Error(err.Error()) - return - } - - if operator.CRUNCHY_DEBUG { - config.RmdatajobTemplate.Execute(os.Stdout, jobFields) - } - - newjob := v1batch.Job{} - err = json.Unmarshal(doc2.Bytes(), &newjob) - if err != nil { - log.Error("error unmarshalling json into Job " + err.Error()) - return - } - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_PGO_RMDATA, - &newjob.Spec.Template.Spec.Containers[0]) - - j, err := clientset.BatchV1().Jobs(namespace).Create(&newjob) - if err != nil { - log.Errorf("got error when creating rmdata job %s", newjob.Name) - return - } - log.Debugf("successfully created rmdata job %s", j.Name) - - publishDeleteCluster(task.Spec.Parameters[config.LABEL_PG_CLUSTER], task.ObjectMeta.Labels[config.LABEL_PG_CLUSTER_IDENTIFIER], - task.ObjectMeta.Labels[config.LABEL_PGOUSER], namespace) -} - -func PatchpgtaskDeleteDataStatus(clientset pgo.Interface, oldCrd *crv1.Pgtask, namespace string) error { - - oldData, err := json.Marshal(oldCrd) - if err != nil { - return err - } - - //change it - oldCrd.Spec.Parameters[config.LABEL_DELETE_DATA_STARTED] = time.Now().Format(time.RFC3339) - - //create the patch - var newData, patchBytes []byte - newData, err = json.Marshal(oldCrd) - if err != nil { - return err - } - patchBytes, err = jsonpatch.CreateMergePatch(oldData, newData) - if err != nil { - return err - } - log.Debug(string(patchBytes)) - - //apply patch - _, err6 
:= clientset.CrunchydataV1().Pgtasks(namespace).Patch(oldCrd.Spec.Name, types.MergePatchType, patchBytes) - - return err6 - -} - -func publishDeleteCluster(clusterName, identifier, username, namespace string) { - topics := make([]string, 1) - topics[0] = events.EventTopicCluster - - f := events.EventDeleteClusterFormat{ - EventHeader: events.EventHeader{ - Namespace: namespace, - Username: username, - Topic: topics, - Timestamp: time.Now(), - EventType: events.EventDeleteCluster, - }, - Clustername: clusterName, - } - - err := events.Publish(f) - if err != nil { - log.Error(err.Error()) - } -} diff --git a/internal/operator/task/workflow.go b/internal/operator/task/workflow.go deleted file mode 100644 index 2f55b0f481..0000000000 --- a/internal/operator/task/workflow.go +++ /dev/null @@ -1,74 +0,0 @@ -package task - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "time" - - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" -) - -// CompleteCreateClusterWorkflow ... update the pgtask for the -// create cluster workflow for a given cluster -func CompleteCreateClusterWorkflow(clusterName string, clientset pgo.Interface, ns string) { - - taskName := clusterName + "-" + crv1.PgtaskWorkflowCreateClusterType - - completeWorkflow(clientset, ns, taskName) - -} - -func CompleteBackupWorkflow(clusterName string, clientset pgo.Interface, ns string) { - - taskName := clusterName + "-" + crv1.PgtaskWorkflowBackupType - - completeWorkflow(clientset, ns, taskName) - -} - -func completeWorkflow(clientset pgo.Interface, taskNamespace, taskName string) { - - task, err := clientset.CrunchydataV1().Pgtasks(taskNamespace).Get(taskName, metav1.GetOptions{}) - if err != nil { - log.Errorf("Error completing workflow %s", taskName) - log.Error(err) - return - } - - //mark this workflow as completed - id := task.Spec.Parameters[crv1.PgtaskWorkflowID] - log.Debugf("completing workflow %s id %s", taskName, id) - - task.Spec.Parameters[crv1.PgtaskWorkflowCompletedStatus] = time.Now().Format(time.RFC3339) - - patch, err := json.Marshal(map[string]interface{}{ - "spec": map[string]interface{}{ - "parameters": task.Spec.Parameters, - }, - }) - if err == nil { - _, err = clientset.CrunchydataV1().Pgtasks(task.Namespace).Patch(task.Name, types.MergePatchType, patch) - } - if err != nil { - log.Error(err) - } - -} diff --git a/internal/operator/wal.go b/internal/operator/wal.go deleted file mode 100644 index 1b679755fb..0000000000 --- a/internal/operator/wal.go +++ /dev/null @@ -1,67 +0,0 @@ -package operator - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "github.com/crunchydata/postgres-operator/internal/config" - core_v1 "k8s.io/api/core/v1" -) - -// addWALVolumeAndMounts modifies podSpec to include walVolume on each containerNames. -func addWALVolumeAndMounts(podSpec *core_v1.PodSpec, walVolume StorageResult, containerNames ...string) { - walVolumeMount := config.PostgreSQLWALVolumeMount() - - if podSpec.SecurityContext == nil { - podSpec.SecurityContext = &core_v1.PodSecurityContext{} - } - - podSpec.SecurityContext.SupplementalGroups = append( - podSpec.SecurityContext.SupplementalGroups, walVolume.SupplementalGroups...) - - podSpec.Volumes = append(podSpec.Volumes, core_v1.Volume{ - Name: walVolumeMount.Name, - VolumeSource: walVolume.VolumeSource(), - }) - - for i := range podSpec.Containers { - container := &podSpec.Containers[i] - for _, name := range containerNames { - if container.Name == name { - container.VolumeMounts = append(container.VolumeMounts, walVolumeMount) - } - } - } -} - -// AddWALVolumeAndMountsToBackRest modifies a pgBackRest podSpec to include walVolume. -func AddWALVolumeAndMountsToBackRest(podSpec *core_v1.PodSpec, walVolume StorageResult) { - addWALVolumeAndMounts(podSpec, walVolume, "backrest") -} - -// AddWALVolumeAndMountsToPostgreSQL modifies a PostgreSQL podSpec to include walVolume. -func AddWALVolumeAndMountsToPostgreSQL(podSpec *core_v1.PodSpec, walVolume StorageResult, instanceName string) { - addWALVolumeAndMounts(podSpec, walVolume, "database") - - for i := range podSpec.Containers { - container := &podSpec.Containers[i] - if container.Name == "database" { - container.Env = append(container.Env, core_v1.EnvVar{ - Name: "PGHA_WALDIR", - Value: config.PostgreSQLWALPath(instanceName), - }) - } - } -} diff --git a/internal/patroni/api.go b/internal/patroni/api.go new file mode 100644 index 0000000000..679da5f4af --- /dev/null +++ b/internal/patroni/api.go @@ -0,0 +1,208 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package patroni + +import ( + "bytes" + "context" + "encoding/json" + "errors" + "io" + "strings" + + "github.com/crunchydata/postgres-operator/internal/logging" +) + +// API defines a general interface for interacting with the Patroni API. +type API interface { + // ChangePrimaryAndWait tries to demote the current Patroni leader. It + // returns true when an election completes successfully. When Patroni is + // paused, next cannot be blank. + ChangePrimaryAndWait(ctx context.Context, current, next string) (bool, error) + + // ReplaceConfiguration replaces Patroni's entire dynamic configuration. + ReplaceConfiguration(ctx context.Context, configuration map[string]any) error +} + +// Executor implements API by calling "patronictl". +type Executor func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, +) error + +// Executor implements API. +var _ API = Executor(nil) + +// ChangePrimaryAndWait tries to demote the current Patroni leader by calling +// "patronictl". It returns true when an election completes successfully. It +// waits up to two "loop_wait" or until an error occurs. 
When Patroni is paused, +// next cannot be blank. Similar to the "POST /switchover" REST endpoint. +func (exec Executor) ChangePrimaryAndWait( + ctx context.Context, current, next string, +) (bool, error) { + var stdout, stderr bytes.Buffer + + err := exec(ctx, nil, &stdout, &stderr, + "patronictl", "switchover", "--scheduled=now", "--force", + "--master="+current, "--candidate="+next) + + log := logging.FromContext(ctx) + log.V(1).Info("changed primary", + "stdout", stdout.String(), + "stderr", stderr.String(), + ) + + // The command exits zero when it is able to communicate with the Patroni + // HTTP API. It exits zero even when the API says switchover did not occur. + // Check for the text that indicates success. + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/api.py#L351-L367 + // - https://github.com/zalando/patroni/blob/v2.1.1/patroni/api.py#L461-L477 + return strings.Contains(stdout.String(), "switched over"), err +} + +// SwitchoverAndWait tries to change the current Patroni leader by calling +// "patronictl". It returns true when an election completes successfully. It +// waits up to two "loop_wait" or until an error occurs. When Patroni is paused, +// next cannot be blank. Similar to the "POST /switchover" REST endpoint. +// The "patronictl switchover" variant does not require the current master to be passed +// as a flag. +func (exec Executor) SwitchoverAndWait( + ctx context.Context, target string, +) (bool, error) { + var stdout, stderr bytes.Buffer + + err := exec(ctx, nil, &stdout, &stderr, + "patronictl", "switchover", "--scheduled=now", "--force", + "--candidate="+target) + + log := logging.FromContext(ctx) + log.V(1).Info("changed primary", + "stdout", stdout.String(), + "stderr", stderr.String(), + ) + + // The command exits zero when it is able to communicate with the Patroni + // HTTP API. It exits zero even when the API says switchover did not occur. + // Check for the text that indicates success. + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/api.py#L351-L367 + // Patroni has an edge case where it could switchover to an instance other + // than the requested candidate. In this case, stdout will contain + // "Switched over" instead of "switched over" and return false, nil + return strings.Contains(stdout.String(), "switched over"), err +} + +// FailoverAndWait tries to change the current Patroni leader by calling +// "patronictl". It returns true when an election completes successfully. It +// waits up to two "loop_wait" or until an error occurs. When Patroni is paused, +// next cannot be blank. Similar to the "POST /switchover" REST endpoint. +// The "patronictl failover" variant does not require the current master to be passed +// as a flag. +func (exec Executor) FailoverAndWait( + ctx context.Context, target string, +) (bool, error) { + var stdout, stderr bytes.Buffer + + err := exec(ctx, nil, &stdout, &stderr, + "patronictl", "failover", "--force", + "--candidate="+target) + + log := logging.FromContext(ctx) + log.V(1).Info("changed primary", + "stdout", stdout.String(), + "stderr", stderr.String(), + ) + + // The command exits zero when it is able to communicate with the Patroni + // HTTP API. It exits zero even when the API says failover did not occur. + // Check for the text that indicates success. + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/api.py#L351-L367 + // Patroni has an edge case where it could failover to an instance other + // than the requested candidate. 
In this case, stdout will contain "Failed over" + // instead of "failed over" and return false, nil + return strings.Contains(stdout.String(), "failed over"), err +} + +// ReplaceConfiguration replaces Patroni's entire dynamic configuration by +// calling "patronictl". Similar to the "POST /switchover" REST endpoint. +func (exec Executor) ReplaceConfiguration( + ctx context.Context, configuration map[string]any, +) error { + var stdin, stdout, stderr bytes.Buffer + + err := json.NewEncoder(&stdin).Encode(configuration) + if err == nil { + err = exec(ctx, &stdin, &stdout, &stderr, + "patronictl", "edit-config", "--replace=-", "--force") + + log := logging.FromContext(ctx) + log.V(1).Info("replaced configuration", + "stdout", stdout.String(), + "stderr", stderr.String(), + ) + } + + return err +} + +// RestartPendingMembers looks up Patroni members with role in scope and restarts +// those that have a pending restart. +func (exec Executor) RestartPendingMembers(ctx context.Context, role, scope string) error { + var stdout, stderr bytes.Buffer + + // The following exits zero when it is able to read the DCS and communicate + // with the Patroni HTTP API. It prints the result of calling "POST /restart" + // on each member found with the desired role. The "Failed … 503 … restart + // conditions are not satisfied" message is normal and means that a particular + // member has already restarted. + // - https://github.com/zalando/patroni/blob/v2.1.1/patroni/ctl.py#L580-L596 + err := exec(ctx, nil, &stdout, &stderr, + "patronictl", "restart", "--pending", "--force", "--role="+role, scope) + + log := logging.FromContext(ctx) + log.V(1).Info("restarted members", + "stdout", stdout.String(), + "stderr", stderr.String(), + ) + + return err +} + +// GetTimeline gets the patronictl status and returns the timeline, +// currently the only information required by PGO. +// Returns zero if it runs into errors or cannot find a running Leader pod +// to get the up-to-date timeline from. +func (exec Executor) GetTimeline(ctx context.Context) (int64, error) { + var stdout, stderr bytes.Buffer + + // The following exits zero when it is able to read the DCS and communicate + // with the Patroni HTTP API. It prints the result of calling "GET /cluster" + // - https://github.com/zalando/patroni/blob/v2.1.1/patroni/ctl.py#L849 + err := exec(ctx, nil, &stdout, &stderr, + "patronictl", "list", "--format", "json") + if err != nil { + return 0, err + } + + if stderr.String() != "" { + return 0, errors.New(stderr.String()) + } + + var members []struct { + Role string `json:"Role"` + State string `json:"State"` + Timeline int64 `json:"TL"` + } + err = json.Unmarshal(stdout.Bytes(), &members) + if err != nil { + return 0, err + } + + for _, member := range members { + if member.Role == "Leader" && member.State == "running" { + return member.Timeline, nil + } + } + + return 0, err +} diff --git a/internal/patroni/api_test.go b/internal/patroni/api_test.go new file mode 100644 index 0000000000..1603d2fc75 --- /dev/null +++ b/internal/patroni/api_test.go @@ -0,0 +1,289 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package patroni + +import ( + "context" + "errors" + "fmt" + "io" + "os/exec" + "strings" + "testing" + + "gotest.tools/v3/assert" +) + +// This example demonstrates how Executor can work with exec.Cmd. 
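// Wrapped this way, the helpers defined in api.go can be called directly;
// for example (the candidate pod name is illustrative):
//
//	api := Executor(run) // run matches the signature shown in the example below
//	ok, err := api.SwitchoverAndWait(ctx, "hippo-instance1-abcd-0")
//	// ok is true only when patronictl reports that the switchover completed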
+func ExampleExecutor_execCmd() { + _ = Executor(func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + // #nosec G204 Nothing calls the function defined in this example. + cmd := exec.CommandContext(ctx, command[0], command[1:]...) + cmd.Stdin, cmd.Stdout, cmd.Stderr = stdin, stdout, stderr + return cmd.Run() + }) +} + +func TestExecutorChangePrimaryAndWait(t *testing.T) { + t.Run("Arguments", func(t *testing.T) { + called := false + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + called = true + assert.DeepEqual(t, command, strings.Fields( + `patronictl switchover --scheduled=now --force --master=old --candidate=new`, + )) + assert.Assert(t, stdin == nil, "expected no stdin, got %T", stdin) + assert.Assert(t, stderr != nil, "should capture stderr") + assert.Assert(t, stdout != nil, "should capture stdout") + return nil + } + + _, _ = Executor(exec).ChangePrimaryAndWait(context.Background(), "old", "new") + assert.Assert(t, called) + }) + + t.Run("Error", func(t *testing.T) { + expected := errors.New("bang") + _, actual := Executor(func( + context.Context, io.Reader, io.Writer, io.Writer, ...string, + ) error { + return expected + }).ChangePrimaryAndWait(context.Background(), "any", "thing") + + assert.Equal(t, expected, actual) + }) + + t.Run("Result", func(t *testing.T) { + success, _ := Executor(func( + _ context.Context, _ io.Reader, stdout, _ io.Writer, _ ...string, + ) error { + _, _ = stdout.Write([]byte(`no luck`)) + return nil + }).ChangePrimaryAndWait(context.Background(), "any", "thing") + + assert.Assert(t, !success, "expected failure message to become false") + + success, _ = Executor(func( + _ context.Context, _ io.Reader, stdout, _ io.Writer, _ ...string, + ) error { + _, _ = stdout.Write([]byte(`Successfully switched over to something`)) + return nil + }).ChangePrimaryAndWait(context.Background(), "any", "thing") + + assert.Assert(t, success, "expected success message to become true") + }) +} + +func TestExecutorSwitchoverAndWait(t *testing.T) { + t.Run("Arguments", func(t *testing.T) { + called := false + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + called = true + assert.DeepEqual(t, command, strings.Fields( + `patronictl switchover --scheduled=now --force --candidate=new`, + )) + assert.Assert(t, stdin == nil, "expected no stdin, got %T", stdin) + assert.Assert(t, stderr != nil, "should capture stderr") + assert.Assert(t, stdout != nil, "should capture stdout") + return nil + } + + _, _ = Executor(exec).SwitchoverAndWait(context.Background(), "new") + assert.Assert(t, called) + }) + + t.Run("Error", func(t *testing.T) { + expected := errors.New("bang") + _, actual := Executor(func( + context.Context, io.Reader, io.Writer, io.Writer, ...string, + ) error { + return expected + }).SwitchoverAndWait(context.Background(), "next") + + assert.Equal(t, expected, actual) + }) + + t.Run("Result", func(t *testing.T) { + success, _ := Executor(func( + _ context.Context, _ io.Reader, stdout, _ io.Writer, _ ...string, + ) error { + _, _ = stdout.Write([]byte(`no luck`)) + return nil + }).SwitchoverAndWait(context.Background(), "next") + + assert.Assert(t, !success, "expected failure message to become false") + + success, _ = Executor(func( + _ context.Context, _ io.Reader, stdout, _ io.Writer, _ ...string, + ) error { + _, _ = stdout.Write([]byte(`Successfully switched over to something`)) + return nil + 
}).SwitchoverAndWait(context.Background(), "next") + + assert.Assert(t, success, "expected success message to become true") + }) +} + +func TestExecutorFailoverAndWait(t *testing.T) { + t.Run("Arguments", func(t *testing.T) { + called := false + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + called = true + assert.DeepEqual(t, command, strings.Fields( + `patronictl failover --force --candidate=new`, + )) + assert.Assert(t, stdin == nil, "expected no stdin, got %T", stdin) + assert.Assert(t, stderr != nil, "should capture stderr") + assert.Assert(t, stdout != nil, "should capture stdout") + return nil + } + + _, _ = Executor(exec).FailoverAndWait(context.Background(), "new") + assert.Assert(t, called) + }) + + t.Run("Error", func(t *testing.T) { + expected := errors.New("bang") + _, actual := Executor(func( + context.Context, io.Reader, io.Writer, io.Writer, ...string, + ) error { + return expected + }).FailoverAndWait(context.Background(), "next") + + assert.Equal(t, expected, actual) + }) + + t.Run("Result", func(t *testing.T) { + success, _ := Executor(func( + _ context.Context, _ io.Reader, stdout, _ io.Writer, _ ...string, + ) error { + _, _ = stdout.Write([]byte(`no luck`)) + return nil + }).FailoverAndWait(context.Background(), "next") + + assert.Assert(t, !success, "expected failure message to become false") + + success, _ = Executor(func( + _ context.Context, _ io.Reader, stdout, _ io.Writer, _ ...string, + ) error { + _, _ = stdout.Write([]byte(`Successfully failed over to something`)) + return nil + }).FailoverAndWait(context.Background(), "next") + + assert.Assert(t, success, "expected success message to become true") + }) +} + +func TestExecutorReplaceConfiguration(t *testing.T) { + expected := errors.New("bang") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.DeepEqual(t, command, strings.Fields( + `patronictl edit-config --replace=- --force`, + )) + str, ok := stdin.(fmt.Stringer) + assert.Assert(t, ok, "bug in test: wanted to call String()") + assert.Equal(t, str.String(), `{"some":"values"}`+"\n", "should send JSON on stdin") + assert.Assert(t, stderr != nil, "should capture stderr") + assert.Assert(t, stdout != nil, "should capture stdout") + return expected + } + + actual := Executor(exec).ReplaceConfiguration( + context.Background(), map[string]any{"some": "values"}) + + assert.Equal(t, expected, actual, "should call exec") +} + +func TestExecutorRestartPendingMembers(t *testing.T) { + expected := errors.New("oop") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.DeepEqual(t, command, strings.Fields( + `patronictl restart --pending --force --role=sock-role shoe-scope`, + )) + assert.Assert(t, stdin == nil, "expected no stdin, got %T", stdin) + assert.Assert(t, stderr != nil, "should capture stderr") + assert.Assert(t, stdout != nil, "should capture stdout") + return expected + } + + actual := Executor(exec).RestartPendingMembers( + context.Background(), "sock-role", "shoe-scope") + + assert.Equal(t, expected, actual, "should call exec") +} + +func TestExecutorGetTimeline(t *testing.T) { + t.Run("Error", func(t *testing.T) { + expected := errors.New("bang") + tl, actual := Executor(func( + context.Context, io.Reader, io.Writer, io.Writer, ...string, + ) error { + return expected + }).GetTimeline(context.Background()) + + assert.Equal(t, expected, actual) + 
assert.Equal(t, tl, int64(0)) + }) + + t.Run("Stderr", func(t *testing.T) { + tl, actual := Executor(func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + stderr.Write([]byte(`no luck`)) + return nil + }).GetTimeline(context.Background()) + + assert.Error(t, actual, "no luck") + assert.Equal(t, tl, int64(0)) + }) + + t.Run("BadJSON", func(t *testing.T) { + tl, actual := Executor(func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + stdout.Write([]byte(`no luck`)) + return nil + }).GetTimeline(context.Background()) + + assert.Error(t, actual, "invalid character 'o' in literal null (expecting 'u')") + assert.Equal(t, tl, int64(0)) + }) + + t.Run("NoLeader", func(t *testing.T) { + tl, actual := Executor(func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + stdout.Write([]byte(`[{"Cluster": "hippo-ha", "Member": "hippo-instance1-ltcf-0", "Host": "hippo-instance1-ltcf-0.hippo-pods", "Role": "Replica", "State": "running", "TL": 4, "Lag in MB": 0}]`)) + return nil + }).GetTimeline(context.Background()) + + assert.NilError(t, actual) + assert.Equal(t, tl, int64(0)) + }) + + t.Run("Success", func(t *testing.T) { + tl, actual := Executor(func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + stdout.Write([]byte(`[{"Cluster": "hippo-ha", "Member": "hippo-instance1-67mc-0", "Host": "hippo-instance1-67mc-0.hippo-pods", "Role": "Leader", "State": "running", "TL": 4}, {"Cluster": "hippo-ha", "Member": "hippo-instance1-ltcf-0", "Host": "hippo-instance1-ltcf-0.hippo-pods", "Role": "Replica", "State": "running", "TL": 4, "Lag in MB": 0}]`)) + return nil + }).GetTimeline(context.Background()) + + assert.NilError(t, actual) + assert.Equal(t, tl, int64(4)) + }) +} diff --git a/internal/patroni/certificates.go b/internal/patroni/certificates.go new file mode 100644 index 0000000000..9aa1525769 --- /dev/null +++ b/internal/patroni/certificates.go @@ -0,0 +1,56 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package patroni + +import ( + "encoding" + + corev1 "k8s.io/api/core/v1" +) + +const ( + certAuthorityConfigPath = "~postgres-operator/patroni.ca-roots" + certServerConfigPath = "~postgres-operator/patroni.crt+key" + + certAuthorityFileKey = "patroni.ca-roots" + certServerFileKey = "patroni.crt-combined" +) + +// certFile concatenates the results of multiple PEM-encoding marshalers. +func certFile(texts ...encoding.TextMarshaler) ([]byte, error) { + var out []byte + + for i := range texts { + if b, err := texts[i].MarshalText(); err == nil { + out = append(out, b...) + } else { + return nil, err + } + } + + return out, nil +} + +// instanceCertificates returns projections of Patroni's CAs, keys, and +// certificates to include in the instance configuration volume. 
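//
// When this projection is mounted under configDirectory, the resulting files
// line up with the "restapi" and "ctl" settings that clusterYAML generates,
// roughly (a sketch of the rendered patroni.yaml, not an exact dump; the
// private key is bundled into the combined certificate file):
//
//	restapi:
//	  cafile: /etc/patroni/~postgres-operator/patroni.ca-roots
//	  certfile: /etc/patroni/~postgres-operator/patroni.crt+key
//	  keyfile: null
//	ctl:
//	  cacert: /etc/patroni/~postgres-operator/patroni.ca-roots
//	  certfile: /etc/patroni/~postgres-operator/patroni.crt+key
//	  keyfile: null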
+func instanceCertificates(certificates *corev1.Secret) []corev1.VolumeProjection { + return []corev1.VolumeProjection{{ + Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: certificates.Name, + }, + Items: []corev1.KeyToPath{ + { + Key: certAuthorityFileKey, + Path: certAuthorityConfigPath, + }, + { + Key: certServerFileKey, + Path: certServerConfigPath, + }, + }, + }, + }} +} diff --git a/internal/patroni/certificates.md b/internal/patroni/certificates.md new file mode 100644 index 0000000000..f58786ce20 --- /dev/null +++ b/internal/patroni/certificates.md @@ -0,0 +1,50 @@ + + +Server +------ + +Patroni uses Python's `ssl` module to protect its REST API, `patroni`. + +- `restapi.cafile` is used for client verification. It is the path to a file of + trusted certificates concatenated in PEM format. + + See https://docs.python.org/3/library/ssl.html#ssl.SSLContext.load_verify_locations + +- `restapi.certfile` is the server certificate. It is the path to a file in PEM + format containing the certificate as well as any number of CA certificates + needed to establish its authenticity. + + See https://docs.python.org/3/library/ssl.html#ssl.SSLContext.load_cert_chain + +- `restapi.keyfile` is the server certificate's private key. This can be omitted + if the contents are included in the certificate file. + + See https://docs.python.org/3/library/ssl.html#combined-key-and-certificate + + +Client +------ + +Patroni uses the `urllib3` module to call the REST API from `patronictl`. That, +in turn, uses Python's `ssl` module for HTTPS. + +- `ctl.cacert` is used for server verification. It is the path to a file of + trusted certificates concatenated in PEM format. + + See https://docs.python.org/3/library/ssl.html#ssl.SSLContext.load_verify_locations + +- `ctl.certfile` is the client certificate. It is the path to a file in PEM + format containing the certificate as well as any number of CA certificates + needed to establish its authenticity. + + See https://urllib3.readthedocs.io/en/stable/reference/urllib3.connection.html + See https://docs.python.org/3/library/ssl.html#ssl.SSLContext.load_cert_chain + +- `ctl.keyfile` is the client certificate's private key. This can be omitted + if the contents are included in the certificate file. + + See https://docs.python.org/3/library/ssl.html#combined-key-and-certificate diff --git a/internal/patroni/certificates_test.go b/internal/patroni/certificates_test.go new file mode 100644 index 0000000000..3073f2247f --- /dev/null +++ b/internal/patroni/certificates_test.go @@ -0,0 +1,50 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package patroni + +import ( + "errors" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" +) + +type funcMarshaler func() ([]byte, error) + +func (f funcMarshaler) MarshalText() ([]byte, error) { return f() } + +func TestCertFile(t *testing.T) { + expected := errors.New("boom") + var short funcMarshaler = func() ([]byte, error) { return []byte(`one`), nil } + var fail funcMarshaler = func() ([]byte, error) { return nil, expected } + + text, err := certFile(short, short, short) + assert.NilError(t, err) + assert.DeepEqual(t, text, []byte(`oneoneone`)) + + text, err = certFile(short, fail, short) + assert.Equal(t, err, expected) + assert.DeepEqual(t, text, []byte(nil)) +} + +func TestInstanceCertificates(t *testing.T) { + certs := new(corev1.Secret) + certs.Name = "some-name" + + projections := instanceCertificates(certs) + + assert.Assert(t, cmp.MarshalMatches(projections, ` +- secret: + items: + - key: patroni.ca-roots + path: ~postgres-operator/patroni.ca-roots + - key: patroni.crt-combined + path: ~postgres-operator/patroni.crt+key + name: some-name + `)) +} diff --git a/internal/patroni/config.go b/internal/patroni/config.go new file mode 100644 index 0000000000..b4d7e54f68 --- /dev/null +++ b/internal/patroni/config.go @@ -0,0 +1,666 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package patroni + +import ( + "fmt" + "path" + "strings" + + corev1 "k8s.io/api/core/v1" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + configDirectory = "/etc/patroni" + configMapFileKey = "patroni.yaml" +) + +const ( + basebackupCreateReplicaMethod = "basebackup" + pgBackRestCreateReplicaMethod = "pgbackrest" +) + +const ( + yamlGeneratedWarning = "" + + "# Generated by postgres-operator. DO NOT EDIT.\n" + + "# Your changes will not be saved.\n" +) + +// quoteShellWord ensures that s is interpreted by a shell as single word. +func quoteShellWord(s string) string { + // https://www.gnu.org/software/bash/manual/html_node/Quoting.html + return `'` + strings.ReplaceAll(s, `'`, `'"'"'`) + `'` +} + +// clusterYAML returns Patroni settings that apply to the entire cluster. +func clusterYAML( + cluster *v1beta1.PostgresCluster, + pgHBAs postgres.HBAs, pgParameters postgres.Parameters, +) (string, error) { + root := map[string]any{ + // The cluster identifier. This value cannot change during the cluster's + // lifetime. + "scope": naming.PatroniScope(cluster), + + // Use Kubernetes Endpoints for the distributed configuration store (DCS). + // These values cannot change during the cluster's lifetime. + // + // NOTE(cbandy): It *might* be possible to *carefully* change the role and + // scope labels, but there is no way to reconfigure all instances at once. + "kubernetes": map[string]any{ + "namespace": cluster.Namespace, + "role_label": naming.LabelRole, + "scope_label": naming.LabelPatroni, + "use_endpoints": true, + + // In addition to "scope_label" above, Patroni will add the following to + // every object it creates. It will also use these as filters when doing + // any lookups. 
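			// For a cluster named "hippo" in namespace "postgres-operator",
			// this "kubernetes" section renders roughly as follows (the label
			// keys come from the naming package; the literal values shown here
			// are illustrative):
			//
			//	kubernetes:
			//	  labels:
			//	    postgres-operator.crunchydata.com/cluster: hippo
			//	  namespace: postgres-operator
			//	  role_label: postgres-operator.crunchydata.com/role
			//	  scope_label: postgres-operator.crunchydata.com/patroni
			//	  use_endpoints: true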
+ "labels": map[string]string{ + naming.LabelCluster: cluster.Name, + }, + }, + + "postgresql": map[string]any{ + // TODO(cbandy): "callbacks" + + // Custom configuration "must exist on all cluster nodes". + // + // TODO(cbandy): I imagine we will always set this to a file we own. At + // the very least, it will start with an "include_dir" directive. + // - https://www.postgresql.org/docs/current/config-setting.html#CONFIG-INCLUDES + //"custom_conf": nil, + + // TODO(cbandy): Should "parameters", "pg_hba", and "pg_ident" be set in + // DCS? If so, are they are automatically regenerated and reloaded? + + // PostgreSQL Auth settings used by Patroni to + // create replication, and pg_rewind accounts + // TODO(tjmoore4): add "superuser" account + "authentication": map[string]any{ + "replication": map[string]any{ + "sslcert": "/tmp/replication/tls.crt", + "sslkey": "/tmp/replication/tls.key", + "sslmode": "verify-ca", + "sslrootcert": "/tmp/replication/ca.crt", + "username": postgres.ReplicationUser, + }, + "rewind": map[string]any{ + "sslcert": "/tmp/replication/tls.crt", + "sslkey": "/tmp/replication/tls.key", + "sslmode": "verify-ca", + "sslrootcert": "/tmp/replication/ca.crt", + "username": postgres.ReplicationUser, + }, + }, + }, + + // NOTE(cbandy): Every Patroni instance is a client of every other Patroni + // instance. TLS and/or authentication settings need to be applied consistently + // across the entire cluster. + + "restapi": map[string]any{ + // Use TLS to encrypt traffic and verify clients. + // NOTE(cbandy): The path package always uses slash separators. + "cafile": path.Join(configDirectory, certAuthorityConfigPath), + "certfile": path.Join(configDirectory, certServerConfigPath), + + // The private key is bundled into "restapi.certfile". + "keyfile": nil, + + // Require clients to present a certificate verified by "restapi.cafile" + // when calling "unsafe" API endpoints. + // - https://github.com/zalando/patroni/blob/v2.0.1/docs/security.rst#protecting-the-rest-api + // + // NOTE(cbandy): We'd prefer "required" here, but Kubernetes HTTPS probes + // offer no way to present client certificates. Perhaps Patroni could change + // to relax the requirement on *just* liveness and readiness? + // - https://issue.k8s.io/92647 + "verify_client": "optional", + + // TODO(cbandy): The next release of Patroni will allow more control over + // the TLS protocols/ciphers. + // Maybe "ciphers": "EECDH+AESGCM+FIPS:EDH+AESGCM+FIPS". Maybe add ":!DHE". + // - https://github.com/zalando/patroni/commit/ba4ab58d4069ee30 + }, + + "ctl": map[string]any{ + // Use TLS to verify the server and present a client certificate. + // NOTE(cbandy): The path package always uses slash separators. + "cacert": path.Join(configDirectory, certAuthorityConfigPath), + "certfile": path.Join(configDirectory, certServerConfigPath), + + // The private key is bundled into "ctl.certfile". + "keyfile": nil, + + // Always verify the server certificate against "ctl.cacert". + "insecure": false, + }, + + "watchdog": map[string]any{ + // Disable leader watchdog device. Kubernetes' liveness probe is a less + // flexible approximation. + "mode": "off", + }, + } + + if !ClusterBootstrapped(cluster) { + // Patroni has not yet bootstrapped. Populate the "bootstrap.dcs" field to + // facilitate it. When Patroni is already bootstrapped, this field is ignored. 
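		// The user-facing source of this configuration is
		// spec.patroni.dynamicConfiguration on the PostgresCluster. A minimal,
		// illustrative manifest fragment (values made up):
		//
		//	spec:
		//	  patroni:
		//	    dynamicConfiguration:
		//	      postgresql:
		//	        parameters:
		//	          max_connections: 200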
+ + var configuration map[string]any + if cluster.Spec.Patroni != nil { + configuration = cluster.Spec.Patroni.DynamicConfiguration + } + + root["bootstrap"] = map[string]any{ + "dcs": DynamicConfiguration(cluster, configuration, pgHBAs, pgParameters), + + // Missing here is "users" which runs *after* "post_bootstrap". It is + // not possible to use roles created by the former in the latter. + // - https://github.com/zalando/patroni/issues/667 + } + } + + b, err := yaml.Marshal(root) + return string(append([]byte(yamlGeneratedWarning), b...)), err +} + +// DynamicConfiguration combines configuration with some PostgreSQL settings +// and returns a value that can be marshaled to JSON. +func DynamicConfiguration( + cluster *v1beta1.PostgresCluster, + configuration map[string]any, + pgHBAs postgres.HBAs, pgParameters postgres.Parameters, +) map[string]any { + // Copy the entire configuration before making any changes. + root := make(map[string]any, len(configuration)) + for k, v := range configuration { + root[k] = v + } + + root["ttl"] = *cluster.Spec.Patroni.LeaderLeaseDurationSeconds + root["loop_wait"] = *cluster.Spec.Patroni.SyncPeriodSeconds + + // Copy the "postgresql" section before making any changes. + postgresql := map[string]any{ + // TODO(cbandy): explain this. requires an archive, perhaps. + "use_slots": false, + } + + // When TDE is configured, override the pg_rewind binary name to point + // to the wrapper script. + if config.FetchKeyCommand(&cluster.Spec) != "" { + postgresql["bin_name"] = map[string]any{ + "pg_rewind": "/tmp/pg_rewind_tde.sh", + } + } + + if section, ok := root["postgresql"].(map[string]any); ok { + for k, v := range section { + postgresql[k] = v + } + } + root["postgresql"] = postgresql + + // Copy the "postgresql.parameters" section over any defaults. + parameters := make(map[string]any) + if pgParameters.Default != nil { + for k, v := range pgParameters.Default.AsMap() { + parameters[k] = v + } + } + if section, ok := postgresql["parameters"].(map[string]any); ok { + for k, v := range section { + parameters[k] = v + } + } + // Override the above with mandatory parameters. + if pgParameters.Mandatory != nil { + for k, v := range pgParameters.Mandatory.AsMap() { + + // This parameter is a comma-separated list. Rather than overwrite the + // user-defined value, we want to combine it with the mandatory one. + // Some libraries belong at specific positions in the list, so figure + // that out as well. + if k == "shared_preload_libraries" { + // Load mandatory libraries ahead of user-defined libraries. + if s, ok := parameters[k].(string); ok && len(s) > 0 { + v = v + "," + s + } + // Load "citus" ahead of any other libraries. + // - https://github.com/citusdata/citus/blob/v12.0.0/src/backend/distributed/shared_library_init.c#L417-L419 + if strings.Contains(v, "citus") { + v = "citus," + v + } + } + + parameters[k] = v + } + } + postgresql["parameters"] = parameters + + // Copy the "postgresql.pg_hba" section after any mandatory values. + hba := make([]string, 0, len(pgHBAs.Mandatory)) + for i := range pgHBAs.Mandatory { + hba = append(hba, pgHBAs.Mandatory[i].String()) + } + if section, ok := postgresql["pg_hba"].([]any); ok { + for i := range section { + // any pg_hba values that are not strings will be skipped + if value, ok := section[i].(string); ok { + hba = append(hba, value) + } + } + } + // When the section is missing or empty, include the recommended defaults. 
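+	// At this point hba holds only the mandatory entries, so an unchanged
+	// length means the spec did not supply any rules of its own.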
+ if len(hba) == len(pgHBAs.Mandatory) { + for i := range pgHBAs.Default { + hba = append(hba, pgHBAs.Default[i].String()) + } + } + postgresql["pg_hba"] = hba + + // Enabling `pg_rewind` allows a former primary to automatically rejoin the + // cluster even if it has commits that were not sent to a replica. In other + // words, this favors availability over consistency. Without it, the former + // primary needs patronictl reinit to rejoin. + // + // Recent versions of `pg_rewind` can run with limited permissions granted + // by Patroni to the user defined in "postgresql.authentication.rewind". + // PostgreSQL v10 and earlier require superuser access over the network. + postgresql["use_pg_rewind"] = cluster.Spec.PostgresVersion > 10 + + if cluster.Spec.Standby != nil && cluster.Spec.Standby.Enabled { + // Copy the "standby_cluster" section before making any changes. + standby := make(map[string]any) + if section, ok := root["standby_cluster"].(map[string]any); ok { + for k, v := range section { + standby[k] = v + } + } + + // Unset any previous value for restore_command - we will set it later if needed + delete(standby, "restore_command") + + // Populate replica creation methods based on options provided in the standby spec: + methods := []string{} + if cluster.Spec.Standby.Host != "" { + standby["host"] = cluster.Spec.Standby.Host + if cluster.Spec.Standby.Port != nil { + standby["port"] = *cluster.Spec.Standby.Port + } + + methods = append([]string{basebackupCreateReplicaMethod}, methods...) + } + + if cluster.Spec.Standby.RepoName != "" { + // Append pgbackrest as the first choice when creating the standby + methods = append([]string{pgBackRestCreateReplicaMethod}, methods...) + + // Populate the standby leader by shipping logs through pgBackRest. + // This also overrides the "restore_command" used by standby replicas. + // - https://www.postgresql.org/docs/current/warm-standby.html + standby["restore_command"] = pgParameters.Mandatory.Value("restore_command") + } + + standby["create_replica_methods"] = methods + root["standby_cluster"] = standby + } + + return root +} + +// instanceEnvironment returns the environment variables needed by Patroni's +// instance container. +func instanceEnvironment( + cluster *v1beta1.PostgresCluster, + clusterPodService *corev1.Service, + leaderService *corev1.Service, + podContainers []corev1.Container, +) []corev1.EnvVar { + var ( + patroniPort = *cluster.Spec.Patroni.Port + postgresPort = *cluster.Spec.Port + podSubdomain = clusterPodService.Name + ) + + // Gather Endpoint ports for any Container ports that match the leader + // Service definition. + ports := []corev1.EndpointPort{} + for _, sp := range leaderService.Spec.Ports { + for i := range podContainers { + for _, cp := range podContainers[i].Ports { + if sp.TargetPort.StrVal == cp.Name { + ports = append(ports, corev1.EndpointPort{ + Name: sp.Name, + Port: cp.ContainerPort, + Protocol: cp.Protocol, + }) + } + } + } + } + portsYAML, _ := yaml.Marshal(ports) + + // NOTE(cbandy): Patroni consumes and then removes environment variables + // starting with "PATRONI_". + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/config.py#L247 + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/postgresql/postmaster.py#L215-L216 + + variables := []corev1.EnvVar{ + // Set "name" to the v1.Pod's name. Required when using Kubernetes for DCS. + // Patroni must be restarted when changing this value. 
+ { + Name: "PATRONI_NAME", + ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{ + APIVersion: "v1", + FieldPath: "metadata.name", + }}, + }, + + // Set "kubernetes.pod_ip" to the v1.Pod's primary IP address. + // Patroni must be restarted when changing this value. + { + Name: "PATRONI_KUBERNETES_POD_IP", + ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{ + APIVersion: "v1", + FieldPath: "status.podIP", + }}, + }, + + // When using Endpoints for DCS, Patroni needs to replicate the leader + // ServicePort definitions. Set "kubernetes.ports" to the YAML of this + // Pod's equivalent EndpointPort definitions. + // + // This is connascent with PATRONI_POSTGRESQL_CONNECT_ADDRESS below. + // Patroni must be restarted when changing this value. + { + Name: "PATRONI_KUBERNETES_PORTS", + Value: string(portsYAML), + }, + + // Set "postgresql.connect_address" using the Pod's stable DNS name. + // PostgreSQL must be restarted when changing this value. + { + Name: "PATRONI_POSTGRESQL_CONNECT_ADDRESS", + Value: fmt.Sprintf("%s.%s:%d", "$(PATRONI_NAME)", podSubdomain, postgresPort), + }, + + // Set "postgresql.listen" using the special address "*" to mean all TCP + // interfaces. When connecting locally over TCP, Patroni will use "localhost". + // + // This is connascent with PATRONI_POSTGRESQL_CONNECT_ADDRESS above. + // PostgreSQL must be restarted when changing this value. + { + Name: "PATRONI_POSTGRESQL_LISTEN", + Value: fmt.Sprintf("*:%d", postgresPort), + }, + + // Set "postgresql.config_dir" to PostgreSQL's $PGDATA directory. + // Patroni must be restarted when changing this value. + { + Name: "PATRONI_POSTGRESQL_CONFIG_DIR", + Value: postgres.ConfigDirectory(cluster), + }, + + // Set "postgresql.data_dir" to PostgreSQL's "data_directory". + // Patroni must be restarted when changing this value. + { + Name: "PATRONI_POSTGRESQL_DATA_DIR", + Value: postgres.DataDirectory(cluster), + }, + + // Set "restapi.connect_address" using the Pod's stable DNS name. + // Patroni must be reloaded when changing this value. + { + Name: "PATRONI_RESTAPI_CONNECT_ADDRESS", + Value: fmt.Sprintf("%s.%s:%d", "$(PATRONI_NAME)", podSubdomain, patroniPort), + }, + + // Set "restapi.listen" using the special address "*" to mean all TCP interfaces. + // This is connascent with PATRONI_RESTAPI_CONNECT_ADDRESS above. + // Patroni must be reloaded when changing this value. + { + Name: "PATRONI_RESTAPI_LISTEN", + Value: fmt.Sprintf("*:%d", patroniPort), + }, + + // The Patroni client `patronictl` looks here for its configuration file(s). + { + Name: "PATRONICTL_CONFIG_FILE", + Value: configDirectory, + }, + } + + return variables +} + +// instanceConfigFiles returns projections of Patroni's configuration files +// to include in the instance configuration volume. +func instanceConfigFiles(cluster, instance *corev1.ConfigMap) []corev1.VolumeProjection { + return []corev1.VolumeProjection{ + { + ConfigMap: &corev1.ConfigMapProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: cluster.Name, + }, + Items: []corev1.KeyToPath{{ + Key: configMapFileKey, + Path: "~postgres-operator_cluster.yaml", + }}, + }, + }, + { + ConfigMap: &corev1.ConfigMapProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: instance.Name, + }, + Items: []corev1.KeyToPath{{ + Key: configMapFileKey, + Path: "~postgres-operator_instance.yaml", + }}, + }, + }, + } +} + +// instanceYAML returns Patroni settings that apply to instance. 
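+//
+// The result is mounted by instanceConfigFiles as "~postgres-operator_instance.yaml",
+// which sorts after (and therefore takes precedence over) the cluster-wide
+// "~postgres-operator_cluster.yaml".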
+func instanceYAML( + cluster *v1beta1.PostgresCluster, instance *v1beta1.PostgresInstanceSetSpec, + pgbackrestReplicaCreateCommand []string, +) (string, error) { + root := map[string]any{ + // Missing here is "name" which cannot be known until the instance Pod is + // created. That value should be injected using the downward API and the + // PATRONI_NAME environment variable. + + "kubernetes": map[string]any{ + // Missing here is "pod_ip" which cannot be known until the instance Pod is + // created. That value should be injected using the downward API and the + // PATRONI_KUBERNETES_POD_IP environment variable. + + // Missing here is "ports" which is is connascent with "postgresql.connect_address". + // See the PATRONI_KUBERNETES_PORTS env variable. + }, + + "restapi": map[string]any{ + // Missing here is "connect_address" which cannot be known until the + // instance Pod is created. That value should be injected using the downward + // API and the PATRONI_RESTAPI_CONNECT_ADDRESS environment variable. + + // Missing here is "listen" which is connascent with "connect_address". + // See the PATRONI_RESTAPI_LISTEN environment variable. + }, + + "tags": map[string]any{ + // TODO(cbandy): "nofailover" + // TODO(cbandy): "nosync" + }, + } + + postgresql := map[string]any{ + // TODO(cbandy): "bin_dir" + + // Missing here is "connect_address" which cannot be known until the + // instance Pod is created. That value should be injected using the downward + // API and the PATRONI_POSTGRESQL_CONNECT_ADDRESS environment variable. + + // Missing here is "listen" which is connascent with "connect_address". + // See the PATRONI_POSTGRESQL_LISTEN environment variable. + + // During startup, Patroni checks that this path is writable whether we use passwords or not. + // - https://github.com/zalando/patroni/issues/1888 + "pgpass": "/tmp/.pgpass", + + // Prefer to use UNIX domain sockets for local connections. If the PostgreSQL + // parameter "unix_socket_directories" is set, Patroni will connect using one + // of those directories. Otherwise, it will use the client (libpq) default. + "use_unix_socket": true, + } + root["postgresql"] = postgresql + + // The "basebackup" replica method is configured differently from others. + // Patroni prepends "--" before it calls `pg_basebackup`. + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/postgresql/bootstrap.py#L45 + postgresql["basebackup"] = []string{ + // NOTE(cbandy): The "--waldir" option was introduced in PostgreSQL v10. + "waldir=" + postgres.WALDirectory(cluster, instance), + } + methods := []string{"basebackup"} + + // Prefer a pgBackRest method when it is available, and fallback to other + // methods when it fails. + if command := pgbackrestReplicaCreateCommand; len(command) > 0 { + + // Regardless of the "keep_data" setting below, Patroni deletes the + // data directory when all methods fail. pgBackRest will not restore + // when the data directory is missing, so create it before running the + // command. PostgreSQL requires that the directory is writable by only + // itself. + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/ha.py#L249 + // - https://github.com/pgbackrest/pgbackrest/issues/1445 + // - https://git.postgresql.org/gitweb/?p=postgresql.git;f=src/backend/utils/init/miscinit.c;hb=REL_13_0#l319 + // + // NOTE(cbandy): The "PATRONI_POSTGRESQL_DATA_DIR" environment variable + // is defined in this package, but it is removed by Patroni at runtime. 
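+		//
+		// For example, a replica-create command of ["some", "backrest", "cmd"]
+		// ends up as:
+		//   'bash' '-ceu' '--' 'install --directory --mode=0700 "${PGDATA?}" && exec "$@"' '-' 'some' 'backrest' 'cmd'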
+ command = append([]string{ + "bash", "-ceu", "--", + `install --directory --mode=0700 "${PGDATA?}" && exec "$@"`, + "-", + }, command...) + + quoted := make([]string, len(command)) + for i := range command { + quoted[i] = quoteShellWord(command[i]) + } + postgresql[pgBackRestCreateReplicaMethod] = map[string]any{ + "command": strings.Join(quoted, " "), + "keep_data": true, + "no_master": true, + "no_params": true, + } + methods = append([]string{pgBackRestCreateReplicaMethod}, methods...) + } + + // NOTE(cbandy): Is there any chance a user might want to specify their own + // method? This is a list and cannot be merged. + postgresql["create_replica_methods"] = methods + + if !ClusterBootstrapped(cluster) { + isRestore := (cluster.Status.PGBackRest != nil && cluster.Status.PGBackRest.Restore != nil) + isDataSource := (cluster.Spec.DataSource != nil && cluster.Spec.DataSource.Volumes != nil && + cluster.Spec.DataSource.Volumes.PGDataVolume != nil && + cluster.Spec.DataSource.Volumes.PGDataVolume.Directory != "") + // If the cluster is being bootstrapped using existing volumes, or if the cluster is being + // bootstrapped following a restore, then use the "existing" + // bootstrap method. Otherwise use "initdb". + if isRestore || isDataSource { + data_dir := postgres.DataDirectory(cluster) + root["bootstrap"] = map[string]any{ + "method": "existing", + "existing": map[string]any{ + "command": fmt.Sprintf(`mv %q %q`, data_dir+"_bootstrap", data_dir), + "no_params": "true", + }, + } + } else { + + initdb := []string{ + // Enable checksums on data pages to help detect corruption of + // storage that would otherwise be silent. This also enables + // "wal_log_hints" which is a prerequisite for using `pg_rewind`. + // - https://www.postgresql.org/docs/current/app-initdb.html + // - https://www.postgresql.org/docs/current/app-pgrewind.html + // - https://www.postgresql.org/docs/current/runtime-config-wal.html + // + // The benefits of checksums in the Kubernetes storage landscape + // outweigh their negligible overhead, and enabling them later + // is costly. (Every file of the cluster must be rewritten.) + // PostgreSQL v12 introduced the `pg_checksums` utility which + // can cheaply disable them while PostgreSQL is stopped. + // - https://www.postgresql.org/docs/current/app-pgchecksums.html + "data-checksums", + "encoding=UTF8", + + // NOTE(cbandy): The "--waldir" option was introduced in PostgreSQL v10. + "waldir=" + postgres.WALDirectory(cluster, instance), + } + + // Append the encryption key command, if provided. + if ekc := config.FetchKeyCommand(&cluster.Spec); ekc != "" { + initdb = append(initdb, fmt.Sprintf("encryption-key-command=%s", ekc)) + } + + // Populate some "bootstrap" fields to initialize the cluster. + // When Patroni is already bootstrapped, this section is ignored. + // - https://github.com/zalando/patroni/blob/v2.0.2/docs/SETTINGS.rst#bootstrap-configuration + // - https://github.com/zalando/patroni/blob/v2.0.2/docs/replica_bootstrap.rst#bootstrap + root["bootstrap"] = map[string]any{ + "method": "initdb", + + // The "initdb" bootstrap method is configured differently from others. + // Patroni prepends "--" before it calls `initdb`. + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/postgresql/bootstrap.py#L45 + "initdb": initdb, + } + } + } + + b, err := yaml.Marshal(root) + return string(append([]byte(yamlGeneratedWarning), b...)), err +} + +// probeTiming returns a Probe with thresholds and timeouts set according to spec. 
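+//
+// For example, the default LeaderLeaseDurationSeconds (30) and
+// SyncPeriodSeconds (10) produce a probe that runs every 10 seconds, times
+// out after 5 seconds, and fails after 3 consecutive failures.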
+func probeTiming(spec *v1beta1.PatroniSpec) *corev1.Probe { + // "Probes should be configured in such a way that they start failing about + // time when the leader key is expiring." + // - https://github.com/zalando/patroni/blob/v2.0.1/docs/rest_api.rst + // - https://github.com/zalando/patroni/blob/v2.0.1/docs/watchdog.rst + + // TODO(cbandy): When the probe times out, failure triggers at + // (FailureThreshold × PeriodSeconds + TimeoutSeconds) + probe := corev1.Probe{ + TimeoutSeconds: *spec.SyncPeriodSeconds / 2, + PeriodSeconds: *spec.SyncPeriodSeconds, + SuccessThreshold: 1, + FailureThreshold: *spec.LeaderLeaseDurationSeconds / *spec.SyncPeriodSeconds, + } + + if probe.TimeoutSeconds < 1 { + probe.TimeoutSeconds = 1 + } + if probe.FailureThreshold < 1 { + probe.FailureThreshold = 1 + } + + return &probe +} diff --git a/internal/patroni/config.md b/internal/patroni/config.md new file mode 100644 index 0000000000..18d28d8a4e --- /dev/null +++ b/internal/patroni/config.md @@ -0,0 +1,256 @@ + + +Patroni configuration is complicated. The daemon `patroni` and the client +`patronictl` are configured slightly differently. Some settings are the same for +the whole cluster, some are different on each instance. + +Some things are stored in Kubernetes (our "DCS") and automatically applied by +Patroni every HA reconciliation. Everything else requires a restart or reload +of Patroni to be applied. + +Configuration files take precedence over DCS contents. +Environment variables take precedence over configuration files. + +`patronictl` uses both the DCS and the Patroni API, so it must be configured for both. +`patroni` takes one required argument, the path to its configuration file(s). + +When the configuration path is a directory, the YAML files it contains are +loaded in alphabetical order. Mappings are merged recursively such that later +files (and deeper mappings) take precedence. A key with an undefined or `null` +value removes that key/value from the merged result. (Don't accidentally +generate `null` sections!) Sequences are not merged; a later value overwrites +an earlier one. + +--- + +Given the above, we provide to the user two ways to configure Patroni and thus +PostgreSQL: YAML files and DCS. + +Configuration that applies to the whole cluster is in PostgresCluster.Spec and +we copy it into DCS. This allows us to add rules to `pg_hba` and `pg_ident` +for replication and other service accounts. These settings are automatically +applied by Patroni. + +Configuration that applies to an individual instance will be in ConfigMaps and +Secrets that get mounted as files into `/etc/patroni`. The user can effectively +use these cluster-wide by referencing the same objects for every instance. +These settings take effect after a Patroni reload. + +We will also configure Patroni using YAML files mounted into `/etc/patroni`. To +give these high precedence, they are last in the projected volume and named to +sort last alphabetically. + +``` +$ ls -1dF /etc/patroni/* /etc/patroni/*/* +/etc/patroni/~~jailbreak.yaml +/etc/patroni/other.yaml +/etc/patroni/~postgres-operator/ +/etc/patroni/~postgres-operator/stuff.txt +/etc/patroni/~postgres-operator_x.yaml +/etc/patroni/some.yaml + +$ python3 +>>> import os +>>> sorted(os.listdir('/etc/patroni')) +['other.yaml', 'some.yaml', '~postgres-operator', '~postgres-operator_x.yaml', '~~jailbreak.yaml'] +``` + +- `/etc/patroni`
+ Use this directory to store Patroni configuration. Files with YAML extensions + are loaded in alphabetical order. + +- `/etc/patroni/~postgres-operator/*`
+ Use this subdirectory to store things like TLS certificates and keys. Files in + subdirectories are not loaded automatically, but avoid YAML extensions just in + case. + +- ConfigMap `{cluster}-config`, Key `patroni.yaml` → + `/etc/patroni/~postgres-operator_cluster.yaml` + +- ConfigMap `{instance}-config`, Key `patroni.yaml` → + `/etc/patroni/~postgres-operator_instance.yaml` + + + + +- https://github.com/zalando/patroni/blob/v2.0.1/docs/dynamic_configuration.rst +- https://github.com/zalando/patroni/blob/v2.0.1/docs/SETTINGS.rst +- https://github.com/zalando/patroni/blob/v2.0.1/docs/ENVIRONMENT.rst + +TODO: document PostgreSQL parameters separately... + +# Client and Daemon configuration + +| Environment | YAML | DCS | Mutable | C/I | C/D | . | +|-------------|------|-----|---------|-----|-----|---| +| PATRONI_CONFIGURATION | - | - | - | - | both | All configuration as a single YAML document. No files nor other environment are considered. +| PATRONI_SCOPE | scope | No | immutable | cluster | both | Cluster identifier. +| PATRONI_NAME | name | No | immutable | instance | patroni | Instance identifier. Must be Pod.Name for Kubernetes DCS. +|| +| PATRONI_LOG_LEVEL | - | - | - | - | patronictl | Logging level. (default: WARNING) +| PATRONI_LOG_LEVEL | log.level | No | mutable | either | patroni | Logging level. (default: INFO) +| PATRONI_LOG_TRACEBACK_LEVEL | log.traceback_level | No | mutable | either | patroni | Logging level that includes tracebacks. (default: ERROR) +| PATRONI_LOG_FORMAT | log.format | No | mutable | either | patroni | Format of log entries. +| PATRONI_LOG_DATEFORMAT | log.dateformat | No | mutable | either | patroni | Format of log entry timestamps. +| PATRONI_LOG_MAX_QUEUE_SIZE | log.max_queue_size | No | mutable | either | patroni | +| PATRONI_LOG_DIR | log.dir | No | mutable | either | patroni | Directory for log files. +| PATRONI_LOG_FILE_SIZE | log.file_size | No | mutable | either | patroni | Size of log file (in bytes) that triggers rotation. (default: 25MB) +| PATRONI_LOG_FILE_NUM | log.file_num | No | mutable | either | patroni | Number of rotated log files to retain. (default: 4) +| PATRONI_LOG_LOGGERS | log.loggers | No | mutable | either | patroni | Mapping of log levels per Python module. (Environment is YAML.) +|| +| PATRONI_RESTAPI_LISTEN | restapi.listen | No | mutable | either | patroni | Address and port on which to bind. +| PATRONI_RESTAPI_CONNECT_ADDRESS | restapi.connect_address | No | mutable | instance | patroni | How to connect to Patroni from outside the Pod. +| PATRONI_RESTAPI_CERTFILE | restapi.certfile | No | mutable | either | both | Path to the server certificate. Set this to enable TLS. +| PATRONI_RESTAPI_KEYFILE | restapi.keyfile | No | mutable | either | both | Path to the server certificate key. +| PATRONI_RESTAPI_CAFILE | restapi.cafile | No | mutable | either | both | Path to the client certificate authority. +| PATRONI_RESTAPI_VERIFY_CLIENT | restapi.verify_client | No | mutable | either | patroni | Whether or not to verify client certificates. 
(default: none) +| PATRONI_RESTAPI_USERNAME | restapi.authentication.username | No | mutable | either | both | HTTP Basic Authentication for "unsafe" endpoints: DELETE, PATCH, POST, PUT +| PATRONI_RESTAPI_PASSWORD | restapi.authentication.password | No | mutable | either | both | HTTP Basic Authentication for "unsafe" endpoints: DELETE, PATCH, POST, PUT +| PATRONI_RESTAPI_HTTP_EXTRA_HEADERS | restapi.http_extra_headers | No | mutable | either | patroni | Additional headers for HTTP responses. +| PATRONI_RESTAPI_HTTPS_EXTRA_HEADERS | restapi.https_extra_headers | No | mutable | either | patroni | Additional headers for HTTP responses over TLS. +|| +| PATRONICTL_CONFIG_FILE | - | - | - | - | patronictl | Path to the config file. (default: ~/.config/patroni/patronictl.yaml) +| PATRONI_CTL_INSECURE | ctl.insecure | No | mutable | either | patronictl | Whether or not to verify the server certificate. +| PATRONI_CTL_CACERT | ctl.cacert | No | mutable | either | patronictl | Path to the server certificate authority. (default: restapi.cafile) +| PATRONI_CTL_CERTFILE | ctl.certfile | No | mutable | either | patronictl | Path to the client certificate. (default: restapi.certfile) +| PATRONI_CTL_KEYFILE | ctl.keyfile | No | mutable | either | patronictl | Path to the client certificate key. (default: restapi.keyfile) +|| +| PATRONI_KUBERNETES_BYPASS_API_SERVICE | kubernetes.bypass_api_service | No | restart | either | both | Resolve the IPs behind the service periodically and use them directly. +| PATRONI_KUBERNETES_USE_ENDPOINTS | kubernetes.use_endpoints | No | immutable | cluster | both | Elect and store state using Endpoints (instead of ConfigMap). +| PATRONI_KUBERNETES_PORTS | kubernetes.ports | No | restart | either | both | When using Endpoints, port details need to match the leader Service. +| PATRONI_KUBERNETES_LABELS | kubernetes.labels | No | immutable | cluster | both | Used to find objects of the cluster. Patroni writes them on things it creates. +| PATRONI_KUBERNETES_ROLE_LABEL | kubernetes.role_label | No | immutable | cluster | both | Name of the label containing "master", "replica", etc. +| PATRONI_KUBERNETES_SCOPE_LABEL | kubernetes.scope_label | No | immutable | cluster | both | Name of the label containing cluster identifier. +| PATRONI_KUBERNETES_NAMESPACE | kubernetes.namespace | No | immutable | cluster | both | +| PATRONI_KUBERNETES_POD_IP | kubernetes.pod_ip | No | immutable | instance | both | +|| +| - | watchdog.mode | Yes¹ | mutable | either | patroni | (default: automatic) +| - | watchdog.device | Yes¹ | mutable | either | patroni | Path to watchdog device. (default: /dev/watchdog) +| - | watchdog.safety_margin | Yes¹ | mutable | either | patroni | (default: 5) + +¹ This section must be entirely in DCS or entirely in YAML. + + +# PostgreSQL and Failover configuration + +Used only by `patroni`, not `patronictl`. + +| Environment | YAML | DCS | Mutable | C/I | . | +|-------------|------|-----|---------|-----|---| +| - | ttl | Only | mutable | cluster | TTL of the leader lock in seconds. (default: 30) +| - | loop_wait | Only | mutable | cluster | Seconds between HA reconciliations. (default: 10) +| - | retry_timeout | Only | mutable | cluster | Timeout for DCS and PostgreSQL operations in seconds. (default: 10) + +There is an implicit relationship between `ttl`, `loop_wait`, and `retry_timeout`. 
+According to https://github.com/zalando/patroni/issues/1579#issuecomment-641830296, +`ttl` should be greater than the maximum time it may take for a single +synchronization which is `loop_wait` plus two `retry_timeout`. That is, +`ttl > loop_wait + retry_timeout + retry_timeout` because immediately after +acquiring the leader lock, the Patroni leader: + + 1. Sleeps until the next scheduled sync (at most `loop_wait`) + 2. Wakes and tries to read from DCS (at most `retry_timeout`) + 3. Decides to release or retain the lock + 4. Then tries to write that to DCS (at most `retry_timeout`) + +| Environment | YAML | DCS | Mutable | C/I | . | +|-------------|------|-----|---------|-----|---| +|||||| https://github.com/zalando/patroni/blob/v2.0.1/docs/replication_modes.rst +| - | maximum_lag_on_failover | Only | mutable | cluster | Bytes behind which a replica may not become leader. (default: 1MB) +| - | check_timeline | Only | mutable | cluster | Whether or not a replica on an older timeline may become leader. (default: false) +| - | max_timelines_history | Only | mutable | cluster | (default: 0) +| - | synchronous_mode | Only | mutable | cluster | (default: false) +| - | synchronous_mode_strict | Only | mutable | cluster | (default: false) +| - | synchronous_node_count | Only | mutable | cluster | (default: 1) +| - | master_stop_timeout | Yes | mutable | cluster | (default: 0) +| - | master_start_timeout | Yes | mutable | cluster | (default: 300) +|| +|||||| Setting `host`, `port`, or `restore_command` enables standby behavior. +| - | standby_cluster.create_replica_methods | Only | immutable | cluster | List of methods to use when creating a standby leader. See `postgresql.create_replica_methods`. (default: basebackup) +| - | standby_cluster.host | Only | immutable | cluster | Address to dial for streaming replication. +| - | standby_cluster.port | Only | immutable | cluster | +| - | standby_cluster.primary_slot_name | Only | immutable | cluster | +| - | standby_cluster.restore_command | Only | immutable | cluster | Override "postgresql.parameters.restore_command" on leader and replicas. +| - | standby_cluster.archive_cleanup_command | Only | immutable | cluster | +| - | standby_cluster.recovery_min_apply_delay | Only | immutable | cluster | +|| +| - | tags.nofailover | No | mutable | instance | Whether or not this instance can be leader. (default: false) +| - | tags.nosync | No | mutable | instance | Whether or not this instance can be synchronous replica. (default: false) +| - | tags.clonefrom | No | mutable | instance | Whether or not this instance is preferred source for pg_basebackup. (default: false) +| - | tags.noloadbalance | No | mutable | instance | Whether or not `/replica` endpoint ever returns success. (default: false) +| - | tags.replicatefrom | No | mutable | instance | The address of another replica for cascading replication. +|| +| PATRONI_POSTGRESQL_LISTEN | postgresql.listen | No | ? | either | Addresses and port on which to bind. Patroni uses the first address for local connections. +| PATRONI_POSTGRESQL_CONNECT_ADDRESS | postgresql.connect_address | No | ? | instance | How to connect to PostgreSQL from outside the Pod. +| - | postgresql.use_unix_socket | Yes | mutable | either | Prefer to use sockets. (default: false) +| PATRONI_POSTGRESQL_DATA_DIR | postgresql.data_dir | No | ? | either | Location of the PostgreSQL data directory. +| PATRONI_POSTGRESQL_CONFIG_DIR | postgresql.config_dir | Yes | ? 
| either | Location of the writable PostgreSQL config directory, defaults to data directory. +| PATRONI_POSTGRESQL_BIN_DIR | postgresql.bin_dir | Yes | ? | either | Location of the PostgreSQL binaries. Empty means use PATH. +| PATRONI_POSTGRESQL_PGPASS | postgresql.pgpass | No | mutable | either | Location of the writable password file. +| - | postgresql.recovery_conf | Yes | mutable | either | Mapping of additional settings written to recovery.conf of follower. (replica?) +| - | postgresql.custom_conf | Yes | mutable | cluster | Path to a custom configuration file instead of `postgresql.base.conf`. +| - | postgresql.parameters | Yes | mutable | either | PostgreSQL parameters. +| - | postgresql.pg_hba | Yes | mutable | either | The entirety of pg_hba.conf as lines. +| - | postgresql.pg_ident | Yes | mutable | either | The entirety of pg_ident.conf as lines. +| - | postgresql.pg_ctl_timeout | Yes | mutable | either | Timeout when performing start, stop, or restart. (default: 60s) +| - | postgresql.use_pg_rewind | Yes | mutable | either | Whether or not to use pg_rewind when a former leader rejoins the cluster. (default: false) +| - | postgresql.use_slots | Only | mutable | either | Whether or not to use replication slots. (default: true) +|| +| - | postgresql.remove_data_directory_on_rewind_failure | Yes | mutable | either | +| - | postgresql.remove_data_directory_on_diverged_timelines | Yes | mutable | either | +|| +| - | postgresql.callbacks.on_reload | Yes¹ | mutable | either | Command to execute when (before? after?) (Patroni? PostgreSQL?) configuration reloads. +| - | postgresql.callbacks.on_restart | Yes¹ | mutable | either | Command to execute when (before? after?) PostgreSQL restarts. +| - | postgresql.callbacks.on_role_change | Yes¹ | mutable | either | Command to execute when (before? after?) the instance is promoted or demoted. +| - | postgresql.callbacks.on_start | Yes¹ | mutable | either | Command to execute when (before? after?) PostgreSQL starts. +| - | postgresql.callbacks.on_stop | Yes¹ | mutable | either | Command to execute when (before? after?) PostgreSQL stops. +|| +|||||| https://github.com/zalando/patroni/blob/v2.0.1/docs/replica_bootstrap.rst#building-replicas +| - | postgresql.create_replica_methods | Yes | mutable | either | List of methods to use when creating a replica. (default: basebackup) +| - | postgresql.basebackup | Yes | mutable | either | List of arguments to pass to pg_basebackup when using the `basebackup` replica method. +| - | postgresql.{method}.command | Yes¹ | mutable | either | Command to execute for this replica method. +| - | postgresql.{method}.keep_data | Yes¹ | mutable | either | Whether or not Patroni should empty the data directory before. (default: false) +| - | postgresql.{method}.no_master | Yes¹ | mutable | either | Whether or not Patroni can call this method when no instances are running. (default: false) +| - | postgresql.{method}.no_params | Yes¹ | mutable | either | Whether or not Patroni should pass extra arguments to the command. (default: false) +|| +|||||| https://github.com/zalando/patroni/blob/v2.0.1/docs/replica_bootstrap.rst#bootstrap +| - | bootstrap.method | No | immutable | cluster | Method to use when initializing a new cluster. (default: initdb) +| - | bootstrap.initdb | No | immutable | cluster | List of arguments to pass to initdb when using the `initdb` bootstrap method. +| - | bootstrap.{method}.command | No | immutable | cluster | Command to execute for this bootstrap method. 
+| - | bootstrap.{method}.no_params | No | immutable | cluster | Whether or not Patroni should pass extra arguments to the command. (default: false) +| - | bootstrap.{method}.recovery_conf | No | immutable | cluster | Mapping of recovery settings. Before PostgreSQL 12, these go into a special file. +| - | bootstrap.{method}.keep_existing_recovery_conf | No | immutable | cluster | Whether or not Patroni should remove signal files. +|| +| - | bootstrap.dcs | No | immutable | cluster | Mapping to load into DCS when initializing a new cluster. +| - | bootstrap.pg_hba | No | immutable | cluster | Lines of HBA to use when no `postgresql.pg_hba` nor `postgresql.parameters.hba_file`. +| - | bootstrap.post_bootstrap | No | immutable | cluster | Command to execute after PostgreSQL is initialized and running but before "users" below. (string) +| - | bootstrap.users.{username}.options | No | immutable | cluster | List of options for `CREATE ROLE` SQL. +| - | bootstrap.users.{username}.password | No | immutable | cluster | Password for the role. (optional) +|| +| PATRONI_SUPERUSER_USERNAME | postgresql.authentication.superuser.username | No | immutable | cluster | Used during initdb and later to connect. (optional) +| PATRONI_SUPERUSER_PASSWORD | postgresql.authentication.superuser.password | No | immutable | cluster | Used during initdb and later to connect. (optional) +| PATRONI_SUPERUSER_SSLMODE | postgresql.authentication.superuser.sslmode | No | mutable | either | +| PATRONI_SUPERUSER_SSLCERT | postgresql.authentication.superuser.sslcert | No | mutable | either | Path to the client certificate. +| PATRONI_SUPERUSER_SSLKEY | postgresql.authentication.superuser.sslkey | No | mutable | either | Path to the client certificate key. +| PATRONI_SUPERUSER_SSLPASSWORD | postgresql.authentication.superuser.sslpassword | No | mutable | either | Password for the client certificate key. +| PATRONI_SUPERUSER_SSLROOTCERT | postgresql.authentication.superuser.sslrootcert | No | mutable | either | Path to the server certificate authority. +| PATRONI_SUPERUSER_SSLCRL | postgresql.authentication.superuser.sslcrl | No | mutable | either | Path to the server CRL. +| PATRONI_SUPERUSER_CHANNEL_BINDING | postgresql.authentication.superuser.channel_binding | No | mutable | either | Applicable when using SCRAM auth over SSL. +|| +| PATRONI_REPLICATION_USERNAME | postgresql.authentication.replication.username | No | immutable | cluster | Used during bootstrap and later to connect. +| PATRONI_REPLICATION_PASSWORD | postgresql.authentication.replication.password | No | immutable | cluster | Used during bootstrap and later to connect. (optional) +| PATRONI_REPLICATION_SSLMODE | postgresql.authentication.replication.sslmode | No | mutable | either | +| PATRONI_REPLICATION_SSLCERT | postgresql.authentication.replication.sslcert | No | mutable | either | Path to the client certificate. +| PATRONI_REPLICATION_SSLKEY | postgresql.authentication.replication.sslkey | No | mutable | either | Path to the client certificate key. +| PATRONI_REPLICATION_SSLPASSWORD | postgresql.authentication.replication.sslpassword | No | mutable | either | Password for the client certificate key. +| PATRONI_REPLICATION_SSLROOTCERT | postgresql.authentication.replication.sslrootcert | No | mutable | either | Path to the server certificate authority. +| PATRONI_REPLICATION_SSLCRL | postgresql.authentication.replication.sslcrl | No | mutable | either | Path to the server CRL. 
+| PATRONI_REPLICATION_CHANNEL_BINDING | postgresql.authentication.replication.channel_binding | No | mutable | either | Applicable when using SCRAM auth over SSL. +|| +| PATRONI_REWIND_* | postgresql.authentication.rewind.* | No | " | " | Same as above. (Patroni uses superuser when this is not set.) + +¹ This section must be entirely in DCS or entirely in YAML. diff --git a/internal/patroni/config_test.go b/internal/patroni/config_test.go new file mode 100644 index 0000000000..a45568df8b --- /dev/null +++ b/internal/patroni/config_test.go @@ -0,0 +1,1094 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package patroni + +import ( + "os" + "os/exec" + "path/filepath" + "strings" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestClusterYAML(t *testing.T) { + t.Parallel() + + t.Run("PG version defaulted", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Default() + cluster.Namespace = "some-namespace" + cluster.Name = "cluster-name" + + data, err := clusterYAML(cluster, postgres.HBAs{}, postgres.Parameters{}) + assert.NilError(t, err) + assert.Equal(t, data, strings.TrimSpace(` +# Generated by postgres-operator. DO NOT EDIT. +# Your changes will not be saved. +bootstrap: + dcs: + loop_wait: 10 + postgresql: + parameters: {} + pg_hba: [] + use_pg_rewind: false + use_slots: false + ttl: 30 +ctl: + cacert: /etc/patroni/~postgres-operator/patroni.ca-roots + certfile: /etc/patroni/~postgres-operator/patroni.crt+key + insecure: false + keyfile: null +kubernetes: + labels: + postgres-operator.crunchydata.com/cluster: cluster-name + namespace: some-namespace + role_label: postgres-operator.crunchydata.com/role + scope_label: postgres-operator.crunchydata.com/patroni + use_endpoints: true +postgresql: + authentication: + replication: + sslcert: /tmp/replication/tls.crt + sslkey: /tmp/replication/tls.key + sslmode: verify-ca + sslrootcert: /tmp/replication/ca.crt + username: _crunchyrepl + rewind: + sslcert: /tmp/replication/tls.crt + sslkey: /tmp/replication/tls.key + sslmode: verify-ca + sslrootcert: /tmp/replication/ca.crt + username: _crunchyrepl +restapi: + cafile: /etc/patroni/~postgres-operator/patroni.ca-roots + certfile: /etc/patroni/~postgres-operator/patroni.crt+key + keyfile: null + verify_client: optional +scope: cluster-name-ha +watchdog: + mode: "off" + `)+"\n") + }) + + t.Run(">PG10", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Default() + cluster.Namespace = "some-namespace" + cluster.Name = "cluster-name" + cluster.Spec.PostgresVersion = 14 + + data, err := clusterYAML(cluster, postgres.HBAs{}, postgres.Parameters{}) + assert.NilError(t, err) + assert.Equal(t, data, strings.TrimSpace(` +# Generated by postgres-operator. DO NOT EDIT. +# Your changes will not be saved. 
+bootstrap: + dcs: + loop_wait: 10 + postgresql: + parameters: {} + pg_hba: [] + use_pg_rewind: true + use_slots: false + ttl: 30 +ctl: + cacert: /etc/patroni/~postgres-operator/patroni.ca-roots + certfile: /etc/patroni/~postgres-operator/patroni.crt+key + insecure: false + keyfile: null +kubernetes: + labels: + postgres-operator.crunchydata.com/cluster: cluster-name + namespace: some-namespace + role_label: postgres-operator.crunchydata.com/role + scope_label: postgres-operator.crunchydata.com/patroni + use_endpoints: true +postgresql: + authentication: + replication: + sslcert: /tmp/replication/tls.crt + sslkey: /tmp/replication/tls.key + sslmode: verify-ca + sslrootcert: /tmp/replication/ca.crt + username: _crunchyrepl + rewind: + sslcert: /tmp/replication/tls.crt + sslkey: /tmp/replication/tls.key + sslmode: verify-ca + sslrootcert: /tmp/replication/ca.crt + username: _crunchyrepl +restapi: + cafile: /etc/patroni/~postgres-operator/patroni.ca-roots + certfile: /etc/patroni/~postgres-operator/patroni.crt+key + keyfile: null + verify_client: optional +scope: cluster-name-ha +watchdog: + mode: "off" + `)+"\n") + }) +} + +func TestDynamicConfiguration(t *testing.T) { + t.Parallel() + + parameters := func(in map[string]string) *postgres.ParameterSet { + out := postgres.NewParameterSet() + for k, v := range in { + out.Add(k, v) + } + return out + } + + for _, tt := range []struct { + name string + cluster *v1beta1.PostgresCluster + input map[string]any + hbas postgres.HBAs + params postgres.Parameters + expected map[string]any + }{ + { + name: "empty is valid", + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "top-level passes through", + input: map[string]any{ + "retry_timeout": 5, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "retry_timeout": 5, + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "top-level: spec overrides input", + cluster: &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + Patroni: &v1beta1.PatroniSpec{ + LeaderLeaseDurationSeconds: initialize.Int32(99), + SyncPeriodSeconds: initialize.Int32(8), + }, + }, + }, + input: map[string]any{ + "loop_wait": 3, + "ttl": "nope", + }, + expected: map[string]any{ + "loop_wait": int32(8), + "ttl": int32(99), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql: wrong-type is ignored", + input: map[string]any{ + "postgresql": true, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql: defaults and overrides", + input: map[string]any{ + "postgresql": map[string]any{ + "use_pg_rewind": "overridden", + "use_slots": "input", + }, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": "input", + }, + }, + }, + { + name: "postgresql.parameters: wrong-type is ignored", + input: map[string]any{ + "postgresql": map[string]any{ + 
"parameters": true, + }, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.parameters: input passes through", + input: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "something": "str", + "another": 5, + }, + }, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{ + "something": "str", + "another": 5, + }, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.parameters: input overrides default", + input: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "something": "str", + "another": 5, + }, + }, + }, + params: postgres.Parameters{ + Default: parameters(map[string]string{ + "something": "overridden", + "unrelated": "default", + }), + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{ + "something": "str", + "another": 5, + "unrelated": "default", + }, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.parameters: mandatory overrides input", + input: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "something": "str", + "another": 5, + }, + }, + }, + params: postgres.Parameters{ + Mandatory: parameters(map[string]string{ + "something": "overrides", + "unrelated": "setting", + }), + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{ + "something": "overrides", + "another": 5, + "unrelated": "setting", + }, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.parameters: mandatory shared_preload_libraries", + input: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "shared_preload_libraries": "given", + }, + }, + }, + params: postgres.Parameters{ + Mandatory: parameters(map[string]string{ + "shared_preload_libraries": "mandatory", + }), + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{ + "shared_preload_libraries": "mandatory,given", + }, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.parameters: mandatory shared_preload_libraries wrong-type is ignored", + input: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "shared_preload_libraries": 1, + }, + }, + }, + params: postgres.Parameters{ + Mandatory: parameters(map[string]string{ + "shared_preload_libraries": "mandatory", + }), + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{ + "shared_preload_libraries": "mandatory", + }, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.parameters: shared_preload_libraries order", + input: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "shared_preload_libraries": "given, citus, more", + }, + }, + }, + params: postgres.Parameters{ + Mandatory: 
parameters(map[string]string{ + "shared_preload_libraries": "mandatory", + }), + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{ + "shared_preload_libraries": "citus,mandatory,given, citus, more", + }, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.pg_hba: wrong-type is ignored", + input: map[string]any{ + "postgresql": map[string]any{ + "pg_hba": true, + }, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.pg_hba: default when no input", + input: map[string]any{ + "postgresql": map[string]any{ + "pg_hba": nil, + }, + }, + hbas: postgres.HBAs{ + Default: []postgres.HostBasedAuthentication{ + *postgres.NewHBA().Local().Method("peer"), + }, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{ + "local all all peer", + }, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.pg_hba: no default when input", + input: map[string]any{ + "postgresql": map[string]any{ + "pg_hba": []any{"custom"}, + }, + }, + hbas: postgres.HBAs{ + Default: []postgres.HostBasedAuthentication{ + *postgres.NewHBA().Local().Method("peer"), + }, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{ + "custom", + }, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.pg_hba: mandatory before others", + input: map[string]any{ + "postgresql": map[string]any{ + "pg_hba": []any{"custom"}, + }, + }, + hbas: postgres.HBAs{ + Mandatory: []postgres.HostBasedAuthentication{ + *postgres.NewHBA().Local().Method("peer"), + }, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{ + "local all all peer", + "custom", + }, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "postgresql.pg_hba: ignore non-string types", + input: map[string]any{ + "postgresql": map[string]any{ + "pg_hba": []any{1, true, "custom", map[string]string{}, []string{}}, + }, + }, + hbas: postgres.HBAs{ + Mandatory: []postgres.HostBasedAuthentication{ + *postgres.NewHBA().Local().Method("peer"), + }, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{ + "local all all peer", + "custom", + }, + "use_pg_rewind": true, + "use_slots": false, + }, + }, + }, + { + name: "standby_cluster: input passes through", + input: map[string]any{ + "standby_cluster": map[string]any{ + "primary_slot_name": "str", + }, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{}, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + "standby_cluster": map[string]any{ + "primary_slot_name": "str", + }, + }, + }, + { + name: "standby_cluster: repo only", + cluster: &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + Standby: &v1beta1.PostgresStandbySpec{ + Enabled: true, + 
RepoName: "repo", + }, + }, + }, + input: map[string]any{ + "standby_cluster": map[string]any{ + "restore_command": "overridden", + "unrelated": "input", + }, + }, + params: postgres.Parameters{ + Mandatory: parameters(map[string]string{ + "restore_command": "mandatory", + }), + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{ + "restore_command": "mandatory", + }, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + "standby_cluster": map[string]any{ + "create_replica_methods": []string{"pgbackrest"}, + "restore_command": "mandatory", + "unrelated": "input", + }, + }, + }, + { + name: "standby_cluster: basebackup for streaming", + cluster: &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + Standby: &v1beta1.PostgresStandbySpec{ + Enabled: true, + Host: "0.0.0.0", + Port: initialize.Int32(5432), + }, + }, + }, + input: map[string]any{ + "standby_cluster": map[string]any{ + "host": "overridden", + "port": int32(0000), + "restore_command": "overridden", + "unrelated": "input", + }, + }, + params: postgres.Parameters{ + Mandatory: parameters(map[string]string{ + "restore_command": "mandatory", + }), + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{ + "restore_command": "mandatory", + }, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + "standby_cluster": map[string]any{ + "create_replica_methods": []string{"basebackup"}, + "host": "0.0.0.0", + "port": int32(5432), + "unrelated": "input", + }, + }, + }, + { + name: "standby_cluster: both repo and streaming", + cluster: &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + Standby: &v1beta1.PostgresStandbySpec{ + Enabled: true, + Host: "0.0.0.0", + Port: initialize.Int32(5432), + RepoName: "repo", + }, + }, + }, + input: map[string]any{ + "standby_cluster": map[string]any{ + "host": "overridden", + "port": int32(9999), + "restore_command": "overridden", + "unrelated": "input", + }, + }, + params: postgres.Parameters{ + Mandatory: parameters(map[string]string{ + "restore_command": "mandatory", + }), + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "parameters": map[string]any{ + "restore_command": "mandatory", + }, + "pg_hba": []string{}, + "use_pg_rewind": true, + "use_slots": false, + }, + "standby_cluster": map[string]any{ + "create_replica_methods": []string{"pgbackrest", "basebackup"}, + "host": "0.0.0.0", + "port": int32(5432), + "restore_command": "mandatory", + "unrelated": "input", + }, + }, + }, + { + name: "tde enabled", + cluster: &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + Patroni: &v1beta1.PatroniSpec{ + DynamicConfiguration: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "encryption_key_command": "echo test", + }, + }, + }, + }, + }, + }, + expected: map[string]any{ + "loop_wait": int32(10), + "ttl": int32(30), + "postgresql": map[string]any{ + "bin_name": map[string]any{"pg_rewind": string("/tmp/pg_rewind_tde.sh")}, + "parameters": map[string]any{}, + "pg_hba": []string{}, + "use_pg_rewind": bool(true), + "use_slots": bool(false), + }, + }, + }, + } { + t.Run(tt.name, func(t *testing.T) { + cluster := tt.cluster + if cluster == nil { + cluster = new(v1beta1.PostgresCluster) + } + if cluster.Spec.PostgresVersion == 0 { + cluster.Spec.PostgresVersion = 14 
+ } + cluster.Default() + actual := DynamicConfiguration(cluster, tt.input, tt.hbas, tt.params) + assert.DeepEqual(t, tt.expected, actual) + }) + } +} + +func TestInstanceConfigFiles(t *testing.T) { + t.Parallel() + + cm1 := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "cm1"}} + cm2 := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "cm2"}} + + projections := instanceConfigFiles(cm1, cm2) + + assert.Assert(t, cmp.MarshalMatches(projections, ` +- configMap: + items: + - key: patroni.yaml + path: ~postgres-operator_cluster.yaml + name: cm1 +- configMap: + items: + - key: patroni.yaml + path: ~postgres-operator_instance.yaml + name: cm2 + `)) +} + +func TestInstanceEnvironment(t *testing.T) { + t.Parallel() + + cluster := new(v1beta1.PostgresCluster) + cluster.Default() + cluster.Spec.PostgresVersion = 12 + leaderService := new(corev1.Service) + podService := new(corev1.Service) + podService.Name = "pod-dns" + + vars := instanceEnvironment(cluster, podService, leaderService, nil) + + assert.Assert(t, cmp.MarshalMatches(vars, ` +- name: PATRONI_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name +- name: PATRONI_KUBERNETES_POD_IP + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.podIP +- name: PATRONI_KUBERNETES_PORTS + value: | + [] +- name: PATRONI_POSTGRESQL_CONNECT_ADDRESS + value: $(PATRONI_NAME).pod-dns:5432 +- name: PATRONI_POSTGRESQL_LISTEN + value: '*:5432' +- name: PATRONI_POSTGRESQL_CONFIG_DIR + value: /pgdata/pg12 +- name: PATRONI_POSTGRESQL_DATA_DIR + value: /pgdata/pg12 +- name: PATRONI_RESTAPI_CONNECT_ADDRESS + value: $(PATRONI_NAME).pod-dns:8008 +- name: PATRONI_RESTAPI_LISTEN + value: '*:8008' +- name: PATRONICTL_CONFIG_FILE + value: /etc/patroni + `)) + + t.Run("MatchingPorts", func(t *testing.T) { + leaderService.Spec.Ports = []corev1.ServicePort{{Name: "postgres"}} + leaderService.Spec.Ports[0].TargetPort.StrVal = "postgres" + containers := []corev1.Container{{Name: "okay"}} + containers[0].Ports = []corev1.ContainerPort{{ + Name: "postgres", ContainerPort: 9999, Protocol: corev1.ProtocolTCP, + }} + + vars := instanceEnvironment(cluster, podService, leaderService, containers) + + assert.Assert(t, cmp.MarshalMatches(vars, ` +- name: PATRONI_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name +- name: PATRONI_KUBERNETES_POD_IP + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.podIP +- name: PATRONI_KUBERNETES_PORTS + value: | + - name: postgres + port: 9999 + protocol: TCP +- name: PATRONI_POSTGRESQL_CONNECT_ADDRESS + value: $(PATRONI_NAME).pod-dns:5432 +- name: PATRONI_POSTGRESQL_LISTEN + value: '*:5432' +- name: PATRONI_POSTGRESQL_CONFIG_DIR + value: /pgdata/pg12 +- name: PATRONI_POSTGRESQL_DATA_DIR + value: /pgdata/pg12 +- name: PATRONI_RESTAPI_CONNECT_ADDRESS + value: $(PATRONI_NAME).pod-dns:8008 +- name: PATRONI_RESTAPI_LISTEN + value: '*:8008' +- name: PATRONICTL_CONFIG_FILE + value: /etc/patroni + `)) + }) +} + +func TestInstanceYAML(t *testing.T) { + t.Parallel() + + cluster := &v1beta1.PostgresCluster{Spec: v1beta1.PostgresClusterSpec{PostgresVersion: 12}} + instance := new(v1beta1.PostgresInstanceSetSpec) + + data, err := instanceYAML(cluster, instance, nil) + assert.NilError(t, err) + assert.Equal(t, data, strings.Trim(` +# Generated by postgres-operator. DO NOT EDIT. +# Your changes will not be saved. 
+bootstrap: + initdb: + - data-checksums + - encoding=UTF8 + - waldir=/pgdata/pg12_wal + method: initdb +kubernetes: {} +postgresql: + basebackup: + - waldir=/pgdata/pg12_wal + create_replica_methods: + - basebackup + pgpass: /tmp/.pgpass + use_unix_socket: true +restapi: {} +tags: {} + `, "\t\n")+"\n") + + dataWithReplicaCreate, err := instanceYAML(cluster, instance, []string{"some", "backrest", "cmd"}) + assert.NilError(t, err) + assert.Equal(t, dataWithReplicaCreate, strings.Trim(` +# Generated by postgres-operator. DO NOT EDIT. +# Your changes will not be saved. +bootstrap: + initdb: + - data-checksums + - encoding=UTF8 + - waldir=/pgdata/pg12_wal + method: initdb +kubernetes: {} +postgresql: + basebackup: + - waldir=/pgdata/pg12_wal + create_replica_methods: + - pgbackrest + - basebackup + pgbackrest: + command: '''bash'' ''-ceu'' ''--'' ''install --directory --mode=0700 "${PGDATA?}" + && exec "$@"'' ''-'' ''some'' ''backrest'' ''cmd''' + keep_data: true + no_master: true + no_params: true + pgpass: /tmp/.pgpass + use_unix_socket: true +restapi: {} +tags: {} + `, "\t\n")+"\n") + + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + DynamicConfiguration: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "encryption_key_command": "echo test", + }, + }, + }, + } + + datawithTDE, err := instanceYAML(cluster, instance, nil) + assert.NilError(t, err) + assert.Equal(t, datawithTDE, strings.Trim(` +# Generated by postgres-operator. DO NOT EDIT. +# Your changes will not be saved. +bootstrap: + initdb: + - data-checksums + - encoding=UTF8 + - waldir=/pgdata/pg12_wal + - encryption-key-command=echo test + method: initdb +kubernetes: {} +postgresql: + basebackup: + - waldir=/pgdata/pg12_wal + create_replica_methods: + - basebackup + pgpass: /tmp/.pgpass + use_unix_socket: true +restapi: {} +tags: {} + `, "\t\n")+"\n") + +} + +func TestPGBackRestCreateReplicaCommand(t *testing.T) { + t.Parallel() + + shellcheck := require.ShellCheck(t) + cluster := new(v1beta1.PostgresCluster) + instance := new(v1beta1.PostgresInstanceSetSpec) + + data, err := instanceYAML(cluster, instance, []string{"some", "backrest", "cmd"}) + assert.NilError(t, err) + + var parsed struct { + PostgreSQL struct { + PGBackRest struct { + Command string + } + } + } + assert.NilError(t, yaml.Unmarshal([]byte(data), &parsed)) + + dir := t.TempDir() + + // The command should be compatible with any shell. + { + command := parsed.PostgreSQL.PGBackRest.Command + file := filepath.Join(dir, "command.sh") + assert.NilError(t, os.WriteFile(file, []byte(command), 0o600)) + + cmd := exec.Command(shellcheck, "--enable=all", "--shell=sh", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + } + + // Naive parsing of shell words... + command := strings.Split(strings.Trim(parsed.PostgreSQL.PGBackRest.Command, "'"), "' '") + + // Expect a bash command with an inline script. + assert.DeepEqual(t, command[:3], []string{"bash", "-ceu", "--"}) + assert.Assert(t, len(command) > 3) + script := command[3] + + // It should call the pgBackRest command. + assert.Assert(t, strings.HasSuffix(script, ` exec "$@"`)) + assert.DeepEqual(t, command[len(command)-3:], []string{"some", "backrest", "cmd"}) + + // It should pass shellcheck. 
+ { + file := filepath.Join(dir, "script.bash") + assert.NilError(t, os.WriteFile(file, []byte(script), 0o600)) + + cmd := exec.Command(shellcheck, "--enable=all", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + } +} + +func TestProbeTiming(t *testing.T) { + t.Parallel() + + defaults := new(v1beta1.PatroniSpec) + defaults.Default() + + // Defaults should match the suggested/documented timing. + // - https://github.com/zalando/patroni/blob/v2.0.1/docs/rest_api.rst + assert.DeepEqual(t, probeTiming(defaults), &corev1.Probe{ + TimeoutSeconds: 5, + PeriodSeconds: 10, + SuccessThreshold: 1, + FailureThreshold: 3, + }) + + for _, tt := range []struct { + lease, sync int32 + expected corev1.Probe + }{ + // The smallest possible values for "loop_wait" and "retry_timeout" are + // both 1 sec which makes 3 sec the smallest appropriate value for "ttl". + // These are the validation minimums in v1beta1.PatroniSpec. + {lease: 3, sync: 1, expected: corev1.Probe{ + TimeoutSeconds: 1, + PeriodSeconds: 1, + SuccessThreshold: 1, + FailureThreshold: 3, + }}, + + // These are plausible values for "ttl" and "loop_wait". + {lease: 60, sync: 15, expected: corev1.Probe{ + TimeoutSeconds: 7, + PeriodSeconds: 15, + SuccessThreshold: 1, + FailureThreshold: 4, + }}, + {lease: 10, sync: 5, expected: corev1.Probe{ + TimeoutSeconds: 2, + PeriodSeconds: 5, + SuccessThreshold: 1, + FailureThreshold: 2, + }}, + + // These are plausible values that aren't multiples of each other. + // Failure triggers sooner than "ttl", which seems to agree with docs: + // - https://github.com/zalando/patroni/blob/v2.0.1/docs/watchdog.rst + {lease: 19, sync: 7, expected: corev1.Probe{ + TimeoutSeconds: 3, + PeriodSeconds: 7, + SuccessThreshold: 1, + FailureThreshold: 2, + }}, + {lease: 13, sync: 7, expected: corev1.Probe{ + TimeoutSeconds: 3, + PeriodSeconds: 7, + SuccessThreshold: 1, + FailureThreshold: 1, + }}, + + // These values are infeasible for Patroni but produce valid v1.Probes. + {lease: 60, sync: 60, expected: corev1.Probe{ + TimeoutSeconds: 30, + PeriodSeconds: 60, + SuccessThreshold: 1, + FailureThreshold: 1, + }}, + {lease: 10, sync: 20, expected: corev1.Probe{ + TimeoutSeconds: 10, + PeriodSeconds: 20, + SuccessThreshold: 1, + FailureThreshold: 1, + }}, + } { + tt := tt + actual := probeTiming(&v1beta1.PatroniSpec{ + LeaderLeaseDurationSeconds: &tt.lease, + SyncPeriodSeconds: &tt.sync, + }) + assert.DeepEqual(t, actual, &tt.expected) + + // v1.Probe validation + assert.Assert(t, actual.TimeoutSeconds >= 1) // Minimum value is 1. + assert.Assert(t, actual.PeriodSeconds >= 1) // Minimum value is 1. + assert.Assert(t, actual.SuccessThreshold == 1) // Must be 1 for liveness and startup. + assert.Assert(t, actual.FailureThreshold >= 1) // Minimum value is 1. + } +} diff --git a/internal/patroni/doc.go b/internal/patroni/doc.go index 63a42c84d3..500305406d 100644 --- a/internal/patroni/doc.go +++ b/internal/patroni/doc.go @@ -1,19 +1,7 @@ -// package patroni provides clients, utilities and resources for interacting with Patroni inside -// of a PostgreSQL cluster +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 +// Package patroni provides clients, utilities and resources for configuring and +// interacting with Patroni inside of a PostgreSQL cluster package patroni - -/* - Copyright 2020 Crunchy Data Solutions, Inc. 
- Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ diff --git a/internal/patroni/patroni.go b/internal/patroni/patroni.go deleted file mode 100644 index e1bc58a01a..0000000000 --- a/internal/patroni/patroni.go +++ /dev/null @@ -1,225 +0,0 @@ -package patroni - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "errors" - "fmt" - - log "github.com/sirupsen/logrus" - corev1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" -) - -// dbContainerName is the name of the container containing the PG database in a PG (primary or -// replica) pod -const dbContainerName = "database" - -var ( - // reloadCMD is the command for reloading a specific PG instance (primary or replica) within a - // PG cluster - reloadCMD = []string{"/bin/bash", "-c", - fmt.Sprintf("curl -X POST --silent http://127.0.0.1:%s/reload", config.DEFAULT_PATRONI_PORT)} - // restartCMD is the command for restart a specific PG database (primary or replica) within a - // PG cluster - restartCMD = []string{"/bin/bash", "-c", - fmt.Sprintf("curl -X POST --silent http://127.0.0.1:%s/restart", config.DEFAULT_PATRONI_PORT)} - - // ErrInstanceNotFound is the error thrown when a target instance cannot be found in the cluster - ErrInstanceNotFound = errors.New("The instance does not exist in the cluster") -) - -// Client defines the various actions a Patroni client is able to perform against a specified -// PGCluster -type Client interface { - ReloadCluster() error - RestartCluster() ([]RestartResult, error) - RestartInstances(instance ...string) ([]RestartResult, error) -} - -// patroniClient represents a Patroni client that is able to perform various Patroni actions -// within specific PG Cluster. The actions available correspond to the endpoints exposed by the -// Patroni REST API, as well the associated commands available via the 'patronictl' client. -type patroniClient struct { - restConfig *rest.Config - kubeclientset kubernetes.Interface - clusterName string - namespace string -} - -// RestartResult represents the result of a cluster restart, specifically the name of the -// an instance that was restarted within a cluster, and an error that can be populated in -// the event an instance cannot be successfully restarted. 
-type RestartResult struct { - Instance string - Error error -} - -// NewPatroniClient creates a new Patroni client -func NewPatroniClient(restConfig *rest.Config, kubeclientset kubernetes.Interface, - clusterName, namespace string) Client { - - return &patroniClient{ - restConfig: restConfig, - kubeclientset: kubeclientset, - clusterName: clusterName, - namespace: namespace, - } -} - -// getClusterInstances returns a map primary -func (p *patroniClient) getClusterInstances() (map[string]corev1.Pod, error) { - - // selector in the format "pg-cluster=,any role" - selector := fmt.Sprintf("%s=%s,%s", config.LABEL_PG_CLUSTER, p.clusterName, - config.LABEL_PG_DATABASE) - instances, err := p.kubeclientset.CoreV1().Pods(p.namespace).List(metav1.ListOptions{ - LabelSelector: selector, - }) - if err != nil { - return nil, err - } - - instanceMap := make(map[string]corev1.Pod) - - for _, instance := range instances.Items { - instanceMap[instance.GetObjectMeta().GetLabels()[config.LABEL_DEPLOYMENT_NAME]] = instance - } - - return instanceMap, nil -} - -// ReloadCluster reloads the configuration for a PostgreSQL cluster. Specififcally, a Patroni -// reload (which includes a PG reload) is executed on the primary and each replica within the cluster. -func (p *patroniClient) ReloadCluster() error { - - instanceMap, err := p.getClusterInstances() - if err != nil { - return err - } - - for _, instancePod := range instanceMap { - if err := p.reload(instancePod.GetName()); err != nil { - return err - } - } - - return nil -} - -// ReloadCluster restarts all PostgreSQL databases within a PostgreSQL cluster. Specififcally, a -// Patroni restart is executed on the primary and each replica within the cluster. A slice is also -// returned containing the names of all instances restarted within the cluster. -func (p *patroniClient) RestartCluster() ([]RestartResult, error) { - - var restartResult []RestartResult - - instanceMap, err := p.getClusterInstances() - if err != nil { - return nil, err - } - - for instance, instancePod := range instanceMap { - if err := p.restart(instancePod.GetName()); err != nil { - restartResult = append(restartResult, RestartResult{ - Instance: instance, - Error: err, - }) - continue - } - restartResult = append(restartResult, RestartResult{Instance: instance}) - } - - return restartResult, nil -} - -// RestartInstances restarts the PostgreSQL databases for the instances specified. Specififcally, a -// Patroni restart is executed on the primary and each replica within the cluster. 
-func (p *patroniClient) RestartInstances(instances ...string) ([]RestartResult, error) { - - var restartResult []RestartResult - - instanceMap, err := p.getClusterInstances() - if err != nil { - return nil, err - } - - targetInstanceMap := make(map[string]corev1.Pod) - - // verify the targets specified (if any are specified) actually exist in the cluster - for _, instance := range instances { - if _, ok := instanceMap[instance]; ok { - targetInstanceMap[instance] = instanceMap[instance] - } else { - restartResult = append(restartResult, RestartResult{ - Instance: instance, - Error: ErrInstanceNotFound, - }) - } - } - - for instance, instancePod := range targetInstanceMap { - if err := p.restart(instancePod.GetName()); err != nil { - restartResult = append(restartResult, RestartResult{ - Instance: instance, - Error: err, - }) - continue - } - restartResult = append(restartResult, RestartResult{Instance: instance}) - } - - return restartResult, nil -} - -// reload performs a Patroni reload (which includes a PG reload) on a specific instance (primary or -// replica) within a PG cluster -func (p *patroniClient) reload(podName string) error { - - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(p.restConfig, p.kubeclientset, reloadCMD, - dbContainerName, podName, p.namespace, nil) - if err != nil { - return err - } else if stderr != "" { - return fmt.Errorf(stderr) - } - - log.Debugf("Successfully reloaded PG on pod %s: %s", podName, stdout) - - return err -} - -// restart performs a Patroni restart on a specific instance (primary or replica) within a PG -// cluster. -func (p *patroniClient) restart(podName string) error { - - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(p.restConfig, p.kubeclientset, restartCMD, - dbContainerName, podName, p.namespace, nil) - if err != nil { - return err - } else if stderr != "" { - return fmt.Errorf(stderr) - } - - log.Debugf("Successfully restarted PG on pod %s: %s", podName, stdout) - - return err -} diff --git a/internal/patroni/rbac.go b/internal/patroni/rbac.go new file mode 100644 index 0000000000..dcf3f18cea --- /dev/null +++ b/internal/patroni/rbac.go @@ -0,0 +1,72 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package patroni + +import ( + corev1 "k8s.io/api/core/v1" + rbacv1 "k8s.io/api/rbac/v1" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// "list", "patch", and "watch" are required. Include "get" for good measure. +// +kubebuilder:rbac:groups="",resources="pods",verbs={get} +// +kubebuilder:rbac:groups="",resources="pods",verbs={list,watch} +// +kubebuilder:rbac:groups="",resources="pods",verbs={patch} + +// TODO(cbandy): Separate these so that one can choose ConfigMap over Endpoints. + +// When using Endpoints for DCS, "create", "list", "patch", and "watch" are +// required. Include "get" for good measure. The `patronictl scaffold` and +// `patronictl remove` commands require "deletecollection". +// +kubebuilder:rbac:groups="",resources="endpoints",verbs={get} +// +kubebuilder:rbac:groups="",resources="endpoints",verbs={create,deletecollection} +// +kubebuilder:rbac:groups="",resources="endpoints",verbs={list,watch} +// +kubebuilder:rbac:groups="",resources="endpoints",verbs={patch} +// +kubebuilder:rbac:groups="",resources="services",verbs={create} + +// The OpenShift RestrictedEndpointsAdmission plugin requires special +// authorization to create Endpoints that contain Pod IPs. 
+// - https://github.com/openshift/origin/pull/9383 +// +kubebuilder:rbac:groups="",resources="endpoints/restricted",verbs={create} + +// Permissions returns the RBAC rules Patroni needs for cluster. +func Permissions(cluster *v1beta1.PostgresCluster) []rbacv1.PolicyRule { + // TODO(cbandy): This must change when using ConfigMaps for DCS. + + rules := make([]rbacv1.PolicyRule, 0, 4) + + rules = append(rules, rbacv1.PolicyRule{ + APIGroups: []string{corev1.SchemeGroupVersion.Group}, + Resources: []string{"endpoints"}, + Verbs: []string{"create", "deletecollection", "get", "list", "patch", "watch"}, + }) + + if cluster.Spec.OpenShift != nil && *cluster.Spec.OpenShift { + rules = append(rules, rbacv1.PolicyRule{ + APIGroups: []string{corev1.SchemeGroupVersion.Group}, + Resources: []string{"endpoints/restricted"}, + Verbs: []string{"create"}, + }) + } + + rules = append(rules, rbacv1.PolicyRule{ + APIGroups: []string{corev1.SchemeGroupVersion.Group}, + Resources: []string{"pods"}, + Verbs: []string{"get", "list", "patch", "watch"}, + }) + + // When using Endpoints for DCS, Patroni tries to create the "{scope}-config" service. + // NOTE(cbandy): The PostgresCluster controller already creates this Service; + // it might be possible to eliminate this permission if it also created the + // Endpoints. + rules = append(rules, rbacv1.PolicyRule{ + APIGroups: []string{corev1.SchemeGroupVersion.Group}, + Resources: []string{"services"}, + Verbs: []string{"create"}, + }) + + return rules +} diff --git a/internal/patroni/rbac_test.go b/internal/patroni/rbac_test.go new file mode 100644 index 0000000000..39a8dff245 --- /dev/null +++ b/internal/patroni/rbac_test.go @@ -0,0 +1,117 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package patroni + +import ( + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func isUniqueAndSorted(slice []string) bool { + if len(slice) > 1 { + previous := slice[0] + for _, next := range slice[1:] { + if next <= previous { + return false + } + previous = next + } + } + return true +} + +func TestPermissions(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Default() + + t.Run("Upstream", func(t *testing.T) { + permissions := Permissions(cluster) + for _, rule := range permissions { + assert.Assert(t, isUniqueAndSorted(rule.APIGroups), "got %q", rule.APIGroups) + assert.Assert(t, isUniqueAndSorted(rule.Resources), "got %q", rule.Resources) + assert.Assert(t, isUniqueAndSorted(rule.Verbs), "got %q", rule.Verbs) + } + + assert.Assert(t, cmp.MarshalMatches(permissions, ` +- apiGroups: + - "" + resources: + - endpoints + verbs: + - create + - deletecollection + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - services + verbs: + - create + `)) + }) + + t.Run("OpenShift", func(t *testing.T) { + cluster.Spec.OpenShift = new(bool) + *cluster.Spec.OpenShift = true + + permissions := Permissions(cluster) + for _, rule := range permissions { + assert.Assert(t, isUniqueAndSorted(rule.APIGroups), "got %q", rule.APIGroups) + assert.Assert(t, isUniqueAndSorted(rule.Resources), "got %q", rule.Resources) + assert.Assert(t, isUniqueAndSorted(rule.Verbs), "got %q", rule.Verbs) + } + + assert.Assert(t, cmp.MarshalMatches(permissions, ` +- apiGroups: + - "" 
+ resources: + - endpoints + verbs: + - create + - deletecollection + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - endpoints/restricted + verbs: + - create +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - patch + - watch +- apiGroups: + - "" + resources: + - services + verbs: + - create + `)) + }) +} diff --git a/internal/patroni/reconcile.go b/internal/patroni/reconcile.go new file mode 100644 index 0000000000..4fbb08b67d --- /dev/null +++ b/internal/patroni/reconcile.go @@ -0,0 +1,220 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package patroni + +import ( + "context" + "strings" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pgbackrest" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// ClusterBootstrapped returns a bool indicating whether or not Patroni has successfully +// bootstrapped the PostgresCluster +func ClusterBootstrapped(postgresCluster *v1beta1.PostgresCluster) bool { + return postgresCluster.Status.Patroni.SystemIdentifier != "" +} + +// ClusterConfigMap populates the shared ConfigMap with fields needed to run Patroni. +func ClusterConfigMap(ctx context.Context, + inCluster *v1beta1.PostgresCluster, + inHBAs postgres.HBAs, + inParameters postgres.Parameters, + outClusterConfigMap *corev1.ConfigMap, +) error { + var err error + + initialize.Map(&outClusterConfigMap.Data) + + outClusterConfigMap.Data[configMapFileKey], err = clusterYAML(inCluster, inHBAs, + inParameters) + + return err +} + +// InstanceConfigMap populates the shared ConfigMap with fields needed to run Patroni. +func InstanceConfigMap(ctx context.Context, + inCluster *v1beta1.PostgresCluster, + inInstanceSpec *v1beta1.PostgresInstanceSetSpec, + outInstanceConfigMap *corev1.ConfigMap, +) error { + var err error + + initialize.Map(&outInstanceConfigMap.Data) + + command := pgbackrest.ReplicaCreateCommand(inCluster, inInstanceSpec) + + outInstanceConfigMap.Data[configMapFileKey], err = instanceYAML( + inCluster, inInstanceSpec, command) + + return err +} + +// InstanceCertificates populates the shared Secret with certificates needed to run Patroni. +func InstanceCertificates(ctx context.Context, + inRoot pki.Certificate, inDNS pki.Certificate, + inDNSKey pki.PrivateKey, outInstanceCertificates *corev1.Secret, +) error { + initialize.Map(&outInstanceCertificates.Data) + + var err error + outInstanceCertificates.Data[certAuthorityFileKey], err = certFile(inRoot) + + if err == nil { + outInstanceCertificates.Data[certServerFileKey], err = certFile(inDNSKey, inDNS) + } + + return err +} + +// InstancePod populates a PodTemplateSpec with the fields needed to run Patroni. +// The database container must already be in the template. 
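+// It adds Patroni's command, environment variables, projected configuration
+// volume, and probes to that container.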
+func InstancePod(ctx context.Context, + inCluster *v1beta1.PostgresCluster, + inClusterConfigMap *corev1.ConfigMap, + inClusterPodService *corev1.Service, + inPatroniLeaderService *corev1.Service, + inInstanceSpec *v1beta1.PostgresInstanceSetSpec, + inInstanceCertificates *corev1.Secret, + inInstanceConfigMap *corev1.ConfigMap, + outInstancePod *corev1.PodTemplateSpec, +) error { + initialize.Labels(outInstancePod) + + // When using Kubernetes for DCS, Patroni discovers members by listing Pods + // that have the "scope" label. See the "kubernetes.scope_label" and + // "kubernetes.labels" settings. + outInstancePod.Labels[naming.LabelPatroni] = naming.PatroniScope(inCluster) + + var container *corev1.Container + for i := range outInstancePod.Spec.Containers { + if outInstancePod.Spec.Containers[i].Name == naming.ContainerDatabase { + container = &outInstancePod.Spec.Containers[i] + } + } + + container.Command = []string{"patroni", configDirectory} + + container.Env = append(container.Env, + instanceEnvironment(inCluster, inClusterPodService, inPatroniLeaderService, + outInstancePod.Spec.Containers)...) + + volume := corev1.Volume{Name: "patroni-config"} + volume.Projected = new(corev1.ProjectedVolumeSource) + + // Add our projections after those specified in the CR. Items later in the + // list take precedence over earlier items (that is, last write wins). + // - https://kubernetes.io/docs/concepts/storage/volumes/#projected + volume.Projected.Sources = append(append(volume.Projected.Sources, + instanceConfigFiles(inClusterConfigMap, inInstanceConfigMap)...), + instanceCertificates(inInstanceCertificates)...) + + outInstancePod.Spec.Volumes = append(outInstancePod.Spec.Volumes, volume) + + container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{ + Name: volume.Name, + MountPath: configDirectory, + ReadOnly: true, + }) + + instanceProbes(inCluster, container) + + return nil +} + +// instanceProbes adds Patroni liveness and readiness probes to container. +func instanceProbes(cluster *v1beta1.PostgresCluster, container *corev1.Container) { + + // Patroni uses a watchdog to ensure that PostgreSQL does not accept commits + // after the leader lock expires, even if Patroni becomes unresponsive. + // - https://github.com/zalando/patroni/blob/v2.0.1/docs/watchdog.rst + // + // Similar functionality is provided by a liveness probe. When the probe + // finally fails, kubelet will send a SIGTERM to the Patroni process. + // If the process does not stop, kubelet will send a SIGKILL after the pod's + // TerminationGracePeriodSeconds. + // - https://docs.k8s.io/concepts/workloads/pods/pod-lifecycle/ + // + // TODO(cbandy): Consider TerminationGracePeriodSeconds' impact here. + // TODO(cbandy): Consider if a PreStop hook is necessary. + container.LivenessProbe = probeTiming(cluster.Spec.Patroni) + container.LivenessProbe.InitialDelaySeconds = 3 + container.LivenessProbe.HTTPGet = &corev1.HTTPGetAction{ + Path: "/liveness", + Port: intstr.FromInt(int(*cluster.Spec.Patroni.Port)), + Scheme: corev1.URISchemeHTTPS, + } + + // Readiness is reflected in the controlling object's status (e.g. ReadyReplicas) + // and allows our controller to react when Patroni bootstrap completes. + // + // When using Endpoints for DCS, this probe does not affect the availability + // of the leader Pod in the leader Service. 
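+	// The readiness probe uses the same timing as the liveness probe above,
+	// but targets Patroni's "/readiness" endpoint.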
+ container.ReadinessProbe = probeTiming(cluster.Spec.Patroni) + container.ReadinessProbe.InitialDelaySeconds = 3 + container.ReadinessProbe.HTTPGet = &corev1.HTTPGetAction{ + Path: "/readiness", + Port: intstr.FromInt(int(*cluster.Spec.Patroni.Port)), + Scheme: corev1.URISchemeHTTPS, + } +} + +// PodIsPrimary returns whether or not pod is currently acting as the leader with +// the "master" role. This role will be called "primary" in the future, see: +// - https://github.com/zalando/patroni/blob/master/docs/releases.rst?plain=1#L213 +func PodIsPrimary(pod metav1.Object) bool { + if pod == nil { + return false + } + + // TODO(cbandy): This works only when using Kubernetes for DCS. + + // - https://github.com/zalando/patroni/blob/v3.1.1/patroni/ha.py#L296 + // - https://github.com/zalando/patroni/blob/v3.1.1/patroni/ha.py#L583 + // - https://github.com/zalando/patroni/blob/v3.1.1/patroni/ha.py#L782 + // - https://github.com/zalando/patroni/blob/v3.1.1/patroni/ha.py#L1574 + status := pod.GetAnnotations()["status"] + return strings.Contains(status, `"role":"master"`) +} + +// PodIsStandbyLeader returns whether or not pod is currently acting as a "standby_leader". +func PodIsStandbyLeader(pod metav1.Object) bool { + if pod == nil { + return false + } + + // TODO(cbandy): This works only when using Kubernetes for DCS. + + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/ha.py#L190 + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/ha.py#L294 + // - https://github.com/zalando/patroni/blob/v2.0.2/patroni/ha.py#L353 + status := pod.GetAnnotations()["status"] + return strings.Contains(status, `"role":"standby_leader"`) +} + +// PodRequiresRestart returns whether or not PostgreSQL inside pod has (pending) +// parameter changes that require a PostgreSQL restart. +func PodRequiresRestart(pod metav1.Object) bool { + if pod == nil { + return false + } + + // TODO(cbandy): This works only when using Kubernetes for DCS. + + // - https://github.com/zalando/patroni/blob/v2.1.1/patroni/ha.py#L198 + // - https://github.com/zalando/patroni/blob/v2.1.1/patroni/postgresql/config.py#L977 + // - https://github.com/zalando/patroni/blob/v2.1.1/patroni/postgresql/config.py#L1007 + status := pod.GetAnnotations()["status"] + return strings.Contains(status, `"pending_restart":true`) +} diff --git a/internal/patroni/reconcile_test.go b/internal/patroni/reconcile_test.go new file mode 100644 index 0000000000..5d2a2c0ad5 --- /dev/null +++ b/internal/patroni/reconcile_test.go @@ -0,0 +1,292 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package patroni + +import ( + "context" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestClusterConfigMap(t *testing.T) { + t.Parallel() + ctx := context.Background() + + cluster := new(v1beta1.PostgresCluster) + pgHBAs := postgres.HBAs{} + pgParameters := postgres.Parameters{} + + cluster.Default() + config := new(corev1.ConfigMap) + assert.NilError(t, ClusterConfigMap(ctx, cluster, pgHBAs, pgParameters, config)) + + // The output of clusterYAML should go into config. 
+ data, _ := clusterYAML(cluster, pgHBAs, pgParameters) + assert.DeepEqual(t, config.Data["patroni.yaml"], data) + + // No change when called again. + before := config.DeepCopy() + assert.NilError(t, ClusterConfigMap(ctx, cluster, pgHBAs, pgParameters, config)) + assert.DeepEqual(t, config, before) +} + +func TestReconcileInstanceCertificates(t *testing.T) { + t.Parallel() + + root, err := pki.NewRootCertificateAuthority() + assert.NilError(t, err, "bug in test") + + leaf, err := root.GenerateLeafCertificate("any", nil) + assert.NilError(t, err, "bug in test") + + dataCA, _ := certFile(root.Certificate) + assert.Assert(t, + cmp.Regexp(`^`+ + `-----BEGIN CERTIFICATE-----\n`+ + `([^-]+\n)+`+ + `-----END CERTIFICATE-----\n`+ + `$`, string(dataCA), + ), + "expected a PEM-encoded certificate bundle") + + dataCert, _ := certFile(leaf.PrivateKey, leaf.Certificate) + assert.Assert(t, + cmp.Regexp(`^`+ + `-----BEGIN [^ ]+ PRIVATE KEY-----\n`+ + `([^-]+\n)+`+ + `-----END [^ ]+ PRIVATE KEY-----\n`+ + `-----BEGIN CERTIFICATE-----\n`+ + `([^-]+\n)+`+ + `-----END CERTIFICATE-----\n`+ + `$`, string(dataCert), + ), + // - https://docs.python.org/3/library/ssl.html#combined-key-and-certificate + // - https://docs.python.org/3/library/ssl.html#certificate-chains + "expected a PEM-encoded key followed by the certificate") + + ctx := context.Background() + secret := new(corev1.Secret) + + assert.NilError(t, InstanceCertificates(ctx, + root.Certificate, leaf.Certificate, leaf.PrivateKey, secret)) + + assert.DeepEqual(t, secret.Data["patroni.ca-roots"], dataCA) + assert.DeepEqual(t, secret.Data["patroni.crt-combined"], dataCert) + + // No change when called again. + before := secret.DeepCopy() + assert.NilError(t, InstanceCertificates(ctx, + root.Certificate, leaf.Certificate, leaf.PrivateKey, secret)) + assert.DeepEqual(t, secret, before) +} + +func TestInstanceConfigMap(t *testing.T) { + t.Parallel() + + ctx := context.Background() + cluster := new(v1beta1.PostgresCluster) + instance := new(v1beta1.PostgresInstanceSetSpec) + config := new(corev1.ConfigMap) + data, _ := instanceYAML(cluster, instance, nil) + + assert.NilError(t, InstanceConfigMap(ctx, cluster, instance, config)) + + assert.DeepEqual(t, config.Data["patroni.yaml"], data) + + // No change when called again. 
+ before := config.DeepCopy() + assert.NilError(t, InstanceConfigMap(ctx, cluster, instance, config)) + assert.DeepEqual(t, config, before) +} + +func TestInstancePod(t *testing.T) { + t.Parallel() + + cluster := new(v1beta1.PostgresCluster) + cluster.Default() + cluster.Name = "some-such" + cluster.Spec.PostgresVersion = 11 + cluster.Spec.Image = "image" + cluster.Spec.ImagePullPolicy = corev1.PullAlways + clusterConfigMap := new(corev1.ConfigMap) + clusterPodService := new(corev1.Service) + instanceCertificates := new(corev1.Secret) + instanceConfigMap := new(corev1.ConfigMap) + instanceSpec := new(v1beta1.PostgresInstanceSetSpec) + patroniLeaderService := new(corev1.Service) + template := new(corev1.PodTemplateSpec) + template.Spec.Containers = []corev1.Container{{Name: "database"}} + + call := func() error { + return InstancePod(context.Background(), + cluster, clusterConfigMap, clusterPodService, patroniLeaderService, + instanceSpec, instanceCertificates, instanceConfigMap, template) + } + + assert.NilError(t, call()) + + assert.DeepEqual(t, template.ObjectMeta, metav1.ObjectMeta{ + Labels: map[string]string{naming.LabelPatroni: "some-such-ha"}, + }) + + assert.Assert(t, cmp.MarshalMatches(template.Spec, ` +containers: +- command: + - patroni + - /etc/patroni + env: + - name: PATRONI_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: PATRONI_KUBERNETES_POD_IP + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.podIP + - name: PATRONI_KUBERNETES_PORTS + value: | + [] + - name: PATRONI_POSTGRESQL_CONNECT_ADDRESS + value: $(PATRONI_NAME).:5432 + - name: PATRONI_POSTGRESQL_LISTEN + value: '*:5432' + - name: PATRONI_POSTGRESQL_CONFIG_DIR + value: /pgdata/pg11 + - name: PATRONI_POSTGRESQL_DATA_DIR + value: /pgdata/pg11 + - name: PATRONI_RESTAPI_CONNECT_ADDRESS + value: $(PATRONI_NAME).:8008 + - name: PATRONI_RESTAPI_LISTEN + value: '*:8008' + - name: PATRONICTL_CONFIG_FILE + value: /etc/patroni + livenessProbe: + failureThreshold: 3 + httpGet: + path: /liveness + port: 8008 + scheme: HTTPS + initialDelaySeconds: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + name: database + readinessProbe: + failureThreshold: 3 + httpGet: + path: /readiness + port: 8008 + scheme: HTTPS + initialDelaySeconds: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + resources: {} + volumeMounts: + - mountPath: /etc/patroni + name: patroni-config + readOnly: true +volumes: +- name: patroni-config + projected: + sources: + - configMap: + items: + - key: patroni.yaml + path: ~postgres-operator_cluster.yaml + - configMap: + items: + - key: patroni.yaml + path: ~postgres-operator_instance.yaml + - secret: + items: + - key: patroni.ca-roots + path: ~postgres-operator/patroni.ca-roots + - key: patroni.crt-combined + path: ~postgres-operator/patroni.crt+key + `)) +} + +func TestPodIsPrimary(t *testing.T) { + // No object + assert.Assert(t, !PodIsPrimary(nil)) + + // No annotations + pod := &corev1.Pod{} + assert.Assert(t, !PodIsPrimary(pod)) + + // No role + pod.Annotations = map[string]string{"status": `{}`} + assert.Assert(t, !PodIsPrimary(pod)) + + // Replica + pod.Annotations["status"] = `{"role":"replica"}` + assert.Assert(t, !PodIsPrimary(pod)) + + // Standby leader + pod.Annotations["status"] = `{"role":"standby_leader"}` + assert.Assert(t, !PodIsPrimary(pod)) + + // Primary + pod.Annotations["status"] = `{"role":"master"}` + assert.Assert(t, PodIsPrimary(pod)) +} + +func TestPodIsStandbyLeader(t *testing.T) { + // No object + 
assert.Assert(t, !PodIsStandbyLeader(nil)) + + // No annotations + pod := &corev1.Pod{} + assert.Assert(t, !PodIsStandbyLeader(pod)) + + // No role + pod.Annotations = map[string]string{"status": `{}`} + assert.Assert(t, !PodIsStandbyLeader(pod)) + + // Leader + pod.Annotations["status"] = `{"role":"master"}` + assert.Assert(t, !PodIsStandbyLeader(pod)) + + // Replica + pod.Annotations["status"] = `{"role":"replica"}` + assert.Assert(t, !PodIsStandbyLeader(pod)) + + // Standby leader + pod.Annotations["status"] = `{"role":"standby_leader"}` + assert.Assert(t, PodIsStandbyLeader(pod)) +} + +func TestPodRequiresRestart(t *testing.T) { + // No object + assert.Assert(t, !PodRequiresRestart(nil)) + + // No annotations + pod := &corev1.Pod{} + assert.Assert(t, !PodRequiresRestart(pod)) + + // Normal; no flag + pod.Annotations = map[string]string{"status": `{}`} + assert.Assert(t, !PodRequiresRestart(pod)) + + // Unexpected value + pod.Annotations["status"] = `{"pending_restart":"mystery"}` + assert.Assert(t, !PodRequiresRestart(pod)) + + // Expected value + pod.Annotations["status"] = `{"pending_restart":true}` + assert.Assert(t, PodRequiresRestart(pod)) +} diff --git a/internal/pgadmin/backoff.go b/internal/pgadmin/backoff.go deleted file mode 100644 index d1df68c80d..0000000000 --- a/internal/pgadmin/backoff.go +++ /dev/null @@ -1,100 +0,0 @@ -package pgadmin - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -// This file should one day be refactored into a library with other alike tools -// - -import ( - "math" - "math/rand" // Good enough for jitter purposes, don't need crypto/rand - "time" -) - -// Jitter is an enum representing a distinct jitter mode -type Jitter int - -const ( - // JitterNone performs no Jitter, with multiple clients, can be bursty - JitterNone Jitter = iota - // JitterFull represents a jitter range of (0, Duration) - JitterFull - // JitterCenter represents a jitter range of (0.5 Duration, 1.5 Duration) - // That is, full, but centered on the value - JitterCenter - // JitterSmall represents a jitter range of 0.75 Duration, 1.25 Duration) - JitterSmall -) - -// Apply provides a new time with respect to t based on the jitter mode -func (jm Jitter) Apply(t time.Duration) time.Duration { - switch jm { - case JitterNone: // being explicit in case default case changes - return t - case JitterFull: - return time.Duration(rand.Float64() * float64(t)) - case JitterCenter: - return time.Duration(float64(t/2) + (rand.Float64() * float64(t))) - case JitterSmall: - return time.Duration(float64(3*t/4) + (rand.Float64() * float64(t) / 2)) - default: - return t - } -} - -// Backoff interface provides increasing length delays for event spacing -type Backoff interface { - Duration(round int) time.Duration -} - -// SpecificBackoffPolicy allows manually specifying retry times -type SpecificBackoffPolicy struct { - Times []time.Duration - JitterMode Jitter -} - -func (sbp SpecificBackoffPolicy) Duration(n int) time.Duration { - if l := len(sbp.Times); sbp.Times == nil || n < 0 || l == 0 { - return time.Duration(0) - } else if n >= l { - n = l - 1 - } - - return sbp.JitterMode.Apply(sbp.Times[n]) -} - -// ExponentialBackoffPolicy provides an exponential backoff based on: -// Base * (Ratio ^ Iteration) -// -// For example a base of 10ms, ratio of 2, and no jitter would produce: -// 10ms, 20ms, 40ms, 80ms, 160ms, 320ms, 640ms, 1.28s, 2.56s... -// -type ExponentialBackoffPolicy struct { - Ratio float64 - Base time.Duration - Maximum time.Duration - JitterMode Jitter -} - -func (cbp ExponentialBackoffPolicy) Duration(n int) time.Duration { - d := time.Duration(math.Pow(cbp.Ratio, float64(n)) * float64(cbp.Base)) - - if j := cbp.JitterMode.Apply(d); cbp.Maximum > 0 && j > cbp.Maximum { - return cbp.Maximum - } else { - return j - } -} diff --git a/internal/pgadmin/backoff_test.go b/internal/pgadmin/backoff_test.go deleted file mode 100644 index aeae16f7a5..0000000000 --- a/internal/pgadmin/backoff_test.go +++ /dev/null @@ -1,294 +0,0 @@ -package pgadmin - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "testing" - "time" -) - -type testPair struct { - Exp time.Duration - Iter int -} - -func TestDoubleExp(t *testing.T) { - bp := ExponentialBackoffPolicy{ - Base: 10 * time.Millisecond, - Ratio: 2, - } - cases := []testPair{ - {Iter: 0, Exp: 10 * time.Millisecond}, - {Iter: 1, Exp: 20 * time.Millisecond}, - {Iter: 2, Exp: 40 * time.Millisecond}, - {Iter: 3, Exp: 80 * time.Millisecond}, - {Iter: 4, Exp: 160 * time.Millisecond}, - {Iter: 5, Exp: 320 * time.Millisecond}, - {Iter: 6, Exp: 640 * time.Millisecond}, - {Iter: 7, Exp: 1280 * time.Millisecond}, - {Iter: 8, Exp: 2560 * time.Millisecond}, - {Iter: 9, Exp: 5120 * time.Millisecond}, - } - - for _, tCase := range cases { - if res := bp.Duration(tCase.Iter); res != tCase.Exp { - t.Logf("Expected %v, Got %v", tCase.Exp, res) - t.Fail() - } - } -} - -func TestCalcMax(t *testing.T) { - const limit = 1279 * time.Millisecond - bp := ExponentialBackoffPolicy{ - Base: 10 * time.Millisecond, - Ratio: 2, - Maximum: limit, - } - cases := []testPair{ - {Iter: 6, Exp: 640 * time.Millisecond}, - {Iter: 7, Exp: limit}, - {Iter: 8, Exp: limit}, - {Iter: 9, Exp: limit}, - } - - for _, tCase := range cases { - if res := bp.Duration(tCase.Iter); res != tCase.Exp { - t.Logf("Expected %v, Got %v", tCase.Exp, res) - t.Fail() - } - } -} - -func TestSubscripts(t *testing.T) { - cases := []struct { - label string - iter int - pol SpecificBackoffPolicy - }{ - { - label: "nil", - iter: 0, - pol: SpecificBackoffPolicy{}, - }, - { - label: "zerolen", - iter: 0, - pol: SpecificBackoffPolicy{ - Times: []time.Duration{}, - }, - }, - { - label: "negative", - iter: -42, - pol: SpecificBackoffPolicy{ - Times: []time.Duration{ - 9 * time.Second, - }, - }, - }, - } - - for _, tCase := range cases { - if d := tCase.pol.Duration(tCase.iter); d != 0 { - t.Logf("Expected 0 from case, got %v", d) - t.Fail() - } - } - -} - -func TestUniformPolicy(t *testing.T) { - bp := SpecificBackoffPolicy{ - Times: []time.Duration{ - 8 * time.Second, - }, - } - - cases := []testPair{ - {Iter: 0, Exp: 8 * time.Second}, - {Iter: 1, Exp: 8 * time.Second}, - {Iter: 2, Exp: 8 * time.Second}, - {Iter: 3, Exp: 8 * time.Second}, - {Iter: 4, Exp: 8 * time.Second}, - {Iter: 5, Exp: 8 * time.Second}, - {Iter: 6, Exp: 8 * time.Second}, - {Iter: 7, Exp: 8 * time.Second}, - {Iter: 8, Exp: 8 * time.Second}, - {Iter: 9, Exp: 8 * time.Second}, - } - - for _, tCase := range cases { - if res := bp.Duration(tCase.Iter); res != tCase.Exp { - t.Logf("Expected %v, Got %v", tCase.Exp, res) - t.Fail() - } - } -} - -func TestStatedPolicy(t *testing.T) { - bp := SpecificBackoffPolicy{ - Times: []time.Duration{ - 1 * time.Millisecond, - 1 * time.Millisecond, - 2 * time.Millisecond, - 3 * time.Millisecond, - 5 * time.Millisecond, - 8 * time.Millisecond, - 13 * time.Millisecond, - 21 * time.Millisecond, - 33 * time.Millisecond, - 54 * time.Millisecond, - }, - } - - cases := []testPair{ - {Iter: 0, Exp: 1 * time.Millisecond}, - {Iter: 1, Exp: 1 * time.Millisecond}, - {Iter: 2, Exp: 2 * time.Millisecond}, - {Iter: 3, Exp: 3 * time.Millisecond}, - {Iter: 4, Exp: 5 * time.Millisecond}, - {Iter: 5, Exp: 8 * time.Millisecond}, - {Iter: 6, Exp: 13 * time.Millisecond}, - {Iter: 7, Exp: 21 * time.Millisecond}, - {Iter: 8, Exp: 33 * time.Millisecond}, - {Iter: 9, Exp: 54 * time.Millisecond}, - } - - for _, tCase := range cases { - if res := bp.Duration(tCase.Iter); res != tCase.Exp { - t.Logf("Expected %v, Got %v", tCase.Exp, res) - t.Fail() - } - } -} - -func TestJitterFullLimits(t *testing.T) { - bp := 
SpecificBackoffPolicy{ - Times: []time.Duration{ - 10 * time.Second, - }, - JitterMode: JitterFull, - } - - for i := 0; i < 1000; i++ { - if d := bp.Duration(i); d < 0 || d > 10*time.Second { - t.Fatalf("On iteration %d, found unexpected value: %v\n", i, d) - } - } -} - -func TestJitterFullExtents(t *testing.T) { - bp := SpecificBackoffPolicy{ - Times: []time.Duration{ - 10 * time.Second, - }, - JitterMode: JitterFull, - } - - var nearLow, nearHigh bool - for i := 0; i < 1000; i++ { - // See if we've had at least one value near the low limit - if d := bp.Duration(i); !nearLow && d < 250*time.Millisecond { - nearLow = true - } - // See if we've had at least one value near the high limit - if d := bp.Duration(i); !nearHigh && d > 9750*time.Millisecond { - nearHigh = true - } - } - if !(nearLow && nearHigh) { - t.Fatalf("Expected generated values near edges: near low [%t], near high [%t]", nearLow, nearHigh) - } -} - -func TestJitterCenterLimits(t *testing.T) { - bp := SpecificBackoffPolicy{ - Times: []time.Duration{ - 20 * time.Second, - }, - JitterMode: JitterCenter, - } - - for i := 0; i < 1000; i++ { - if d := bp.Duration(i); d < 10*time.Second || d > 30*time.Second { - t.Fatalf("On iteration %d, found unexpected value: %v\n", i, d) - } - } -} - -func TestJitterCenterExtents(t *testing.T) { - bp := SpecificBackoffPolicy{ - Times: []time.Duration{ - 20 * time.Second, - }, - JitterMode: JitterCenter, - } - - var nearLow, nearHigh bool - for i := 0; i < 1000; i++ { - // See if we've had at least one value near the low limit - if d := bp.Duration(i); !nearLow && d < 10250*time.Millisecond { - nearLow = true - } - // See if we've had at least one value near the high limit - if d := bp.Duration(i); !nearHigh && d > 29750*time.Millisecond { - nearHigh = true - } - } - if !(nearLow && nearHigh) { - t.Fatalf("Expected generated values near edges: near low [%t], near high [%t]", nearLow, nearHigh) - } -} - -func TestJitterSmallLimits(t *testing.T) { - bp := SpecificBackoffPolicy{ - Times: []time.Duration{ - 20 * time.Second, - }, - JitterMode: JitterSmall, - } - - for i := 0; i < 1000; i++ { - if d := bp.Duration(i); d < 15*time.Second || d > 25*time.Second { - t.Fatalf("On iteration %d, found unexpected value: %v\n", i, d) - } - } -} - -func TestJitterSmallExtents(t *testing.T) { - bp := SpecificBackoffPolicy{ - Times: []time.Duration{ - 20 * time.Second, - }, - JitterMode: JitterSmall, - } - - var nearLow, nearHigh bool - for i := 0; i < 1000; i++ { - // See if we've had at least one value near the low limit - if d := bp.Duration(i); !nearLow && d < 15250*time.Millisecond { - nearLow = true - } - // See if we've had at least one value near the high limit - if d := bp.Duration(i); !nearHigh && d > 24750*time.Millisecond { - nearHigh = true - } - } - if !(nearLow && nearHigh) { - t.Fatalf("Expected generated values near edges: near low [%t], near high [%t]", nearLow, nearHigh) - } -} diff --git a/internal/pgadmin/config.go b/internal/pgadmin/config.go new file mode 100644 index 0000000000..553a90f656 --- /dev/null +++ b/internal/pgadmin/config.go @@ -0,0 +1,173 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgadmin + +import ( + "strings" + + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // tmp volume to hold the nss_wrapper, process and socket files + // both the '/tmp' mount path and '/etc/httpd/run' mount path + // mount the 'tmp' volume + tmpVolume = "tmp" + + // runMountPath holds the pgAdmin run path, which mounts the 'tmp' volume + runMountPath = "/etc/httpd/run" + + // log volume and path where the pgadmin4.log is located + logVolume = "pgadmin-log" + logMountPath = "/var/log/pgadmin" + + // data volume and path to hold persistent pgAdmin data + dataVolume = "pgadmin-data" + dataMountPath = "/var/lib/pgadmin" + + // ldapPasswordPath is the path for mounting the LDAP Bind Password + ldapPasswordPath = "~postgres-operator/ldap-bind-password" /* #nosec */ + ldapPasswordAbsolutePath = configMountPath + "/" + ldapPasswordPath + + // TODO(tjmoore4): The login and password implementation will be updated in + // upcoming enhancement work. + + // initial pgAdmin login email address + loginEmail = "admin" + + // initial pgAdmin login password + loginPassword = "admin" + + // default pgAdmin port + pgAdminPort = 5050 + + // configMountPath is where to mount configuration files, secrets, etc. + configMountPath = "/etc/pgadmin/conf.d" + + settingsAbsolutePath = configMountPath + "/" + settingsProjectionPath + settingsConfigMapKey = "pgadmin-settings.json" + settingsProjectionPath = "~postgres-operator/pgadmin.json" + + // startupMountPath is where to mount a temporary directory that is only + // writable during Pod initialization. + // + // NOTE: No ConfigMap nor Secret should ever be mounted here because they + // could be used to inject code through "config_system.py". + startupMountPath = "/etc/pgadmin" + + // configSystemAbsolutePath is imported by pgAdmin after all other config files. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/docs/en_US/config_py.rst + configSystemAbsolutePath = startupMountPath + "/config_system.py" +) + +// podConfigFiles returns projections of pgAdmin's configuration files to +// include in the configuration volume. +func podConfigFiles(configmap *corev1.ConfigMap, spec v1beta1.PGAdminPodSpec) []corev1.VolumeProjection { + config := append(append([]corev1.VolumeProjection{}, spec.Config.Files...), + []corev1.VolumeProjection{ + { + ConfigMap: &corev1.ConfigMapProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: configmap.Name, + }, + Items: []corev1.KeyToPath{ + { + Key: settingsConfigMapKey, + Path: settingsProjectionPath, + }, + }, + }, + }, + }...) + + // To enable LDAP authentication for pgAdmin, various LDAP settings must be configured. + // While most of the required configuration can be set using the 'settings' + // feature on the spec (.Spec.UserInterface.PGAdmin.Config.Settings), those + // values are stored in a ConfigMap in plaintext. + // As a special case, here we mount a provided Secret containing the LDAP_BIND_PASSWORD + // for use with the other pgAdmin LDAP configuration. 
+ // - https://www.pgadmin.org/docs/pgadmin4/latest/config_py.html + // - https://www.pgadmin.org/docs/pgadmin4/development/enabling_ldap_authentication.html + if spec.Config.LDAPBindPassword != nil { + config = append(config, corev1.VolumeProjection{ + Secret: &corev1.SecretProjection{ + LocalObjectReference: spec.Config.LDAPBindPassword.LocalObjectReference, + Optional: spec.Config.LDAPBindPassword.Optional, + Items: []corev1.KeyToPath{ + { + Key: spec.Config.LDAPBindPassword.Key, + Path: ldapPasswordPath, + }, + }, + }, + }) + } + + return config +} + +// startupCommand returns an entrypoint that prepares the filesystem for pgAdmin. +func startupCommand() []string { + // pgAdmin reads from the following file by importing its public names. + // Make sure to assign only to variables that begin with underscore U+005F. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/config.py#L669 + // - https://docs.python.org/3/reference/simple_stmts.html#import + // + // DEFAULT_BINARY_PATHS contains the paths to various client tools. The "pg" + // key is for PostgreSQL. Use the latest version found in "/usr" or fallback + // to the default of empty string. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/config.py#L415 + // + // Python 3.6.8 (default, Sep 10 2021, 09:13:53) + // >>> sorted(['']+[]).pop() + // '' + // >>> sorted(['']+['/pg13','/pg10']).pop() + // '/pg13' + // + // Set all remaining variables from the JSON in settingsAbsolutePath. All + // pgAdmin settings are uppercase with underscores, so ignore any keys/names + // that are not. + // + // Lastly, set pgAdmin's LDAP_BIND_PASSWORD setting, if the value was provided + // via Secret. As this assignment happens after any values provided via the + // 'Settings' ConfigMap loaded above, this value will overwrite any previous + // configuration of LDAP_BIND_PASSWORD (that is, last write wins). + const configSystem = ` +import glob, json, re, os +DEFAULT_BINARY_PATHS = {'pg': sorted([''] + glob.glob('/usr/pgsql-*/bin')).pop()} +with open('` + settingsAbsolutePath + `') as _f: + _conf, _data = re.compile(r'[A-Z_0-9]+'), json.load(_f) + if type(_data) is dict: + globals().update({k: v for k, v in _data.items() if _conf.fullmatch(k)}) +if os.path.isfile('` + ldapPasswordAbsolutePath + `'): + with open('` + ldapPasswordAbsolutePath + `') as _f: + LDAP_BIND_PASSWORD = _f.read() +` + + args := []string{strings.TrimLeft(configSystem, "\n")} + + script := strings.Join([]string{ + // Write the system configuration into a read-only file. + `(umask a-w && echo "$1" > ` + configSystemAbsolutePath + `)`, + }, "\n") + + return append([]string{"bash", "-ceu", "--", script, "startup"}, args...) +} + +// systemSettings returns pgAdmin settings as a value that can be marshaled to JSON. +func systemSettings(spec *v1beta1.PGAdminPodSpec) map[string]interface{} { + settings := *spec.Config.Settings.DeepCopy() + if settings == nil { + settings = make(map[string]interface{}) + } + + // SERVER_MODE must always be enabled when running on a webserver. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/config.py#L105 + settings["SERVER_MODE"] = true + + return settings +} diff --git a/internal/pgadmin/config_test.go b/internal/pgadmin/config_test.go new file mode 100644 index 0000000000..87cd7847c2 --- /dev/null +++ b/internal/pgadmin/config_test.go @@ -0,0 +1,119 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgadmin + +import ( + "os" + "os/exec" + "path/filepath" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestPodConfigFiles(t *testing.T) { + configmap := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "some-cm"}} + + spec := v1beta1.PGAdminPodSpec{ + Config: v1beta1.PGAdminConfiguration{Files: []corev1.VolumeProjection{{ + Secret: &corev1.SecretProjection{LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-secret", + }}, + }, { + ConfigMap: &corev1.ConfigMapProjection{LocalObjectReference: corev1.LocalObjectReference{ + Name: "test-cm", + }}, + }}}, + } + + projections := podConfigFiles(configmap, spec) + assert.Assert(t, cmp.MarshalMatches(projections, ` +- secret: + name: test-secret +- configMap: + name: test-cm +- configMap: + items: + - key: pgadmin-settings.json + path: ~postgres-operator/pgadmin.json + name: some-cm + `)) +} + +func TestStartupCommand(t *testing.T) { + assert.Assert(t, cmp.MarshalMatches(startupCommand(), ` +- bash +- -ceu +- -- +- (umask a-w && echo "$1" > /etc/pgadmin/config_system.py) +- startup +- | + import glob, json, re, os + DEFAULT_BINARY_PATHS = {'pg': sorted([''] + glob.glob('/usr/pgsql-*/bin')).pop()} + with open('/etc/pgadmin/conf.d/~postgres-operator/pgadmin.json') as _f: + _conf, _data = re.compile(r'[A-Z_0-9]+'), json.load(_f) + if type(_data) is dict: + globals().update({k: v for k, v in _data.items() if _conf.fullmatch(k)}) + if os.path.isfile('/etc/pgadmin/conf.d/~postgres-operator/ldap-bind-password'): + with open('/etc/pgadmin/conf.d/~postgres-operator/ldap-bind-password') as _f: + LDAP_BIND_PASSWORD = _f.read() +`)) + + t.Run("ShellCheck", func(t *testing.T) { + command := startupCommand() + shellcheck := require.ShellCheck(t) + + assert.Assert(t, len(command) > 3) + dir := t.TempDir() + file := filepath.Join(dir, "script.bash") + assert.NilError(t, os.WriteFile(file, []byte(command[3]), 0o600)) + + // Expect shellcheck to be happy. + cmd := exec.Command(shellcheck, "--enable=all", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + }) + + t.Run("ConfigSystemFlake8", func(t *testing.T) { + command := startupCommand() + flake8 := require.Flake8(t) + + assert.Assert(t, len(command) > 5) + dir := t.TempDir() + file := filepath.Join(dir, "script.py") + assert.NilError(t, os.WriteFile(file, []byte(command[5]), 0o600)) + + // Expect flake8 to be happy. Ignore "E401 multiple imports on one line" + // in addition to the defaults. The file contents appear in PodSpec, so + // allow lines longer than the default to save some vertical space. 
+ cmd := exec.Command(flake8, "--extend-ignore=E401", "--max-line-length=99", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + }) +} + +func TestSystemSettings(t *testing.T) { + spec := new(v1beta1.PGAdminPodSpec) + assert.Assert(t, cmp.MarshalMatches(systemSettings(spec), ` +SERVER_MODE: true + `)) + + spec.Config.Settings = map[string]interface{}{ + "ALLOWED_HOSTS": []interface{}{"225.0.0.0/8", "226.0.0.0/7", "228.0.0.0/6"}, + } + assert.Assert(t, cmp.MarshalMatches(systemSettings(spec), ` +ALLOWED_HOSTS: +- 225.0.0.0/8 +- 226.0.0.0/7 +- 228.0.0.0/6 +SERVER_MODE: true + `)) +} diff --git a/internal/pgadmin/crypto.go b/internal/pgadmin/crypto.go deleted file mode 100644 index 55ebc8b771..0000000000 --- a/internal/pgadmin/crypto.go +++ /dev/null @@ -1,150 +0,0 @@ -package pgadmin - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "crypto/aes" - "crypto/cipher" - "crypto/rand" - "encoding/base64" - "fmt" - "io" - "os" -) - -// padKey ensures the resultant key is 32 bytes long, using the Procrustes method -func padKey(key []byte) []byte { - if strLen := len(key); strLen > 32 { - newKey := make([]byte, 32) - copy(newKey, key) - return newKey - } else if strLen > 8 && strLen%8 == 0 { - return key - } - - // 31 bytes of '}', as per PyCrypto impl - buffer := []byte{ - 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, - 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, - 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, 125, - } - - padded := append(key, buffer...) - newKey := make([]byte, 32) - copy(newKey, padded) - - return newKey -} - -func encrypt(plaintext, key string) string { - iv := make([]byte, aes.BlockSize) - if _, err := io.ReadFull(rand.Reader, iv); err != nil { - fmt.Fprintf(os.Stderr, "Unable to initialize AES vector: %v\n", err) - os.Exit(1) - } - return encryptImpl(key, []byte(plaintext), iv) -} - -func encryptImpl(key string, pt, iv []byte) string { - ciphertext := make([]byte, aes.BlockSize+len(pt)) - copy(ciphertext[:aes.BlockSize], iv) - - aesBlockEnc, err := aes.NewCipher(padKey([]byte(key))) - if err != nil { - fmt.Fprintf(os.Stderr, "Unable to initialize AES encrypter: %v\n", err) - os.Exit(1) - } - - cfbEnc := newCFB8Encrypter(aesBlockEnc, iv) - cfbEnc.XORKeyStream(ciphertext[aes.BlockSize:], pt) - - return base64.StdEncoding.EncodeToString(ciphertext) -} - -func decrypt(ciphertext, key string) string { - bCipher, err := base64.StdEncoding.DecodeString(ciphertext) - if err != nil { - panic(err) - } - decoded := make([]byte, len(bCipher)-aes.BlockSize) - - aesBlockDec, err := aes.NewCipher(padKey([]byte(key))) - if err != nil { - panic(err) - } - - aesDecrypt := newCFB8Decrypter(aesBlockDec, bCipher[:aes.BlockSize]) - aesDecrypt.XORKeyStream(decoded, bCipher[aes.BlockSize:]) - - return string(decoded) -} - -// 8-bit CFB implementation needed to match PyCrypt CFB impl -// Implemented in an idiomatic way to Golang crypto libraries (e.g. 
CFBEncrypter/Decrypter) -type cfb8 struct { - blk cipher.Block - blockSize int - in []byte - out []byte - decrypt bool -} - -// Implemnets cipher.Stream interface -func (x *cfb8) XORKeyStream(dst, src []byte) { - for i := range src { - x.blk.Encrypt(x.out, x.in) - copy(x.in[:x.blockSize-1], x.in[1:]) - if x.decrypt { - x.in[x.blockSize-1] = src[i] - } - dst[i] = src[i] ^ x.out[0] - if !x.decrypt { - x.in[x.blockSize-1] = dst[i] - } - } -} - -// NewCFB8Encrypter returns a Stream which encrypts with cipher feedback mode -// (segment size = 8), using the given Block. The iv must be the same length as -// the Block's block size. -func newCFB8Encrypter(block cipher.Block, iv []byte) cipher.Stream { - return newCFB8(block, iv, false) -} - -// NewCFB8Decrypter returns a Stream which decrypts with cipher feedback mode -// (segment size = 8), using the given Block. The iv must be the same length as -// the Block's block size. -func newCFB8Decrypter(block cipher.Block, iv []byte) cipher.Stream { - return newCFB8(block, iv, true) -} - -func newCFB8(block cipher.Block, iv []byte, decrypt bool) cipher.Stream { - blockSize := block.BlockSize() - if len(iv) != blockSize { - // stack trace will indicate whether it was de or encryption - panic("cipher.newCFB: IV length must equal block size") - } - x := &cfb8{ - blk: block, - blockSize: blockSize, - out: make([]byte, blockSize), - in: make([]byte, blockSize), - decrypt: decrypt, - } - copy(x.in, iv) - - return x -} diff --git a/internal/pgadmin/crypto_test.go b/internal/pgadmin/crypto_test.go deleted file mode 100644 index 36f8468379..0000000000 --- a/internal/pgadmin/crypto_test.go +++ /dev/null @@ -1,74 +0,0 @@ -package pgadmin - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "testing" -) - -var testData = struct { - clearPW string - encPW string - key string - iv []byte -}{ - clearPW: "w052H0UBM783B$x6N___", - encPW: "5PN+lp8XXalwRzCptI21hmT5S9FvvEYpD8chWa39akY6Srwl", - key: "$pbkdf2-sha512$19000$knLuvReC8H7v/T8n5JwTwg$OsVGpDa/zpCE2pKEOsZ4/SqdxcQZ0UU6v41ev/gkk4ROsrws/4I03oHqN37k.v1d25QckESs3NlPxIUv5gTf2Q", - iv: []byte{0xe4, 0xf3, 0x7e, 0x96, 0x9f, 0x17, 0x5d, 0xa9, - 0x70, 0x47, 0x30, 0xa9, 0xb4, 0x8d, 0xb5, 0x86}, -} - -func TestSymmetry(t *testing.T) { - expected := "Hello World! How are you today?" 
- ciphertext := encrypt(expected, testData.key) - decoded := decrypt(ciphertext, testData.key) - if decoded != expected { - t.Fatalf("\nExpected\t[%s]\nReceived\t[%s]\n", expected, decoded) - } -} - -func TestEncryption(t *testing.T) { - encrypted := encryptImpl(testData.key, []byte(testData.clearPW), testData.iv) - if encrypted != testData.encPW { - t.Fatalf("\nExpected\t[%s]\nReceived\t[%s]\n", testData.encPW, encrypted) - } -} - -func TestDecryption(t *testing.T) { - decrypted := decrypt(testData.encPW, testData.key) - - if decrypted != testData.clearPW { - t.Fatalf("\nExpected\t[%s]\nReceived\t[%s]\n", testData.clearPW, decrypted) - } -} - -func TestShortKey(t *testing.T) { - expected := "JwTwg$OsVG}}}}}}}}}}}}}}}}}}}}}}" - paddedKey := padKey([]byte("JwTwg$OsVG")) - if string(paddedKey) != expected { - t.Fatalf("\nExpected\t[%s]\nReceived\t[%s]\n", expected, paddedKey) - } -} - -func TestSymmetryShortKey(t *testing.T) { - expected := "Hello World! How are you today?" - ciphertext := encrypt(expected, "JwTwg$OsVG") - decoded := decrypt(ciphertext, "JwTwg$OsVG") - if decoded != expected { - t.Fatalf("\nExpected\t[%s]\nReceived\t[%s]\n", expected, decoded) - } -} diff --git a/internal/pgadmin/doc.go b/internal/pgadmin/doc.go deleted file mode 100644 index 97900b0227..0000000000 --- a/internal/pgadmin/doc.go +++ /dev/null @@ -1,19 +0,0 @@ -/* package pgadmin provides a set of tools for interacting with the sqlite -database which powers pgadmin */ - -package pgadmin - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ diff --git a/internal/pgadmin/hash.go b/internal/pgadmin/hash.go deleted file mode 100644 index b73222fb8b..0000000000 --- a/internal/pgadmin/hash.go +++ /dev/null @@ -1,75 +0,0 @@ -package pgadmin - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "crypto/hmac" - "crypto/rand" - "crypto/sha512" - "encoding/base64" - "fmt" - "strings" - - "golang.org/x/crypto/pbkdf2" -) - -// HashPassword emulates the PBKDF2 password hashing mechanism using a salt -// randomly generated and stored in the pgadmin database -// -// It returns a string of the Modular Crypt Format result of the hash, -// suitable for insertion/replacement of pgadmin login password fields -func HashPassword(qr *queryRunner, pass string) (string, error) { - // Hashing parameters - const saltLenBytes = 16 - const iterations = 25000 - const hashLenBytes = 64 - - if qr.secSalt == "" { - // Retrieve the database-specific random salt - securitySalt, err := qr.Query("SELECT value FROM keys WHERE name='SECURITY_PASSWORD_SALT';") - if err != nil { - return "", err - } - qr.secSalt = securitySalt - } - - // This looks strange, but the algorithm really does use the byte - // representation of the string (i.e. string isn't base64 or other - // encoding format) - saltBytes := []byte(qr.secSalt) - - // Generate a "new" password derived from the provided password - // Satisfies OWASP sec. 2.4.5: 'provide additional iteration of a key derivation' - mac := hmac.New(sha512.New, saltBytes) - mac.Write([]byte(pass)) - macBytes := mac.Sum(nil) - macBase64 := base64.StdEncoding.EncodeToString(macBytes) - - // Generate random salt for the pbkdf2 run, this is the salt that ends - // up in the salt field of the returned hash - hashSalt := make([]byte, saltLenBytes) - if _, err := rand.Read(hashSalt); err != nil { - return "", err - } - - hashed := pbkdf2.Key([]byte(macBase64), hashSalt, iterations, hashLenBytes, sha512.New) - - // Base64 encode and convert to storage format expected by Flask-Security - saltEncoded := strings.ReplaceAll(base64.RawStdEncoding.EncodeToString(hashSalt), "+", ".") - keyEncoded := strings.ReplaceAll(base64.RawStdEncoding.EncodeToString(hashed), "+", ".") - - return fmt.Sprintf("$pbkdf2-sha512$%d$%s$%s", iterations, saltEncoded, keyEncoded), nil -} diff --git a/internal/pgadmin/logic.go b/internal/pgadmin/logic.go deleted file mode 100644 index 2a3fdf0c30..0000000000 --- a/internal/pgadmin/logic.go +++ /dev/null @@ -1,204 +0,0 @@ -package pgadmin - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -// N.B. 
Changing this name will cause a new group to be created and redirect -// connection updates to that new group without any cleanup of the old -// group name -const sgLabel = "Crunchy PostgreSQL Operator" - -// DeleteUser deletes the specified user, their servergroups, and servers -func DeleteUser(qr *queryRunner, username string) error { - uid, err := qr.Query(fmt.Sprintf("SELECT id FROM user WHERE email='%s'", sqlLiteral(username))) - if err != nil { - return err - } - - if uid != "" { - rm := fmt.Sprintf( - `DELETE FROM server WHERE user_id='%[1]s'; - DELETE FROM servergroup WHERE user_id='%[1]s'; - DELETE FROM user where id='%[1]s';`, uid) - err = qr.Exec(rm) - if err != nil { - return err - } - } // Otherwise treat delete as no-op - return nil -} - -// Sets the login password for the given username in the pgadmin database -// Adds the user to the pgadmin database if it does not exist -func SetLoginPassword(qr *queryRunner, username, pass string) error { - hp, err := HashPassword(qr, pass) - if err != nil { - return err - } - - // Idempotent user insertion and update, this implies that setting a - // password (e.g. update) will establish a user entry - // - // role_id(2) == User role (vs 1:Administrator) - // - // Bulk query to reduce loss potential from exec errors - query := fmt.Sprintf( - `INSERT OR IGNORE INTO user(email,password,active) VALUES ('%[1]s','%[2]s',1); - INSERT OR IGNORE INTO roles_users(user_id, role_id) VALUES - ((SELECT id FROM user WHERE email='%[1]s'), 2); - UPDATE user SET password='%[2]s' WHERE email='%[1]s';`, sqlLiteral(username), hp, - ) - - if err := qr.Exec(query); err != nil { - return err - } - return nil -} - -// Configures a PG connection for the given username in the pgadmin database -func SetClusterConnection(qr *queryRunner, username string, dbInfo ServerEntry) error { - // Encryption key for db connections is the user's login password hash - // - result, err := qr.Query(fmt.Sprintf("SELECT id, password FROM user WHERE email='%s';", sqlLiteral(username))) - if err != nil { - return err - } - if result == "" { - return fmt.Errorf("error: no user found for [%s]", username) - } - - fields := strings.SplitN(result, qr.Separator(), 2) - uid, encKey := fields[0], fields[1] - - encPassword := encrypt(dbInfo.Password, encKey) - // Insert entries into servergroups and servers for the dbInfo provided - addSG := fmt.Sprintf(`INSERT OR IGNORE INTO servergroup(user_id,name) - VALUES('%s','%s');`, uid, sgLabel) - hasSvc := fmt.Sprintf(`SELECT name FROM server WHERE user_id = '%s';`, uid) - addSvc := fmt.Sprintf(`INSERT INTO server(user_id, servergroup_id, - name, host, port, maintenance_db, username, password, ssl_mode, - comment) VALUES ('%[1]s', - (SELECT id FROM servergroup WHERE user_id='%[1]s' AND name='%s'), - '%s', '%s', %d, '%s', '%s', '%s', '%s', '%s');`, - uid, // user_id && servergroup_id %s (user_id) - sgLabel, // servergroup_id %s (name) - dbInfo.Name, - dbInfo.Host, - dbInfo.Port, - dbInfo.MaintenanceDB, - sqlLiteral(username), - encPassword, - dbInfo.SSLMode, - dbInfo.Comment, - ) - updSvcPass := fmt.Sprintf("UPDATE server SET password='%s' WHERE user_id = '%s';", encPassword, uid) - if err := qr.Exec(addSG); err != nil { - return err - } - serverName, err := qr.Query(hasSvc) - if err != nil { - return err - } - if serverName == "" { - if err := qr.Exec(addSvc); err != nil { - return err - } - } else { - // Currently, ignoring overwriting existing entry as the user may have - // modified through app, but ensure password updates make it through - // to 
avoid the user inconvenience of entering their password - if err := qr.Exec(updSvcPass); err != nil { - return err - } - } - return nil -} - -// GetUsernames provides a list of the provisioned pgadmin login users -func GetUsernames(qr *queryRunner) ([]string, error) { - q := "SELECT email FROM user WHERE active=1 AND id>1" - results, err := qr.Query(q) - - return strings.Split(results, "\n"), err -} - -// GetPgAdminQueryRunner takes cluster information, identifies whether -// it has a pgAdmin deployment and provides a query runner for executing -// queries against the pgAdmin database -// -// The pointer will be nil if there is no pgAdmin deployed for the cluster -func GetPgAdminQueryRunner(clientset kubernetes.Interface, restconfig *rest.Config, cluster *crv1.Pgcluster) (*queryRunner, error) { - if active, ok := cluster.Labels[config.LABEL_PGADMIN]; !ok || active != "true" { - return nil, nil - } - - selector := fmt.Sprintf("%s=true,%s=%s", config.LABEL_PGADMIN, config.LABEL_PG_CLUSTER, cluster.Name) - - pods, err := clientset.CoreV1().Pods(cluster.Namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Errorf("failed to find pgadmin pod [%v]", err) - return nil, err - } - - // pgAdmin deployment is single-replica, not HA, should only be one pod - if l := len(pods.Items); l > 1 { - log.Warnf("Unexpected number of pods for pgadmin [%d], defaulting to first", l) - } else if l == 0 { - err := fmt.Errorf("Unable to find pgadmin pod for cluster %s, deleting instance", cluster.Name) - return nil, err - } - - return NewQueryRunner(clientset, restconfig, pods.Items[0]), nil -} - -// ServerEntryFromPgService populates the ServerEntry struct based on -// details of the kubernetes service, it is up to the caller to provide -// the assumed PgCluster service -func ServerEntryFromPgService(service *v1.Service, clustername string) ServerEntry { - dbService := ServerEntry{ - Name: clustername, - Host: service.Spec.ClusterIP, - Port: 5432, - SSLMode: "prefer", - MaintenanceDB: clustername, - } - - // Set Port info - for _, portInfo := range service.Spec.Ports { - if portInfo.Name == "postgres" { - dbService.Port = int(portInfo.Port) - } - } - return dbService -} - -// sqlLiteral escapes single quotes in strings -func sqlLiteral(s string) string { - return strings.ReplaceAll(s, `'`, `''`) -} diff --git a/internal/pgadmin/reconcile.go b/internal/pgadmin/reconcile.go new file mode 100644 index 0000000000..af62c482f2 --- /dev/null +++ b/internal/pgadmin/reconcile.go @@ -0,0 +1,301 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgadmin + +import ( + "bytes" + "encoding/json" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + "k8s.io/apimachinery/pkg/util/intstr" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// startupScript is the script for the configuration and startup of the pgAdmin service. +// It is based on the start-pgadmin4.sh script from the Crunchy Containers Project. +// Any required functions from common_lib.sh are added as required. 
+// - https://github.com/CrunchyData/crunchy-containers/blob/master/bin/pgadmin4/start-pgadmin4.sh +// - https://github.com/CrunchyData/crunchy-containers/blob/master/bin/common/common_lib.sh +const startupScript = `CRUNCHY_DIR=${CRUNCHY_DIR:-'/opt/crunchy'} +PGADMIN_DIR=/usr/lib/python3.6/site-packages/pgadmin4-web +APACHE_PIDFILE='/tmp/httpd.pid' +export PATH=$PATH:/usr/pgsql-*/bin + +RED="\033[0;31m" +GREEN="\033[0;32m" +RESET="\033[0m" + +function enable_debugging() { + if [[ ${CRUNCHY_DEBUG:-false} == "true" ]] + then + echo_info "Turning debugging on.." + export PS4='+(${BASH_SOURCE}:${LINENO})> ${FUNCNAME[0]:+${FUNCNAME[0]}(): }' + set -x + fi +} + +function env_check_err() { + if [[ -z ${!1} ]] + then + echo_err "$1 environment variable is not set, aborting." + exit 1 + fi +} + +function echo_info() { + echo -e "${GREEN?}$(date) INFO: ${1?}${RESET?}" +} + +function echo_err() { + echo -e "${RED?}$(date) ERROR: ${1?}${RESET?}" +} + +function err_check { + RC=${1?} + CONTEXT=${2?} + ERROR=${3?} + + if [[ ${RC?} != 0 ]] + then + echo_err "${CONTEXT?}: ${ERROR?}" + exit ${RC?} + fi +} + +function trap_sigterm() { + echo_info "Doing trap logic.." + echo_warn "Clean shutdown of Apache.." + /usr/sbin/httpd -k stop + kill -SIGINT $(head -1 $APACHE_PIDFILE) +} + +enable_debugging +trap 'trap_sigterm' SIGINT SIGTERM + +env_check_err "PGADMIN_SETUP_EMAIL" +env_check_err "PGADMIN_SETUP_PASSWORD" + +if [[ ${ENABLE_TLS:-false} == 'true' ]] +then + echo_info "TLS enabled. Applying https configuration.." + if [[ ( ! -f /certs/server.key ) || ( ! -f /certs/server.crt ) ]] + then + echo_err "ENABLE_TLS true but /certs/server.key or /certs/server.crt not found, aborting" + exit 1 + fi + cp "${CRUNCHY_DIR}/conf/pgadmin-https.conf" /var/lib/pgadmin/pgadmin.conf +else + echo_info "TLS disabled. Applying http configuration.." + cp "${CRUNCHY_DIR}/conf/pgadmin-http.conf" /var/lib/pgadmin/pgadmin.conf +fi + +cp "${CRUNCHY_DIR}/conf/config_local.py" /var/lib/pgadmin/config_local.py + +if [[ -z "${SERVER_PATH}" ]] +then + sed -i "/RedirectMatch/d" /var/lib/pgadmin/pgadmin.conf +fi + +sed -i "s|SERVER_PATH|${SERVER_PATH:-/}|g" /var/lib/pgadmin/pgadmin.conf +sed -i "s|SERVER_PORT|${SERVER_PORT:-5050}|g" /var/lib/pgadmin/pgadmin.conf +sed -i "s/^DEFAULT_SERVER_PORT.*/DEFAULT_SERVER_PORT = ${SERVER_PORT:-5050}/" /var/lib/pgadmin/config_local.py +sed -i "s|\"pg\":.*|\"pg\": \"/usr/pgsql-${PGVERSION?}/bin\",|g" /var/lib/pgadmin/config_local.py + +cd ${PGADMIN_DIR?} + +if [[ ! -f /var/lib/pgadmin/pgadmin4.db ]] +then + echo_info "Setting up pgAdmin4 database.." + python3 setup.py > /tmp/pgadmin4.stdout 2> /tmp/pgadmin4.stderr + err_check "$?" "pgAdmin4 Database Setup" "Could not create pgAdmin4 database: \n$(cat /tmp/pgadmin4.stderr)" +fi + +echo_info "Starting Apache web server.." +/usr/sbin/httpd -D FOREGROUND & +echo $! > $APACHE_PIDFILE + +wait` + +// ConfigMap populates a ConfigMap with the configuration needed to run pgAdmin. +func ConfigMap( + inCluster *v1beta1.PostgresCluster, + outConfigMap *corev1.ConfigMap, +) error { + if inCluster.Spec.UserInterface == nil || inCluster.Spec.UserInterface.PGAdmin == nil { + // pgAdmin is disabled; there is nothing to do. + return nil + } + + initialize.Map(&outConfigMap.Data) + + // To avoid spurious reconciles, the following value must not change when + // the spec does not change. [json.Encoder] and [json.Marshal] do this by + // emitting map keys in sorted order. Indent so the value is not rendered + // as one long line by `kubectl`. 
+ buffer := new(bytes.Buffer) + encoder := json.NewEncoder(buffer) + encoder.SetEscapeHTML(false) + encoder.SetIndent("", " ") + err := encoder.Encode(systemSettings(inCluster.Spec.UserInterface.PGAdmin)) + if err == nil { + outConfigMap.Data[settingsConfigMapKey] = buffer.String() + } + return err +} + +// Pod populates a PodSpec with the container and volumes needed to run pgAdmin. +func Pod( + inCluster *v1beta1.PostgresCluster, + inConfigMap *corev1.ConfigMap, + outPod *corev1.PodSpec, pgAdminVolume *corev1.PersistentVolumeClaim, +) { + if inCluster.Spec.UserInterface == nil || inCluster.Spec.UserInterface.PGAdmin == nil { + // pgAdmin is disabled; there is nothing to do. + return + } + + // create the pgAdmin Pod volumes + tmp := corev1.Volume{Name: tmpVolume} + tmp.EmptyDir = &corev1.EmptyDirVolumeSource{ + Medium: corev1.StorageMediumMemory, + } + + pgAdminLog := corev1.Volume{Name: logVolume} + pgAdminLog.EmptyDir = &corev1.EmptyDirVolumeSource{ + Medium: corev1.StorageMediumMemory, + } + + pgAdminData := corev1.Volume{Name: dataVolume} + pgAdminData.VolumeSource = corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: pgAdminVolume.Name, + ReadOnly: false, + }, + } + + configVolumeMount := corev1.VolumeMount{ + Name: "pgadmin-config", MountPath: configMountPath, ReadOnly: true, + } + configVolume := corev1.Volume{Name: configVolumeMount.Name} + configVolume.Projected = &corev1.ProjectedVolumeSource{ + Sources: podConfigFiles(inConfigMap, *inCluster.Spec.UserInterface.PGAdmin), + } + + startupVolumeMount := corev1.VolumeMount{ + Name: "pgadmin-startup", MountPath: startupMountPath, ReadOnly: true, + } + startupVolume := corev1.Volume{Name: startupVolumeMount.Name} + startupVolume.EmptyDir = &corev1.EmptyDirVolumeSource{ + Medium: corev1.StorageMediumMemory, + + // When this volume is too small, the Pod will be evicted and recreated + // by the StatefulSet controller. + // - https://kubernetes.io/docs/concepts/storage/volumes/#emptydir + // NOTE: tmpfs blocks are PAGE_SIZE, usually 4KiB, and size rounds up. 
+ SizeLimit: resource.NewQuantity(32<<10, resource.BinarySI), + } + + // pgadmin container + container := corev1.Container{ + Name: naming.ContainerPGAdmin, + Env: []corev1.EnvVar{ + { + Name: "PGADMIN_SETUP_EMAIL", + Value: loginEmail, + }, + { + Name: "PGADMIN_SETUP_PASSWORD", + Value: loginPassword, + }, + // Setting the KRB5_CONFIG for kerberos + // - https://web.mit.edu/kerberos/krb5-current/doc/admin/conf_files/krb5_conf.html + { + Name: "KRB5_CONFIG", + Value: configMountPath + "/krb5.conf", + }, + // In testing it was determined that we need to set this env var for the replay cache + // otherwise it defaults to the read-only location `/var/tmp/` + // - https://web.mit.edu/kerberos/krb5-current/doc/basic/rcache_def.html#replay-cache-types + { + Name: "KRB5RCACHEDIR", + Value: "/tmp", + }, + }, + Command: []string{"bash", "-c", startupScript}, + Image: config.PGAdminContainerImage(inCluster), + ImagePullPolicy: inCluster.Spec.ImagePullPolicy, + Resources: inCluster.Spec.UserInterface.PGAdmin.Resources, + + SecurityContext: initialize.RestrictedSecurityContext(), + + Ports: []corev1.ContainerPort{{ + Name: naming.PortPGAdmin, + ContainerPort: int32(pgAdminPort), + Protocol: corev1.ProtocolTCP, + }}, + VolumeMounts: []corev1.VolumeMount{ + startupVolumeMount, + configVolumeMount, + { + Name: tmpVolume, + MountPath: runMountPath, + }, + { + Name: logVolume, + MountPath: logMountPath, + }, + { + Name: dataVolume, + MountPath: dataMountPath, + }, + }, + ReadinessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + TCPSocket: &corev1.TCPSocketAction{ + Port: intstr.FromInt(pgAdminPort), + }, + }, + InitialDelaySeconds: 20, + PeriodSeconds: 10, + }, + LivenessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + TCPSocket: &corev1.TCPSocketAction{ + Port: intstr.FromInt(pgAdminPort), + }, + }, + InitialDelaySeconds: 15, + PeriodSeconds: 20, + }, + } + + startup := corev1.Container{ + Name: naming.ContainerPGAdminStartup, + Command: startupCommand(), + + Image: container.Image, + ImagePullPolicy: container.ImagePullPolicy, + Resources: container.Resources, + SecurityContext: initialize.RestrictedSecurityContext(), + VolumeMounts: []corev1.VolumeMount{ + startupVolumeMount, + configVolumeMount, + }, + } + + // The startup container is the only one allowed to write to the startup volume. + startup.VolumeMounts[0].ReadOnly = false + + outPod.InitContainers = []corev1.Container{startup} + // add all volumes other than 'tmp' as that is added later + outPod.Volumes = []corev1.Volume{pgAdminLog, pgAdminData, configVolume, startupVolume} + + outPod.Containers = []corev1.Container{container} +} diff --git a/internal/pgadmin/reconcile_test.go b/internal/pgadmin/reconcile_test.go new file mode 100644 index 0000000000..f91a9b807f --- /dev/null +++ b/internal/pgadmin/reconcile_test.go @@ -0,0 +1,551 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgadmin + +import ( + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestConfigMap(t *testing.T) { + t.Parallel() + + cluster := new(v1beta1.PostgresCluster) + config := new(corev1.ConfigMap) + + t.Run("Disabled", func(t *testing.T) { + before := config.DeepCopy() + assert.NilError(t, ConfigMap(cluster, config)) + + // No change when pgAdmin is not requested in the spec. + assert.DeepEqual(t, before, config) + }) + + t.Run("Defaults", func(t *testing.T) { + cluster.Spec.UserInterface = new(v1beta1.UserInterfaceSpec) + cluster.Spec.UserInterface.PGAdmin = new(v1beta1.PGAdminPodSpec) + cluster.Default() + + assert.NilError(t, ConfigMap(cluster, config)) + + assert.Assert(t, cmp.MarshalMatches(config.Data, ` +pgadmin-settings.json: | + { + "SERVER_MODE": true + } + `)) + }) + + t.Run("Customizations", func(t *testing.T) { + cluster.Spec.UserInterface = new(v1beta1.UserInterfaceSpec) + cluster.Spec.UserInterface.PGAdmin = new(v1beta1.PGAdminPodSpec) + cluster.Spec.UserInterface.PGAdmin.Config.Settings = map[string]interface{}{ + "some": "thing", + "UPPER_CASE": false, + } + cluster.Default() + + assert.NilError(t, ConfigMap(cluster, config)) + + assert.Assert(t, cmp.MarshalMatches(config.Data, ` +pgadmin-settings.json: | + { + "SERVER_MODE": true, + "UPPER_CASE": false, + "some": "thing" + } + `)) + }) +} + +func TestPod(t *testing.T) { + t.Parallel() + + cluster := new(v1beta1.PostgresCluster) + config := new(corev1.ConfigMap) + pod := new(corev1.PodSpec) + pvc := new(corev1.PersistentVolumeClaim) + + call := func() { Pod(cluster, config, pod, pvc) } + + t.Run("Disabled", func(t *testing.T) { + before := pod.DeepCopy() + call() + + // No change when pgAdmin is not requested in the spec. + assert.DeepEqual(t, before, pod) + }) + + t.Run("Defaults", func(t *testing.T) { + cluster.Spec.UserInterface = new(v1beta1.UserInterfaceSpec) + cluster.Spec.UserInterface.PGAdmin = new(v1beta1.PGAdminPodSpec) + cluster.Default() + + call() + + assert.Assert(t, cmp.MarshalMatches(pod, ` +containers: +- command: + - bash + - -c + - |- + CRUNCHY_DIR=${CRUNCHY_DIR:-'/opt/crunchy'} + PGADMIN_DIR=/usr/lib/python3.6/site-packages/pgadmin4-web + APACHE_PIDFILE='/tmp/httpd.pid' + export PATH=$PATH:/usr/pgsql-*/bin + + RED="\033[0;31m" + GREEN="\033[0;32m" + RESET="\033[0m" + + function enable_debugging() { + if [[ ${CRUNCHY_DEBUG:-false} == "true" ]] + then + echo_info "Turning debugging on.." + export PS4='+(${BASH_SOURCE}:${LINENO})> ${FUNCNAME[0]:+${FUNCNAME[0]}(): }' + set -x + fi + } + + function env_check_err() { + if [[ -z ${!1} ]] + then + echo_err "$1 environment variable is not set, aborting." + exit 1 + fi + } + + function echo_info() { + echo -e "${GREEN?}$(date) INFO: ${1?}${RESET?}" + } + + function echo_err() { + echo -e "${RED?}$(date) ERROR: ${1?}${RESET?}" + } + + function err_check { + RC=${1?} + CONTEXT=${2?} + ERROR=${3?} + + if [[ ${RC?} != 0 ]] + then + echo_err "${CONTEXT?}: ${ERROR?}" + exit ${RC?} + fi + } + + function trap_sigterm() { + echo_info "Doing trap logic.." + echo_warn "Clean shutdown of Apache.." 
+ /usr/sbin/httpd -k stop + kill -SIGINT $(head -1 $APACHE_PIDFILE) + } + + enable_debugging + trap 'trap_sigterm' SIGINT SIGTERM + + env_check_err "PGADMIN_SETUP_EMAIL" + env_check_err "PGADMIN_SETUP_PASSWORD" + + if [[ ${ENABLE_TLS:-false} == 'true' ]] + then + echo_info "TLS enabled. Applying https configuration.." + if [[ ( ! -f /certs/server.key ) || ( ! -f /certs/server.crt ) ]] + then + echo_err "ENABLE_TLS true but /certs/server.key or /certs/server.crt not found, aborting" + exit 1 + fi + cp "${CRUNCHY_DIR}/conf/pgadmin-https.conf" /var/lib/pgadmin/pgadmin.conf + else + echo_info "TLS disabled. Applying http configuration.." + cp "${CRUNCHY_DIR}/conf/pgadmin-http.conf" /var/lib/pgadmin/pgadmin.conf + fi + + cp "${CRUNCHY_DIR}/conf/config_local.py" /var/lib/pgadmin/config_local.py + + if [[ -z "${SERVER_PATH}" ]] + then + sed -i "/RedirectMatch/d" /var/lib/pgadmin/pgadmin.conf + fi + + sed -i "s|SERVER_PATH|${SERVER_PATH:-/}|g" /var/lib/pgadmin/pgadmin.conf + sed -i "s|SERVER_PORT|${SERVER_PORT:-5050}|g" /var/lib/pgadmin/pgadmin.conf + sed -i "s/^DEFAULT_SERVER_PORT.*/DEFAULT_SERVER_PORT = ${SERVER_PORT:-5050}/" /var/lib/pgadmin/config_local.py + sed -i "s|\"pg\":.*|\"pg\": \"/usr/pgsql-${PGVERSION?}/bin\",|g" /var/lib/pgadmin/config_local.py + + cd ${PGADMIN_DIR?} + + if [[ ! -f /var/lib/pgadmin/pgadmin4.db ]] + then + echo_info "Setting up pgAdmin4 database.." + python3 setup.py > /tmp/pgadmin4.stdout 2> /tmp/pgadmin4.stderr + err_check "$?" "pgAdmin4 Database Setup" "Could not create pgAdmin4 database: \n$(cat /tmp/pgadmin4.stderr)" + fi + + echo_info "Starting Apache web server.." + /usr/sbin/httpd -D FOREGROUND & + echo $! > $APACHE_PIDFILE + + wait + env: + - name: PGADMIN_SETUP_EMAIL + value: admin + - name: PGADMIN_SETUP_PASSWORD + value: admin + - name: KRB5_CONFIG + value: /etc/pgadmin/conf.d/krb5.conf + - name: KRB5RCACHEDIR + value: /tmp + livenessProbe: + initialDelaySeconds: 15 + periodSeconds: 20 + tcpSocket: + port: 5050 + name: pgadmin + ports: + - containerPort: 5050 + name: pgadmin + protocol: TCP + readinessProbe: + initialDelaySeconds: 20 + periodSeconds: 10 + tcpSocket: + port: 5050 + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgadmin + name: pgadmin-startup + readOnly: true + - mountPath: /etc/pgadmin/conf.d + name: pgadmin-config + readOnly: true + - mountPath: /etc/httpd/run + name: tmp + - mountPath: /var/log/pgadmin + name: pgadmin-log + - mountPath: /var/lib/pgadmin + name: pgadmin-data +initContainers: +- command: + - bash + - -ceu + - -- + - (umask a-w && echo "$1" > /etc/pgadmin/config_system.py) + - startup + - | + import glob, json, re, os + DEFAULT_BINARY_PATHS = {'pg': sorted([''] + glob.glob('/usr/pgsql-*/bin')).pop()} + with open('/etc/pgadmin/conf.d/~postgres-operator/pgadmin.json') as _f: + _conf, _data = re.compile(r'[A-Z_0-9]+'), json.load(_f) + if type(_data) is dict: + globals().update({k: v for k, v in _data.items() if _conf.fullmatch(k)}) + if os.path.isfile('/etc/pgadmin/conf.d/~postgres-operator/ldap-bind-password'): + with open('/etc/pgadmin/conf.d/~postgres-operator/ldap-bind-password') as _f: + LDAP_BIND_PASSWORD = _f.read() + name: pgadmin-startup + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + 
seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgadmin + name: pgadmin-startup + - mountPath: /etc/pgadmin/conf.d + name: pgadmin-config + readOnly: true +volumes: +- emptyDir: + medium: Memory + name: pgadmin-log +- name: pgadmin-data + persistentVolumeClaim: + claimName: "" +- name: pgadmin-config + projected: + sources: + - configMap: + items: + - key: pgadmin-settings.json + path: ~postgres-operator/pgadmin.json +- emptyDir: + medium: Memory + sizeLimit: 32Ki + name: pgadmin-startup + `)) + + // No change when called again. + before := pod.DeepCopy() + call() + assert.DeepEqual(t, before, pod) + }) + + t.Run("Customizations", func(t *testing.T) { + cluster.Spec.ImagePullPolicy = corev1.PullAlways + cluster.Spec.UserInterface.PGAdmin.Image = "new-image" + cluster.Spec.UserInterface.PGAdmin.Resources.Requests = corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("100m"), + } + cluster.Spec.UserInterface.PGAdmin.Config.Files = []corev1.VolumeProjection{{ + Secret: &corev1.SecretProjection{LocalObjectReference: corev1.LocalObjectReference{ + Name: "test", + }}, + }} + cluster.Spec.UserInterface.PGAdmin.Config.LDAPBindPassword = &corev1.SecretKeySelector{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "podtest", + }, + Key: "podtestpw", + } + + call() + + assert.Assert(t, cmp.MarshalMatches(pod, ` +containers: +- command: + - bash + - -c + - |- + CRUNCHY_DIR=${CRUNCHY_DIR:-'/opt/crunchy'} + PGADMIN_DIR=/usr/lib/python3.6/site-packages/pgadmin4-web + APACHE_PIDFILE='/tmp/httpd.pid' + export PATH=$PATH:/usr/pgsql-*/bin + + RED="\033[0;31m" + GREEN="\033[0;32m" + RESET="\033[0m" + + function enable_debugging() { + if [[ ${CRUNCHY_DEBUG:-false} == "true" ]] + then + echo_info "Turning debugging on.." + export PS4='+(${BASH_SOURCE}:${LINENO})> ${FUNCNAME[0]:+${FUNCNAME[0]}(): }' + set -x + fi + } + + function env_check_err() { + if [[ -z ${!1} ]] + then + echo_err "$1 environment variable is not set, aborting." + exit 1 + fi + } + + function echo_info() { + echo -e "${GREEN?}$(date) INFO: ${1?}${RESET?}" + } + + function echo_err() { + echo -e "${RED?}$(date) ERROR: ${1?}${RESET?}" + } + + function err_check { + RC=${1?} + CONTEXT=${2?} + ERROR=${3?} + + if [[ ${RC?} != 0 ]] + then + echo_err "${CONTEXT?}: ${ERROR?}" + exit ${RC?} + fi + } + + function trap_sigterm() { + echo_info "Doing trap logic.." + echo_warn "Clean shutdown of Apache.." + /usr/sbin/httpd -k stop + kill -SIGINT $(head -1 $APACHE_PIDFILE) + } + + enable_debugging + trap 'trap_sigterm' SIGINT SIGTERM + + env_check_err "PGADMIN_SETUP_EMAIL" + env_check_err "PGADMIN_SETUP_PASSWORD" + + if [[ ${ENABLE_TLS:-false} == 'true' ]] + then + echo_info "TLS enabled. Applying https configuration.." + if [[ ( ! -f /certs/server.key ) || ( ! -f /certs/server.crt ) ]] + then + echo_err "ENABLE_TLS true but /certs/server.key or /certs/server.crt not found, aborting" + exit 1 + fi + cp "${CRUNCHY_DIR}/conf/pgadmin-https.conf" /var/lib/pgadmin/pgadmin.conf + else + echo_info "TLS disabled. Applying http configuration.." 
+ cp "${CRUNCHY_DIR}/conf/pgadmin-http.conf" /var/lib/pgadmin/pgadmin.conf + fi + + cp "${CRUNCHY_DIR}/conf/config_local.py" /var/lib/pgadmin/config_local.py + + if [[ -z "${SERVER_PATH}" ]] + then + sed -i "/RedirectMatch/d" /var/lib/pgadmin/pgadmin.conf + fi + + sed -i "s|SERVER_PATH|${SERVER_PATH:-/}|g" /var/lib/pgadmin/pgadmin.conf + sed -i "s|SERVER_PORT|${SERVER_PORT:-5050}|g" /var/lib/pgadmin/pgadmin.conf + sed -i "s/^DEFAULT_SERVER_PORT.*/DEFAULT_SERVER_PORT = ${SERVER_PORT:-5050}/" /var/lib/pgadmin/config_local.py + sed -i "s|\"pg\":.*|\"pg\": \"/usr/pgsql-${PGVERSION?}/bin\",|g" /var/lib/pgadmin/config_local.py + + cd ${PGADMIN_DIR?} + + if [[ ! -f /var/lib/pgadmin/pgadmin4.db ]] + then + echo_info "Setting up pgAdmin4 database.." + python3 setup.py > /tmp/pgadmin4.stdout 2> /tmp/pgadmin4.stderr + err_check "$?" "pgAdmin4 Database Setup" "Could not create pgAdmin4 database: \n$(cat /tmp/pgadmin4.stderr)" + fi + + echo_info "Starting Apache web server.." + /usr/sbin/httpd -D FOREGROUND & + echo $! > $APACHE_PIDFILE + + wait + env: + - name: PGADMIN_SETUP_EMAIL + value: admin + - name: PGADMIN_SETUP_PASSWORD + value: admin + - name: KRB5_CONFIG + value: /etc/pgadmin/conf.d/krb5.conf + - name: KRB5RCACHEDIR + value: /tmp + image: new-image + imagePullPolicy: Always + livenessProbe: + initialDelaySeconds: 15 + periodSeconds: 20 + tcpSocket: + port: 5050 + name: pgadmin + ports: + - containerPort: 5050 + name: pgadmin + protocol: TCP + readinessProbe: + initialDelaySeconds: 20 + periodSeconds: 10 + tcpSocket: + port: 5050 + resources: + requests: + cpu: 100m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgadmin + name: pgadmin-startup + readOnly: true + - mountPath: /etc/pgadmin/conf.d + name: pgadmin-config + readOnly: true + - mountPath: /etc/httpd/run + name: tmp + - mountPath: /var/log/pgadmin + name: pgadmin-log + - mountPath: /var/lib/pgadmin + name: pgadmin-data +initContainers: +- command: + - bash + - -ceu + - -- + - (umask a-w && echo "$1" > /etc/pgadmin/config_system.py) + - startup + - | + import glob, json, re, os + DEFAULT_BINARY_PATHS = {'pg': sorted([''] + glob.glob('/usr/pgsql-*/bin')).pop()} + with open('/etc/pgadmin/conf.d/~postgres-operator/pgadmin.json') as _f: + _conf, _data = re.compile(r'[A-Z_0-9]+'), json.load(_f) + if type(_data) is dict: + globals().update({k: v for k, v in _data.items() if _conf.fullmatch(k)}) + if os.path.isfile('/etc/pgadmin/conf.d/~postgres-operator/ldap-bind-password'): + with open('/etc/pgadmin/conf.d/~postgres-operator/ldap-bind-password') as _f: + LDAP_BIND_PASSWORD = _f.read() + image: new-image + imagePullPolicy: Always + name: pgadmin-startup + resources: + requests: + cpu: 100m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgadmin + name: pgadmin-startup + - mountPath: /etc/pgadmin/conf.d + name: pgadmin-config + readOnly: true +volumes: +- emptyDir: + medium: Memory + name: pgadmin-log +- name: pgadmin-data + persistentVolumeClaim: + claimName: "" +- name: pgadmin-config + projected: + sources: + - secret: + name: test + - configMap: + items: + - key: pgadmin-settings.json + path: ~postgres-operator/pgadmin.json + - secret: + items: + - key: podtestpw + path: 
~postgres-operator/ldap-bind-password + name: podtest +- emptyDir: + medium: Memory + sizeLimit: 32Ki + name: pgadmin-startup + `)) + }) +} diff --git a/internal/pgadmin/runner.go b/internal/pgadmin/runner.go deleted file mode 100644 index 233dc092be..0000000000 --- a/internal/pgadmin/runner.go +++ /dev/null @@ -1,201 +0,0 @@ -package pgadmin - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/kubeapi" - - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -const ( - defaultPath = "/var/lib/pgadmin/pgadmin4.db" - maxRetries = 10 -) - -// queryRunner provides a helper for performing queries against the pgadmin -// sqlite database via Kubernetes Exec functionality -type queryRunner struct { - BackoffPolicy Backoff - Namespace string - Path string - Pod v1.Pod - - clientset kubernetes.Interface - apicfg *rest.Config - secSalt string // Cached value of the database-specific security salt - separator string // Field separator for multi-field queries - ready bool // Flagged true once db has been set up -} - -// NewQueryRunner creates a query runner instance with the configuration -// necessary to exec into the named pod in the provided namespace -func NewQueryRunner(clientset kubernetes.Interface, apic *rest.Config, pod v1.Pod) *queryRunner { - qr := &queryRunner{ - Namespace: pod.ObjectMeta.Namespace, - Path: defaultPath, - Pod: pod, - apicfg: apic, - clientset: clientset, - separator: ",", - } - - // Set up a default policy as an 'intelligent default', creators can - // override, naturally - default will hit max at n == 10 - qr.BackoffPolicy = ExponentialBackoffPolicy{ - Base: 35 * time.Millisecond, - JitterMode: JitterSmall, - Maximum: 2 * time.Second, - Ratio: 1.5, - } - - return qr -} - -// EnsureReady waits until the database both exists and has content - -// determined by entires in the user table or the timeout has occurred -func (qr *queryRunner) EnsureReady() error { - // Use cached status to avoid repeated querying - if qr.ready { - return nil - } - - cmd := []string{ - "sqlite3", - qr.Path, - "SELECT email FROM user WHERE id=1", - } - - // short-fuse test, otherwise minimum wait is tick time - stdout, _, err := kubeapi.ExecToPodThroughAPI(qr.apicfg, qr.clientset, - cmd, qr.Pod.Spec.Containers[0].Name, qr.Pod.Name, qr.Namespace, nil) - if len(strings.TrimSpace(stdout)) > 0 && err == nil { - return nil - } - - var output string - var lastError error - // Extended retries compared to "normal" queries - for i := 0; i < maxRetries; i++ { - // exec into the pod to run the query - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(qr.apicfg, qr.clientset, - cmd, qr.Pod.Spec.Containers[0].Name, qr.Pod.Name, qr.Namespace, nil) - - if err != nil && !strings.Contains(stderr, "no such table") { - lastError = fmt.Errorf("%v - %v", err, stderr) - nextRoundIn := qr.BackoffPolicy.Duration(i) - 
log.Debugf("[InitWait attempt %02d]: %v - retry in %v", i, err, nextRoundIn) - time.Sleep(nextRoundIn) - } else { - // trim any space that may be there for an accurate read - output = strings.TrimSpace(stdout) - if output == "" || len(strings.TrimSpace(stderr)) > 0 { - log.Debugf("InitWait stderr: %s", stderr) - nextRoundIn := qr.BackoffPolicy.Duration(i) - time.Sleep(nextRoundIn) - } else { - qr.ready = true - lastError = nil - break - } - } - } - if lastError != nil && output == "" { - return fmt.Errorf("error executing query: %v", lastError) - } - - return nil -} - -// Exec performs a query on the database but expects no results -func (qr *queryRunner) Exec(query string) error { - if err := qr.EnsureReady(); err != nil { - return err - } - - cmd := []string{"sqlite3", qr.Path, query} - - var lastError error - for i := 0; i < maxRetries; i++ { - // exec into the pod to run the query - _, stderr, err := kubeapi.ExecToPodThroughAPI(qr.apicfg, qr.clientset, - cmd, qr.Pod.Spec.Containers[0].Name, qr.Pod.Name, qr.Namespace, nil) - if err != nil { - lastError = fmt.Errorf("%v - %v", err, stderr) - nextRoundIn := qr.BackoffPolicy.Duration(i) - log.Debugf("[Exec attempt %02d]: %v - retry in %v", i, err, nextRoundIn) - time.Sleep(nextRoundIn) - } else { - lastError = nil - break - } - } - if lastError != nil { - return fmt.Errorf("error executing query: %vv", lastError) - } - - return nil -} - -// Query performs a query on the database expecting results -func (qr *queryRunner) Query(query string) (string, error) { - if err := qr.EnsureReady(); err != nil { - return "", err - } - - cmd := []string{ - "sqlite3", - "-separator", - qr.separator, - qr.Path, - query, - } - - var output string - var lastError error - for i := 0; i < maxRetries; i++ { - // exec into the pod to run the query - stdout, stderr, err := kubeapi.ExecToPodThroughAPI(qr.apicfg, qr.clientset, - cmd, qr.Pod.Spec.Containers[0].Name, qr.Pod.Name, qr.Namespace, nil) - if err != nil { - lastError = fmt.Errorf("%v - %v", err, stderr) - nextRoundIn := qr.BackoffPolicy.Duration(i) - log.Debugf("[Query attempt %02d]: %v - retry in %v", i, err, nextRoundIn) - time.Sleep(nextRoundIn) - } else { - output = strings.TrimSpace(stdout) - lastError = nil - break - } - } - if lastError != nil && output == "" { - return "", fmt.Errorf("error executing query: %v", lastError) - } - - return output, nil -} - -// Separator gets the configured field separator -func (qr *queryRunner) Separator() string { - return qr.separator -} diff --git a/internal/pgadmin/server.go b/internal/pgadmin/server.go deleted file mode 100644 index 26568e8806..0000000000 --- a/internal/pgadmin/server.go +++ /dev/null @@ -1,52 +0,0 @@ -package pgadmin - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -// ServerEntry models parts of the pgadmin server table -type ServerEntry struct { - Name string // Typically set to the cluster name - Host string - Port int - MaintenanceDB string - SSLMode string - Comment string - Password string - - // Not yet used, latest params of 4.18 - // - // servergroup_id int // associated at query time - // user_id int // set based on login username - // username string // set based on login username - // - // role_text string - // discovery_id string - // hostaddr string - // db_res string - // passfile string - // sslcert string - // sslkey string - // sslrootcert string - // sslcrl string - // sslcompression bool - // use_ssh_tunnel bool - // tunnel_host string - // tunnel_port string - // tunnel_username string - // tunnel_authentication bool - // tunnel_identity_file string - // connect_timeout int - // tunnel_password string -} diff --git a/internal/pgadmin/users.go b/internal/pgadmin/users.go new file mode 100644 index 0000000000..7ce69ce211 --- /dev/null +++ b/internal/pgadmin/users.go @@ -0,0 +1,258 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgadmin + +import ( + "bytes" + "context" + "encoding/json" + "fmt" + "io" + "strings" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +type Executor func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, +) error + +// WriteUsersInPGAdmin uses exec and "python" to create users in pgAdmin and +// update their passwords when they already exist. A blank password for a user +// blocks that user from logging in to pgAdmin. The pgAdmin configuration +// database must exist before calling this. +func WriteUsersInPGAdmin( + ctx context.Context, cluster *v1beta1.PostgresCluster, exec Executor, + users []v1beta1.PostgresUserSpec, passwords map[string]string, +) error { + primary := naming.ClusterPrimaryService(cluster) + + args := []string{ + cluster.Name, + primary.Name + "." + primary.Namespace + ".svc", + fmt.Sprint(*cluster.Spec.Port), + } + script := strings.Join([]string{ + // Unpack arguments into an object. + // - https://docs.python.org/3/library/types.html#types.SimpleNamespace + ` +import sys +import types + +cluster = types.SimpleNamespace() +(cluster.name, cluster.hostname, cluster.port) = sys.argv[1:]`, + + // The location of pgAdmin files can vary by container image. Look for + // typical names in the module search path: the PyPI package is named + // "pgadmin4" while custom builds might use "pgadmin4-web". The pgAdmin + // packages expect to find themselves on the search path, so prepend + // that directory there (like pgAdmin does in its WSGI entrypoint). + // - https://pypi.org/project/pgadmin4/ + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgAdmin4.wsgi#L18 + ` +import importlib.util +import os +import sys + +spec = importlib.util.find_spec('.pgadmin', ( + importlib.util.find_spec('pgadmin4') or + importlib.util.find_spec('pgadmin4-web') +).name) +root = os.path.dirname(spec.submodule_search_locations[0]) +if sys.path[0] != root: + sys.path.insert(0, root)`, + + // Import pgAdmin modules now that they are on the search path. + // NOTE: When testing with the REPL, use the `__enter__` method to + // avoid one level of indentation. 
+ // + // create_app().app_context().__enter__() + // + ` +import copy +import json +import sys + +from pgadmin import create_app +from pgadmin.model import db, Role, User, Server, ServerGroup +from pgadmin.utils.constants import INTERNAL +from pgadmin.utils.crypto import encrypt + +with create_app().app_context():`, + + // The user with id=1 is automatically created by pgAdmin when it + // creates its configuration database. Clear that email and username + // so they cannot conflict with users we create, and deactivate the user + // so it cannot log in. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/migrations/versions/fdc58d9bd449_.py#L129 + ` + admin = db.session.query(User).filter_by(id=1).first() + admin.active = False + admin.email = '' + admin.password = '' + admin.username = '' + + db.session.add(admin) + db.session.commit()`, + + // Process each line of input as a single user definition. Those with + // a non-blank password are allowed to login. + // + // The "internal" authentication source requires that username and email + // be the same and be an email address. Append "@pgo" to the username + // to pass login validation. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgadmin/authenticate/internal.py#L88 + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgadmin/utils/validation_utils.py#L13 + // + // The "auth_source" and "username" attributes are part of the User + // model since pgAdmin v4.21. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgadmin/model/__init__.py#L66 + ` + for line in sys.stdin: + if not line.strip(): + continue + + data = json.loads(line) + address = data['username'] + '@pgo' + user = ( + db.session.query(User).filter_by(username=address).first() or + User() + ) + user.auth_source = INTERNAL + user.email = user.username = address + user.password = data['password'] + user.active = bool(user.password) + user.roles = db.session.query(Role).filter_by(name='User').all()`, + + // After a user logs in, pgAdmin checks that the "master password" is + // set. It does not seem to use the value nor check that it is valid. + // We set it to "any" to satisfy the check. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgadmin/browser/__init__.py#L963 + // + // The "verify_and_update_password" method hashes the plaintext password + // according to pgAdmin security settings. It is part of the User model + // since pgAdmin v4.19 and Flask-Security-Too v3.20. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/requirements.txt#L40 + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgadmin/model/__init__.py#L66 + // - https://flask-security-too.readthedocs.io/en/stable/api.html#flask_security.UserMixin.verify_and_update_password + ` + if user.password: + user.masterpass_check = 'any' + user.verify_and_update_password(user.password)`, + + // Write the user to get its generated identity. + ` + db.session.add(user) + db.session.commit()`, + + // One server group and connection are configured for each user, similar + // to the way they are made using their respective dialog windows. 
+ // - https://www.pgadmin.org/docs/pgadmin4/latest/server_group_dialog.html + // - https://www.pgadmin.org/docs/pgadmin4/latest/server_dialog.html + // + // We use a similar method to the import method when creating server connections + // - https://www.pgadmin.org/docs/pgadmin4/latest/import_export_servers.html + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/setup.py#L294 + ` + group = ( + db.session.query(ServerGroup).filter_by( + user_id=user.id, + ).order_by("id").first() or + ServerGroup() + ) + group.name = "Crunchy PostgreSQL Operator" + group.user_id = user.id + db.session.add(group) + db.session.commit()`, + + // The name of the server connection is the same as the cluster name. + // Note that the server connections are created when the users are created or + // modified. Changes to a server connection will generally persist until a + // change is made to the corresponding user. For custom server connections, + // a new server should be created with a unique name. + ` + server = ( + db.session.query(Server).filter_by( + servergroup_id=group.id, + user_id=user.id, + name=cluster.name, + ).first() or + Server() + ) + + server.name = cluster.name + server.host = cluster.hostname + server.port = cluster.port + server.servergroup_id = group.id + server.user_id = user.id + server.maintenance_db = "postgres" + server.ssl_mode = "prefer"`, + + // Encrypt the Server password with the User's plaintext password. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgadmin/__init__.py#L601 + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgadmin/utils/master_password.py#L21 + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgadmin/browser/server_groups/servers/__init__.py#L1091 + // + // The "save_password" attribute is part of the Server model since + // pgAdmin v4.21. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgadmin/model/__init__.py#L108 + ` + server.username = data['username'] + server.password = encrypt(data['password'], data['password']) + server.save_password = int(bool(data['password']))`, + + // Due to limitations on the types of updates that can be made to active + // server connections, when the current server connection is updated, we + // need to delete it and add a new server connection in its place. This + // will require a refresh if pgAdmin web GUI is being used when the + // update takes place. + // - https://github.com/pgadmin-org/pgadmin4/blob/REL-4_30/web/pgadmin/browser/server_groups/servers/__init__.py#L772 + // + // TODO(cbandy): We could possibly get the same effect by invalidating + // the user's sessions in pgAdmin v5.4 with Flask-Security-Too v4. 
+ // - https://github.com/pgadmin-org/pgadmin4/blob/REL-5_4/web/pgadmin/model/__init__.py#L67 + // - https://flask-security-too.readthedocs.io/en/stable/api.html#flask_security.UserDatastore.set_uniquifier + ` + if server.id and db.session.is_modified(server): + old = copy.deepcopy(server) + db.make_transient(server) + server.id = None + db.session.delete(old) + + db.session.add(server) + db.session.commit()`, + }, "\n") + "\n" + + var err error + var stdin, stdout, stderr bytes.Buffer + + encoder := json.NewEncoder(&stdin) + encoder.SetEscapeHTML(false) + + for i := range users { + spec := users[i] + + if err == nil { + err = encoder.Encode(map[string]interface{}{ + "username": spec.Name, + "password": passwords[string(spec.Name)], + }) + } + } + + if err == nil { + err = exec(ctx, &stdin, &stdout, &stderr, + append([]string{"python", "-c", script}, args...)...) + + log := logging.FromContext(ctx) + log.V(1).Info("wrote pgAdmin users", + "stdout", stdout.String(), + "stderr", stderr.String()) + } + + return err +} diff --git a/internal/pgadmin/users_test.go b/internal/pgadmin/users_test.go new file mode 100644 index 0000000000..69619667af --- /dev/null +++ b/internal/pgadmin/users_test.go @@ -0,0 +1,255 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgadmin + +import ( + "context" + "errors" + "io" + "os" + "os/exec" + "path/filepath" + "strings" + "testing" + + "gotest.tools/v3/assert" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestWriteUsersInPGAdmin(t *testing.T) { + ctx := context.Background() + cluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "testcluster", + Namespace: "testnamespace", + }, + Spec: v1beta1.PostgresClusterSpec{ + Port: initialize.Int32(5432), + }, + } + + t.Run("Arguments", func(t *testing.T) { + expected := errors.New("pass-through") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdin != nil, "should send stdin") + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + + assert.Check(t, !strings.ContainsRune(strings.Join(command, ""), '\t'), + "Python should not be indented with tabs") + + assert.DeepEqual(t, command, []string{"python", "-c", ` +import sys +import types + +cluster = types.SimpleNamespace() +(cluster.name, cluster.hostname, cluster.port) = sys.argv[1:] + +import importlib.util +import os +import sys + +spec = importlib.util.find_spec('.pgadmin', ( + importlib.util.find_spec('pgadmin4') or + importlib.util.find_spec('pgadmin4-web') +).name) +root = os.path.dirname(spec.submodule_search_locations[0]) +if sys.path[0] != root: + sys.path.insert(0, root) + +import copy +import json +import sys + +from pgadmin import create_app +from pgadmin.model import db, Role, User, Server, ServerGroup +from pgadmin.utils.constants import INTERNAL +from pgadmin.utils.crypto import encrypt + +with create_app().app_context(): + + admin = db.session.query(User).filter_by(id=1).first() + admin.active = False + admin.email = '' + admin.password = '' + admin.username = '' + + db.session.add(admin) + db.session.commit() + + for line in sys.stdin: + if not line.strip(): + continue + + data = 
json.loads(line) + address = data['username'] + '@pgo' + user = ( + db.session.query(User).filter_by(username=address).first() or + User() + ) + user.auth_source = INTERNAL + user.email = user.username = address + user.password = data['password'] + user.active = bool(user.password) + user.roles = db.session.query(Role).filter_by(name='User').all() + + if user.password: + user.masterpass_check = 'any' + user.verify_and_update_password(user.password) + + db.session.add(user) + db.session.commit() + + group = ( + db.session.query(ServerGroup).filter_by( + user_id=user.id, + ).order_by("id").first() or + ServerGroup() + ) + group.name = "Crunchy PostgreSQL Operator" + group.user_id = user.id + db.session.add(group) + db.session.commit() + + server = ( + db.session.query(Server).filter_by( + servergroup_id=group.id, + user_id=user.id, + name=cluster.name, + ).first() or + Server() + ) + + server.name = cluster.name + server.host = cluster.hostname + server.port = cluster.port + server.servergroup_id = group.id + server.user_id = user.id + server.maintenance_db = "postgres" + server.ssl_mode = "prefer" + + server.username = data['username'] + server.password = encrypt(data['password'], data['password']) + server.save_password = int(bool(data['password'])) + + if server.id and db.session.is_modified(server): + old = copy.deepcopy(server) + db.make_transient(server) + server.id = None + db.session.delete(old) + + db.session.add(server) + db.session.commit() +`, + "testcluster", + "testcluster-primary.testnamespace.svc", + "5432", + }) + return expected + } + + assert.Equal(t, expected, WriteUsersInPGAdmin(ctx, cluster, exec, nil, nil)) + }) + + t.Run("Flake8", func(t *testing.T) { + flake8 := require.Flake8(t) + + called := false + exec := func( + _ context.Context, _ io.Reader, _, _ io.Writer, command ...string, + ) error { + called = true + + // Expect a python command with an inline script. + assert.DeepEqual(t, command[:2], []string{"python", "-c"}) + assert.Assert(t, len(command) > 2) + script := command[2] + + // Write out that inline script. + dir := t.TempDir() + file := filepath.Join(dir, "script.py") + assert.NilError(t, os.WriteFile(file, []byte(script), 0o600)) + + // Expect flake8 to be happy. Ignore "E402 module level import not + // at top of file" in addition to the defaults. 
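+ // (E402 fires here because the generated script deliberately imports the
+ // pgadmin modules only after reading sys.argv and adjusting sys.path.)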
+ cmd := exec.Command(flake8, "--extend-ignore=E402", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + + return nil + } + + _ = WriteUsersInPGAdmin(ctx, cluster, exec, nil, nil) + assert.Assert(t, called) + }) + + t.Run("Empty", func(t *testing.T) { + calls := 0 + exec := func( + _ context.Context, stdin io.Reader, _, _ io.Writer, _ ...string, + ) error { + calls++ + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Assert(t, len(b) == 0, "expected no stdin, got %q", string(b)) + return nil + } + + assert.NilError(t, WriteUsersInPGAdmin(ctx, cluster, exec, nil, nil)) + assert.Equal(t, calls, 1) + + assert.NilError(t, WriteUsersInPGAdmin(ctx, cluster, exec, []v1beta1.PostgresUserSpec{}, nil)) + assert.Equal(t, calls, 2) + + assert.NilError(t, WriteUsersInPGAdmin(ctx, cluster, exec, nil, map[string]string{})) + assert.Equal(t, calls, 3) + }) + + t.Run("Passwords", func(t *testing.T) { + calls := 0 + exec := func( + _ context.Context, stdin io.Reader, _, _ io.Writer, _ ...string, + ) error { + calls++ + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.DeepEqual(t, string(b), strings.TrimLeft(` +{"password":"","username":"user-no-options"} +{"password":"","username":"user-no-databases"} +{"password":"some$pass!word","username":"user-with-password"} +`, "\n")) + return nil + } + + assert.NilError(t, WriteUsersInPGAdmin(ctx, cluster, exec, + []v1beta1.PostgresUserSpec{ + { + Name: "user-no-options", + Databases: []v1beta1.PostgresIdentifier{"db1"}, + }, + { + Name: "user-no-databases", + Options: "some options here", + }, + { + Name: "user-with-password", + }, + }, + map[string]string{ + "no-user": "ignored", + "user-with-password": "some$pass!word", + }, + )) + assert.Equal(t, calls, 1) + }) +} diff --git a/internal/pgaudit/postgres.go b/internal/pgaudit/postgres.go new file mode 100644 index 0000000000..07867d020e --- /dev/null +++ b/internal/pgaudit/postgres.go @@ -0,0 +1,59 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgaudit + +import ( + "context" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/postgres" +) + +// When the pgAudit shared library is not loaded, the extension cannot be +// installed. The "CREATE EXTENSION" command fails with an error, "pgaudit must +// be loaded…". +// +// When the pgAudit shared library is loaded but the extension is not installed, +// AUDIT messages are logged according to the various levels and settings +// (including both SESSION and OBJECT events) but the messages contain fewer +// details than normal. DDL messages, for example, lack the affected object name +// and type. +// +// When the pgAudit extension is installed but the shared library is not loaded, +// 1. No AUDIT messages are logged. +// 2. DDL commands fail with error "pgaudit must be loaded…". +// 3. DML commands and SELECT queries succeed and return results. +// 4. Databases can be created and dropped. +// 5. Roles and privileges can be created, dropped, granted, and revoked, but +// the "DROP OWNED" command fails. + +// EnableInPostgreSQL installs pgAudit triggers into every database. +func EnableInPostgreSQL(ctx context.Context, exec postgres.Executor) error { + log := logging.FromContext(ctx) + + stdout, stderr, err := exec.ExecInAllDatabases(ctx, + // Quiet the NOTICE from IF EXISTS, and install the pgAudit event triggers. 
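+ // As a rough illustration only (the database name here is an assumption),
+ // the equivalent statement run by hand against one database would be:
+ //
+ //   psql -d mydb -c 'SET client_min_messages = WARNING; CREATE EXTENSION IF NOT EXISTS pgaudit;'
+ //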
+ // - https://www.postgresql.org/docs/current/runtime-config-client.html + // - https://github.com/pgaudit/pgaudit#settings + `SET client_min_messages = WARNING; CREATE EXTENSION IF NOT EXISTS pgaudit;`, + map[string]string{ + "ON_ERROR_STOP": "on", // Abort when any one command fails. + "QUIET": "on", // Do not print successful commands to stdout. + }) + + log.V(1).Info("enabled pgAudit", "stdout", stdout, "stderr", stderr) + + return err +} + +// PostgreSQLParameters sets the parameters required by pgAudit. +func PostgreSQLParameters(outParameters *postgres.Parameters) { + + // Load the shared library when PostgreSQL starts. + // PostgreSQL must be restarted when changing this value. + // - https://github.com/pgaudit/pgaudit#settings + // - https://www.postgresql.org/docs/current/runtime-config-client.html + outParameters.Mandatory.AppendToList("shared_preload_libraries", "pgaudit") +} diff --git a/internal/pgaudit/postgres_test.go b/internal/pgaudit/postgres_test.go new file mode 100644 index 0000000000..3734e511f0 --- /dev/null +++ b/internal/pgaudit/postgres_test.go @@ -0,0 +1,65 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgaudit + +import ( + "context" + "errors" + "io" + "strings" + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/postgres" +) + +func TestEnableInPostgreSQL(t *testing.T) { + expected := errors.New("whoops") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + + assert.Assert(t, strings.Contains(strings.Join(command, "\n"), + `SELECT datname FROM pg_catalog.pg_database`, + ), "expected all databases and templates") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), strings.Trim(` +SET client_min_messages = WARNING; CREATE EXTENSION IF NOT EXISTS pgaudit; + `, "\t\n")) + + return expected + } + + ctx := context.Background() + assert.Equal(t, expected, EnableInPostgreSQL(ctx, exec)) +} + +func TestPostgreSQLParameters(t *testing.T) { + parameters := postgres.Parameters{ + Mandatory: postgres.NewParameterSet(), + } + + // No comma when empty. + PostgreSQLParameters(¶meters) + + assert.Assert(t, parameters.Default == nil) + assert.DeepEqual(t, parameters.Mandatory.AsMap(), map[string]string{ + "shared_preload_libraries": "pgaudit", + }) + + // Appended when not empty. + parameters.Mandatory.Add("shared_preload_libraries", "some,existing") + PostgreSQLParameters(¶meters) + + assert.Assert(t, parameters.Default == nil) + assert.DeepEqual(t, parameters.Mandatory.AsMap(), map[string]string{ + "shared_preload_libraries": "some,existing,pgaudit", + }) +} diff --git a/internal/pgbackrest/certificates.go b/internal/pgbackrest/certificates.go new file mode 100644 index 0000000000..bb2633dfe7 --- /dev/null +++ b/internal/pgbackrest/certificates.go @@ -0,0 +1,129 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "encoding" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/internal/initialize" +) + +const ( + certAuthorityAbsolutePath = configDirectory + "/" + certAuthorityProjectionPath + certClientPrivateKeyAbsolutePath = configDirectory + "/" + certClientPrivateKeyProjectionPath + certClientAbsolutePath = configDirectory + "/" + certClientProjectionPath + certServerPrivateKeyAbsolutePath = serverMountPath + "/" + certServerPrivateKeyProjectionPath + certServerAbsolutePath = serverMountPath + "/" + certServerProjectionPath + + certAuthorityProjectionPath = "~postgres-operator/tls-ca.crt" + certClientPrivateKeyProjectionPath = "~postgres-operator/client-tls.key" + certClientProjectionPath = "~postgres-operator/client-tls.crt" + certServerPrivateKeyProjectionPath = "server-tls.key" + certServerProjectionPath = "server-tls.crt" + + certAuthoritySecretKey = "pgbackrest.ca-roots" // #nosec G101 this is a name, not a credential + certClientPrivateKeySecretKey = "pgbackrest-client.key" // #nosec G101 this is a name, not a credential + certClientSecretKey = "pgbackrest-client.crt" // #nosec G101 this is a name, not a credential + + certInstancePrivateKeySecretKey = "pgbackrest-server.key" + certInstanceSecretKey = "pgbackrest-server.crt" + + certRepoPrivateKeySecretKey = "pgbackrest-repo-host.key" // #nosec G101 this is a name, not a credential + certRepoSecretKey = "pgbackrest-repo-host.crt" // #nosec G101 this is a name, not a credential +) + +// certFile concatenates the results of multiple PEM-encoding marshalers. +func certFile(texts ...encoding.TextMarshaler) ([]byte, error) { + var out []byte + + for i := range texts { + if b, err := texts[i].MarshalText(); err == nil { + out = append(out, b...) + } else { + return nil, err + } + } + + return out, nil +} + +// clientCertificates returns projections of CAs, keys, and certificates to +// include in a configuration volume from the pgBackRest Secret. +func clientCertificates() []corev1.KeyToPath { + return []corev1.KeyToPath{ + { + Key: certAuthoritySecretKey, + Path: certAuthorityProjectionPath, + }, + { + Key: certClientSecretKey, + Path: certClientProjectionPath, + }, + { + Key: certClientPrivateKeySecretKey, + Path: certClientPrivateKeyProjectionPath, + + // pgBackRest requires that certificate keys not be readable by any + // other user. + // - https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/common/io/tls/common.c#L128 + Mode: initialize.Int32(0o600), + }, + } +} + +// clientCommonName returns a client certificate common name (CN) for cluster. +func clientCommonName(cluster metav1.Object) string { + // The common name (ASN.1 OID 2.5.4.3) of a certificate must be + // 64 characters or less. ObjectMeta.UID is a UUID in its 36-character + // string representation. + // - https://tools.ietf.org/html/rfc5280#appendix-A + // - https://docs.k8s.io/concepts/overview/working-with-objects/names/#uids + // - https://releases.k8s.io/v1.22.0/staging/src/k8s.io/apiserver/pkg/registry/rest/create.go#L111 + // - https://releases.k8s.io/v1.22.0/staging/src/k8s.io/apiserver/pkg/registry/rest/meta.go#L30 + return "pgbackrest@" + string(cluster.GetUID()) +} + +// instanceServerCertificates returns projections of keys and certificates to +// include in a server volume from an instance Secret. 
+func instanceServerCertificates() []corev1.KeyToPath { + return []corev1.KeyToPath{ + { + Key: certInstanceSecretKey, + Path: certServerProjectionPath, + }, + { + Key: certInstancePrivateKeySecretKey, + Path: certServerPrivateKeyProjectionPath, + + // pgBackRest requires that certificate keys not be readable by any + // other user. + // - https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/common/io/tls/common.c#L128 + Mode: initialize.Int32(0o600), + }, + } +} + +// repositoryServerCertificates returns projections of keys and certificates to +// include in a server volume from the pgBackRest Secret. +func repositoryServerCertificates() []corev1.KeyToPath { + return []corev1.KeyToPath{ + { + Key: certRepoSecretKey, + Path: certServerProjectionPath, + }, + { + Key: certRepoPrivateKeySecretKey, + Path: certServerPrivateKeyProjectionPath, + + // pgBackRest requires that certificate keys not be readable by any + // other user. + // - https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/common/io/tls/common.c#L128 + Mode: initialize.Int32(0o600), + }, + } +} diff --git a/internal/pgbackrest/certificates.md b/internal/pgbackrest/certificates.md new file mode 100644 index 0000000000..344616486b --- /dev/null +++ b/internal/pgbackrest/certificates.md @@ -0,0 +1,74 @@ + + +Server +------ + +pgBackRest uses OpenSSL to protect connections between machines. The [TLS server](tls-server.md) +listens on a TCP port, encrypts connections with its server certificate, and +verifies client certificates against a certificate authority. + +- `tls-server-ca-file` is used for client verification. It is the path to a file + of trusted certificates concatenated in PEM format. When this is set, clients + are also authorized according to `tls-server-auth`. + + See https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_load_verify_locations.html + +- `tls-server-cert-file` is the server certificate. It is the path to a file in + PEM format containing the certificate as well as any number of CA certificates + needed to establish its authenticity. + + See https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_use_certificate_chain_file.html + +- `tls-server-key-file` is the server certificate's private key. It is the path + to a file in PEM format. + + See https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_use_PrivateKey_file.html + + +Clients +------- + +pgBackRest uses OpenSSL to protect connections it makes to PostgreSQL instances +and repository hosts. It presents a client certificate that is verified by the +server and must contain a common name (CN) that is authorized according to `tls-server-auth`. + +- `pg-host-ca-file` is used for server verification when connecting to + pgBackRest on a PostgreSQL instance. It is the path to a file of trusted + certificates concatenated in PEM format. + + See https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_load_verify_locations.html + +- `pg-host-cert-file` is the client certificate to present when connecting to + pgBackRest on a PostgreSQL instance. It is the path to a file in PEM format + containing the certificate as well as any number of CA certificates needed to + establish its authenticity. + + See https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_use_certificate_chain_file.html + +- `pg-host-key-file` is the client certificate's private key. It is the path + to a file in PEM format. 
+ + See https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_use_PrivateKey_file.html + +- `repo-host-ca-file` is used for server verification when connecting to + pgBackRest on a repository host. It is the path to a file of trusted + certificates concatenated in PEM format. + + See https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_load_verify_locations.html + +- `repo-host-cert-file` is the client certificate to present when connecting to + pgBackRest on a repository host. It is the path to a file in PEM format + containing the certificate as well as any number of CA certificates needed to + establish its authenticity. + + See https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_use_certificate_chain_file.html + +- `repo-host-key-file` is the client certificate's private key. It is the path + to a file in PEM format. + + See https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_use_PrivateKey_file.html + diff --git a/internal/pgbackrest/certificates_test.go b/internal/pgbackrest/certificates_test.go new file mode 100644 index 0000000000..4ef41b2879 --- /dev/null +++ b/internal/pgbackrest/certificates_test.go @@ -0,0 +1,51 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "errors" + "strings" + "testing" + + "gotest.tools/v3/assert" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/uuid" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" +) + +type funcMarshaler func() ([]byte, error) + +func (f funcMarshaler) MarshalText() ([]byte, error) { return f() } + +func TestCertFile(t *testing.T) { + expected := errors.New("boom") + var short funcMarshaler = func() ([]byte, error) { return []byte(`one`), nil } + var fail funcMarshaler = func() ([]byte, error) { return nil, expected } + + text, err := certFile(short, short, short) + assert.NilError(t, err) + assert.DeepEqual(t, text, []byte(`oneoneone`)) + + text, err = certFile(short, fail, short) + assert.Equal(t, err, expected) + assert.DeepEqual(t, text, []byte(nil)) +} + +func TestClientCommonName(t *testing.T) { + t.Parallel() + + cluster := &metav1.ObjectMeta{UID: uuid.NewUUID()} + cn := clientCommonName(cluster) + + assert.Assert(t, cmp.Regexp("^[-[:xdigit:]]{36}$", string(cluster.UID)), + "expected Kubernetes UID to be a UUID string") + + assert.Assert(t, cmp.Regexp("^[[:print:]]{1,64}$", cn), + "expected printable ASCII within 64 characters for %q", cluster) + + assert.Assert(t, strings.HasPrefix(cn, "pgbackrest@"), + `expected %q to begin with "pgbackrest@" for %q`, cn, cluster) +} diff --git a/internal/pgbackrest/config.go b/internal/pgbackrest/config.go new file mode 100644 index 0000000000..f50b2690ee --- /dev/null +++ b/internal/pgbackrest/config.go @@ -0,0 +1,576 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "context" + "fmt" + "strconv" + "strings" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // defaultRepo1Path stores the default pgBackRest repo path + defaultRepo1Path = "/pgbackrest/" + + // DefaultStanzaName is the name of the default pgBackRest stanza + DefaultStanzaName = "db" + + // CMInstanceKey is the name of the pgBackRest configuration file for a PostgreSQL instance + CMInstanceKey = "pgbackrest_instance.conf" + + // CMRepoKey is the name of the pgBackRest configuration file for a pgBackRest dedicated + // repository host + CMRepoKey = "pgbackrest_repo.conf" + + // configDirectory is the pgBackRest configuration directory. + configDirectory = "/etc/pgbackrest/conf.d" + + // ConfigHashKey is the name of the file storing the pgBackRest config hash + ConfigHashKey = "config-hash" + + // repoMountPath is where to mount the pgBackRest repo volume. + repoMountPath = "/pgbackrest" + + serverConfigAbsolutePath = configDirectory + "/" + serverConfigProjectionPath + serverConfigProjectionPath = "~postgres-operator_server.conf" + + serverConfigMapKey = "pgbackrest-server.conf" + + // serverMountPath is the directory containing the TLS server certificate + // and key. This is outside of configDirectory so the hash calculated by + // backup jobs does not change when the primary changes. + serverMountPath = "/etc/pgbackrest/server" +) + +const ( + iniGeneratedWarning = "" + + "# Generated by postgres-operator. DO NOT EDIT.\n" + + "# Your changes will not be saved.\n" +) + +// CreatePGBackRestConfigMapIntent creates a configmap struct with pgBackRest pgbackrest.conf settings in the data field. +// The keys within the data field correspond to the use of that configuration. 
+// pgbackrest_job.conf is used by certain jobs, such as stanza create and backup +// pgbackrest_primary.conf is used by the primary database pod +// pgbackrest_repo.conf is used by the pgBackRest repository pod +func CreatePGBackRestConfigMapIntent(postgresCluster *v1beta1.PostgresCluster, + repoHostName, configHash, serviceName, serviceNamespace string, + instanceNames []string) *corev1.ConfigMap { + + meta := naming.PGBackRestConfig(postgresCluster) + meta.Annotations = naming.Merge( + postgresCluster.Spec.Metadata.GetAnnotationsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetAnnotationsOrNil()) + meta.Labels = naming.Merge( + postgresCluster.Spec.Metadata.GetLabelsOrNil(), + postgresCluster.Spec.Backups.PGBackRest.Metadata.GetLabelsOrNil(), + naming.PGBackRestConfigLabels(postgresCluster.GetName()), + ) + + cm := &corev1.ConfigMap{ + TypeMeta: metav1.TypeMeta{ + Kind: "ConfigMap", + APIVersion: "v1", + }, + ObjectMeta: meta, + } + + // create an empty map for the config data + initialize.Map(&cm.Data) + + pgdataDir := postgres.DataDirectory(postgresCluster) + // Port will always be populated, since the API will set a default of 5432 if not provided + pgPort := *postgresCluster.Spec.Port + cm.Data[CMInstanceKey] = iniGeneratedWarning + + populatePGInstanceConfigurationMap( + serviceName, serviceNamespace, repoHostName, pgdataDir, + config.FetchKeyCommand(&postgresCluster.Spec), + strconv.Itoa(postgresCluster.Spec.PostgresVersion), + pgPort, postgresCluster.Spec.Backups.PGBackRest.Repos, + postgresCluster.Spec.Backups.PGBackRest.Global, + ).String() + + // PostgreSQL instances that have not rolled out expect to mount a server + // config file. Always populate that file so those volumes stay valid and + // Kubernetes propagates their contents to those pods. The repo host name + // given below should always be set, but this guards for cases when it might + // not be. + cm.Data[serverConfigMapKey] = "" + + if repoHostName != "" { + cm.Data[serverConfigMapKey] = iniGeneratedWarning + + serverConfig(postgresCluster).String() + + cm.Data[CMRepoKey] = iniGeneratedWarning + + populateRepoHostConfigurationMap( + serviceName, serviceNamespace, + pgdataDir, config.FetchKeyCommand(&postgresCluster.Spec), + strconv.Itoa(postgresCluster.Spec.PostgresVersion), + pgPort, instanceNames, + postgresCluster.Spec.Backups.PGBackRest.Repos, + postgresCluster.Spec.Backups.PGBackRest.Global, + ).String() + } + + cm.Data[ConfigHashKey] = configHash + + return cm +} + +// MakePGBackrestLogDir creates the pgBackRest default log path directory used when a +// dedicated repo host is configured. +func MakePGBackrestLogDir(template *corev1.PodTemplateSpec, + cluster *v1beta1.PostgresCluster) { + + var pgBackRestLogPath string + for _, repo := range cluster.Spec.Backups.PGBackRest.Repos { + if repo.Volume != nil { + pgBackRestLogPath = fmt.Sprintf(naming.PGBackRestRepoLogPath, repo.Name) + break + } + } + + container := corev1.Container{ + Command: []string{"bash", "-c", "mkdir -p " + pgBackRestLogPath}, + Image: config.PGBackRestContainerImage(cluster), + ImagePullPolicy: cluster.Spec.ImagePullPolicy, + Name: naming.ContainerPGBackRestLogDirInit, + SecurityContext: initialize.RestrictedSecurityContext(), + } + + // Set the container resources to the 'pgbackrest' container configuration. 
+ for i, c := range template.Spec.Containers {
+ if c.Name == naming.PGBackRestRepoContainerName {
+ container.Resources = template.Spec.Containers[i].Resources
+ break
+ }
+ }
+ template.Spec.InitContainers = append(template.Spec.InitContainers, container)
+}
+
+// RestoreCommand returns the command for performing a pgBackRest restore. In addition to calling
+// the pgBackRest restore command with any pgBackRest options provided, the script also does the
+// following:
+// - Removes the patroni.dynamic.json file if present. This ensures the configuration from the
+// cluster being restored from is not utilized when bootstrapping a new cluster, and the
+// configuration for the new cluster is utilized instead.
+// - Starts the database and allows recovery to complete. A temporary postgresql.conf file
+// with the minimum settings needed to safely start the database is created and utilized.
+// - Renames the data directory as needed to bootstrap the cluster using the restored database.
+// This ensures compatibility with the "existing" bootstrap method that is included in the
+// Patroni config when bootstrapping a cluster using an existing data directory.
+func RestoreCommand(pgdata, hugePagesSetting, fetchKeyCommand string, tablespaceVolumes []*corev1.PersistentVolumeClaim, args ...string) []string {
+
+ // After pgBackRest restores files, PostgreSQL starts in recovery to finish
+ // replaying WAL files. "hot_standby" is "on" (by default) so we can detect
+ // when recovery has finished. In that mode, some parameters cannot be
+ // smaller than they were when PostgreSQL was backed up. Configure them to
+ // match the values reported by "pg_controldata". Those parameters are also
+ // written to WAL files and may change during recovery. When they increase,
+ // PostgreSQL exits and we reconfigure and restart it.
+ // For PG14, when some parameters from WAL require a restart, the behavior is
+ // to pause unless a restart is requested. For this edge case, we run a CASE
+ // query to check
+ // (a) if the instance is in recovery;
+ // (b) if so, if the WAL replay is paused;
+ // (c) if so, to unpause WAL replay, allowing our expected behavior to resume.
+ // A note on the PostgreSQL code: we cast `pg_catalog.pg_wal_replay_resume()` as text
+ // because that method returns a void (which is a non-NULL but empty result). When
+ // that void is cast as a string, it is an empty string ('').
+ // - https://www.postgresql.org/docs/current/hot-standby.html
+ // - https://www.postgresql.org/docs/current/app-pgcontroldata.html
+
+ // The postmaster.pid file is removed, if it exists, before attempting a restore.
+ // This allows the restore to be tried more than once without causing an
+ // error due to the presence of the file in subsequent attempts.
+
+ // The 'pg_ctl' timeout is set to a very large value (1 year) to ensure there
+ // are no timeouts when starting or stopping Postgres.
+
+ tablespaceCmd := ""
+ for _, tablespaceVolume := range tablespaceVolumes {
+ tablespaceCmd = tablespaceCmd + fmt.Sprintf(
+ "\ninstall --directory --mode=0700 '/tablespaces/%s/data'",
+ tablespaceVolume.Labels[naming.LabelData])
+ }
+
+ // If the fetch key command is not empty, save the GUC variable and value
+ // to a new string.
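+ // For example, if the fetch key command were "cat /etc/keys/tde.key" (a
+ // hypothetical path), the temporary configuration below would gain the line:
+ //   encryption_key_command = 'cat /etc/keys/tde.key'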
+ var ekc string
+ if fetchKeyCommand != "" {
+ ekc = `
+encryption_key_command = '` + fetchKeyCommand + `'`
+ }
+
+ restoreScript := `declare -r pgdata="$1" opts="$2"
+install --directory --mode=0700 "${pgdata}"` + tablespaceCmd + `
+rm -f "${pgdata}/postmaster.pid"
+bash -xc "pgbackrest restore ${opts}"
+rm -f "${pgdata}/patroni.dynamic.json"
+export PGDATA="${pgdata}" PGHOST='/tmp'
+
+until [[ "${recovery=}" == 'f' ]]; do
+if [[ -z "${recovery}" ]]; then
+control=$(pg_controldata)
+read -r max_conn <<< "${control##*max_connections setting:}"
+read -r max_lock <<< "${control##*max_locks_per_xact setting:}"
+read -r max_ptxn <<< "${control##*max_prepared_xacts setting:}"
+read -r max_work <<< "${control##*max_worker_processes setting:}"
+echo > /tmp/pg_hba.restore.conf 'local all "postgres" peer'
+cat > /tmp/postgres.restore.conf <<EOF
+hba_file = '/tmp/pg_hba.restore.conf'
+huge_pages = ` + hugePagesSetting + `
+listen_addresses = ''
+max_connections = '${max_conn}'
+max_locks_per_transaction = '${max_lock}'
+max_prepared_transactions = '${max_ptxn}'
+max_worker_processes = '${max_work}'
+unix_socket_directories = '/tmp'` + ekc + `
+EOF
+
+if [[ "$(< "${pgdata}/PG_VERSION")" -ge 12 ]]; then
+read -r max_wals <<< "${control##*max_wal_senders setting:}"
+echo >> /tmp/postgres.restore.conf "max_wal_senders = '${max_wals}'"
+fi
+
+pg_ctl start --silent --timeout=31536000 --wait --options='--config-file=/tmp/postgres.restore.conf'
+fi
+
+recovery=$(psql -Atc "SELECT CASE
+ WHEN NOT pg_catalog.pg_is_in_recovery() THEN false
+ WHEN NOT pg_catalog.pg_is_wal_replay_paused() THEN true
+ ELSE pg_catalog.pg_wal_replay_resume()::text = ''
+END recovery" && sleep 1) ||:
+done
+
+pg_ctl stop --silent --wait --timeout=31536000
+mv "${pgdata}" "${pgdata}_bootstrap"`
+
+ return append([]string{"bash", "-ceu", "--", restoreScript, "-", pgdata}, args...)
+}
+
+// DedicatedSnapshotVolumeRestoreCommand returns the command for performing a pgBackRest delta restore
+// into a dedicated snapshot volume. In addition to calling the pgBackRest restore command with any
+// pgBackRest options provided, the script also removes the patroni.dynamic.json file if present. This
+// ensures the configuration from the cluster being restored from is not utilized when bootstrapping a
+// new cluster, and the configuration for the new cluster is utilized instead.
+func DedicatedSnapshotVolumeRestoreCommand(pgdata string, args ...string) []string {
+
+ // The postmaster.pid file is removed, if it exists, before attempting a restore.
+ // This allows the restore to be tried more than once without causing an
+ // error due to the presence of the file in subsequent attempts.
+
+ // Wrap the pgbackrest restore command in backup_label checks. If the pre/post
+ // backup_labels differ, the restore moved the database forward, so return 0
+ // so that the Job succeeds and we know to proceed with the snapshot.
+ // Otherwise return 1: the Job will fail, and we will not proceed with the snapshot.
+ restoreScript := `declare -r pgdata="$1" opts="$2"
+BACKUP_LABEL=$([[ ! -e "${pgdata}/backup_label" ]] || md5sum "${pgdata}/backup_label")
+echo "Starting pgBackRest delta restore"
+
+install --directory --mode=0700 "${pgdata}"
+rm -f "${pgdata}/postmaster.pid"
+bash -xc "pgbackrest restore ${opts}"
+rm -f "${pgdata}/patroni.dynamic.json"
+
+BACKUP_LABEL_POST=$([[ ! -e "${pgdata}/backup_label" ]] || md5sum "${pgdata}/backup_label")
+if [[ "${BACKUP_LABEL}" != "${BACKUP_LABEL_POST}" ]]
+then
+ exit 0
+fi
+echo Database was not advanced by restore. No snapshot will be taken.
+echo Check that your last backup was successful.
+exit 1`
+
+ return append([]string{"bash", "-ceu", "--", restoreScript, "-", pgdata}, args...)
+} + +// populatePGInstanceConfigurationMap returns options representing the pgBackRest configuration for +// a PostgreSQL instance +func populatePGInstanceConfigurationMap( + serviceName, serviceNamespace, repoHostName, pgdataDir, + fetchKeyCommand, postgresVersion string, + pgPort int32, repos []v1beta1.PGBackRestRepo, + globalConfig map[string]string, +) iniSectionSet { + + // TODO(cbandy): pass a FQDN in already. + repoHostFQDN := repoHostName + "-0." + + serviceName + "." + serviceNamespace + ".svc." + + naming.KubernetesClusterDomain(context.Background()) + + global := iniMultiSet{} + stanza := iniMultiSet{} + + // For faster and more robust WAL archiving, we turn on pgBackRest archive-async. + global.Set("archive-async", "y") + // pgBackRest spool-path should always be co-located with the Postgres WAL path. + global.Set("spool-path", "/pgdata/pgbackrest-spool") + // pgBackRest will log to the pgData volume for commands run on the PostgreSQL instance + global.Set("log-path", naming.PGBackRestPGDataLogPath) + + for _, repo := range repos { + global.Set(repo.Name+"-path", defaultRepo1Path+repo.Name) + + // repo volumes do not contain configuration (unlike other repo types which has actual + // pgBackRest settings such as "bucket", "region", etc.), so only grab the name from the + // repo if a Volume is detected, and don't attempt to get an configs + if repo.Volume == nil { + for option, val := range getExternalRepoConfigs(repo) { + global.Set(option, val) + } + } + + // Only "volume" (i.e. PVC-based) repos should ever have a repo host configured. This + // means cloud-based repos (S3, GCS or Azure) should not have a repo host configured. + if repoHostName != "" && repo.Volume != nil { + global.Set(repo.Name+"-host", repoHostFQDN) + global.Set(repo.Name+"-host-type", "tls") + global.Set(repo.Name+"-host-ca-file", certAuthorityAbsolutePath) + global.Set(repo.Name+"-host-cert-file", certClientAbsolutePath) + global.Set(repo.Name+"-host-key-file", certClientPrivateKeyAbsolutePath) + global.Set(repo.Name+"-host-user", "postgres") + } + } + + for option, val := range globalConfig { + global.Set(option, val) + } + + // Now add the local PG instance to the stanza section. 
The local PG host must always be + // index 1: https://github.com/pgbackrest/pgbackrest/issues/1197#issuecomment-708381800 + stanza.Set("pg1-path", pgdataDir) + stanza.Set("pg1-port", fmt.Sprint(pgPort)) + stanza.Set("pg1-socket-path", postgres.SocketDirectory) + + if fetchKeyCommand != "" { + stanza.Set("archive-header-check", "n") + stanza.Set("page-header-check", "n") + stanza.Set("pg-version-force", postgresVersion) + } + + return iniSectionSet{ + "global": global, + DefaultStanzaName: stanza, + } +} + +// populateRepoHostConfigurationMap returns options representing the pgBackRest configuration for +// a pgBackRest dedicated repository host +func populateRepoHostConfigurationMap( + serviceName, serviceNamespace, pgdataDir, + fetchKeyCommand, postgresVersion string, + pgPort int32, pgHosts []string, repos []v1beta1.PGBackRestRepo, + globalConfig map[string]string, +) iniSectionSet { + + global := iniMultiSet{} + stanza := iniMultiSet{} + + var pgBackRestLogPathSet bool + for _, repo := range repos { + global.Set(repo.Name+"-path", defaultRepo1Path+repo.Name) + + // repo volumes do not contain configuration (unlike other repo types which has actual + // pgBackRest settings such as "bucket", "region", etc.), so only grab the name from the + // repo if a Volume is detected, and don't attempt to get an configs + if repo.Volume == nil { + for option, val := range getExternalRepoConfigs(repo) { + global.Set(option, val) + } + } + + if !pgBackRestLogPathSet && repo.Volume != nil { + // pgBackRest will log to the first configured repo volume when commands + // are run on the pgBackRest repo host. With our previous check in + // RepoHostVolumeDefined(), we've already validated that at least one + // defined repo has a volume. + global.Set("log-path", fmt.Sprintf(naming.PGBackRestRepoLogPath, repo.Name)) + pgBackRestLogPathSet = true + } + } + + // If no log path was set, don't log because the default path is not writable. + if !pgBackRestLogPathSet { + global.Set("log-level-file", "off") + } + + for option, val := range globalConfig { + global.Set(option, val) + } + + // set the configs for all PG hosts + for i, pgHost := range pgHosts { + // TODO(cbandy): pass a FQDN in already. + pgHostFQDN := pgHost + "-0." + + serviceName + "." + serviceNamespace + ".svc." 
+ + naming.KubernetesClusterDomain(context.Background()) + + stanza.Set(fmt.Sprintf("pg%d-host", i+1), pgHostFQDN) + stanza.Set(fmt.Sprintf("pg%d-host-type", i+1), "tls") + stanza.Set(fmt.Sprintf("pg%d-host-ca-file", i+1), certAuthorityAbsolutePath) + stanza.Set(fmt.Sprintf("pg%d-host-cert-file", i+1), certClientAbsolutePath) + stanza.Set(fmt.Sprintf("pg%d-host-key-file", i+1), certClientPrivateKeyAbsolutePath) + + stanza.Set(fmt.Sprintf("pg%d-path", i+1), pgdataDir) + stanza.Set(fmt.Sprintf("pg%d-port", i+1), fmt.Sprint(pgPort)) + stanza.Set(fmt.Sprintf("pg%d-socket-path", i+1), postgres.SocketDirectory) + + if fetchKeyCommand != "" { + stanza.Set("archive-header-check", "n") + stanza.Set("page-header-check", "n") + stanza.Set("pg-version-force", postgresVersion) + } + } + + return iniSectionSet{ + "global": global, + DefaultStanzaName: stanza, + } +} + +// getExternalRepoConfigs returns a map containing the configuration settings for an external +// pgBackRest repository as defined in the PostgresCluster spec +func getExternalRepoConfigs(repo v1beta1.PGBackRestRepo) map[string]string { + + repoConfigs := make(map[string]string) + + if repo.Azure != nil { + repoConfigs[repo.Name+"-type"] = "azure" + repoConfigs[repo.Name+"-azure-container"] = repo.Azure.Container + } else if repo.GCS != nil { + repoConfigs[repo.Name+"-type"] = "gcs" + repoConfigs[repo.Name+"-gcs-bucket"] = repo.GCS.Bucket + } else if repo.S3 != nil { + repoConfigs[repo.Name+"-type"] = "s3" + repoConfigs[repo.Name+"-s3-bucket"] = repo.S3.Bucket + repoConfigs[repo.Name+"-s3-endpoint"] = repo.S3.Endpoint + repoConfigs[repo.Name+"-s3-region"] = repo.S3.Region + } + + return repoConfigs +} + +// reloadCommand returns an entrypoint that convinces the pgBackRest TLS server +// to reload its options and certificate files when they change. The process +// will appear as name in `ps` and `top`. +func reloadCommand(name string) []string { + // Use a Bash loop to periodically check the mtime of the mounted server + // volume and configuration file. When either changes, signal pgBackRest + // and print the observed timestamp. + // + // We send SIGHUP because this allows the TLS server configuration to be + // reloaded starting in pgBackRest 2.37. We filter by parent process to ignore + // the forked connection handlers. The server parent process is zero because + // it is started by Kubernetes. + // - https://github.com/pgbackrest/pgbackrest/commit/7b3ea883c7c010aafbeb14d150d073a113b703e4 + + // Coreutils `sleep` uses a lot of memory, so the following opens a file + // descriptor and uses the timeout of the builtin `read` to wait. That same + // descriptor gets closed and reopened to use the builtin `[ -nt` to check + // mtimes. + // - https://unix.stackexchange.com/a/407383 + const script = ` +exec {fd}<> <(:||:) +until read -r -t 5 -u "${fd}"; do + if + [[ "${filename}" -nt "/proc/self/fd/${fd}" ]] && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --dereference --format='Loaded configuration dated %y' "${filename}" + elif + { [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] || + [[ "${authority}" -nt "/proc/self/fd/${fd}" ]] + } && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded certificates dated %y' "${directory}" + fi +done +` + + // Elide the above script from `ps` and `top` by wrapping it in a function + // and calling that. 
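+ //
+ // The resulting command has roughly this shape (illustrative only):
+ //
+ //   bash -ceu -- '<wrapper>' <name> <server dir> <CA file> <server config>
+ //
+ // where <name> is passed as "$0" and therefore shows up as the process name.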
+ wrapper := `monitor() {` + script + `};` + + ` export directory="$1" authority="$2" filename="$3"; export -f monitor;` + + ` exec -a "$0" bash -ceu monitor` + + return []string{"bash", "-ceu", "--", wrapper, name, + serverMountPath, certAuthorityAbsolutePath, serverConfigAbsolutePath} +} + +// serverConfig returns the options needed to run the TLS server for cluster. +func serverConfig(cluster *v1beta1.PostgresCluster) iniSectionSet { + global := iniMultiSet{} + server := iniMultiSet{} + + // IPv6 support is a relatively recent addition to Kubernetes, so listen on + // the IPv4 wildcard address and trust that Pod DNS names will resolve to + // IPv4 addresses for now. + // + // NOTE(cbandy): The unspecified IPv6 address, which ends up being the IPv6 + // wildcard address, did not work in all environments. In some cases, the + // "server-ping" command would not connect. + // - https://tools.ietf.org/html/rfc3493#section-3.8 + // + // TODO(cbandy): When pgBackRest provides a way to bind to all addresses, + // use that here and configure "server-ping" to use "localhost" which + // Kubernetes guarantees resolves to a loopback address. + // - https://kubernetes.io/docs/concepts/cluster-administration/networking/ + // - https://releases.k8s.io/v1.18.0/pkg/kubelet/kubelet_pods.go#L327 + // - https://releases.k8s.io/v1.23.0/pkg/kubelet/kubelet_pods.go#L345 + global.Set("tls-server-address", "0.0.0.0") + + // NOTE (dsessler7): As pointed out by Chris above, there is an issue in + // pgBackRest (#1841), where using a wildcard address to bind all addresses + // does not work in certain IPv6 environments. Until this is fixed, we are + // going to workaround the issue by allowing the user to add an annotation to + // enable IPv6. We will check for that annotation here and override the + // "tls-server-address" setting accordingly. + if strings.EqualFold(cluster.Annotations[naming.PGBackRestIPVersion], "ipv6") { + global.Set("tls-server-address", "::") + } + + // The client certificate for this cluster is allowed to connect for any stanza. + // Without the wildcard "*", the "pgbackrest info" and "pgbackrest repo-ls" + // commands fail with "access denied" when invoked without a "--stanza" flag. + global.Add("tls-server-auth", clientCommonName(cluster)+"=*") + + global.Set("tls-server-ca-file", certAuthorityAbsolutePath) + global.Set("tls-server-cert-file", certServerAbsolutePath) + global.Set("tls-server-key-file", certServerPrivateKeyAbsolutePath) + + // Send all server logs to stderr and stdout without timestamps. + // - stderr has ERROR messages + // - stdout has WARN, INFO, and DETAIL messages + // + // The "trace" level shows when a connection is accepted, but nothing about + // the remote address or what commands it might send. 
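+ //
+ // With the settings below, the rendered section looks roughly like:
+ //
+ //   [global:server]
+ //   log-level-console = detail
+ //   log-level-stderr = error
+ //   log-level-file = off
+ //   log-timestamp = n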
+ // - https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/command/server/server.c#L158-L159 + // - https://pgbackrest.org/configuration.html#section-log + server.Set("log-level-console", "detail") + server.Set("log-level-stderr", "error") + server.Set("log-level-file", "off") + server.Set("log-timestamp", "n") + + return iniSectionSet{ + "global": global, + "global:server": server, + } +} diff --git a/internal/pgbackrest/config.md b/internal/pgbackrest/config.md new file mode 100644 index 0000000000..2101535b3a --- /dev/null +++ b/internal/pgbackrest/config.md @@ -0,0 +1,260 @@ + + +# pgBackRest Configuration Overview + +The initial pgBackRest configuration for the Postgres Clusters is designed to stand up a +minimal configuration for use by the various pgBackRest functions needed by the Postgres +cluster. These settings are meant to be the minimally required settings, with other +settings supported through the use of custom configurations. + +During initial cluster creation, four pgBackRest use cases are involved. + +These settings are configured in either the [global] or [stanza] sections of the +pgBackRest configuration based on their designation in the pgBackRest code. +For more information on the above, and other settings, please see +https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/config/parse.auto.c + +As shown, the settings with the `cfgSectionGlobal` designation are + +`log-path`: The log path provides a location for pgBackRest to store log files. + +`log-level-file`: Level for file logging. Set to 'off' when the repo host has no volume. + +`repo-path`: Path where backups and archive are stored. + The repository is where pgBackRest stores backups and archives WAL segments. + +`repo-host`: Repository host when operating remotely via TLS. + + +The settings with the `cfgSectionStanza` designation are + +`pg-host`: PostgreSQL host for operating remotely via TLS. + +`pg-path`: The path of the PostgreSQL data directory. + This should be the same as the data_directory setting in postgresql.conf. + +`pg-port`: The port that PostgreSQL is running on. + +`pg-socket-path`: The unix socket directory that is specified when PostgreSQL is started. + +For more information on these and other configuration settings, please see +`https://pgbackrest.org/configuration.html`. + +# Configuration Per Function + +Below, each of the four configuration sets is outlined by use case. Please note that certain +settings have acceptable defaults for the cluster's usage (such as for `repo1-type` which +defaults to `posix`), so those settings are not included. + + +1. Primary Database Pod + +[global] +log-path +repo1-host +repo1-path + +[stanza] +pg1-path +pg1-port +pg1-socket-path + +2. pgBackRest Repo Pod + +[global] +log-path +repo1-path +log-level-file + +[stanza] +pg1-host +pg1-path +pg1-port +pg1-socket-path + +3. pgBackRest Stanza Job Pod + +[global] +log-path + +4. pgBackRest Backup Job Pod + +[global] +log-path + + +# Initial pgBackRest Configuration + +In order to be used by the Postgres cluster, these default configurations are stored in +a configmap. This configmap is named with the following convention `-pgbackrest-config`, +such that a cluster named 'mycluster' would have a configuration configmap named +`mycluster-pgbackrest-config`. + +As noted above, there are three distinct default configurations, each of which is referenced +by a key value in the configmap's data section. For the primary database pod, the key is +`pgbackrest_primary.conf`. 
For the pgBackRest repo pod, the key is `pgbackrest_repo.conf`. +Finally, for the pgBackRest stanza job pod and the initial pgBackRest backup job pod, the +key is `pgbackrest_job.conf`. + +For each pod, the relevant configuration file is mounted as a projected volume named +`pgbackrest-config-vol`. The configuration file will be found in the `/etc/pgbackrest` directory +of the relevant container and is named `pgbackrest.conf`, matching the default pgBackRest location. +For more information, please see +`https://pgbackrest.org/configuration.html#introduction` + + +# Custom Configuration Support + +TODO(tjmoore4): Document custom configuration solution once implemented + +Custom pgBackRest configurations is supported by using the `--config-include-path` +flag with the desired pgBackRest command. This should point to the directory path +where the `*.conf` file with the custom configuration is located. + +This file will be added as a projected volume and must be formatted in the standard +pgBackRest INI convention. Please note that any of the configuration settings listed +above MUST BE CONFIGURED VIA THE POSTGRESCLUSTER SPEC so as to avoid errors. + +For more information, please see +`https://pgbackrest.org/user-guide.html#quickstart/configure-stanza`. + +--- + +There are three ways to configure pgBackRest: INI files, environment variables, +and command-line arguments. Any particular option comes from exactly one of those +places. For example, when an option is in an INI file and a command-line argument, +only the command-line argument is used. This is true even for options that can +be specified more than once. The [precedence](https://pgbackrest.org/command.html#introduction): + +> Command-line options override environment options which override config file options. + +From one of those places, only a handful of options may be set more than once +(see `PARSE_RULE_OPTION_MULTI` in [parse.auto.c][]). The resulting value of +these options matches the order in which they were loaded: left-to-right on the +command-line or top-to-bottom in INI files. + +The remaining options must be set exactly once. `pgbackrest` exits non-zero when +the option occurs twice on the command-line or twice in a file: + +``` +ERROR: [031]: option 'io-timeout' cannot be set multiple times +``` + +A few options are only allowed in certain places. Credentials, for example, +cannot be passed as command-line arguments (see `PARSE_RULE_OPTION_SECURE` in [parse.auto.c][]). +Some others cannot be in INI files (see `cfgSectionCommandLine` in [parse.auto.c][]). +Notably, these must be environment variables or command-line arguments: + +- `--repo` and `--stanza` +- restore `--target` and `--target-action` +- backup and restore `--type` + +pgBackRest looks for and loads multiple INI files from multiple places according +to the `config`, `config-include-path`, and/or `config-path` options. The order +is a [little complicated][file-precedence]. When none of these options are set: + + 1. One of `/etc/pgbackrest/pgbackrest.conf` or `/etc/pgbackrest.conf` is read + in that order, [whichever exists][default-config]. + 2. All `/etc/pgbackrest/conf.d/*.conf` files that exist are read in alphabetical order. + +There is no "precedence" between these files; they do not "override" each other. +Options that can be set multiple times are interpreted as each file is loaded. +Options that cannot be set multiple times will error when they are in multiple files. + +There *is* precedence, however, *inside* these files, organized by INI sections. 
+ +- The "global" section applies to all repositories, stanzas, and commands. +- The "global:*command*" section applies to all repositories and stanzas for a particular command. +- The "*stanza*" section applies to all repositories and commands for a particular stanza. +- The "*stanza*:*command*" section applies to all repositories for a particular stanza and command. + +Options in more specific sections (lower in the list) [override][file-precedence] +options in less specific sections. + +[default-config]: https://pgbackrest.org/configuration.html#introduction +[file-precedence]: https://pgbackrest.org/user-guide.html#quickstart/configure-stanza +[parse.auto.c]: https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/config/parse.auto.c + +```console +$ tail -vn+0 pgbackrest.conf conf.d/* +==> pgbackrest.conf <== +[global] +exclude = main +exclude = main +io-timeout = 10 +link-map = x=x1 +link-map = x=x2 +link-map = y=y1 + +[global:backup] +io-timeout = 20 + +[db] +io-timeout = 30 +link-map = y=y2 + +[db:backup] +io-timeout = 40 + +==> conf.d/one.conf <== +[global] +exclude = one + +==> conf.d/two.conf <== +[global] +exclude = two + +==> conf.d/!three.conf <== +[global] +exclude = three + +==> conf.d/~four.conf <== +[global] +exclude = four + +$ pgbackrest --config-path="$(pwd)" help backup | grep -A1 exclude + --exclude exclude paths/files from the backup + [current=main, main, three, one, two, four] + +$ pgbackrest --config-path="$(pwd)" help backup --exclude=five | grep -A1 exclude + --exclude exclude paths/files from the backup + [current=five] + +$ pgbackrest --config-path="$(pwd)" help backup | grep io-timeout + --io-timeout I/O timeout [current=20, default=60] + +$ pgbackrest --config-path="$(pwd)" help backup --stanza=db | grep io-timeout + --io-timeout I/O timeout [current=40, default=60] + +$ pgbackrest --config-path="$(pwd)" help info | grep io-timeout + --io-timeout I/O timeout [current=10, default=60] + +$ pgbackrest --config-path="$(pwd)" help info --stanza=db | grep io-timeout + --io-timeout I/O timeout [current=30, default=60] + +$ pgbackrest --config-path="$(pwd)" help restore | grep -A1 link-map + --link-map modify the destination of a symlink + [current=x=x2, y=y1] + +$ pgbackrest --config-path="$(pwd)" help restore --stanza=db | grep -A1 link-map + --link-map modify the destination of a symlink + [current=y=y2] +``` + +--- + +Given all the above, we configure pgBackRest using files mounted into the +`/etc/pgbackrest/conf.d` directory. They are last in the projected volume to +ensure they take precedence over other projections. + +- `/etc/pgbackrest/conf.d`
+ Use this directory to store pgBackRest configuration. Files ending with `.conf` + are loaded in alphabetical order. + +- `/etc/pgbackrest/conf.d/~postgres-operator/*`
+ Use this subdirectory to store things like TLS certificates and keys. Files in + subdirectories are not loaded automatically. diff --git a/internal/pgbackrest/config_test.go b/internal/pgbackrest/config_test.go new file mode 100644 index 0000000000..b74bf9a4a8 --- /dev/null +++ b/internal/pgbackrest/config_test.go @@ -0,0 +1,439 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "context" + "os" + "os/exec" + "path/filepath" + "strings" + "testing" + + "gotest.tools/v3/assert" + "gotest.tools/v3/assert/cmp" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestCreatePGBackRestConfigMapIntent(t *testing.T) { + cluster := v1beta1.PostgresCluster{} + cluster.Namespace = "ns1" + cluster.Name = "hippo-dance" + + cluster.Spec.Port = initialize.Int32(2345) + cluster.Spec.PostgresVersion = 12 + + domain := naming.KubernetesClusterDomain(context.Background()) + + t.Run("NoVolumeRepo", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.Repos = nil + + configmap := CreatePGBackRestConfigMapIntent(cluster, + "", "number", "pod-service-name", "test-ns", + []string{"some-instance"}) + + assert.Equal(t, configmap.Data["config-hash"], "number") + assert.Equal(t, configmap.Data["pgbackrest-server.conf"], "") + }) + + t.Run("DedicatedRepoHost", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.Global = map[string]string{ + "repo3-test": "something", + } + cluster.Spec.Backups.PGBackRest.Repos = []v1beta1.PGBackRestRepo{ + { + Name: "repo1", + Volume: &v1beta1.RepoPVC{}, + }, + { + Name: "repo2", + Azure: &v1beta1.RepoAzure{Container: "a-container"}, + }, + { + Name: "repo3", + GCS: &v1beta1.RepoGCS{Bucket: "g-bucket"}, + }, + { + Name: "repo4", + S3: &v1beta1.RepoS3{ + Bucket: "s-bucket", Endpoint: "endpoint-s", Region: "earth", + }, + }, + } + + configmap := CreatePGBackRestConfigMapIntent(cluster, + "repo-hostname", "abcde12345", "pod-service-name", "test-ns", + []string{"some-instance"}) + + assert.DeepEqual(t, configmap.Annotations, map[string]string{}) + assert.DeepEqual(t, configmap.Labels, map[string]string{ + "postgres-operator.crunchydata.com/cluster": "hippo-dance", + "postgres-operator.crunchydata.com/pgbackrest": "", + "postgres-operator.crunchydata.com/pgbackrest-config": "", + }) + + assert.Equal(t, configmap.Data["config-hash"], "abcde12345") + assert.Equal(t, configmap.Data["pgbackrest_repo.conf"], strings.Trim(` +# Generated by postgres-operator. DO NOT EDIT. +# Your changes will not be saved. 
+ +[global] +log-path = /pgbackrest/repo1/log +repo1-path = /pgbackrest/repo1 +repo2-azure-container = a-container +repo2-path = /pgbackrest/repo2 +repo2-type = azure +repo3-gcs-bucket = g-bucket +repo3-path = /pgbackrest/repo3 +repo3-test = something +repo3-type = gcs +repo4-path = /pgbackrest/repo4 +repo4-s3-bucket = s-bucket +repo4-s3-endpoint = endpoint-s +repo4-s3-region = earth +repo4-type = s3 + +[db] +pg1-host = some-instance-0.pod-service-name.test-ns.svc.`+domain+` +pg1-host-ca-file = /etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt +pg1-host-cert-file = /etc/pgbackrest/conf.d/~postgres-operator/client-tls.crt +pg1-host-key-file = /etc/pgbackrest/conf.d/~postgres-operator/client-tls.key +pg1-host-type = tls +pg1-path = /pgdata/pg12 +pg1-port = 2345 +pg1-socket-path = /tmp/postgres + `, "\t\n")+"\n") + + assert.Equal(t, configmap.Data["pgbackrest_instance.conf"], strings.Trim(` +# Generated by postgres-operator. DO NOT EDIT. +# Your changes will not be saved. + +[global] +archive-async = y +log-path = /pgdata/pgbackrest/log +repo1-host = repo-hostname-0.pod-service-name.test-ns.svc.`+domain+` +repo1-host-ca-file = /etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt +repo1-host-cert-file = /etc/pgbackrest/conf.d/~postgres-operator/client-tls.crt +repo1-host-key-file = /etc/pgbackrest/conf.d/~postgres-operator/client-tls.key +repo1-host-type = tls +repo1-host-user = postgres +repo1-path = /pgbackrest/repo1 +repo2-azure-container = a-container +repo2-path = /pgbackrest/repo2 +repo2-type = azure +repo3-gcs-bucket = g-bucket +repo3-path = /pgbackrest/repo3 +repo3-test = something +repo3-type = gcs +repo4-path = /pgbackrest/repo4 +repo4-s3-bucket = s-bucket +repo4-s3-endpoint = endpoint-s +repo4-s3-region = earth +repo4-type = s3 +spool-path = /pgdata/pgbackrest-spool + +[db] +pg1-path = /pgdata/pg12 +pg1-port = 2345 +pg1-socket-path = /tmp/postgres + `, "\t\n")+"\n") + }) + + t.Run("CustomMetadata", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{ + "ak1": "cluster-av1", + "ak2": "cluster-av2", + }, + Labels: map[string]string{ + "lk1": "cluster-lv1", + "lk2": "cluster-lv2", + + "postgres-operator.crunchydata.com/cluster": "cluster-ignored", + }, + } + cluster.Spec.Backups.PGBackRest.Metadata = &v1beta1.Metadata{ + Annotations: map[string]string{ + "ak2": "backups-av2", + "ak3": "backups-av3", + }, + Labels: map[string]string{ + "lk2": "backups-lv2", + "lk3": "backups-lv3", + + "postgres-operator.crunchydata.com/cluster": "backups-ignored", + }, + } + + configmap := CreatePGBackRestConfigMapIntent(cluster, + "any", "any", "any", "any", nil) + + assert.DeepEqual(t, configmap.Annotations, map[string]string{ + "ak1": "cluster-av1", + "ak2": "backups-av2", + "ak3": "backups-av3", + }) + assert.DeepEqual(t, configmap.Labels, map[string]string{ + "lk1": "cluster-lv1", + "lk2": "backups-lv2", + "lk3": "backups-lv3", + + "postgres-operator.crunchydata.com/cluster": "hippo-dance", + "postgres-operator.crunchydata.com/pgbackrest": "", + "postgres-operator.crunchydata.com/pgbackrest-config": "", + }) + }) + + t.Run("EnabledTDE", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + DynamicConfiguration: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "encryption_key_command": "echo test", + }, + }, + }, + } + + configmap := CreatePGBackRestConfigMapIntent(cluster, + "", "number", "pod-service-name", "test-ns", + 
[]string{"some-instance"}) + + assert.Assert(t, + strings.Contains(configmap.Data["pgbackrest_instance.conf"], + "archive-header-check = n")) + assert.Assert(t, + strings.Contains(configmap.Data["pgbackrest_instance.conf"], + "page-header-check = n")) + assert.Assert(t, + strings.Contains(configmap.Data["pgbackrest_instance.conf"], + "pg-version-force")) + + cluster.Spec.Backups.PGBackRest.Repos = []v1beta1.PGBackRestRepo{ + { + Name: "repo1", + Volume: &v1beta1.RepoPVC{}, + }, + } + + configmap = CreatePGBackRestConfigMapIntent(cluster, + "repo1", "number", "pod-service-name", "test-ns", + []string{"some-instance"}) + + assert.Assert(t, + strings.Contains(configmap.Data["pgbackrest_repo.conf"], + "archive-header-check = n")) + assert.Assert(t, + strings.Contains(configmap.Data["pgbackrest_repo.conf"], + "page-header-check = n")) + assert.Assert(t, + strings.Contains(configmap.Data["pgbackrest_repo.conf"], + "pg-version-force")) + }) +} + +func TestMakePGBackrestLogDir(t *testing.T) { + podTemplate := &corev1.PodTemplateSpec{Spec: corev1.PodSpec{ + InitContainers: []corev1.Container{ + {Name: "test"}, + }, + Containers: []corev1.Container{ + {Name: "pgbackrest", + Resources: corev1.ResourceRequirements{ + Limits: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("23m"), + }, + }, + }, + }}} + + cluster := &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + ImagePullPolicy: corev1.PullAlways, + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Image: "test-image", + Repos: []v1beta1.PGBackRestRepo{ + {Name: "repo1"}, + {Name: "repo2", + Volume: &v1beta1.RepoPVC{}, + }, + }, + }, + }, + }, + } + + beforeAddInit := podTemplate.Spec.InitContainers + + MakePGBackrestLogDir(podTemplate, cluster) + + assert.Equal(t, len(beforeAddInit)+1, len(podTemplate.Spec.InitContainers)) + + var foundInitContainer bool + // verify init container command, image & name + for _, c := range podTemplate.Spec.InitContainers { + if c.Name == naming.ContainerPGBackRestLogDirInit { + // ignore "bash -c", should skip repo with no volume + assert.Equal(t, "mkdir -p /pgbackrest/repo2/log", c.Command[2]) + assert.Equal(t, c.Image, "test-image") + assert.Equal(t, c.ImagePullPolicy, corev1.PullAlways) + assert.Assert(t, !cmp.DeepEqual(c.SecurityContext, + &corev1.SecurityContext{})().Success()) + assert.Equal(t, c.Resources.Limits.Cpu().String(), "23m") + foundInitContainer = true + break + } + } + // verify init container is present + assert.Assert(t, foundInitContainer) +} + +func TestReloadCommand(t *testing.T) { + shellcheck := require.ShellCheck(t) + + command := reloadCommand("some-name") + + // Expect a bash command with an inline script. + assert.DeepEqual(t, command[:3], []string{"bash", "-ceu", "--"}) + assert.Assert(t, len(command) > 3) + + // Write out that inline script. + dir := t.TempDir() + file := filepath.Join(dir, "script.bash") + assert.NilError(t, os.WriteFile(file, []byte(command[3]), 0o600)) + + // Expect shellcheck to be happy. 
+ cmd := exec.Command(shellcheck, "--enable=all", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) +} + +func TestReloadCommandPrettyYAML(t *testing.T) { + b, err := yaml.Marshal(reloadCommand("any")) + assert.NilError(t, err) + assert.Assert(t, strings.Contains(string(b), "\n- |"), + "expected literal block scalar, got:\n%s", b) +} + +func TestRestoreCommand(t *testing.T) { + shellcheck := require.ShellCheck(t) + + pgdata := "/pgdata/pg13" + opts := []string{ + "--stanza=" + DefaultStanzaName, "--pg1-path=" + pgdata, + "--repo=1"} + command := RestoreCommand(pgdata, "try", "", nil, strings.Join(opts, " ")) + + assert.DeepEqual(t, command[:3], []string{"bash", "-ceu", "--"}) + assert.Assert(t, len(command) > 3) + + dir := t.TempDir() + file := filepath.Join(dir, "script.bash") + assert.NilError(t, os.WriteFile(file, []byte(command[3]), 0o600)) + + cmd := exec.Command(shellcheck, "--enable=all", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) +} + +func TestRestoreCommandPrettyYAML(t *testing.T) { + b, err := yaml.Marshal(RestoreCommand("/dir", "try", "", nil, "--options")) + + assert.NilError(t, err) + assert.Assert(t, strings.Contains(string(b), "\n- |"), + "expected literal block scalar, got:\n%s", b) +} + +func TestRestoreCommandTDE(t *testing.T) { + b, err := yaml.Marshal(RestoreCommand("/dir", "try", "echo testValue", nil, "--options")) + + assert.NilError(t, err) + assert.Assert(t, strings.Contains(string(b), "encryption_key_command = 'echo testValue'"), + "expected encryption_key_command setting, got:\n%s", b) +} + +func TestDedicatedSnapshotVolumeRestoreCommand(t *testing.T) { + shellcheck := require.ShellCheck(t) + + pgdata := "/pgdata/pg13" + opts := []string{ + "--stanza=" + DefaultStanzaName, "--pg1-path=" + pgdata, + "--repo=1"} + command := DedicatedSnapshotVolumeRestoreCommand(pgdata, strings.Join(opts, " ")) + + assert.DeepEqual(t, command[:3], []string{"bash", "-ceu", "--"}) + assert.Assert(t, len(command) > 3) + + dir := t.TempDir() + file := filepath.Join(dir, "script.bash") + assert.NilError(t, os.WriteFile(file, []byte(command[3]), 0o600)) + + cmd := exec.Command(shellcheck, "--enable=all", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) +} + +func TestDedicatedSnapshotVolumeRestoreCommandPrettyYAML(t *testing.T) { + b, err := yaml.Marshal(DedicatedSnapshotVolumeRestoreCommand("/dir", "--options")) + + assert.NilError(t, err) + assert.Assert(t, strings.Contains(string(b), "\n- |"), + "expected literal block scalar, got:\n%s", b) +} + +func TestServerConfig(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + cluster.UID = "shoe" + + assert.Equal(t, serverConfig(cluster).String(), ` +[global] +tls-server-address = 0.0.0.0 +tls-server-auth = pgbackrest@shoe=* +tls-server-ca-file = /etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt +tls-server-cert-file = /etc/pgbackrest/server/server-tls.crt +tls-server-key-file = /etc/pgbackrest/server/server-tls.key + +[global:server] +log-level-console = detail +log-level-file = off +log-level-stderr = error +log-timestamp = n +`) +} + +func TestServerConfigIPv6(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + cluster.UID = "shoe" + cluster.Annotations = map[string]string{ + naming.PGBackRestIPVersion: "IPv6", + } + + assert.Equal(t, serverConfig(cluster).String(), ` +[global] +tls-server-address = :: +tls-server-auth = pgbackrest@shoe=* +tls-server-ca-file = 
/etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt +tls-server-cert-file = /etc/pgbackrest/server/server-tls.crt +tls-server-key-file = /etc/pgbackrest/server/server-tls.key + +[global:server] +log-level-console = detail +log-level-file = off +log-level-stderr = error +log-timestamp = n +`) +} diff --git a/internal/pgbackrest/iana.go b/internal/pgbackrest/iana.go new file mode 100644 index 0000000000..c6e2f71e6c --- /dev/null +++ b/internal/pgbackrest/iana.go @@ -0,0 +1,16 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +// The protocol used by pgBackRest is registered with the Internet Assigned +// Numbers Authority (IANA). +// - https://www.iana.org/assignments/service-names-port-numbers +const ( + // IANAPortNumber is the port assigned to pgBackRest at the IANA. + IANAPortNumber = 8432 + + // IANAServiceName is the name of the pgBackRest protocol at the IANA. + IANAServiceName = "pgbackrest" +) diff --git a/internal/pgbackrest/options.go b/internal/pgbackrest/options.go new file mode 100644 index 0000000000..2439901e47 --- /dev/null +++ b/internal/pgbackrest/options.go @@ -0,0 +1,81 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "fmt" + "sort" + "strings" +) + +// iniMultiSet represents the key-value pairs in a pgBackRest config file section. +type iniMultiSet map[string][]string + +func (ms iniMultiSet) String() string { + keys := make([]string, 0, len(ms)) + for k := range ms { + keys = append(keys, k) + } + + sort.Strings(keys) + + var b strings.Builder + for _, k := range keys { + for _, v := range ms[k] { + if len(v) <= 0 { + _, _ = fmt.Fprintf(&b, "%s =\n", k) + } else { + _, _ = fmt.Fprintf(&b, "%s = %s\n", k, v) + } + } + } + return b.String() +} + +// Add associates value with key, appending it to any values already associated +// with key. The key is case-sensitive. +func (ms iniMultiSet) Add(key, value string) { + ms[key] = append(ms[key], value) +} + +// Set replaces the values associated with key. The key is case-sensitive. +func (ms iniMultiSet) Set(key string, values ...string) { + ms[key] = make([]string, len(values)) + copy(ms[key], values) +} + +// Values returns all values associated with the given key. +// The key is case-sensitive. The returned slice is not a copy. +func (ms iniMultiSet) Values(key string) []string { + return ms[key] +} + +// iniSectionSet represents the different sections in a pgBackRest config file. +type iniSectionSet map[string]iniMultiSet + +func (sections iniSectionSet) String() string { + global := make([]string, 0, len(sections)) + stanza := make([]string, 0, len(sections)) + + for k := range sections { + if k == "global" || strings.HasPrefix(k, "global:") { + global = append(global, k) + } else { + stanza = append(stanza, k) + } + } + + sort.Strings(global) + sort.Strings(stanza) + + var b strings.Builder + for _, k := range global { + _, _ = fmt.Fprintf(&b, "\n[%s]\n%s", k, sections[k]) + } + for _, k := range stanza { + _, _ = fmt.Fprintf(&b, "\n[%s]\n%s", k, sections[k]) + } + return b.String() +} diff --git a/internal/pgbackrest/options_test.go b/internal/pgbackrest/options_test.go new file mode 100644 index 0000000000..374737ec7f --- /dev/null +++ b/internal/pgbackrest/options_test.go @@ -0,0 +1,100 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "strings" + "testing" + + "gotest.tools/v3/assert" + "sigs.k8s.io/yaml" +) + +func TestMultiSet(t *testing.T) { + t.Parallel() + + ms := iniMultiSet{} + assert.Equal(t, ms.String(), "") + assert.DeepEqual(t, ms.Values("any"), []string(nil)) + + ms.Add("x", "y") + assert.DeepEqual(t, ms.Values("x"), []string{"y"}) + + ms.Add("x", "a") + assert.DeepEqual(t, ms.Values("x"), []string{"y", "a"}) + + ms.Add("abc", "j'l") + assert.DeepEqual(t, ms, iniMultiSet{ + "x": []string{"y", "a"}, + "abc": []string{"j'l"}, + }) + assert.Equal(t, ms.String(), + "abc = j'l\nx = y\nx = a\n") + + ms.Set("x", "n") + assert.DeepEqual(t, ms.Values("x"), []string{"n"}) + assert.Equal(t, ms.String(), + "abc = j'l\nx = n\n") + + ms.Set("x", "p", "q") + assert.DeepEqual(t, ms.Values("x"), []string{"p", "q"}) + + t.Run("PrettyYAML", func(t *testing.T) { + b, err := yaml.Marshal(iniMultiSet{ + "x": []string{"y"}, + "z": []string{""}, + }.String()) + + assert.NilError(t, err) + assert.Assert(t, strings.HasPrefix(string(b), `|`), + "expected literal block scalar, got:\n%s", b) + }) +} + +func TestSectionSet(t *testing.T) { + t.Parallel() + + sections := iniSectionSet{} + assert.Equal(t, sections.String(), "") + + sections["db"] = iniMultiSet{"x": []string{"y"}} + assert.Equal(t, sections.String(), + "\n[db]\nx = y\n") + + sections["db:backup"] = iniMultiSet{"x": []string{"w"}} + assert.Equal(t, sections.String(), + "\n[db]\nx = y\n\n[db:backup]\nx = w\n", + "expected subcommand after its stanza") + + sections["another"] = iniMultiSet{"x": []string{"z"}} + assert.Equal(t, sections.String(), + "\n[another]\nx = z\n\n[db]\nx = y\n\n[db:backup]\nx = w\n", + "expected alphabetical stanzas") + + sections["global"] = iniMultiSet{"x": []string{"t"}} + assert.Equal(t, sections.String(), + "\n[global]\nx = t\n\n[another]\nx = z\n\n[db]\nx = y\n\n[db:backup]\nx = w\n", + "expected global before stanzas") + + sections["global:command"] = iniMultiSet{"t": []string{"v"}} + assert.Equal(t, sections.String(), + strings.Join([]string{ + "\n[global]\nx = t\n", + "\n[global:command]\nt = v\n", + "\n[another]\nx = z\n", + "\n[db]\nx = y\n", + "\n[db:backup]\nx = w\n", + }, ""), + "expected global subcommand after global") + + t.Run("PrettyYAML", func(t *testing.T) { + sections["last"] = iniMultiSet{"z": []string{""}} + b, err := yaml.Marshal(sections.String()) + + assert.NilError(t, err) + assert.Assert(t, strings.HasPrefix(string(b), `|`), + "expected literal block scalar, got:\n%s", b) + }) +} diff --git a/internal/pgbackrest/pgbackrest.go b/internal/pgbackrest/pgbackrest.go new file mode 100644 index 0000000000..21124b9744 --- /dev/null +++ b/internal/pgbackrest/pgbackrest.go @@ -0,0 +1,109 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "bytes" + "context" + "fmt" + "io" + + "github.com/pkg/errors" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // errMsgConfigHashMismatch is the error message displayed when a configuration hash mismatch + // is detected while attempting stanza creation + errMsgConfigHashMismatch = "postgres operator error: pgBackRest config hash mismatch" + + // errMsgStaleReposWithVolumesConfig is the error message displayed when a volume-backed repo has been + // configured, but the configuration has not yet propagated into the container. 
+	errMsgStaleReposWithVolumesConfig = "postgres operator error: pgBackRest stale volume-backed repo configuration"
+)
+
+// Executor calls "pgbackrest" commands
+type Executor func(
+	ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string,
+) error
+
+// StanzaCreateOrUpgrade runs the pgBackRest "stanza-create" command and, if that fails (for
+// example, because the stanza already exists but the PG version has changed), falls back to
+// "stanza-upgrade". It is invoked by the "reconcileStanzaCreate" function.
+// - https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/command/check/common.c#L154-L156
+// If the bool returned from this function is true, a pgBackRest config hash mismatch was
+// identified that prevented the stanza-create or stanza-upgrade command from running (a config
+// mismatch indicates that the pgBackRest configuration as stored in the cluster's pgBackRest
+// ConfigMap has not yet propagated to the Pod).
+func (exec Executor) StanzaCreateOrUpgrade(ctx context.Context, configHash string,
+	postgresCluster *v1beta1.PostgresCluster) (bool, error) {
+
+	var stdout, stderr bytes.Buffer
+
+	var reposWithVolumes []v1beta1.PGBackRestRepo
+	for _, repo := range postgresCluster.Spec.Backups.PGBackRest.Repos {
+		if repo.Volume != nil {
+			reposWithVolumes = append(reposWithVolumes, repo)
+		}
+	}
+
+	grep := "grep %s-path /etc/pgbackrest/conf.d/pgbackrest_instance.conf"
+
+	var checkRepoCmd string
+	if len(reposWithVolumes) > 0 {
+		repo := reposWithVolumes[0]
+		checkRepoCmd = checkRepoCmd + fmt.Sprintf(grep, repo.Name)
+
+		reposWithVolumes = reposWithVolumes[1:]
+		for _, repo := range reposWithVolumes {
+			checkRepoCmd = checkRepoCmd + fmt.Sprintf(" && "+grep, repo.Name)
+		}
+	}
+
+	// This is the script that is run to create a stanza. First it checks the
+	// "config-hash" file to ensure all configuration changes (e.g. from ConfigMaps) have
+	// propagated to the container; if not, it prints an error and exits with code 1.
+	// Next, it checks that any volume-backed repo added to the config has propagated into
+	// the container, and if not, prints an error and exits with code 1.
+	// Otherwise, it runs "pgbackrest stanza-create" and, if that fails, "pgbackrest stanza-upgrade".
+	const script = `
+declare -r hash="$1" stanza="$2" hash_msg="$3" vol_msg="$4" check_repo_cmd="$5"
+if [[ "$(< /etc/pgbackrest/conf.d/config-hash)" != "${hash}" ]]; then
+	printf >&2 "%s" "${hash_msg}"; exit 1;
+elif ! 
bash -c "${check_repo_cmd}"; then + printf >&2 "%s" "${vol_msg}"; exit 1; +else + pgbackrest stanza-create --stanza="${stanza}" || pgbackrest stanza-upgrade --stanza="${stanza}" +fi +` + if err := exec(ctx, nil, &stdout, &stderr, "bash", "-ceu", "--", + script, "-", configHash, DefaultStanzaName, errMsgConfigHashMismatch, errMsgStaleReposWithVolumesConfig, + checkRepoCmd); err != nil { + + errReturn := stderr.String() + + // if the config hashes didn't match, return true and don't return an error since this is + // expected while waiting for config changes in ConfigMaps and Secrets to make it to the + // container + if errReturn == errMsgConfigHashMismatch { + return true, nil + } + + // if the configuration for volume-backed repositories is stale, return true and don't return an error since this + // is expected while waiting for config changes in ConfigMaps to make it to the container + if errReturn == errMsgStaleReposWithVolumesConfig { + return true, nil + } + + // if none of the above errors, return the err + return false, errors.WithStack(fmt.Errorf("%w: %v", err, errReturn)) + } + + return false, nil +} diff --git a/internal/pgbackrest/pgbackrest_test.go b/internal/pgbackrest/pgbackrest_test.go new file mode 100644 index 0000000000..33c97913cf --- /dev/null +++ b/internal/pgbackrest/pgbackrest_test.go @@ -0,0 +1,100 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "context" + "io" + "os" + "os/exec" + "path/filepath" + "testing" + + "gotest.tools/v3/assert" + "k8s.io/apimachinery/pkg/api/resource" + + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/internal/testing/require" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestStanzaCreateOrUpgrade(t *testing.T) { + shellcheck := require.ShellCheck(t) + + ctx := context.Background() + configHash := "7f5d4d5bdc" + expectedCommand := []string{"bash", "-ceu", "--", ` +declare -r hash="$1" stanza="$2" hash_msg="$3" vol_msg="$4" check_repo_cmd="$5" +if [[ "$(< /etc/pgbackrest/conf.d/config-hash)" != "${hash}" ]]; then + printf >&2 "%s" "${hash_msg}"; exit 1; +elif ! 
bash -c "${check_repo_cmd}"; then + printf >&2 "%s" "${vol_msg}"; exit 1; +else + pgbackrest stanza-create --stanza="${stanza}" || pgbackrest stanza-upgrade --stanza="${stanza}" +fi +`, + "-", "7f5d4d5bdc", "db", "postgres operator error: pgBackRest config hash mismatch", + "postgres operator error: pgBackRest stale volume-backed repo configuration", + "grep repo1-path /etc/pgbackrest/conf.d/pgbackrest_instance.conf", + } + + var shellCheckScript string + stanzaExec := func(ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, + command ...string) error { + + // verify the command created by StanzaCreate() matches the expected command + assert.DeepEqual(t, command, expectedCommand) + + assert.Assert(t, len(command) > 3) + shellCheckScript = command[3] + + return nil + } + postgresCluster := &v1beta1.PostgresCluster{ + Spec: v1beta1.PostgresClusterSpec{ + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + Volume: &v1beta1.RepoPVC{ + VolumeClaimSpec: corev1.PersistentVolumeClaimSpec{ + AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany}, + Resources: corev1.VolumeResourceRequirements{ + Requests: map[corev1.ResourceName]resource.Quantity{ + corev1.ResourceStorage: resource.MustParse("1Gi"), + }, + }, + }, + }, + }, { + Name: "repo2", + S3: &v1beta1.RepoS3{ + Bucket: "bucket", + Endpoint: "endpoint", + Region: "region", + }, + }}, + }, + }, + }, + } + + configHashMismatch, err := Executor(stanzaExec).StanzaCreateOrUpgrade(ctx, configHash, postgresCluster) + assert.NilError(t, err) + assert.Assert(t, !configHashMismatch) + + // shell check the script + // Write out that inline script. + dir := t.TempDir() + file := filepath.Join(dir, "script.bash") + assert.NilError(t, os.WriteFile(file, []byte(shellCheckScript), 0o600)) + + // Expect shellcheck to be happy. + cmd := exec.Command(shellcheck, "--enable=all", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) +} diff --git a/internal/pgbackrest/postgres.go b/internal/pgbackrest/postgres.go new file mode 100644 index 0000000000..ab5c71868c --- /dev/null +++ b/internal/pgbackrest/postgres.go @@ -0,0 +1,71 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "strings" + + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// PostgreSQL populates outParameters with any settings needed to run pgBackRest. +func PostgreSQL( + inCluster *v1beta1.PostgresCluster, + outParameters *postgres.Parameters, + backupsEnabled bool, +) { + if outParameters.Mandatory == nil { + outParameters.Mandatory = postgres.NewParameterSet() + } + if outParameters.Default == nil { + outParameters.Default = postgres.NewParameterSet() + } + + // Send WAL files to all configured repositories when not in recovery. + // - https://pgbackrest.org/user-guide.html#quickstart/configure-archiving + // - https://pgbackrest.org/command.html#command-archive-push + // - https://www.postgresql.org/docs/current/runtime-config-wal.html + outParameters.Mandatory.Add("archive_mode", "on") + if backupsEnabled { + archive := `pgbackrest --stanza=` + DefaultStanzaName + ` archive-push "%p"` + outParameters.Mandatory.Add("archive_command", archive) + } else { + // If backups are disabled, keep archive_mode on (to avoid a Postgres restart) + // and throw away WAL. 
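+		// The `true` command always exits zero, so PostgreSQL treats each WAL
+		// segment as successfully archived and recycles it rather than keeping it.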
+ outParameters.Mandatory.Add("archive_command", `true`) + } + + // archive_timeout is used to determine at what point a WAL file is switched, + // if the WAL archive has not reached its full size in # of transactions + // (16MB). This has ramifications for log shipping, i.e. it ensures a WAL file + // is shipped to an archive every X seconds to help reduce the risk of data + // loss in a disaster recovery scenario. For standby servers that are not + // connected using streaming replication, this also ensures that new data is + // available at least once a minute. + // + // PostgreSQL documentation considers an archive_timeout of 60 seconds to be + // reasonable. There are cases where you may want to set archive_timeout to 0, + // for example, when the remote archive (pgBackRest repo) is unavailable; this + // is to prevent WAL accumulation on your primary. + // - https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT + outParameters.Default.Add("archive_timeout", "60s") + + // Fetch WAL files from any configured repository during recovery. + // - https://pgbackrest.org/command.html#command-archive-get + // - https://www.postgresql.org/docs/current/runtime-config-wal.html + restore := `pgbackrest --stanza=` + DefaultStanzaName + ` archive-get %f "%p"` + outParameters.Mandatory.Add("restore_command", restore) + + if inCluster.Spec.Standby != nil && inCluster.Spec.Standby.Enabled && inCluster.Spec.Standby.RepoName != "" { + + // Fetch WAL files from the designated repository. The repository name + // is validated by the Kubernetes API, so it does not need to be quoted + // nor escaped. + repoName := inCluster.Spec.Standby.RepoName + restore += " --repo=" + strings.TrimPrefix(repoName, "repo") + outParameters.Mandatory.Add("restore_command", restore) + } +} diff --git a/internal/pgbackrest/postgres_test.go b/internal/pgbackrest/postgres_test.go new file mode 100644 index 0000000000..b87b35631a --- /dev/null +++ b/internal/pgbackrest/postgres_test.go @@ -0,0 +1,49 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestPostgreSQLParameters(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + parameters := new(postgres.Parameters) + + PostgreSQL(cluster, parameters, true) + assert.DeepEqual(t, parameters.Mandatory.AsMap(), map[string]string{ + "archive_mode": "on", + "archive_command": `pgbackrest --stanza=db archive-push "%p"`, + "restore_command": `pgbackrest --stanza=db archive-get %f "%p"`, + }) + + assert.DeepEqual(t, parameters.Default.AsMap(), map[string]string{ + "archive_timeout": "60s", + }) + + PostgreSQL(cluster, parameters, false) + assert.DeepEqual(t, parameters.Mandatory.AsMap(), map[string]string{ + "archive_mode": "on", + "archive_command": "true", + "restore_command": `pgbackrest --stanza=db archive-get %f "%p"`, + }) + + cluster.Spec.Standby = &v1beta1.PostgresStandbySpec{ + Enabled: true, + RepoName: "repo99", + } + + PostgreSQL(cluster, parameters, true) + assert.DeepEqual(t, parameters.Mandatory.AsMap(), map[string]string{ + "archive_mode": "on", + "archive_command": `pgbackrest --stanza=db archive-push "%p"`, + "restore_command": `pgbackrest --stanza=db archive-get %f "%p" --repo=99`, + }) +} diff --git a/internal/pgbackrest/rbac.go b/internal/pgbackrest/rbac.go new file mode 100644 index 0000000000..950f10ef8b --- /dev/null +++ b/internal/pgbackrest/rbac.go @@ -0,0 +1,35 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + corev1 "k8s.io/api/core/v1" + rbacv1 "k8s.io/api/rbac/v1" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// +kubebuilder:rbac:groups="",resources="pods",verbs={list} +// +kubebuilder:rbac:groups="",resources="pods/exec",verbs={create} + +// Permissions returns the RBAC rules pgBackRest needs for a cluster. +func Permissions(cluster *v1beta1.PostgresCluster) []rbacv1.PolicyRule { + + rules := make([]rbacv1.PolicyRule, 0, 2) + + rules = append(rules, rbacv1.PolicyRule{ + APIGroups: []string{corev1.SchemeGroupVersion.Group}, + Resources: []string{"pods"}, + Verbs: []string{"list"}, + }) + + rules = append(rules, rbacv1.PolicyRule{ + APIGroups: []string{corev1.SchemeGroupVersion.Group}, + Resources: []string{"pods/exec"}, + Verbs: []string{"create"}, + }) + + return rules +} diff --git a/internal/pgbackrest/rbac_test.go b/internal/pgbackrest/rbac_test.go new file mode 100644 index 0000000000..a620276f64 --- /dev/null +++ b/internal/pgbackrest/rbac_test.go @@ -0,0 +1,54 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func isUniqueAndSorted(slice []string) bool { + if len(slice) > 1 { + previous := slice[0] + for _, next := range slice[1:] { + if next <= previous { + return false + } + previous = next + } + } + return true +} + +func TestPermissions(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Default() + + permissions := Permissions(cluster) + for _, rule := range permissions { + assert.Assert(t, isUniqueAndSorted(rule.APIGroups), "got %q", rule.APIGroups) + assert.Assert(t, isUniqueAndSorted(rule.Resources), "got %q", rule.Resources) + assert.Assert(t, isUniqueAndSorted(rule.Verbs), "got %q", rule.Verbs) + } + + assert.Assert(t, cmp.MarshalMatches(permissions, ` +- apiGroups: + - "" + resources: + - pods + verbs: + - list +- apiGroups: + - "" + resources: + - pods/exec + verbs: + - create + `)) +} diff --git a/internal/pgbackrest/reconcile.go b/internal/pgbackrest/reconcile.go new file mode 100644 index 0000000000..d22bccc3c0 --- /dev/null +++ b/internal/pgbackrest/reconcile.go @@ -0,0 +1,573 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "context" + "strings" + + "github.com/pkg/errors" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// AddRepoVolumesToPod adds pgBackRest repository volumes to the provided Pod template spec, while +// also adding associated volume mounts to the containers specified. 
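+// Repositories that are not backed by a volume (PVC) are skipped, and an error is
+// returned when the "pgbackrest-log-dir" init container or one of the named
+// containers cannot be found in the template.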
+func AddRepoVolumesToPod(postgresCluster *v1beta1.PostgresCluster, template *corev1.PodTemplateSpec, + repoPVCNames map[string]string, containerNames ...string) error { + + for _, repo := range postgresCluster.Spec.Backups.PGBackRest.Repos { + // we only care about repos created using PVCs + if repo.Volume == nil { + continue + } + + var repoVolName string + if repoPVCNames[repo.Name] != "" { + // if there is an existing volume for this PVC, use it + repoVolName = repoPVCNames[repo.Name] + } else { + // use the default name to create a new volume + repoVolName = naming.PGBackRestRepoVolume(postgresCluster, + repo.Name).Name + } + template.Spec.Volumes = append(template.Spec.Volumes, corev1.Volume{ + Name: repo.Name, + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: repoVolName}, + }, + }) + + var initContainerFound bool + var index int + for index = range template.Spec.InitContainers { + if template.Spec.InitContainers[index].Name == naming.ContainerPGBackRestLogDirInit { + initContainerFound = true + break + } + } + if !initContainerFound { + return errors.Errorf( + "Unable to find init container %q when adding pgBackRest repo volumes", + naming.ContainerPGBackRestLogDirInit) + } + template.Spec.InitContainers[index].VolumeMounts = + append(template.Spec.InitContainers[index].VolumeMounts, corev1.VolumeMount{ + Name: repo.Name, + MountPath: "/pgbackrest/" + repo.Name, + }) + + for _, name := range containerNames { + var containerFound bool + var index int + for index = range template.Spec.Containers { + if template.Spec.Containers[index].Name == name { + containerFound = true + break + } + } + if !containerFound { + return errors.Errorf("Unable to find container %q when adding pgBackRest repo volumes", + name) + } + template.Spec.Containers[index].VolumeMounts = + append(template.Spec.Containers[index].VolumeMounts, corev1.VolumeMount{ + Name: repo.Name, + MountPath: "/pgbackrest/" + repo.Name, + }) + } + } + + return nil +} + +// AddConfigToInstancePod adds and mounts the pgBackRest configuration volumes +// for an instance of cluster to pod. The database container and any pgBackRest +// containers must already be in pod. +func AddConfigToInstancePod( + cluster *v1beta1.PostgresCluster, pod *corev1.PodSpec, +) { + configmap := corev1.VolumeProjection{ConfigMap: &corev1.ConfigMapProjection{}} + configmap.ConfigMap.Name = naming.PGBackRestConfig(cluster).Name + configmap.ConfigMap.Items = []corev1.KeyToPath{ + {Key: CMInstanceKey, Path: CMInstanceKey}, + {Key: ConfigHashKey, Path: ConfigHashKey}, + } + + secret := corev1.VolumeProjection{Secret: &corev1.SecretProjection{}} + secret.Secret.Name = naming.PGBackRestSecret(cluster).Name + + configmap.ConfigMap.Items = append( + configmap.ConfigMap.Items, corev1.KeyToPath{ + Key: serverConfigMapKey, + Path: serverConfigProjectionPath, + }) + secret.Secret.Items = append(secret.Secret.Items, clientCertificates()...) + + // Start with a copy of projections specified in the cluster. Items later in + // the list take precedence over earlier items (that is, last write wins). + // - https://kubernetes.io/docs/concepts/storage/volumes/#projected + sources := append([]corev1.VolumeProjection{}, + cluster.Spec.Backups.PGBackRest.Configuration...) 
+ + if len(secret.Secret.Items) > 0 { + sources = append(sources, configmap, secret) + } else { + sources = append(sources, configmap) + } + + addConfigVolumeAndMounts(pod, sources) +} + +// AddConfigToRepoPod adds and mounts the pgBackRest configuration volume for +// the dedicated repository host of cluster to pod. The pgBackRest containers +// must already be in pod. +func AddConfigToRepoPod( + cluster *v1beta1.PostgresCluster, pod *corev1.PodSpec, +) { + configmap := corev1.VolumeProjection{ConfigMap: &corev1.ConfigMapProjection{}} + configmap.ConfigMap.Name = naming.PGBackRestConfig(cluster).Name + configmap.ConfigMap.Items = []corev1.KeyToPath{ + {Key: CMRepoKey, Path: CMRepoKey}, + {Key: ConfigHashKey, Path: ConfigHashKey}, + {Key: serverConfigMapKey, Path: serverConfigProjectionPath}, + } + + secret := corev1.VolumeProjection{Secret: &corev1.SecretProjection{}} + secret.Secret.Name = naming.PGBackRestSecret(cluster).Name + secret.Secret.Items = append(secret.Secret.Items, clientCertificates()...) + + // Start with a copy of projections specified in the cluster. Items later in + // the list take precedence over earlier items (that is, last write wins). + // - https://kubernetes.io/docs/concepts/storage/volumes/#projected + sources := append([]corev1.VolumeProjection{}, + cluster.Spec.Backups.PGBackRest.Configuration...) + + addConfigVolumeAndMounts(pod, append(sources, configmap, secret)) +} + +// AddConfigToRestorePod adds and mounts the pgBackRest configuration volume +// for the restore job of cluster to pod. The pgBackRest containers must +// already be in pod. +func AddConfigToRestorePod( + cluster *v1beta1.PostgresCluster, sourceCluster *v1beta1.PostgresCluster, pod *corev1.PodSpec, +) { + configmap := corev1.VolumeProjection{ConfigMap: &corev1.ConfigMapProjection{}} + configmap.ConfigMap.Name = naming.PGBackRestConfig(cluster).Name + configmap.ConfigMap.Items = []corev1.KeyToPath{ + // TODO(cbandy): This may be the instance configuration of a cluster + // different from the one we are building/creating. For now the + // stanza options are "pg1-path", "pg1-port", and "pg1-socket-path" + // and these are safe enough to use across different clusters running + // the same PostgreSQL version. When that list grows, consider changing + // this to use local stanza options and remote repository options. + // See also [RestoreConfig]. + {Key: CMInstanceKey, Path: CMInstanceKey}, + } + + // Mount client certificates of the source cluster if they exist. + secret := corev1.VolumeProjection{Secret: &corev1.SecretProjection{}} + secret.Secret.Name = naming.PGBackRestSecret(cluster).Name + secret.Secret.Items = append(secret.Secret.Items, clientCertificates()...) + secret.Secret.Optional = initialize.Bool(true) + + // Start with a copy of projections specified in the cluster. Items later in + // the list take precedence over earlier items (that is, last write wins). + // - https://kubernetes.io/docs/concepts/storage/volumes/#projected + sources := append([]corev1.VolumeProjection{}, + cluster.Spec.Backups.PGBackRest.Configuration...) + + // For a PostgresCluster restore, append all pgBackRest configuration from + // the source cluster for the restore. + if sourceCluster != nil { + sources = append(sources, sourceCluster.Spec.Backups.PGBackRest.Configuration...) + } + + // Currently the spec accepts a dataSource with both a PostgresCluster and + // a PGBackRest section. 
In that case only the PostgresCluster is honored (see + // internal/controller/postgrescluster/cluster.go, reconcileDataSource). + // + // `sourceCluster` is always nil for a cloud based restore (see + // internal/controller/postgrescluster/pgbackrest.go, reconcileCloudBasedDataSource). + // + // So, if `sourceCluster` is nil and `DataSource.PGBackRest` is not, + // this is a cloud based datasource restore and only the configuration from + // `dataSource.pgbackrest` section should be included. + if sourceCluster == nil && + cluster.Spec.DataSource != nil && + cluster.Spec.DataSource.PGBackRest != nil { + + sources = append([]corev1.VolumeProjection{}, + cluster.Spec.DataSource.PGBackRest.Configuration...) + } + + // mount any provided configuration files to the restore Job Pod + if len(cluster.Spec.Config.Files) != 0 { + additionalConfigVolumeMount := postgres.AdditionalConfigVolumeMount() + additionalConfigVolume := corev1.Volume{Name: additionalConfigVolumeMount.Name} + additionalConfigVolume.Projected = &corev1.ProjectedVolumeSource{ + Sources: append(sources, cluster.Spec.Config.Files...), + } + for i := range pod.Containers { + container := &pod.Containers[i] + + if container.Name == naming.PGBackRestRestoreContainerName { + container.VolumeMounts = append(container.VolumeMounts, additionalConfigVolumeMount) + } + } + pod.Volumes = append(pod.Volumes, additionalConfigVolume) + } + + addConfigVolumeAndMounts(pod, append(sources, configmap, secret)) +} + +// addConfigVolumeAndMounts adds the config projections to pod as the +// configuration volume. It mounts that volume to the database container and +// all pgBackRest containers in pod. +func addConfigVolumeAndMounts( + pod *corev1.PodSpec, config []corev1.VolumeProjection, +) { + configVolumeMount := corev1.VolumeMount{ + Name: "pgbackrest-config", + MountPath: configDirectory, + ReadOnly: true, + } + + configVolume := corev1.Volume{ + Name: configVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + Projected: &corev1.ProjectedVolumeSource{Sources: config}, + }, + } + + for i := range pod.Containers { + container := &pod.Containers[i] + + switch container.Name { + case + naming.ContainerDatabase, + naming.ContainerPGBackRestConfig, + naming.PGBackRestRepoContainerName, + naming.PGBackRestRestoreContainerName: + + container.VolumeMounts = append(container.VolumeMounts, configVolumeMount) + } + } + + pod.Volumes = append(pod.Volumes, configVolume) +} + +// addServerContainerAndVolume adds the TLS server container and certificate +// projections to pod. Any PostgreSQL data and WAL volumes in pod are also mounted. 
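+// It also appends the configuration-reload sidecar (naming.ContainerPGBackRestConfig),
+// which runs reloadCommand to keep the running TLS server up to date with changes.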
+func addServerContainerAndVolume( + ctx context.Context, + cluster *v1beta1.PostgresCluster, pod *corev1.PodSpec, + certificates []corev1.VolumeProjection, resources *corev1.ResourceRequirements, +) { + serverVolumeMount := corev1.VolumeMount{ + Name: "pgbackrest-server", + MountPath: serverMountPath, + ReadOnly: true, + } + + serverVolume := corev1.Volume{ + Name: serverVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + Projected: &corev1.ProjectedVolumeSource{Sources: certificates}, + }, + } + + container := corev1.Container{ + Name: naming.PGBackRestRepoContainerName, + Command: []string{"pgbackrest", "server"}, + Image: config.PGBackRestContainerImage(cluster), + ImagePullPolicy: cluster.Spec.ImagePullPolicy, + SecurityContext: initialize.RestrictedSecurityContext(), + + LivenessProbe: &corev1.Probe{ + ProbeHandler: corev1.ProbeHandler{ + Exec: &corev1.ExecAction{ + Command: []string{"pgbackrest", "server-ping"}, + }, + }, + }, + + VolumeMounts: []corev1.VolumeMount{serverVolumeMount}, + } + + if resources != nil { + container.Resources = *resources + } + + // Mount PostgreSQL volumes that are present in pod. + postgresMounts := map[string]corev1.VolumeMount{ + postgres.DataVolumeMount().Name: postgres.DataVolumeMount(), + postgres.WALVolumeMount().Name: postgres.WALVolumeMount(), + } + if feature.Enabled(ctx, feature.TablespaceVolumes) { + for _, instance := range cluster.Spec.InstanceSets { + for _, vol := range instance.TablespaceVolumes { + tablespaceVolumeMount := postgres.TablespaceVolumeMount(vol.Name) + postgresMounts[tablespaceVolumeMount.Name] = tablespaceVolumeMount + } + } + } + for i := range pod.Volumes { + if mount, ok := postgresMounts[pod.Volumes[i].Name]; ok { + container.VolumeMounts = append(container.VolumeMounts, mount) + } + } + + reloader := corev1.Container{ + Name: naming.ContainerPGBackRestConfig, + Command: reloadCommand(naming.ContainerPGBackRestConfig), + Image: container.Image, + ImagePullPolicy: container.ImagePullPolicy, + SecurityContext: initialize.RestrictedSecurityContext(), + + // The configuration mount is appended by [addConfigVolumeAndMounts]. + VolumeMounts: []corev1.VolumeMount{serverVolumeMount}, + } + + if sidecars := cluster.Spec.Backups.PGBackRest.Sidecars; sidecars != nil && + sidecars.PGBackRestConfig != nil && + sidecars.PGBackRestConfig.Resources != nil { + reloader.Resources = *sidecars.PGBackRestConfig.Resources + } + + pod.Containers = append(pod.Containers, container, reloader) + pod.Volumes = append(pod.Volumes, serverVolume) +} + +// AddServerToInstancePod adds the TLS server container and volume to pod for +// an instance of cluster. Any PostgreSQL volumes must already be in pod. +func AddServerToInstancePod( + ctx context.Context, + cluster *v1beta1.PostgresCluster, pod *corev1.PodSpec, + instanceCertificateSecretName string, +) { + certificates := []corev1.VolumeProjection{{ + Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: instanceCertificateSecretName, + }, + Items: instanceServerCertificates(), + }, + }} + + var resources *corev1.ResourceRequirements + if sidecars := cluster.Spec.Backups.PGBackRest.Sidecars; sidecars != nil && sidecars.PGBackRest != nil { + resources = sidecars.PGBackRest.Resources + } + + addServerContainerAndVolume(ctx, cluster, pod, certificates, resources) +} + +// AddServerToRepoPod adds the TLS server container and volume to pod for +// the dedicated repository host of cluster. 
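+// The server container resources come from cluster.Spec.Backups.PGBackRest.RepoHost.Resources
+// when a repo host is configured in the spec.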
+func AddServerToRepoPod( + ctx context.Context, + cluster *v1beta1.PostgresCluster, pod *corev1.PodSpec, +) { + certificates := []corev1.VolumeProjection{{ + Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: naming.PGBackRestSecret(cluster).Name, + }, + Items: repositoryServerCertificates(), + }, + }} + + var resources *corev1.ResourceRequirements + if cluster.Spec.Backups.PGBackRest.RepoHost != nil { + resources = &cluster.Spec.Backups.PGBackRest.RepoHost.Resources + } + + addServerContainerAndVolume(ctx, cluster, pod, certificates, resources) +} + +// InstanceCertificates populates the shared Secret with certificates needed to run pgBackRest. +func InstanceCertificates(ctx context.Context, + inCluster *v1beta1.PostgresCluster, + inRoot pki.Certificate, + inDNS pki.Certificate, inDNSKey pki.PrivateKey, + outInstanceCertificates *corev1.Secret, +) error { + var err error + + initialize.Map(&outInstanceCertificates.Data) + + if err == nil { + outInstanceCertificates.Data[certInstanceSecretKey], err = certFile(inDNS) + } + if err == nil { + outInstanceCertificates.Data[certInstancePrivateKeySecretKey], err = certFile(inDNSKey) + } + + return err +} + +// ReplicaCreateCommand returns the command that can initialize the PostgreSQL +// data directory on an instance from one of cluster's repositories. It returns +// nil when no repository is available. +func ReplicaCreateCommand( + cluster *v1beta1.PostgresCluster, instance *v1beta1.PostgresInstanceSetSpec, +) []string { + command := func(repoName string) []string { + return []string{ + "pgbackrest", "restore", "--delta", + "--stanza=" + DefaultStanzaName, + "--repo=" + strings.TrimPrefix(repoName, "repo"), + "--link-map=pg_wal=" + postgres.WALDirectory(cluster, instance), + + // Do not create a recovery signal file on PostgreSQL v12 or later; + // Patroni creates a standby signal file which takes precedence. + // Patroni manages recovery.conf prior to PostgreSQL v12. + // - https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/command/restore/restore.c#L1824 + // - https://www.postgresql.org/docs/12/runtime-config-wal.html + "--type=standby", + } + } + + if cluster.Spec.Standby != nil && cluster.Spec.Standby.Enabled && cluster.Spec.Standby.RepoName != "" { + // Patroni initializes standby clusters using the same command it uses + // for any replica. Assume the repository in the spec has a stanza + // and can be used to restore. The repository name is validated by the + // Kubernetes API and begins with "repo". + // + // NOTE(cbandy): A standby cluster cannot use "online" stanza-create + // nor create backups because every instance is always in recovery. + return command(cluster.Spec.Standby.RepoName) + } + + if cluster.Status.PGBackRest != nil { + for _, repo := range cluster.Status.PGBackRest.Repos { + if repo.ReplicaCreateBackupComplete { + return command(repo.Name) + } + } + } + + return nil +} + +// RepoVolumeMount returns the name and mount path of the pgBackRest repo volume. +func RepoVolumeMount() corev1.VolumeMount { + return corev1.VolumeMount{Name: "pgbackrest-repo", MountPath: repoMountPath} +} + +// RestoreConfig populates targetConfigMap and targetSecret with values needed +// to restore a cluster from repositories defined in sourceConfigMap and sourceSecret. 
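+// Only the instance configuration (pgbackrest_instance.conf) and the client TLS
+// certificates are copied; other keys in the target objects are left untouched.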
+func RestoreConfig( + sourceConfigMap, targetConfigMap *corev1.ConfigMap, + sourceSecret, targetSecret *corev1.Secret, +) { + initialize.Map(&targetConfigMap.Data) + + // Use the repository definitions from the source cluster. + // + // TODO(cbandy): This is the *entire* instance configuration from another + // cluster. For now, the stanza options are "pg1-path", "pg1-port", and + // "pg1-socket-path" and these are safe enough to use across different + // clusters running the same PostgreSQL version. When that list grows, + // consider changing this to use local stanza options and remote repository options. + targetConfigMap.Data[CMInstanceKey] = sourceConfigMap.Data[CMInstanceKey] + + if sourceSecret != nil && targetSecret != nil { + initialize.Map(&targetSecret.Data) + + // - https://golang.org/issue/45038 + bytesClone := func(b []byte) []byte { return append([]byte(nil), b...) } + + // Use the CA and client certificate from the source cluster. + for _, item := range clientCertificates() { + targetSecret.Data[item.Key] = bytesClone(sourceSecret.Data[item.Key]) + } + } +} + +// Secret populates the pgBackRest Secret. +func Secret(ctx context.Context, + inCluster *v1beta1.PostgresCluster, + inRepoHost *appsv1.StatefulSet, + inRoot *pki.RootCertificateAuthority, + inSecret *corev1.Secret, + outSecret *corev1.Secret, +) error { + var err error + + // Save the CA and generate a TLS client certificate for the entire cluster. + if inRepoHost != nil { + initialize.Map(&outSecret.Data) + + // The server verifies its "tls-server-auth" option contains the common + // name (CN) of the certificate presented by a client. The entire + // cluster uses a single client certificate so the "tls-server-auth" + // option can stay the same when PostgreSQL instances and repository + // hosts are added or removed. + leaf := &pki.LeafCertificate{} + commonName := clientCommonName(inCluster) + dnsNames := []string{commonName} + + if err == nil { + // Unmarshal and validate the stored leaf. These first errors can + // be ignored because they result in an invalid leaf which is then + // correctly regenerated. + _ = leaf.Certificate.UnmarshalText(inSecret.Data[certClientSecretKey]) + _ = leaf.PrivateKey.UnmarshalText(inSecret.Data[certClientPrivateKeySecretKey]) + + leaf, err = inRoot.RegenerateLeafWhenNecessary(leaf, commonName, dnsNames) + err = errors.WithStack(err) + } + + if err == nil { + outSecret.Data[certAuthoritySecretKey], err = certFile(inRoot.Certificate) + } + if err == nil { + outSecret.Data[certClientPrivateKeySecretKey], err = certFile(leaf.PrivateKey) + } + if err == nil { + outSecret.Data[certClientSecretKey], err = certFile(leaf.Certificate) + } + } + + // Generate a TLS server certificate for each repository host. + if inRepoHost != nil { + // The client verifies the "pg-host" or "repo-host" option it used is + // present in the DNS names of the server certificate. + leaf := &pki.LeafCertificate{} + dnsNames := naming.RepoHostPodDNSNames(ctx, inRepoHost) + commonName := dnsNames[0] // FQDN + + if err == nil { + // Unmarshal and validate the stored leaf. These first errors can + // be ignored because they result in an invalid leaf which is then + // correctly regenerated. 
+ _ = leaf.Certificate.UnmarshalText(inSecret.Data[certRepoSecretKey]) + _ = leaf.PrivateKey.UnmarshalText(inSecret.Data[certRepoPrivateKeySecretKey]) + + leaf, err = inRoot.RegenerateLeafWhenNecessary(leaf, commonName, dnsNames) + err = errors.WithStack(err) + } + + if err == nil { + outSecret.Data[certRepoPrivateKeySecretKey], err = certFile(leaf.PrivateKey) + } + if err == nil { + outSecret.Data[certRepoSecretKey], err = certFile(leaf.Certificate) + } + } + + return err +} diff --git a/internal/pgbackrest/reconcile_test.go b/internal/pgbackrest/reconcile_test.go new file mode 100644 index 0000000000..4957d58f7b --- /dev/null +++ b/internal/pgbackrest/reconcile_test.go @@ -0,0 +1,1075 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "context" + "fmt" + "reflect" + "testing" + + "github.com/google/go-cmp/cmp/cmpopts" + "gotest.tools/v3/assert" + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestAddRepoVolumesToPod(t *testing.T) { + + postgresCluster := &v1beta1.PostgresCluster{ObjectMeta: metav1.ObjectMeta{Name: "hippo"}} + + testsCases := []struct { + repos []v1beta1.PGBackRestRepo + containers []corev1.Container + initContainers []corev1.Container + testMap map[string]string + }{{ + repos: []v1beta1.PGBackRestRepo{ + {Name: "repo1", Volume: &v1beta1.RepoPVC{}}, + {Name: "repo2", Volume: &v1beta1.RepoPVC{}}, + }, + initContainers: []corev1.Container{{Name: "pgbackrest-log-dir"}}, + containers: []corev1.Container{{Name: "database"}, {Name: "pgbackrest"}}, + testMap: map[string]string{}, + }, { + repos: []v1beta1.PGBackRestRepo{ + {Name: "repo1", Volume: &v1beta1.RepoPVC{}}, + {Name: "repo2", Volume: &v1beta1.RepoPVC{}}, + }, + initContainers: []corev1.Container{{Name: "pgbackrest-log-dir"}}, + containers: []corev1.Container{{Name: "database"}}, + testMap: map[string]string{}, + }, { + repos: []v1beta1.PGBackRestRepo{{Name: "repo1", Volume: &v1beta1.RepoPVC{}}}, + initContainers: []corev1.Container{{Name: "pgbackrest-log-dir"}}, + containers: []corev1.Container{{Name: "database"}, {Name: "pgbackrest"}}, + testMap: map[string]string{}, + }, { + repos: []v1beta1.PGBackRestRepo{{Name: "repo1", Volume: &v1beta1.RepoPVC{}}}, + initContainers: []corev1.Container{{Name: "pgbackrest-log-dir"}}, + containers: []corev1.Container{{Name: "database"}}, + testMap: map[string]string{}, + }, { + repos: []v1beta1.PGBackRestRepo{{Name: "repo1", Volume: &v1beta1.RepoPVC{}}}, + initContainers: []corev1.Container{}, + containers: []corev1.Container{{Name: "database"}}, + testMap: map[string]string{}, + }, + // rerun the same tests, but this time simulate an existing PVC + { + repos: []v1beta1.PGBackRestRepo{ + {Name: "repo1", Volume: &v1beta1.RepoPVC{}}, + {Name: "repo2", Volume: &v1beta1.RepoPVC{}}, + }, + initContainers: []corev1.Container{{Name: "pgbackrest-log-dir"}}, + containers: []corev1.Container{{Name: "database"}, {Name: "pgbackrest"}}, + testMap: map[string]string{ + "repo1": "hippo-repo1", + }, + }, { + repos: []v1beta1.PGBackRestRepo{ + {Name: "repo1", Volume: 
&v1beta1.RepoPVC{}}, + {Name: "repo2", Volume: &v1beta1.RepoPVC{}}, + }, + initContainers: []corev1.Container{{Name: "pgbackrest-log-dir"}}, + containers: []corev1.Container{{Name: "database"}}, + testMap: map[string]string{ + "repo1": "hippo-repo1", + }, + }, { + repos: []v1beta1.PGBackRestRepo{{Name: "repo1", Volume: &v1beta1.RepoPVC{}}}, + initContainers: []corev1.Container{{Name: "pgbackrest-log-dir"}}, + containers: []corev1.Container{{Name: "database"}, {Name: "pgbackrest"}}, + testMap: map[string]string{ + "repo1": "hippo-repo1", + }, + }, { + repos: []v1beta1.PGBackRestRepo{{Name: "repo1", Volume: &v1beta1.RepoPVC{}}}, + initContainers: []corev1.Container{{Name: "pgbackrest-log-dir"}}, + containers: []corev1.Container{{Name: "database"}}, + testMap: map[string]string{ + "repo1": "hippo-repo1", + }, + }, { + repos: []v1beta1.PGBackRestRepo{{Name: "repo1", Volume: &v1beta1.RepoPVC{}}}, + initContainers: []corev1.Container{}, + containers: []corev1.Container{{Name: "database"}}, + testMap: map[string]string{ + "repo1": "hippo-repo1", + }, + }} + + for _, tc := range testsCases { + t.Run(fmt.Sprintf("repos=%d, containers=%d", len(tc.repos), len(tc.containers)), func(t *testing.T) { + postgresCluster.Spec.Backups.PGBackRest.Repos = tc.repos + template := &corev1.PodTemplateSpec{ + Spec: corev1.PodSpec{ + InitContainers: tc.initContainers, + Containers: tc.containers, + }, + } + err := AddRepoVolumesToPod(postgresCluster, template, tc.testMap, getContainerNames(tc.containers)...) + if len(tc.initContainers) == 0 { + assert.Error(t, err, "Unable to find init container \"pgbackrest-log-dir\" when adding pgBackRest repo volumes") + } else { + assert.NilError(t, err) + + // verify volumes and volume mounts + for _, r := range tc.repos { + var foundVolume bool + for _, v := range template.Spec.Volumes { + if v.Name == r.Name && v.VolumeSource.PersistentVolumeClaim.ClaimName == + naming.PGBackRestRepoVolume(postgresCluster, r.Name).Name { + foundVolume = true + break + } + } + + if !foundVolume { + t.Errorf("volume %s is missing or invalid", r.Name) + } + + for _, c := range template.Spec.Containers { + var foundVolumeMount bool + for _, vm := range c.VolumeMounts { + if vm.Name == r.Name && vm.MountPath == "/pgbackrest/"+r.Name { + foundVolumeMount = true + break + } + } + if !foundVolumeMount { + t.Errorf("container volume mount %s is missing or invalid", r.Name) + } + } + for _, c := range template.Spec.InitContainers { + var foundVolumeMount bool + for _, vm := range c.VolumeMounts { + if vm.Name == r.Name && vm.MountPath == "/pgbackrest/"+r.Name { + foundVolumeMount = true + break + } + } + if !foundVolumeMount { + t.Errorf("init container volume mount %s is missing or invalid", r.Name) + } + } + } + } + }) + } +} + +func TestAddConfigToInstancePod(t *testing.T) { + cluster := v1beta1.PostgresCluster{} + cluster.Name = "hippo" + cluster.Default() + + pod := corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "database"}, + {Name: "other"}, + {Name: "pgbackrest"}, + }, + } + + alwaysExpect := func(t testing.TB, result *corev1.PodSpec) { + // Only Containers and Volumes fields have changed. + assert.DeepEqual(t, pod, *result, cmpopts.IgnoreFields(pod, "Containers", "Volumes")) + + // Only database and pgBackRest containers have mounts. 
+ assert.Assert(t, cmp.MarshalMatches(result.Containers, ` +- name: database + resources: {} + volumeMounts: + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true +- name: other + resources: {} +- name: pgbackrest + resources: {} + volumeMounts: + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true + `)) + } + + t.Run("CustomProjections", func(t *testing.T) { + custom := corev1.ConfigMapProjection{} + custom.Name = "custom-configmap" + + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.Configuration = []corev1.VolumeProjection{ + {ConfigMap: &custom}, + } + + out := pod.DeepCopy() + AddConfigToInstancePod(cluster, out) + alwaysExpect(t, out) + + // Instance configuration files after custom projections. + assert.Assert(t, cmp.MarshalMatches(out.Volumes, ` +- name: pgbackrest-config + projected: + sources: + - configMap: + name: custom-configmap + - configMap: + items: + - key: pgbackrest_instance.conf + path: pgbackrest_instance.conf + - key: config-hash + path: config-hash + - key: pgbackrest-server.conf + path: ~postgres-operator_server.conf + name: hippo-pgbackrest-config + - secret: + items: + - key: pgbackrest.ca-roots + path: ~postgres-operator/tls-ca.crt + - key: pgbackrest-client.crt + path: ~postgres-operator/client-tls.crt + - key: pgbackrest-client.key + mode: 384 + path: ~postgres-operator/client-tls.key + name: hippo-pgbackrest + `)) + }) + + t.Run("NoVolumeRepo", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.Repos = nil + + out := pod.DeepCopy() + AddConfigToInstancePod(cluster, out) + alwaysExpect(t, out) + + // Instance configuration and certificates. + assert.Assert(t, cmp.MarshalMatches(out.Volumes, ` +- name: pgbackrest-config + projected: + sources: + - configMap: + items: + - key: pgbackrest_instance.conf + path: pgbackrest_instance.conf + - key: config-hash + path: config-hash + - key: pgbackrest-server.conf + path: ~postgres-operator_server.conf + name: hippo-pgbackrest-config + - secret: + items: + - key: pgbackrest.ca-roots + path: ~postgres-operator/tls-ca.crt + - key: pgbackrest-client.crt + path: ~postgres-operator/client-tls.crt + - key: pgbackrest-client.key + mode: 384 + path: ~postgres-operator/client-tls.key + name: hippo-pgbackrest + `)) + }) + + t.Run("OneVolumeRepo", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.Repos = []v1beta1.PGBackRestRepo{ + { + Name: "repo1", + Volume: new(v1beta1.RepoPVC), + }, + } + + out := pod.DeepCopy() + AddConfigToInstancePod(cluster, out) + alwaysExpect(t, out) + + // Instance configuration files, server config, and optional client certificates. 
+ assert.Assert(t, cmp.MarshalMatches(out.Volumes, ` +- name: pgbackrest-config + projected: + sources: + - configMap: + items: + - key: pgbackrest_instance.conf + path: pgbackrest_instance.conf + - key: config-hash + path: config-hash + - key: pgbackrest-server.conf + path: ~postgres-operator_server.conf + name: hippo-pgbackrest-config + - secret: + items: + - key: pgbackrest.ca-roots + path: ~postgres-operator/tls-ca.crt + - key: pgbackrest-client.crt + path: ~postgres-operator/client-tls.crt + - key: pgbackrest-client.key + mode: 384 + path: ~postgres-operator/client-tls.key + name: hippo-pgbackrest + `)) + }) +} + +func TestAddConfigToRepoPod(t *testing.T) { + cluster := v1beta1.PostgresCluster{} + cluster.Name = "hippo" + cluster.Default() + + pod := corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "other"}, + {Name: "pgbackrest"}, + }, + } + + alwaysExpect := func(t testing.TB, result *corev1.PodSpec) { + // Only Containers and Volumes fields have changed. + assert.DeepEqual(t, pod, *result, cmpopts.IgnoreFields(pod, "Containers", "Volumes")) + + // Only pgBackRest containers have mounts. + assert.Assert(t, cmp.MarshalMatches(result.Containers, ` +- name: other + resources: {} +- name: pgbackrest + resources: {} + volumeMounts: + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true + `)) + } + + t.Run("CustomProjections", func(t *testing.T) { + custom := corev1.ConfigMapProjection{} + custom.Name = "custom-configmap" + + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.Configuration = []corev1.VolumeProjection{ + {ConfigMap: &custom}, + } + + out := pod.DeepCopy() + AddConfigToRepoPod(cluster, out) + alwaysExpect(t, out) + + // Repository configuration files, server config, and client certificates + // after custom projections. + assert.Assert(t, cmp.MarshalMatches(out.Volumes, ` +- name: pgbackrest-config + projected: + sources: + - configMap: + name: custom-configmap + - configMap: + items: + - key: pgbackrest_repo.conf + path: pgbackrest_repo.conf + - key: config-hash + path: config-hash + - key: pgbackrest-server.conf + path: ~postgres-operator_server.conf + name: hippo-pgbackrest-config + - secret: + items: + - key: pgbackrest.ca-roots + path: ~postgres-operator/tls-ca.crt + - key: pgbackrest-client.crt + path: ~postgres-operator/client-tls.crt + - key: pgbackrest-client.key + mode: 384 + path: ~postgres-operator/client-tls.key + name: hippo-pgbackrest + `)) + }) +} + +func TestAddConfigToRestorePod(t *testing.T) { + cluster := v1beta1.PostgresCluster{} + cluster.Name = "source" + cluster.Default() + + pod := corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "other"}, + {Name: "pgbackrest"}, + }, + } + + alwaysExpect := func(t testing.TB, result *corev1.PodSpec) { + // Only Containers and Volumes fields have changed. + assert.DeepEqual(t, pod, *result, cmpopts.IgnoreFields(pod, "Containers", "Volumes")) + + // Only pgBackRest containers have mounts. 
+ assert.Assert(t, cmp.MarshalMatches(result.Containers, ` +- name: other + resources: {} +- name: pgbackrest + resources: {} + volumeMounts: + - mountPath: /etc/pgbackrest/conf.d + name: pgbackrest-config + readOnly: true + `)) + } + + t.Run("CustomProjections", func(t *testing.T) { + custom := corev1.ConfigMapProjection{} + custom.Name = "custom-configmap" + + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.Configuration = []corev1.VolumeProjection{ + {ConfigMap: &custom}, + } + + custom2 := corev1.SecretProjection{} + custom2.Name = "source-custom-secret" + + sourceCluster := cluster.DeepCopy() + sourceCluster.Spec.Backups.PGBackRest.Configuration = []corev1.VolumeProjection{ + {Secret: &custom2}, + } + + out := pod.DeepCopy() + AddConfigToRestorePod(cluster, sourceCluster, out) + alwaysExpect(t, out) + + // Instance configuration files and optional client certificates + // after custom projections. + assert.Assert(t, cmp.MarshalMatches(out.Volumes, ` +- name: pgbackrest-config + projected: + sources: + - configMap: + name: custom-configmap + - secret: + name: source-custom-secret + - configMap: + items: + - key: pgbackrest_instance.conf + path: pgbackrest_instance.conf + name: source-pgbackrest-config + - secret: + items: + - key: pgbackrest.ca-roots + path: ~postgres-operator/tls-ca.crt + - key: pgbackrest-client.crt + path: ~postgres-operator/client-tls.crt + - key: pgbackrest-client.key + mode: 384 + path: ~postgres-operator/client-tls.key + name: source-pgbackrest + optional: true + `)) + }) + + t.Run("CloudBasedDataSourceProjections", func(t *testing.T) { + custom := corev1.SecretProjection{} + custom.Name = "custom-secret" + + cluster := cluster.DeepCopy() + cluster.Spec.DataSource = &v1beta1.DataSource{ + PGBackRest: &v1beta1.PGBackRestDataSource{ + Configuration: []corev1.VolumeProjection{{Secret: &custom}}, + }, + } + + out := pod.DeepCopy() + AddConfigToRestorePod(cluster, nil, out) + alwaysExpect(t, out) + + // Instance configuration files and optional client certificates + // after custom projections. + assert.Assert(t, cmp.MarshalMatches(out.Volumes, ` +- name: pgbackrest-config + projected: + sources: + - secret: + name: custom-secret + - configMap: + items: + - key: pgbackrest_instance.conf + path: pgbackrest_instance.conf + name: source-pgbackrest-config + - secret: + items: + - key: pgbackrest.ca-roots + path: ~postgres-operator/tls-ca.crt + - key: pgbackrest-client.crt + path: ~postgres-operator/client-tls.crt + - key: pgbackrest-client.key + mode: 384 + path: ~postgres-operator/client-tls.key + name: source-pgbackrest + optional: true + `)) + }) + + t.Run("CustomFiles", func(t *testing.T) { + custom := corev1.ConfigMapProjection{} + custom.Name = "custom-configmap-files" + + cluster := cluster.DeepCopy() + cluster.Spec.Config.Files = []corev1.VolumeProjection{ + {ConfigMap: &custom}, + } + + sourceCluster := cluster.DeepCopy() + + out := pod.DeepCopy() + AddConfigToRestorePod(cluster, sourceCluster, out) + alwaysExpect(t, out) + + // Instance configuration files and optional configuration files + // after custom projections. 
+ assert.Assert(t, cmp.MarshalMatches(out.Volumes, ` +- name: postgres-config + projected: + sources: + - configMap: + name: custom-configmap-files +- name: pgbackrest-config + projected: + sources: + - configMap: + items: + - key: pgbackrest_instance.conf + path: pgbackrest_instance.conf + name: source-pgbackrest-config + - secret: + items: + - key: pgbackrest.ca-roots + path: ~postgres-operator/tls-ca.crt + - key: pgbackrest-client.crt + path: ~postgres-operator/client-tls.crt + - key: pgbackrest-client.key + mode: 384 + path: ~postgres-operator/client-tls.key + name: source-pgbackrest + optional: true + `)) + }) +} + +func TestAddServerToInstancePod(t *testing.T) { + t.Parallel() + + ctx := context.Background() + cluster := v1beta1.PostgresCluster{} + cluster.Name = "hippo" + cluster.Default() + + pod := corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "database"}, + {Name: "other"}, + }, + Volumes: []corev1.Volume{ + {Name: "other"}, + {Name: "postgres-data"}, + {Name: "postgres-wal"}, + }, + } + + t.Run("CustomResources", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.Sidecars = &v1beta1.PGBackRestSidecars{ + PGBackRest: &v1beta1.Sidecar{ + Resources: &corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("5m"), + }, + }, + }, + PGBackRestConfig: &v1beta1.Sidecar{ + Resources: &corev1.ResourceRequirements{ + Limits: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("17m"), + }, + }, + }, + } + + out := pod.DeepCopy() + AddServerToInstancePod(ctx, cluster, out, "instance-secret-name") + + // Only Containers and Volumes fields have changed. + assert.DeepEqual(t, pod, *out, cmpopts.IgnoreFields(pod, "Containers", "Volumes")) + + // The TLS server is added while other containers are untouched. + // It has PostgreSQL volumes mounted while other volumes are ignored. 
+ assert.Assert(t, cmp.MarshalMatches(out.Containers, ` +- name: database + resources: {} +- name: other + resources: {} +- command: + - pgbackrest + - server + livenessProbe: + exec: + command: + - pgbackrest + - server-ping + name: pgbackrest + resources: + requests: + cpu: 5m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + - mountPath: /pgdata + name: postgres-data + - mountPath: /pgwal + name: postgres-wal +- command: + - bash + - -ceu + - -- + - |- + monitor() { + exec {fd}<> <(:||:) + until read -r -t 5 -u "${fd}"; do + if + [[ "${filename}" -nt "/proc/self/fd/${fd}" ]] && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --dereference --format='Loaded configuration dated %y' "${filename}" + elif + { [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] || + [[ "${authority}" -nt "/proc/self/fd/${fd}" ]] + } && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded certificates dated %y' "${directory}" + fi + done + }; export directory="$1" authority="$2" filename="$3"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgbackrest-config + - /etc/pgbackrest/server + - /etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt + - /etc/pgbackrest/conf.d/~postgres-operator_server.conf + name: pgbackrest-config + resources: + limits: + cpu: 17m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + `)) + + // The server certificate comes from the instance Secret. + // Other volumes are untouched. + assert.Assert(t, cmp.MarshalMatches(out.Volumes, ` +- name: other +- name: postgres-data +- name: postgres-wal +- name: pgbackrest-server + projected: + sources: + - secret: + items: + - key: pgbackrest-server.crt + path: server-tls.crt + - key: pgbackrest-server.key + mode: 384 + path: server-tls.key + name: instance-secret-name + `)) + }) + + t.Run("AddTablespaces", func(t *testing.T) { + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.TablespaceVolumes: true, + })) + ctx := feature.NewContext(ctx, gate) + + clusterWithTablespaces := cluster.DeepCopy() + clusterWithTablespaces.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{ + { + TablespaceVolumes: []v1beta1.TablespaceVolume{ + {Name: "trial"}, + {Name: "castle"}, + }, + }, + } + + out := pod.DeepCopy() + out.Volumes = append(out.Volumes, corev1.Volume{Name: "tablespace-trial"}, corev1.Volume{Name: "tablespace-castle"}) + AddServerToInstancePod(ctx, clusterWithTablespaces, out, "instance-secret-name") + + // Only Containers and Volumes fields have changed. 
+ assert.DeepEqual(t, pod, *out, cmpopts.IgnoreFields(pod, "Containers", "Volumes")) + assert.Assert(t, cmp.MarshalMatches(out.Containers, ` +- name: database + resources: {} +- name: other + resources: {} +- command: + - pgbackrest + - server + livenessProbe: + exec: + command: + - pgbackrest + - server-ping + name: pgbackrest + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + - mountPath: /pgdata + name: postgres-data + - mountPath: /pgwal + name: postgres-wal + - mountPath: /tablespaces/trial + name: tablespace-trial + - mountPath: /tablespaces/castle + name: tablespace-castle +- command: + - bash + - -ceu + - -- + - |- + monitor() { + exec {fd}<> <(:||:) + until read -r -t 5 -u "${fd}"; do + if + [[ "${filename}" -nt "/proc/self/fd/${fd}" ]] && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --dereference --format='Loaded configuration dated %y' "${filename}" + elif + { [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] || + [[ "${authority}" -nt "/proc/self/fd/${fd}" ]] + } && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded certificates dated %y' "${directory}" + fi + done + }; export directory="$1" authority="$2" filename="$3"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgbackrest-config + - /etc/pgbackrest/server + - /etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt + - /etc/pgbackrest/conf.d/~postgres-operator_server.conf + name: pgbackrest-config + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + `)) + }) +} + +func TestAddServerToRepoPod(t *testing.T) { + t.Parallel() + + ctx := context.Background() + cluster := v1beta1.PostgresCluster{} + cluster.Name = "hippo" + cluster.Default() + + pod := corev1.PodSpec{ + Containers: []corev1.Container{ + {Name: "other"}, + }, + } + + t.Run("CustomResources", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Backups.PGBackRest.RepoHost = &v1beta1.PGBackRestRepoHost{ + Resources: corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("5m"), + }, + }, + } + cluster.Spec.Backups.PGBackRest.Sidecars = &v1beta1.PGBackRestSidecars{ + PGBackRestConfig: &v1beta1.Sidecar{ + Resources: &corev1.ResourceRequirements{ + Limits: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("19m"), + }, + }, + }, + } + + out := pod.DeepCopy() + AddServerToRepoPod(ctx, cluster, out) + + // Only Containers and Volumes fields have changed. + assert.DeepEqual(t, pod, *out, cmpopts.IgnoreFields(pod, "Containers", "Volumes")) + + // The TLS server is added while other containers are untouched. 
+ assert.Assert(t, cmp.MarshalMatches(out.Containers, ` +- name: other + resources: {} +- command: + - pgbackrest + - server + livenessProbe: + exec: + command: + - pgbackrest + - server-ping + name: pgbackrest + resources: + requests: + cpu: 5m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true +- command: + - bash + - -ceu + - -- + - |- + monitor() { + exec {fd}<> <(:||:) + until read -r -t 5 -u "${fd}"; do + if + [[ "${filename}" -nt "/proc/self/fd/${fd}" ]] && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --dereference --format='Loaded configuration dated %y' "${filename}" + elif + { [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] || + [[ "${authority}" -nt "/proc/self/fd/${fd}" ]] + } && + pkill -HUP --exact --parent=0 pgbackrest + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded certificates dated %y' "${directory}" + fi + done + }; export directory="$1" authority="$2" filename="$3"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgbackrest-config + - /etc/pgbackrest/server + - /etc/pgbackrest/conf.d/~postgres-operator/tls-ca.crt + - /etc/pgbackrest/conf.d/~postgres-operator_server.conf + name: pgbackrest-config + resources: + limits: + cpu: 19m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbackrest/server + name: pgbackrest-server + readOnly: true + `)) + + // The server certificate comes from the pgBackRest Secret. 
+ assert.Assert(t, cmp.MarshalMatches(out.Volumes, ` +- name: pgbackrest-server + projected: + sources: + - secret: + items: + - key: pgbackrest-repo-host.crt + path: server-tls.crt + - key: pgbackrest-repo-host.key + mode: 384 + path: server-tls.key + name: hippo-pgbackrest + `)) + }) +} + +func getContainerNames(containers []corev1.Container) []string { + names := make([]string, len(containers)) + for i, c := range containers { + names[i] = c.Name + } + return names +} + +func TestReplicaCreateCommand(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + instance := new(v1beta1.PostgresInstanceSetSpec) + + t.Run("NoRepositories", func(t *testing.T) { + assert.Equal(t, 0, len(ReplicaCreateCommand(cluster, instance))) + }) + + t.Run("NoReadyRepositories", func(t *testing.T) { + cluster.Status.PGBackRest = &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{ + Name: "repo2", ReplicaCreateBackupComplete: false, + }}, + } + + assert.Equal(t, 0, len(ReplicaCreateCommand(cluster, instance))) + }) + + t.Run("SomeReadyRepositories", func(t *testing.T) { + cluster.Status.PGBackRest = &v1beta1.PGBackRestStatus{ + Repos: []v1beta1.RepoStatus{{ + Name: "repo2", ReplicaCreateBackupComplete: true, + }, { + Name: "repo3", ReplicaCreateBackupComplete: true, + }}, + } + + assert.DeepEqual(t, ReplicaCreateCommand(cluster, instance), []string{ + "pgbackrest", "restore", "--delta", "--stanza=db", "--repo=2", + "--link-map=pg_wal=/pgdata/pg0_wal", "--type=standby", + }) + }) + + t.Run("Standby", func(t *testing.T) { + cluster := cluster.DeepCopy() + cluster.Spec.Standby = &v1beta1.PostgresStandbySpec{ + Enabled: true, + RepoName: "repo7", + } + + assert.DeepEqual(t, ReplicaCreateCommand(cluster, instance), []string{ + "pgbackrest", "restore", "--delta", "--stanza=db", "--repo=7", + "--link-map=pg_wal=/pgdata/pg0_wal", "--type=standby", + }) + }) +} + +func TestSecret(t *testing.T) { + t.Parallel() + + ctx := context.Background() + cluster := new(v1beta1.PostgresCluster) + existing := new(corev1.Secret) + intent := new(corev1.Secret) + + root, err := pki.NewRootCertificateAuthority() + assert.NilError(t, err) + + t.Run("NoRepoHost", func(t *testing.T) { + // Nothing happens when there is no repository host. + constant := intent.DeepCopy() + assert.NilError(t, Secret(ctx, cluster, nil, root, existing, intent)) + assert.DeepEqual(t, constant, intent) + }) + + host := new(appsv1.StatefulSet) + host.Namespace = "ns1" + host.Name = "some-repo" + host.Spec.ServiceName = "some-domain" + + // The existing Secret does not change. + constant := existing.DeepCopy() + assert.NilError(t, Secret(ctx, cluster, host, root, existing, intent)) + assert.DeepEqual(t, constant, existing) + + // There is a leaf certificate and private key for the repository host. + leaf := &pki.LeafCertificate{} + assert.NilError(t, leaf.Certificate.UnmarshalText(intent.Data["pgbackrest-repo-host.crt"])) + assert.NilError(t, leaf.PrivateKey.UnmarshalText(intent.Data["pgbackrest-repo-host.key"])) + + assert.DeepEqual(t, leaf.Certificate.DNSNames(), []string{ + leaf.Certificate.CommonName(), + "some-repo-0.some-domain.ns1.svc", + "some-repo-0.some-domain.ns1", + "some-repo-0.some-domain", + }) + + // Assuming the intent is written, no change when called again. 
+ existing.Data = intent.Data + before := intent.DeepCopy() + assert.NilError(t, Secret(ctx, cluster, host, root, existing, intent)) + assert.DeepEqual(t, before, intent) + + t.Run("Rotation", func(t *testing.T) { + // The leaf certificate is regenerated when the root authority changes. + root2, err := pki.NewRootCertificateAuthority() + assert.NilError(t, err) + assert.NilError(t, Secret(ctx, cluster, host, root2, existing, intent)) + + leaf2 := &pki.LeafCertificate{} + assert.NilError(t, leaf2.Certificate.UnmarshalText(intent.Data["pgbackrest-repo-host.crt"])) + assert.NilError(t, leaf2.PrivateKey.UnmarshalText(intent.Data["pgbackrest-repo-host.key"])) + + assert.Assert(t, !reflect.DeepEqual(leaf.Certificate, leaf2.Certificate)) + assert.Assert(t, !reflect.DeepEqual(leaf.PrivateKey, leaf2.PrivateKey)) + }) +} diff --git a/internal/pgbackrest/restore.md b/internal/pgbackrest/restore.md new file mode 100644 index 0000000000..8828576921 --- /dev/null +++ b/internal/pgbackrest/restore.md @@ -0,0 +1,111 @@ + + +## Target Action + +The `--target-action` option of `pgbackrest restore` almost translates to the +PostgreSQL `recovery_target_action` parameter but not exactly. The behavior of +that parameter also depends on the PostgreSQL version and on other parameters. + +For PostgreSQL 9.5 through 15, + + - The PostgreSQL documentation states that for `recovery_target_action` + "the default is `pause`," but that is only the case when `hot_standby=on`. + + - The PostgreSQL documentation states that when `hot_standby=off` "a setting + of `pause` will act the same as `shutdown`," but that cannot be configured + through pgBackRest. + +The default value of `hot_standby` is `off` prior to PostgreSQL 10 and `on` since. + +### PostgreSQL 15, 14, 13, 12 + +[12]: https://www.postgresql.org/docs/12/runtime-config-wal.html +[commit]: https://git.postgresql.org/gitweb/?p=postgresql.git;h=2dedf4d9a899b36d1a8ed29be5efbd1b31a8fe85 + +| --target-action | recovery_target_action | hot_standby=off | hot_standby=on (default) | +|------------------|------------------------|-----------------|--------------------------| +| _not configured_ | _not configured_ | shutdown | pause | +| `pause` | _not configured_ | shutdown | pause | +| _not possible_ | `pause` | shutdown | pause | +| `promote` | `promote` | promote | promote | +| `shutdown` | `shutdown` | shutdown | shutdown | + + +### PostgreSQL 11, 10 + +[11]: https://www.postgresql.org/docs/11/recovery-target-settings.html +[10]: https://www.postgresql.org/docs/10/runtime-config-replication.html + +| --target-action | recovery_target_action | hot_standby=off | hot_standby=on (default) | +|------------------|------------------------|-----------------|--------------------------| +| _not configured_ | _not configured_ | promote | pause | +| `pause` | _not configured_ | promote | pause | +| _not possible_ | `pause` | shutdown | pause | +| `promote` | `promote` | promote | promote | +| `shutdown` | `shutdown` | shutdown | shutdown | + + +### PostgreSQL 9.6, 9.5 + +[9.6]: https://www.postgresql.org/docs/9.6/recovery-target-settings.html + +| --target-action | recovery_target_action | hot_standby=off (default) | hot_standby=on | +|------------------|------------------------|---------------------------|----------------| +| _not configured_ | _not configured_ | promote | pause | +| `pause` | _not configured_ | promote | pause | +| _not possible_ | `pause` | shutdown | pause | +| `promote` | `promote` | promote | promote | +| `shutdown` | `shutdown` | shutdown | shutdown | + + 
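+The tables above (PostgreSQL 9.5 through 15) can be summarized in code. The
+following is only an illustrative sketch, not code used by the operator; the
+function name `effectiveAction` is invented here, and it considers only the
+values reachable through pgBackRest's `--target-action` option:
+
+```
+package main
+
+import "fmt"
+
+// effectiveAction reports what PostgreSQL does when the recovery target is
+// reached. targetAction is the --target-action value ("" when not configured).
+func effectiveAction(pgMajor int, hotStandby bool, targetAction string) string {
+	switch targetAction {
+	case "promote", "shutdown":
+		return targetAction // passed through as recovery_target_action
+	}
+	// Not configured or "pause": recovery_target_action is left unset.
+	if hotStandby {
+		return "pause"
+	}
+	if pgMajor >= 12 {
+		return "shutdown"
+	}
+	return "promote"
+}
+
+func main() {
+	fmt.Println(effectiveAction(15, false, "pause")) // shutdown
+	fmt.Println(effectiveAction(11, false, "pause")) // promote
+	fmt.Println(effectiveAction(10, true, ""))       // pause
+}
+```
+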
+### PostgreSQL 9.4, 9.3, 9.2, 9.1 + +[9.4]: https://www.postgresql.org/docs/9.4/recovery-target-settings.html +[9.4]: https://www.postgresql.org/docs/9.4/runtime-config-replication.html + +| --target-action | pause_at_recovery_target | hot_standby=off (default) | hot_standby=on | +|------------------|--------------------------|---------------------------|----------------| +| _not configured_ | _not configured_ | promote | pause | +| `pause` | _not configured_ | promote | pause | +| _not possible_ | `true` | promote | pause | +| `promote` | `false` | promote | promote | + + + diff --git a/internal/pgbackrest/tls-server.md b/internal/pgbackrest/tls-server.md new file mode 100644 index 0000000000..b572cc1ea4 --- /dev/null +++ b/internal/pgbackrest/tls-server.md @@ -0,0 +1,97 @@ + + +# pgBackRest TLS Server + +A handful of pgBackRest features require connectivity between `pgbackrest` processes +on different pods: + +- [dedicated repository host](https://pgbackrest.org/user-guide.html#repo-host) +- [backup from standby](https://pgbackrest.org/user-guide.html#standby-backup) + +When a PostgresCluster is configured to store backups on a PVC, the dedicated +repository host is used to make that PVC available to all PostgreSQL instances +in the cluster. Regardless of whether the repo host has a defined PVC, it +functions as the server for the pgBackRest clients that run on the Instances. + +The repository host runs a `pgbackrest` server that is secured through TLS and +[certificates][]. When performing backups, it connects to `pgbackrest` servers +running on PostgreSQL instances (as sidecars). Restore jobs connect to the +repository host to fetch files. PostgreSQL calls `pgbackrest` which connects +to the repository host to [send and receive WAL files][archiving]. + +[archiving]: https://www.postgresql.org/docs/current/continuous-archiving.html +[certificates]: certificates.md + + +The `pgbackrest` command acts as a TLS client and connects to a pgBackRest TLS +server when `pg-host-type=tls` and/or `repo-host-type=tls`. The default for these is `ssh`: + +- https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/config/parse.auto.c#L3771 +- https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/config/parse.auto.c#L6137 + + +The pgBackRest TLS server is configured through the `tls-server-*` [options](config.md). +In pgBackRest 2.38, changing any of these options or changing certificate contents +requires a reload of the server, as shown in the "Setup TLS Server" section of the +documentation, with the command configured as + +``` +ExecReload=kill -HUP $MAINPID +``` + +- https://pgbackrest.org/user-guide-rhel.html#repo-host/setup-tls + +- `tls-server-address`, `tls-server-port`
+ The network address and port on which to listen. pgBackRest 2.38 listens on + the *first* address returned by `getaddrinfo()`. There is no way to listen on + all interfaces. + + - https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/common/io/socket/server.c#L172 + - https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/common/io/socket/common.c#L87 + +- `tls-server-cert-file`, `tls-server-key-file`
+ The [certificate chain][certificates] and private key pair used to encrypt connections. + +- `tls-server-ca-file`
+ The certificate used to verify client [certificates][]. + [Required](https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/config/parse.auto.c#L8767). + +- `tls-server-auth`
+  A map/hash/dictionary of certificate common names and the stanzas they are authorized
+  to interact with.
+  [Required](https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/config/parse.auto.c#L8751).
+
+
+In pgBackRest 2.38, as mentioned above, sending SIGHUP causes a configuration reload.
+
+- https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/command/server/server.c#L178
+
+```
+P00 DETAIL: configuration reload begin
+P00 INFO: server command begin 2.38...
+P00 DETAIL: configuration reload end
+```
+
+Sending SIGINT to the TLS server causes it to exit with code 63, TermError.
+
+- https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/common/exit.c#L73-L75
+- https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/common/exit.c#L62
+- https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/common/error.auto.c#L48
+
+
+```
+P00 INFO: server command end: terminated on signal [SIGINT]
+```
+
+Sending SIGTERM exits the signal loop and leads to command termination.
+
+- https://github.com/pgbackrest/pgbackrest/blob/release/2.38/src/command/server/server.c#L194
+
+
+```
+P00 INFO: server command end: completed successfully
+```
diff --git a/internal/pgbackrest/util.go b/internal/pgbackrest/util.go
new file mode 100644
index 0000000000..4fc2266c56
--- /dev/null
+++ b/internal/pgbackrest/util.go
@@ -0,0 +1,102 @@
+// Copyright 2021 - 2024 Crunchy Data Solutions, Inc.
+//
+// SPDX-License-Identifier: Apache-2.0
+
+package pgbackrest
+
+import (
+	"fmt"
+	"hash/fnv"
+	"io"
+
+	"github.com/pkg/errors"
+	"k8s.io/apimachinery/pkg/util/rand"
+
+	"github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1"
+)
+
+// maxPGBackrestRepos is the maximum number of repositories that can be configured according to the
+// multi-repository solution implemented within pgBackRest
+const maxPGBackrestRepos = 4
+
+// RepoHostVolumeDefined determines whether or not at least one pgBackRest dedicated
+// repository host volume has been defined in the PostgresCluster manifest.
+func RepoHostVolumeDefined(postgresCluster *v1beta1.PostgresCluster) bool {
+	for _, repo := range postgresCluster.Spec.Backups.PGBackRest.Repos {
+		if repo.Volume != nil {
+			return true
+		}
+	}
+	return false
+}
+
+// CalculateConfigHashes calculates hashes for any external pgBackRest repository configuration
+// present in the PostgresCluster spec (e.g. configuration for Azure, GCS and/or S3 repositories).
+// Additionally it returns a hash of the hashes for each external repository.
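+// The per-repository hashes are combined in repository name order (repo1 through
+// repo4), so the aggregate hash does not depend on the order of the repos in the
+// spec.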
+func CalculateConfigHashes( + postgresCluster *v1beta1.PostgresCluster) (map[string]string, string, error) { + + hashFunc := func(repoOpts []string) (string, error) { + return safeHash32(func(w io.Writer) (err error) { + for _, o := range repoOpts { + _, err = w.Write([]byte(o)) + } + return + }) + } + + var err error + repoConfigHashes := make(map[string]string) + for _, repo := range postgresCluster.Spec.Backups.PGBackRest.Repos { + // hashes are only calculated for external repo configs + if repo.Volume != nil { + continue + } + + var hash, name string + switch { + case repo.Azure != nil: + hash, err = hashFunc([]string{repo.Azure.Container}) + name = repo.Name + case repo.GCS != nil: + hash, err = hashFunc([]string{repo.GCS.Bucket}) + name = repo.Name + case repo.S3 != nil: + hash, err = hashFunc([]string{repo.S3.Bucket, repo.S3.Endpoint, repo.S3.Region}) + name = repo.Name + default: + return map[string]string{}, "", errors.New("found unexpected repo type") + } + if err != nil { + return map[string]string{}, "", errors.WithStack(err) + } + repoConfigHashes[name] = hash + } + + configHashes := []string{} + // ensure we always process in the same order + for i := 1; i <= maxPGBackrestRepos; i++ { + configName := fmt.Sprintf("repo%d", i) + if _, ok := repoConfigHashes[configName]; ok { + configHashes = append(configHashes, repoConfigHashes[configName]) + } + } + configHash, err := hashFunc(configHashes) + if err != nil { + return map[string]string{}, "", errors.WithStack(err) + } + + return repoConfigHashes, configHash, nil +} + +// safeHash32 runs content and returns a short alphanumeric string that +// represents everything written to w. The string is unlikely to have bad words +// and is safe to store in the Kubernetes API. This is the same algorithm used +// by ControllerRevision's "controller.kubernetes.io/hash". +func safeHash32(content func(w io.Writer) error) (string, error) { + hash := fnv.New32() + if err := content(hash); err != nil { + return "", err + } + return rand.SafeEncodeString(fmt.Sprint(hash.Sum32())), nil +} diff --git a/internal/pgbackrest/util_test.go b/internal/pgbackrest/util_test.go new file mode 100644 index 0000000000..eb0f4dec29 --- /dev/null +++ b/internal/pgbackrest/util_test.go @@ -0,0 +1,123 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgbackrest + +import ( + "io" + "math/rand" + "strconv" + "testing" + + "gotest.tools/v3/assert" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestCalculateConfigHashes(t *testing.T) { + + hashFunc := func(opts []string) (string, error) { + return safeHash32(func(w io.Writer) (err error) { + for _, o := range opts { + _, err = w.Write([]byte(o)) + } + return + }) + } + + azureOpts, gcsOpts := []string{"container"}, []string{"container"} + s3Opts := []string{"bucket", "endpoint", "region"} + + preCalculatedRepo1AzureHash, err := hashFunc(azureOpts) + assert.NilError(t, err) + preCalculatedRepo2GCSHash, err := hashFunc(gcsOpts) + assert.NilError(t, err) + preCalculatedRepo3S3Hash, err := hashFunc(s3Opts) + assert.NilError(t, err) + preCalculatedConfigHash, err := hashFunc([]string{preCalculatedRepo1AzureHash, + preCalculatedRepo2GCSHash, preCalculatedRepo3S3Hash}) + assert.NilError(t, err) + + // create a PostgresCluster to test with + postgresCluster := &v1beta1.PostgresCluster{ + ObjectMeta: metav1.ObjectMeta{ + Name: "config-hashes", + Namespace: "calculate-config-hashes", + }, + Spec: v1beta1.PostgresClusterSpec{ + Backups: v1beta1.Backups{ + PGBackRest: v1beta1.PGBackRestArchive{ + Repos: []v1beta1.PGBackRestRepo{{ + Name: "repo1", + Azure: &v1beta1.RepoAzure{ + Container: azureOpts[0], + }, + }, { + Name: "repo2", + GCS: &v1beta1.RepoGCS{ + Bucket: gcsOpts[0], + }, + }, { + Name: "repo3", + S3: &v1beta1.RepoS3{ + Bucket: s3Opts[0], + Endpoint: s3Opts[1], + Region: s3Opts[2], + }, + }}, + }, + }, + }, + } + + configHashMap, configHash, err := CalculateConfigHashes(postgresCluster) + assert.NilError(t, err) + assert.Equal(t, preCalculatedConfigHash, configHash) + assert.Equal(t, preCalculatedRepo1AzureHash, configHashMap["repo1"]) + assert.Equal(t, preCalculatedRepo2GCSHash, configHashMap["repo2"]) + assert.Equal(t, preCalculatedRepo3S3Hash, configHashMap["repo3"]) + + // call CalculateConfigHashes multiple times to ensure consistent results + for i := 0; i < 10; i++ { + hashMap, hash, err := CalculateConfigHashes(postgresCluster) + assert.NilError(t, err) + assert.Equal(t, configHash, hash) + assert.Equal(t, configHashMap["repo1"], hashMap["repo1"]) + assert.Equal(t, configHashMap["repo2"], hashMap["repo2"]) + assert.Equal(t, configHashMap["repo3"], hashMap["repo3"]) + } + + // shuffle the repo slice in order to ensure the same result is returned regardless of the + // order of the repos slice + shuffleCluster := postgresCluster.DeepCopy() + for i := 0; i < 10; i++ { + repos := shuffleCluster.Spec.Backups.PGBackRest.Repos + rand.Shuffle(len(repos), func(i, j int) { + repos[i], repos[j] = repos[j], repos[i] + }) + _, hash, err := CalculateConfigHashes(shuffleCluster) + assert.NilError(t, err) + assert.Equal(t, configHash, hash) + } + + // now modify some values in each repo and confirm we see a different result + for i := 0; i < 3; i++ { + modCluster := postgresCluster.DeepCopy() + switch i { + case 0: + modCluster.Spec.Backups.PGBackRest.Repos[i].Azure.Container = "modified-container" + case 1: + modCluster.Spec.Backups.PGBackRest.Repos[i].GCS.Bucket = "modified-bucket" + case 2: + modCluster.Spec.Backups.PGBackRest.Repos[i].S3.Bucket = "modified-bucket" + } + hashMap, hash, err := CalculateConfigHashes(modCluster) + assert.NilError(t, err) + assert.Assert(t, configHash != hash) + assert.NilError(t, err) + repo := "repo" + 
strconv.Itoa(i+1) + assert.Assert(t, hashMap[repo] != configHashMap[repo]) + } +} diff --git a/internal/pgbouncer/certificates.go b/internal/pgbouncer/certificates.go new file mode 100644 index 0000000000..31f91c503a --- /dev/null +++ b/internal/pgbouncer/certificates.go @@ -0,0 +1,129 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbouncer + +import ( + corev1 "k8s.io/api/core/v1" +) + +const ( + tlsAuthoritySecretKey = "ca.crt" + tlsCertificateSecretKey = corev1.TLSCertKey + tlsPrivateKeySecretKey = corev1.TLSPrivateKeyKey + + certBackendAuthorityAbsolutePath = configDirectory + "/" + certBackendAuthorityProjectionPath + certBackendAuthorityProjectionPath = "~postgres-operator/backend-ca.crt" + + certFrontendAuthorityAbsolutePath = configDirectory + "/" + certFrontendAuthorityProjectionPath + certFrontendPrivateKeyAbsolutePath = configDirectory + "/" + certFrontendPrivateKeyProjectionPath + certFrontendAbsolutePath = configDirectory + "/" + certFrontendProjectionPath + + certFrontendAuthorityProjectionPath = "~postgres-operator/frontend-ca.crt" + certFrontendPrivateKeyProjectionPath = "~postgres-operator/frontend-tls.key" + certFrontendProjectionPath = "~postgres-operator/frontend-tls.crt" + + certFrontendAuthoritySecretKey = "pgbouncer-frontend.ca-roots" + certFrontendPrivateKeySecretKey = "pgbouncer-frontend.key" + certFrontendSecretKey = "pgbouncer-frontend.crt" +) + +// backendAuthority creates a volume projection of the PostgreSQL server +// certificate authority. +func backendAuthority(postgres *corev1.SecretProjection) corev1.VolumeProjection { + var items []corev1.KeyToPath + result := postgres.DeepCopy() + + for i := range result.Items { + // The PostgreSQL server projection expects Path to match typical Keys. + if result.Items[i].Path == tlsAuthoritySecretKey { + result.Items[i].Path = certBackendAuthorityProjectionPath + items = append(items, result.Items[i]) + } + } + + if len(items) == 0 { + items = []corev1.KeyToPath{{ + Key: tlsAuthoritySecretKey, + Path: certBackendAuthorityProjectionPath, + }} + } + + result.Items = items + return corev1.VolumeProjection{Secret: result} +} + +// frontendCertificate creates a volume projection of the PgBouncer certificate. +func frontendCertificate( + custom *corev1.SecretProjection, secret *corev1.Secret, +) corev1.VolumeProjection { + if custom == nil { + return corev1.VolumeProjection{Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: secret.Name, + }, + Items: []corev1.KeyToPath{ + { + Key: certFrontendAuthoritySecretKey, + Path: certFrontendAuthorityProjectionPath, + }, + { + Key: certFrontendPrivateKeySecretKey, + Path: certFrontendPrivateKeyProjectionPath, + }, + { + Key: certFrontendSecretKey, + Path: certFrontendProjectionPath, + }, + }, + }} + } + + // The custom projection may have more or less than the three items we need + // to mount. Search for items that have the Path we expect and mount them at + // the path we need. When no items are specified, the Key serves as the Path. + + // TODO(cbandy): A more structured field or validating webhook would ensure + // that the necessary values are specified. + + var items []corev1.KeyToPath + result := custom.DeepCopy() + + for i := range result.Items { + // The custom projection expects Path to match typical Keys. 
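+		// Items with any other Path are dropped; the fallback below applies
+		// only when nothing matches.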
+ switch result.Items[i].Path { + case tlsAuthoritySecretKey: + result.Items[i].Path = certFrontendAuthorityProjectionPath + items = append(items, result.Items[i]) + + case tlsCertificateSecretKey: + result.Items[i].Path = certFrontendProjectionPath + items = append(items, result.Items[i]) + + case tlsPrivateKeySecretKey: + result.Items[i].Path = certFrontendPrivateKeyProjectionPath + items = append(items, result.Items[i]) + } + } + + if len(items) == 0 { + items = []corev1.KeyToPath{ + { + Key: tlsAuthoritySecretKey, + Path: certFrontendAuthorityProjectionPath, + }, + { + Key: tlsPrivateKeySecretKey, + Path: certFrontendPrivateKeyProjectionPath, + }, + { + Key: tlsCertificateSecretKey, + Path: certFrontendProjectionPath, + }, + } + } + + result.Items = items + return corev1.VolumeProjection{Secret: result} +} diff --git a/internal/pgbouncer/certificates_test.go b/internal/pgbouncer/certificates_test.go new file mode 100644 index 0000000000..5955c3de9c --- /dev/null +++ b/internal/pgbouncer/certificates_test.go @@ -0,0 +1,97 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbouncer + +import ( + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" +) + +func TestBackendAuthority(t *testing.T) { + // No items; assume Key matches Path. + projection := &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{Name: "some-name"}, + } + assert.Assert(t, cmp.MarshalMatches(backendAuthority(projection), ` +secret: + items: + - key: ca.crt + path: ~postgres-operator/backend-ca.crt + name: some-name + `)) + + // Some items; use only the CA Path. + projection.Items = []corev1.KeyToPath{ + {Key: "some-crt-key", Path: "tls.crt"}, + {Key: "some-ca-key", Path: "ca.crt"}, + } + assert.Assert(t, cmp.MarshalMatches(backendAuthority(projection), ` +secret: + items: + - key: some-ca-key + path: ~postgres-operator/backend-ca.crt + name: some-name + `)) +} + +func TestFrontendCertificate(t *testing.T) { + secret := new(corev1.Secret) + secret.Name = "op-secret" + + t.Run("Generated", func(t *testing.T) { + assert.Assert(t, cmp.MarshalMatches(frontendCertificate(nil, secret), ` +secret: + items: + - key: pgbouncer-frontend.ca-roots + path: ~postgres-operator/frontend-ca.crt + - key: pgbouncer-frontend.key + path: ~postgres-operator/frontend-tls.key + - key: pgbouncer-frontend.crt + path: ~postgres-operator/frontend-tls.crt + name: op-secret + `)) + }) + + t.Run("Custom", func(t *testing.T) { + custom := new(corev1.SecretProjection) + custom.Name = "some-other" + + // No items; assume Key matches Path. + assert.Assert(t, cmp.MarshalMatches(frontendCertificate(custom, secret), ` +secret: + items: + - key: ca.crt + path: ~postgres-operator/frontend-ca.crt + - key: tls.key + path: ~postgres-operator/frontend-tls.key + - key: tls.crt + path: ~postgres-operator/frontend-tls.crt + name: some-other + `)) + + // Some items; use only the TLS Paths. 
+ custom.Items = []corev1.KeyToPath{ + {Key: "any", Path: "thing"}, + {Key: "some-ca-key", Path: "ca.crt"}, + {Key: "some-cert-key", Path: "tls.crt"}, + {Key: "some-key-key", Path: "tls.key"}, + } + assert.Assert(t, cmp.MarshalMatches(frontendCertificate(custom, secret), ` +secret: + items: + - key: some-ca-key + path: ~postgres-operator/frontend-ca.crt + - key: some-cert-key + path: ~postgres-operator/frontend-tls.crt + - key: some-key-key + path: ~postgres-operator/frontend-tls.key + name: some-other + `)) + }) +} diff --git a/internal/pgbouncer/config.go b/internal/pgbouncer/config.go new file mode 100644 index 0000000000..a203144817 --- /dev/null +++ b/internal/pgbouncer/config.go @@ -0,0 +1,257 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbouncer + +import ( + "fmt" + "sort" + "strings" + + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + configDirectory = "/etc/pgbouncer" + + authFileAbsolutePath = configDirectory + "/" + authFileProjectionPath + emptyFileAbsolutePath = configDirectory + "/" + emptyFileProjectionPath + iniFileAbsolutePath = configDirectory + "/" + iniFileProjectionPath + + authFileProjectionPath = "~postgres-operator/users.txt" + emptyFileProjectionPath = "pgbouncer.ini" + iniFileProjectionPath = "~postgres-operator.ini" + + authFileSecretKey = "pgbouncer-users.txt" // #nosec G101 this is a name, not a credential + passwordSecretKey = "pgbouncer-password" // #nosec G101 this is a name, not a credential + verifierSecretKey = "pgbouncer-verifier" // #nosec G101 this is a name, not a credential + emptyConfigMapKey = "pgbouncer-empty" + iniFileConfigMapKey = "pgbouncer.ini" +) + +const ( + iniGeneratedWarning = "" + + "# Generated by postgres-operator. DO NOT EDIT.\n" + + "# Your changes will not be saved.\n" +) + +type iniValueSet map[string]string + +func (vs iniValueSet) String() string { + keys := make([]string, 0, len(vs)) + for k := range vs { + keys = append(keys, k) + } + + sort.Strings(keys) + + var b strings.Builder + for _, k := range keys { + if len(vs[k]) <= 0 { + _, _ = fmt.Fprintf(&b, "%s =\n", k) + } else { + _, _ = fmt.Fprintf(&b, "%s = %s\n", k, vs[k]) + } + } + return b.String() +} + +// authFileContents returns a PgBouncer user database. +func authFileContents(password string) []byte { + // > There should be at least 2 fields, surrounded by double quotes. + // > Double quotes in a field value can be escaped by writing two double quotes. + // - https://www.pgbouncer.org/config.html#authentication-file-format + quote := func(s string) string { + return `"` + strings.ReplaceAll(s, `"`, `""`) + `"` + } + + user1 := quote(postgresqlUser) + " " + quote(password) + "\n" + + return []byte(user1) +} + +func clusterINI(cluster *v1beta1.PostgresCluster) string { + var ( + pgBouncerPort = *cluster.Spec.Proxy.PGBouncer.Port + postgresPort = *cluster.Spec.Port + ) + + global := iniValueSet{ + // Prior to PostgreSQL v12, the default setting for "extra_float_digits" + // does not return precise float values. Applications that want + // consistent results from different PostgreSQL versions may connect + // with this startup parameter. The JDBC driver uses it regardless. + // Trust that applications that know or care about this setting are + // using it consistently within each connection pool. 
+ // - https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-EXTRA-FLOAT-DIGITS + // - https://github.com/pgjdbc/pgjdbc/blob/REL42.2.19/pgjdbc/src/main/java/org/postgresql/core/v3/ConnectionFactoryImpl.java#L334 + "ignore_startup_parameters": "extra_float_digits", + + // Authenticate frontend connections using passwords stored in PostgreSQL. + // PgBouncer will connect to the backend database that is requested by + // the frontend as the "auth_user" and execute "auth_query". When + // "auth_user" requires a password, PgBouncer reads it from "auth_file". + "auth_file": authFileAbsolutePath, + "auth_query": "SELECT username, password from pgbouncer.get_auth($1)", + "auth_user": postgresqlUser, + + // TODO(cbandy): Use an HBA file to control authentication of PgBouncer + // accounts; e.g. "admin_users" below. + // - https://www.pgbouncer.org/config.html#hba-file-format + //"auth_hba_file": "", + //"auth_type": "hba", + //"admin_users": "pgbouncer", + + // Require TLS encryption on client connections. + "client_tls_sslmode": "require", + "client_tls_cert_file": certFrontendAbsolutePath, + "client_tls_key_file": certFrontendPrivateKeyAbsolutePath, + "client_tls_ca_file": certFrontendAuthorityAbsolutePath, + + // Listen on the PgBouncer port on all addresses. + "listen_addr": "*", + "listen_port": fmt.Sprint(pgBouncerPort), + + // Require TLS encryption on connections to PostgreSQL. + "server_tls_sslmode": "verify-full", + "server_tls_ca_file": certBackendAuthorityAbsolutePath, + + // Disable Unix sockets to keep the filesystem read-only. + "unix_socket_dir": "", + } + + // Override the above with any specified settings. + for k, v := range cluster.Spec.Proxy.PGBouncer.Config.Global { + global[k] = v + } + + // Prevent the user from bypassing the main configuration file. + global["conffile"] = iniFileAbsolutePath + + // Use a wildcard to automatically create connection pools based on database + // names. These pools connect to cluster's primary service. The service name + // is an RFC 1123 DNS label so it does not need to be quoted nor escaped. + // - https://www.pgbouncer.org/config.html#section-databases + // + // NOTE(cbandy): PgBouncer only accepts connections to items in this section + // and the database "pgbouncer", which is the admin console. For connections + // to the wildcard, PgBouncer first checks for the database in PostgreSQL. + // When that database does not exist, the client will experience timeouts + // or errors that sound like PgBouncer misconfiguration. + // - https://github.com/pgbouncer/pgbouncer/issues/352 + databases := iniValueSet{ + "*": fmt.Sprintf("host=%s port=%d", + naming.ClusterPrimaryService(cluster).Name, postgresPort), + } + + // Replace the above with any specified databases. + if len(cluster.Spec.Proxy.PGBouncer.Config.Databases) > 0 { + databases = iniValueSet(cluster.Spec.Proxy.PGBouncer.Config.Databases) + } + + users := iniValueSet(cluster.Spec.Proxy.PGBouncer.Config.Users) + + // Include any custom configuration file, then apply global settings, then + // pool definitions. + result := iniGeneratedWarning + + "\n[pgbouncer]" + + "\n%include " + emptyFileAbsolutePath + + "\n\n[pgbouncer]\n" + global.String() + + "\n[databases]\n" + databases.String() + + if len(users) > 0 { + result += "\n[users]\n" + users.String() + } + + return result +} + +// podConfigFiles returns projections of PgBouncer's configuration files to +// include in the configuration volume. 
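+// Custom projections from the spec are placed between the operator's empty
+// pgbouncer.ini and the operator's own configuration files, which are projected
+// last so that they take precedence.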
+func podConfigFiles(
+	config v1beta1.PGBouncerConfiguration,
+	configmap *corev1.ConfigMap, secret *corev1.Secret,
+) []corev1.VolumeProjection {
+	// Start with an empty file at /etc/pgbouncer/pgbouncer.ini. This file can
+	// be overridden by the user, but it must exist because our configuration
+	// file refers to it.
+	projections := []corev1.VolumeProjection{
+		{
+			ConfigMap: &corev1.ConfigMapProjection{
+				LocalObjectReference: corev1.LocalObjectReference{
+					Name: configmap.Name,
+				},
+				Items: []corev1.KeyToPath{{
+					Key:  emptyConfigMapKey,
+					Path: emptyFileProjectionPath,
+				}},
+			},
+		},
+	}
+
+	// Add any specified projections. These may override the files above.
+	// - https://docs.k8s.io/concepts/storage/volumes/#projected
+	projections = append(projections, config.Files...)
+
+	// Add our non-empty configurations last so that they take precedence.
+	projections = append(projections, []corev1.VolumeProjection{
+		{
+			ConfigMap: &corev1.ConfigMapProjection{
+				LocalObjectReference: corev1.LocalObjectReference{
+					Name: configmap.Name,
+				},
+				Items: []corev1.KeyToPath{{
+					Key:  iniFileConfigMapKey,
+					Path: iniFileProjectionPath,
+				}},
+			},
+		},
+		{
+			Secret: &corev1.SecretProjection{
+				LocalObjectReference: corev1.LocalObjectReference{
+					Name: secret.Name,
+				},
+				Items: []corev1.KeyToPath{{
+					Key:  authFileSecretKey,
+					Path: authFileProjectionPath,
+				}},
+			},
+		},
+	}...)
+
+	return projections
+}
+
+// reloadCommand returns an entrypoint that convinces PgBouncer to reload
+// configuration files. The process will appear as name in `ps` and `top`.
+func reloadCommand(name string) []string {
+	// Use a Bash loop to periodically check the mtime of the mounted
+	// configuration volume. When it changes, signal PgBouncer and print the
+	// observed timestamp.
+	//
+	// Coreutils `sleep` uses a lot of memory, so the following opens a file
+	// descriptor and uses the timeout of the builtin `read` to wait. That same
+	// descriptor gets closed and reopened to use the builtin `[ -nt` to check
+	// mtimes.
+	// - https://unix.stackexchange.com/a/407383
+	const script = `
+exec {fd}<> <(:||:)
+while read -r -t 5 -u "${fd}" ||:; do
+  if [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] && pkill -HUP --exact pgbouncer
+  then
+    exec {fd}>&- && exec {fd}<> <(:||:)
+    stat --format='Loaded configuration dated %y' "${directory}"
+  fi
+done
+`
+
+	// Elide the above script from `ps` and `top` by wrapping it in a function
+	// and calling that.
+	wrapper := `monitor() {` + script + `}; export directory="$1"; export -f monitor; exec -a "$0" bash -ceu monitor`
+
+	return []string{"bash", "-ceu", "--", wrapper, name, configDirectory}
+}
diff --git a/internal/pgbouncer/config.md b/internal/pgbouncer/config.md
new file mode 100644
index 0000000000..abfec12518
--- /dev/null
+++ b/internal/pgbouncer/config.md
@@ -0,0 +1,118 @@
+
+
+PgBouncer is configured through INI files. It will reload these files when
+receiving a `HUP` signal or [`RELOAD` command][RELOAD] in the admin console.
+
+There is a [`SET` command][SET] available in the admin console, but it is not
+clear when those changes take effect.
+
+- https://www.pgbouncer.org/config.html
+
+[RELOAD]: https://www.pgbouncer.org/usage.html#process-controlling-commands
+[SET]: https://www.pgbouncer.org/usage.html#other-commands
+
+The [`%include` directive](https://www.pgbouncer.org/config.html#include-directive)
+allows one file to refer to other existing files.
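+
+In this repository, the generated `~postgres-operator.ini` is built by
+`clusterINI` in `config.go` using the `iniValueSet` helper. The following is a
+rough, self-contained sketch of how that helper formats a section (keys sorted,
+empty values rendered as a bare `key =`); it mirrors the helper for illustration
+and is not the code used at runtime:
+
+```
+package main
+
+import (
+	"fmt"
+	"sort"
+	"strings"
+)
+
+// iniValueSet mirrors the helper in config.go: keys print in sorted order and
+// empty values render as "key =".
+type iniValueSet map[string]string
+
+func (vs iniValueSet) String() string {
+	keys := make([]string, 0, len(vs))
+	for k := range vs {
+		keys = append(keys, k)
+	}
+	sort.Strings(keys)
+
+	var b strings.Builder
+	for _, k := range keys {
+		if vs[k] == "" {
+			fmt.Fprintf(&b, "%s =\n", k)
+		} else {
+			fmt.Fprintf(&b, "%s = %s\n", k, vs[k])
+		}
+	}
+	return b.String()
+}
+
+func main() {
+	fmt.Print(iniValueSet{"listen_port": "6432", "unix_socket_dir": ""}.String())
+	// Output:
+	// listen_port = 6432
+	// unix_socket_dir =
+}
+```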
+ +There are three sections in the files: + + - `[pgbouncer]` is for settings that apply to the PgBouncer process. + - `[databases]` is a list of databases to which clients can connect. + - `[users]` changes a few database settings based on the client user. + +``` +psql (12.6, server 1.15.0/bouncer) + +pgbouncer=# SHOW CONFIG; + key | value | default | changeable +---------------------------+--------------------------------------------------------+--------------------------------------------------------+------------ + admin_users | | | yes + application_name_add_host | 0 | 0 | yes + auth_file | | | yes + auth_hba_file | | | yes + auth_query | SELECT usename, passwd FROM pg_shadow WHERE usename=$1 | SELECT usename, passwd FROM pg_shadow WHERE usename=$1 | yes + auth_type | md5 | md5 | yes + auth_user | | | yes + autodb_idle_timeout | 3600 | 3600 | yes + client_idle_timeout | 0 | 0 | yes + client_login_timeout | 60 | 60 | yes + client_tls_ca_file | | | no + client_tls_cert_file | | | no + client_tls_ciphers | fast | fast | no + client_tls_dheparams | auto | auto | no + client_tls_ecdhcurve | auto | auto | no + client_tls_key_file | | | no + client_tls_protocols | secure | secure | no + client_tls_sslmode | disable | disable | no + conffile | /tmp/pgbouncer.ini | | yes + default_pool_size | 20 | 20 | yes + disable_pqexec | 0 | 0 | no + dns_max_ttl | 15 | 15 | yes + dns_nxdomain_ttl | 15 | 15 | yes + dns_zone_check_period | 0 | 0 | yes + idle_transaction_timeout | 0 | 0 | yes + ignore_startup_parameters | | | yes + job_name | pgbouncer | pgbouncer | no + listen_addr | * | | no + listen_backlog | 128 | 128 | no + listen_port | 6432 | 6432 | no + log_connections | 1 | 1 | yes + log_disconnections | 1 | 1 | yes + log_pooler_errors | 1 | 1 | yes + log_stats | 1 | 1 | yes + logfile | | | yes + max_client_conn | 100 | 100 | yes + max_db_connections | 0 | 0 | yes + max_packet_size | 2147483647 | 2147483647 | yes + max_user_connections | 0 | 0 | yes + min_pool_size | 0 | 0 | yes + pidfile | | | no + pkt_buf | 4096 | 4096 | no + pool_mode | session | session | yes + query_timeout | 0 | 0 | yes + query_wait_timeout | 120 | 120 | yes + reserve_pool_size | 0 | 0 | yes + reserve_pool_timeout | 5 | 5 | yes + resolv_conf | | | no + sbuf_loopcnt | 5 | 5 | yes + server_check_delay | 30 | 30 | yes + server_check_query | select 1 | select 1 | yes + server_connect_timeout | 15 | 15 | yes + server_fast_close | 0 | 0 | yes + server_idle_timeout | 600 | 600 | yes + server_lifetime | 3600 | 3600 | yes + server_login_retry | 15 | 15 | yes + server_reset_query | DISCARD ALL | DISCARD ALL | yes + server_reset_query_always | 0 | 0 | yes + server_round_robin | 0 | 0 | yes + server_tls_ca_file | | | no + server_tls_cert_file | | | no + server_tls_ciphers | fast | fast | no + server_tls_key_file | | | no + server_tls_protocols | secure | secure | no + server_tls_sslmode | disable | disable | no + so_reuseport | 0 | 0 | no + stats_period | 60 | 60 | yes + stats_users | | | yes + suspend_timeout | 10 | 10 | yes + syslog | 0 | 0 | yes + syslog_facility | daemon | daemon | yes + syslog_ident | pgbouncer | pgbouncer | yes + tcp_defer_accept | 1 | | yes + tcp_keepalive | 1 | 1 | yes + tcp_keepcnt | 0 | 0 | yes + tcp_keepidle | 0 | 0 | yes + tcp_keepintvl | 0 | 0 | yes + tcp_socket_buffer | 0 | 0 | yes + tcp_user_timeout | 0 | 0 | yes + unix_socket_dir | /tmp | /tmp | no + unix_socket_group | | | no + unix_socket_mode | 511 | 0777 | no + user | | | no + verbose | 0 | | yes +(84 rows) +``` diff --git 
a/internal/pgbouncer/config_test.go b/internal/pgbouncer/config_test.go new file mode 100644 index 0000000000..7a96da571c --- /dev/null +++ b/internal/pgbouncer/config_test.go @@ -0,0 +1,220 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbouncer + +import ( + "os" + "os/exec" + "path/filepath" + "strings" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestPrettyYAML(t *testing.T) { + b, err := yaml.Marshal(iniValueSet{ + "x": "y", + "z": "", + }.String()) + assert.NilError(t, err) + assert.Assert(t, strings.HasPrefix(string(b), `|`), + "expected literal block scalar, got:\n%s", b) +} + +func TestAuthFileContents(t *testing.T) { + t.Parallel() + + password := `very"random` + data := authFileContents(password) + assert.Equal(t, string(data), `"_crunchypgbouncer" "very""random"`+"\n") +} + +func TestClusterINI(t *testing.T) { + t.Parallel() + + cluster := new(v1beta1.PostgresCluster) + cluster.Default() + + cluster.Name = "foo-baz" + *cluster.Spec.Port = 9999 + + cluster.Spec.Proxy = new(v1beta1.PostgresProxySpec) + cluster.Spec.Proxy.PGBouncer = new(v1beta1.PGBouncerPodSpec) + cluster.Spec.Proxy.PGBouncer.Port = new(int32) + *cluster.Spec.Proxy.PGBouncer.Port = 8888 + + t.Run("Default", func(t *testing.T) { + assert.Equal(t, clusterINI(cluster), strings.Trim(` +# Generated by postgres-operator. DO NOT EDIT. +# Your changes will not be saved. + +[pgbouncer] +%include /etc/pgbouncer/pgbouncer.ini + +[pgbouncer] +auth_file = /etc/pgbouncer/~postgres-operator/users.txt +auth_query = SELECT username, password from pgbouncer.get_auth($1) +auth_user = _crunchypgbouncer +client_tls_ca_file = /etc/pgbouncer/~postgres-operator/frontend-ca.crt +client_tls_cert_file = /etc/pgbouncer/~postgres-operator/frontend-tls.crt +client_tls_key_file = /etc/pgbouncer/~postgres-operator/frontend-tls.key +client_tls_sslmode = require +conffile = /etc/pgbouncer/~postgres-operator.ini +ignore_startup_parameters = extra_float_digits +listen_addr = * +listen_port = 8888 +server_tls_ca_file = /etc/pgbouncer/~postgres-operator/backend-ca.crt +server_tls_sslmode = verify-full +unix_socket_dir = + +[databases] +* = host=foo-baz-primary port=9999 + `, "\t\n")+"\n") + }) + + t.Run("CustomSettings", func(t *testing.T) { + cluster.Spec.Proxy.PGBouncer.Config.Global = map[string]string{ + "ignore_startup_parameters": "custom", + "verbose": "whomp", + } + cluster.Spec.Proxy.PGBouncer.Config.Databases = map[string]string{ + "appdb": "conn=str", + } + cluster.Spec.Proxy.PGBouncer.Config.Users = map[string]string{ + "app": "mode=rad", + } + + assert.Equal(t, clusterINI(cluster), strings.Trim(` +# Generated by postgres-operator. DO NOT EDIT. +# Your changes will not be saved. 
+ +[pgbouncer] +%include /etc/pgbouncer/pgbouncer.ini + +[pgbouncer] +auth_file = /etc/pgbouncer/~postgres-operator/users.txt +auth_query = SELECT username, password from pgbouncer.get_auth($1) +auth_user = _crunchypgbouncer +client_tls_ca_file = /etc/pgbouncer/~postgres-operator/frontend-ca.crt +client_tls_cert_file = /etc/pgbouncer/~postgres-operator/frontend-tls.crt +client_tls_key_file = /etc/pgbouncer/~postgres-operator/frontend-tls.key +client_tls_sslmode = require +conffile = /etc/pgbouncer/~postgres-operator.ini +ignore_startup_parameters = custom +listen_addr = * +listen_port = 8888 +server_tls_ca_file = /etc/pgbouncer/~postgres-operator/backend-ca.crt +server_tls_sslmode = verify-full +unix_socket_dir = +verbose = whomp + +[databases] +appdb = conn=str + +[users] +app = mode=rad + `, "\t\n")+"\n") + + // The "conffile" setting cannot be changed. + cluster.Spec.Proxy.PGBouncer.Config.Global["conffile"] = "too-far" + assert.Assert(t, !strings.Contains(clusterINI(cluster), "too-far")) + }) +} + +func TestPodConfigFiles(t *testing.T) { + t.Parallel() + + config := v1beta1.PGBouncerConfiguration{} + configmap := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "some-cm"}} + secret := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{Name: "some-shh"}} + + t.Run("Default", func(t *testing.T) { + projections := podConfigFiles(config, configmap, secret) + assert.Assert(t, cmp.MarshalMatches(projections, ` +- configMap: + items: + - key: pgbouncer-empty + path: pgbouncer.ini + name: some-cm +- configMap: + items: + - key: pgbouncer.ini + path: ~postgres-operator.ini + name: some-cm +- secret: + items: + - key: pgbouncer-users.txt + path: ~postgres-operator/users.txt + name: some-shh + `)) + }) + + t.Run("CustomFiles", func(t *testing.T) { + config.Files = []corev1.VolumeProjection{ + {Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{Name: "my-thing"}, + }}, + {Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{Name: "also"}, + Items: []corev1.KeyToPath{ + {Key: "specific", Path: "files"}, + }, + }}, + } + + projections := podConfigFiles(config, configmap, secret) + assert.Assert(t, cmp.MarshalMatches(projections, ` +- configMap: + items: + - key: pgbouncer-empty + path: pgbouncer.ini + name: some-cm +- secret: + name: my-thing +- secret: + items: + - key: specific + path: files + name: also +- configMap: + items: + - key: pgbouncer.ini + path: ~postgres-operator.ini + name: some-cm +- secret: + items: + - key: pgbouncer-users.txt + path: ~postgres-operator/users.txt + name: some-shh + `)) + }) +} + +func TestReloadCommand(t *testing.T) { + shellcheck := require.ShellCheck(t) + command := reloadCommand("some-name") + + // Expect a bash command with an inline script. + assert.DeepEqual(t, command[:3], []string{"bash", "-ceu", "--"}) + assert.Assert(t, len(command) > 3) + + // Write out that inline script. + dir := t.TempDir() + file := filepath.Join(dir, "script.bash") + assert.NilError(t, os.WriteFile(file, []byte(command[3]), 0o600)) + + // Expect shellcheck to be happy. + cmd := exec.Command(shellcheck, "--enable=all", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) +} diff --git a/internal/pgbouncer/postgres.go b/internal/pgbouncer/postgres.go new file mode 100644 index 0000000000..cbc2e29916 --- /dev/null +++ b/internal/pgbouncer/postgres.go @@ -0,0 +1,223 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgbouncer + +import ( + "context" + "strings" + + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/internal/postgres/password" + "github.com/crunchydata/postgres-operator/internal/util" +) + +const ( + postgresqlSchema = "pgbouncer" + + // NOTE(cbandy): The "pgbouncer" database is special in PgBouncer and seems + // to also be related to the "auth_user". + // - https://github.com/pgbouncer/pgbouncer/issues/568 + // - https://github.com/pgbouncer/pgbouncer/issues/302#issuecomment-815097248 + postgresqlUser = "_crunchypgbouncer" +) + +// sqlAuthenticationQuery returns the SECURITY DEFINER function that allows +// PgBouncer to access non-privileged and non-system user credentials. +func sqlAuthenticationQuery(sqlFunctionName string) string { + // Only a subset of authorization identifiers should be accessible to + // PgBouncer. + // - https://www.postgresql.org/docs/current/catalog-pg-authid.html + sqlAuthorizationConditions := strings.Join([]string{ + // Only those with permission to login. + `pg_authid.rolcanlogin`, + // No superusers. This is important: allowing superusers would make the + // PgBouncer user a de facto superuser. + `NOT pg_authid.rolsuper`, + // No replicators. + `NOT pg_authid.rolreplication`, + // Not the PgBouncer role itself. + `pg_authid.rolname <> ` + util.SQLQuoteLiteral(postgresqlUser), + // Those without a password expiration or an expiration in the future. + `(pg_authid.rolvaliduntil IS NULL OR pg_authid.rolvaliduntil >= CURRENT_TIMESTAMP)`, + }, "\n AND ") + + return strings.TrimSpace(` +CREATE OR REPLACE FUNCTION ` + sqlFunctionName + `(username TEXT) +RETURNS TABLE(username TEXT, password TEXT) AS ` + util.SQLQuoteLiteral(` + SELECT rolname::TEXT, rolpassword::TEXT + FROM pg_catalog.pg_authid + WHERE pg_authid.rolname = $1 + AND `+sqlAuthorizationConditions) + ` +LANGUAGE SQL STABLE SECURITY DEFINER;`) +} + +// DisableInPostgreSQL removes any objects created by EnableInPostgreSQL. +func DisableInPostgreSQL(ctx context.Context, exec postgres.Executor) error { + log := logging.FromContext(ctx) + + // First, remove PgBouncer objects from all databases and database templates. + // The PgBouncer user is removed later. + stdout, stderr, err := exec.ExecInAllDatabases(ctx, + strings.Join([]string{ + // Quiet NOTICE messages from IF EXISTS statements. + // - https://www.postgresql.org/docs/current/runtime-config-client.html + `SET client_min_messages = WARNING;`, + + // Drop the following objects in a transaction. + `BEGIN;`, + + // Remove the "get_auth" function that returns user credentials to PgBouncer. + `DROP FUNCTION IF EXISTS :"namespace".get_auth(username TEXT);`, + + // Drop the PgBouncer schema and anything within it. + `DROP SCHEMA IF EXISTS :"namespace" CASCADE;`, + + // Ensure there's nothing else unexpectedly owned by the PgBouncer + // user in this database. Any privileges on shared objects are also + // removed. + strings.TrimSpace(` +SELECT pg_catalog.format('DROP OWNED BY %I CASCADE', :'username') + WHERE EXISTS (SELECT 1 FROM pg_catalog.pg_roles WHERE rolname = :'username') +\gexec`), + + // Commit (finish) the transaction. + `COMMIT;`, + }, "\n"), + map[string]string{ + "username": postgresqlUser, + "namespace": postgresqlSchema, + + "ON_ERROR_STOP": "on", // Abort when any one statement fails. 
+			"QUIET":         "on", // Do not print successful statements to stdout.
+		})
+
+	log.V(1).Info("removed PgBouncer objects", "stdout", stdout, "stderr", stderr)
+
+	if err == nil {
+		// Remove the PgBouncer user now that the objects and other privileges are gone.
+		stdout, stderr, err = exec.ExecInDatabasesFromQuery(ctx,
+			`SELECT pg_catalog.current_database()`,
+			`SET client_min_messages = WARNING; DROP ROLE IF EXISTS :"username";`,
+			map[string]string{
+				"username": postgresqlUser,
+
+				"ON_ERROR_STOP": "on", // Abort when any one statement fails.
+				"QUIET":         "on", // Do not print successful statements to stdout.
+			})
+
+		log.V(1).Info("removed PgBouncer user", "stdout", stdout, "stderr", stderr)
+	}
+
+	return err
+}
+
+// EnableInPostgreSQL creates the PgBouncer user, schema, and SECURITY DEFINER
+// function that allows it to authenticate clients using their password stored
+// in PostgreSQL.
+func EnableInPostgreSQL(
+	ctx context.Context, exec postgres.Executor, clusterSecret *corev1.Secret,
+) error {
+	log := logging.FromContext(ctx)
+
+	stdout, stderr, err := exec.ExecInAllDatabases(ctx,
+		strings.Join([]string{
+			// Quiet NOTICE messages from IF NOT EXISTS statements.
+			// - https://www.postgresql.org/docs/current/runtime-config-client.html
+			`SET client_min_messages = WARNING;`,
+
+			// Create the following objects in a transaction so that permissions
+			// are correct before any other session sees them.
+			// - https://www.postgresql.org/docs/current/ddl-priv.html
+			`BEGIN;`,
+
+			// Create the PgBouncer user if it does not already exist.
+			// Permissions are granted later.
+			strings.TrimSpace(`
+SELECT pg_catalog.format('CREATE ROLE %I NOLOGIN', :'username')
+ WHERE NOT EXISTS (SELECT 1 FROM pg_catalog.pg_roles WHERE rolname = :'username')
+\gexec`),
+
+			// Ensure the user can only access the one schema. Revoke anything
+			// that might have been granted on other schemas, like "public".
+			strings.TrimSpace(`
+SELECT pg_catalog.format('REVOKE ALL PRIVILEGES ON SCHEMA %I FROM %I', nspname, :'username')
+ FROM pg_catalog.pg_namespace
+ WHERE pg_catalog.has_schema_privilege(:'username', oid, 'CREATE, USAGE')
+ AND nspname NOT IN ('pg_catalog', :'namespace')
+\gexec`),
+
+			// Create the one schema and lock it down. Only the one user is
+			// allowed to use it.
+			strings.TrimSpace(`
+CREATE SCHEMA IF NOT EXISTS :"namespace";
+REVOKE ALL PRIVILEGES
+ ON SCHEMA :"namespace" FROM PUBLIC, :"username";
+ GRANT USAGE
+ ON SCHEMA :"namespace" TO :"username";`),
+
+			// The "get_auth" function returns the appropriate credentials for
+			// a user's password-based authentication and works with PgBouncer's
+			// "auth_query" setting. Only the one user is allowed to execute it.
+			// - https://www.pgbouncer.org/config.html#auth_query
+			sqlAuthenticationQuery(`:"namespace".get_auth`),
+			strings.TrimSpace(`
+REVOKE ALL PRIVILEGES
+ ON FUNCTION :"namespace".get_auth(username TEXT) FROM PUBLIC, :"username";
+ GRANT EXECUTE
+ ON FUNCTION :"namespace".get_auth(username TEXT) TO :"username";`),
+
+			// Remove "public" from the PgBouncer user's search_path.
+			// - https://www.postgresql.org/docs/current/perm-functions.html
+			`ALTER ROLE :"username" SET search_path TO :'namespace';`,
+
+			// Allow the PgBouncer user to log in.
+			`ALTER ROLE :"username" LOGIN PASSWORD :'verifier';`,
+
+			// Commit (finish) the transaction.
+ `COMMIT;`, + }, "\n"), + map[string]string{ + "username": postgresqlUser, + "namespace": postgresqlSchema, + "verifier": string(clusterSecret.Data[verifierSecretKey]), + + "ON_ERROR_STOP": "on", // Abort when any one statement fails. + "QUIET": "on", // Do not print successful statements to stdout. + }) + + log.V(1).Info("applied PgBouncer objects", "stdout", stdout, "stderr", stderr) + + return err +} + +func generatePassword() (plaintext, verifier string, err error) { + // PgBouncer can login to PostgreSQL using either MD5 or SCRAM-SHA-256. + // When using MD5, the (hashed) verifier can be stored in PgBouncer's + // authentication file. When using SCRAM, the plaintext password must be + // stored. + // - https://www.pgbouncer.org/config.html#authentication-file-format + // - https://github.com/pgbouncer/pgbouncer/issues/508#issuecomment-713339834 + + plaintext, err = util.GenerateASCIIPassword(32) + if err == nil { + verifier, err = password.NewSCRAMPassword(plaintext).Build() + } + return +} + +func postgresqlHBAs() []postgres.HostBasedAuthentication { + // PgBouncer must connect over TLS using a SCRAM password. Other network + // connections are forbidden. + // - https://www.postgresql.org/docs/current/auth-pg-hba-conf.html + // - https://www.postgresql.org/docs/current/auth-password.html + + return []postgres.HostBasedAuthentication{ + *postgres.NewHBA().User(postgresqlUser).TLS().Method("scram-sha-256"), + *postgres.NewHBA().User(postgresqlUser).TCP().Method("reject"), + } +} diff --git a/internal/pgbouncer/postgres_test.go b/internal/pgbouncer/postgres_test.go new file mode 100644 index 0000000000..f2ce419753 --- /dev/null +++ b/internal/pgbouncer/postgres_test.go @@ -0,0 +1,189 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbouncer + +import ( + "context" + "errors" + "io" + "strings" + "testing" + + "github.com/onsi/gomega" + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" +) + +func TestSQLAuthenticationQuery(t *testing.T) { + assert.Equal(t, sqlAuthenticationQuery("some.fn_name"), + `CREATE OR REPLACE FUNCTION some.fn_name(username TEXT) +RETURNS TABLE(username TEXT, password TEXT) AS ' + SELECT rolname::TEXT, rolpassword::TEXT + FROM pg_catalog.pg_authid + WHERE pg_authid.rolname = $1 + AND pg_authid.rolcanlogin + AND NOT pg_authid.rolsuper + AND NOT pg_authid.rolreplication + AND pg_authid.rolname <> ''_crunchypgbouncer'' + AND (pg_authid.rolvaliduntil IS NULL OR pg_authid.rolvaliduntil >= CURRENT_TIMESTAMP)' +LANGUAGE SQL STABLE SECURITY DEFINER;`) +} + +func TestDisableInPostgreSQL(t *testing.T) { + expected := errors.New("whoops") + + // The first call is to drop objects. 
+ t.Run("call1", func(t *testing.T) { + call1 := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + assert.Assert(t, strings.Contains(strings.Join(command, "\n"), + `SELECT datname FROM pg_catalog.pg_database`, + ), "expected all databases and templates") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), strings.TrimSpace(` +SET client_min_messages = WARNING; +BEGIN; +DROP FUNCTION IF EXISTS :"namespace".get_auth(username TEXT); +DROP SCHEMA IF EXISTS :"namespace" CASCADE; +SELECT pg_catalog.format('DROP OWNED BY %I CASCADE', :'username') + WHERE EXISTS (SELECT 1 FROM pg_catalog.pg_roles WHERE rolname = :'username') +\gexec +COMMIT;`)) + gomega.NewWithT(t).Expect(command).To(gomega.ContainElements( + `--set=namespace=pgbouncer`, + `--set=username=_crunchypgbouncer`, + ), "expected query parameters") + + return expected + } + + calls := 0 + exec := func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + return call1(ctx, stdin, stdout, stderr, command...) + } + + ctx := context.Background() + assert.Equal(t, expected, DisableInPostgreSQL(ctx, exec)) + assert.Equal(t, calls, 1, "expected an exec error to return early") + }) + + // The second call is to drop the user. + t.Run("call2", func(t *testing.T) { + call2 := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + gomega.NewWithT(t).Expect(command).To(gomega.ContainElement( + `SELECT pg_catalog.current_database()`, + ), "expected the default database") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), `SET client_min_messages = WARNING; DROP ROLE IF EXISTS :"username";`) + gomega.NewWithT(t).Expect(command).To(gomega.ContainElements( + `--set=username=_crunchypgbouncer`, + ), "expected query parameters") + + return expected + } + + calls := 0 + exec := func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + calls++ + if calls == 1 { + return nil + } + return call2(ctx, stdin, stdout, stderr, command...) 
+ } + + ctx := context.Background() + assert.Equal(t, expected, DisableInPostgreSQL(ctx, exec)) + assert.Equal(t, calls, 2, "expected two calls to exec") + }) +} + +func TestEnableInPostgreSQL(t *testing.T) { + expected := errors.New("whoops") + secret := new(corev1.Secret) + secret.Data = map[string][]byte{ + "pgbouncer-verifier": []byte("digest$and==:whatnot"), + } + + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + assert.Assert(t, strings.Contains(strings.Join(command, "\n"), + `SELECT datname FROM pg_catalog.pg_database`, + ), "expected all databases and templates") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), strings.TrimSpace(` +SET client_min_messages = WARNING; +BEGIN; +SELECT pg_catalog.format('CREATE ROLE %I NOLOGIN', :'username') + WHERE NOT EXISTS (SELECT 1 FROM pg_catalog.pg_roles WHERE rolname = :'username') +\gexec +SELECT pg_catalog.format('REVOKE ALL PRIVILEGES ON SCHEMA %I FROM %I', nspname, :'username') + FROM pg_catalog.pg_namespace + WHERE pg_catalog.has_schema_privilege(:'username', oid, 'CREATE, USAGE') + AND nspname NOT IN ('pg_catalog', :'namespace') +\gexec +CREATE SCHEMA IF NOT EXISTS :"namespace"; +REVOKE ALL PRIVILEGES + ON SCHEMA :"namespace" FROM PUBLIC, :"username"; + GRANT USAGE + ON SCHEMA :"namespace" TO :"username"; +CREATE OR REPLACE FUNCTION :"namespace".get_auth(username TEXT) +RETURNS TABLE(username TEXT, password TEXT) AS ' + SELECT rolname::TEXT, rolpassword::TEXT + FROM pg_catalog.pg_authid + WHERE pg_authid.rolname = $1 + AND pg_authid.rolcanlogin + AND NOT pg_authid.rolsuper + AND NOT pg_authid.rolreplication + AND pg_authid.rolname <> ''_crunchypgbouncer'' + AND (pg_authid.rolvaliduntil IS NULL OR pg_authid.rolvaliduntil >= CURRENT_TIMESTAMP)' +LANGUAGE SQL STABLE SECURITY DEFINER; +REVOKE ALL PRIVILEGES + ON FUNCTION :"namespace".get_auth(username TEXT) FROM PUBLIC, :"username"; + GRANT EXECUTE + ON FUNCTION :"namespace".get_auth(username TEXT) TO :"username"; +ALTER ROLE :"username" SET search_path TO :'namespace'; +ALTER ROLE :"username" LOGIN PASSWORD :'verifier'; +COMMIT;`)) + + gomega.NewWithT(t).Expect(command).To(gomega.ContainElements( + `--set=namespace=pgbouncer`, + `--set=username=_crunchypgbouncer`, + `--set=verifier=digest$and==:whatnot`, + ), "expected query parameters") + + return expected + } + + ctx := context.Background() + assert.Equal(t, expected, EnableInPostgreSQL(ctx, exec, secret)) +} + +func TestPostgreSQLHBAs(t *testing.T) { + rules := postgresqlHBAs() + assert.Equal(t, len(rules), 2) + assert.Equal(t, rules[0].String(), `hostssl all "_crunchypgbouncer" all scram-sha-256`) + assert.Equal(t, rules[1].String(), `host all "_crunchypgbouncer" all reject`) +} diff --git a/internal/pgbouncer/reconcile.go b/internal/pgbouncer/reconcile.go new file mode 100644 index 0000000000..999d6524a5 --- /dev/null +++ b/internal/pgbouncer/reconcile.go @@ -0,0 +1,203 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgbouncer + +import ( + "context" + + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// ConfigMap populates the PgBouncer ConfigMap. +func ConfigMap( + inCluster *v1beta1.PostgresCluster, + outConfigMap *corev1.ConfigMap, +) { + if inCluster.Spec.Proxy == nil || inCluster.Spec.Proxy.PGBouncer == nil { + // PgBouncer is disabled; there is nothing to do. + return + } + + initialize.Map(&outConfigMap.Data) + + outConfigMap.Data[emptyConfigMapKey] = "" + outConfigMap.Data[iniFileConfigMapKey] = clusterINI(inCluster) +} + +// Secret populates the PgBouncer Secret. +func Secret(ctx context.Context, + inCluster *v1beta1.PostgresCluster, + inRoot *pki.RootCertificateAuthority, + inSecret *corev1.Secret, + inService *corev1.Service, + outSecret *corev1.Secret, +) error { + if inCluster.Spec.Proxy == nil || inCluster.Spec.Proxy.PGBouncer == nil { + // PgBouncer is disabled; there is nothing to do. + return nil + } + + var err error + initialize.Map(&outSecret.Data) + + // Use the existing password and verifier. Generate both when either is missing. + // NOTE(cbandy): We don't have a function to compare a plaintext password + // to a SCRAM verifier. + password := string(inSecret.Data[passwordSecretKey]) + verifier := string(inSecret.Data[verifierSecretKey]) + + if err == nil && (len(password) == 0 || len(verifier) == 0) { + password, verifier, err = generatePassword() + err = errors.WithStack(err) + } + + if err == nil { + // Store the SCRAM verifier alongside the plaintext password so that + // later reconciles don't generate it repeatedly. + outSecret.Data[authFileSecretKey] = authFileContents(password) + outSecret.Data[passwordSecretKey] = []byte(password) + outSecret.Data[verifierSecretKey] = []byte(verifier) + } + + if inCluster.Spec.Proxy.PGBouncer.CustomTLSSecret == nil { + leaf := &pki.LeafCertificate{} + dnsNames := naming.ServiceDNSNames(ctx, inService) + dnsFQDN := dnsNames[0] + + if err == nil { + // Unmarshal and validate the stored leaf. These first errors can + // be ignored because they result in an invalid leaf which is then + // correctly regenerated. + _ = leaf.Certificate.UnmarshalText(inSecret.Data[certFrontendSecretKey]) + _ = leaf.PrivateKey.UnmarshalText(inSecret.Data[certFrontendPrivateKeySecretKey]) + + leaf, err = inRoot.RegenerateLeafWhenNecessary(leaf, dnsFQDN, dnsNames) + err = errors.WithStack(err) + } + + if err == nil { + outSecret.Data[certFrontendAuthoritySecretKey], err = inRoot.Certificate.MarshalText() + } + if err == nil { + outSecret.Data[certFrontendPrivateKeySecretKey], err = leaf.PrivateKey.MarshalText() + } + if err == nil { + outSecret.Data[certFrontendSecretKey], err = leaf.Certificate.MarshalText() + } + } + + return err +} + +// Pod populates a PodSpec with the container and volumes needed to run PgBouncer. 
+func Pod( + ctx context.Context, + inCluster *v1beta1.PostgresCluster, + inConfigMap *corev1.ConfigMap, + inPostgreSQLCertificate *corev1.SecretProjection, + inSecret *corev1.Secret, + outPod *corev1.PodSpec, +) { + if inCluster.Spec.Proxy == nil || inCluster.Spec.Proxy.PGBouncer == nil { + // PgBouncer is disabled; there is nothing to do. + return + } + + configVolumeMount := corev1.VolumeMount{ + Name: "pgbouncer-config", MountPath: configDirectory, ReadOnly: true, + } + configVolume := corev1.Volume{Name: configVolumeMount.Name} + configVolume.Projected = &corev1.ProjectedVolumeSource{ + Sources: append(append([]corev1.VolumeProjection{}, + podConfigFiles(inCluster.Spec.Proxy.PGBouncer.Config, inConfigMap, inSecret)...), + frontendCertificate(inCluster.Spec.Proxy.PGBouncer.CustomTLSSecret, inSecret), + backendAuthority(inPostgreSQLCertificate), + ), + } + + container := corev1.Container{ + Name: naming.ContainerPGBouncer, + + Command: []string{"pgbouncer", iniFileAbsolutePath}, + Image: config.PGBouncerContainerImage(inCluster), + ImagePullPolicy: inCluster.Spec.ImagePullPolicy, + Resources: inCluster.Spec.Proxy.PGBouncer.Resources, + SecurityContext: initialize.RestrictedSecurityContext(), + + Ports: []corev1.ContainerPort{{ + Name: naming.PortPGBouncer, + ContainerPort: *inCluster.Spec.Proxy.PGBouncer.Port, + Protocol: corev1.ProtocolTCP, + }}, + + VolumeMounts: []corev1.VolumeMount{configVolumeMount}, + } + + // TODO container.LivenessProbe? + // TODO container.ReadinessProbe? + + reloader := corev1.Container{ + Name: naming.ContainerPGBouncerConfig, + + Command: reloadCommand(naming.ContainerPGBouncerConfig), + Image: container.Image, + ImagePullPolicy: container.ImagePullPolicy, + SecurityContext: initialize.RestrictedSecurityContext(), + + VolumeMounts: []corev1.VolumeMount{configVolumeMount}, + } + + // Let the PgBouncer container drive the QoS of the pod. Set resources only + // when that container has some. + // - https://docs.k8s.io/tasks/configure-pod-container/quality-service-pod/ + if len(container.Resources.Limits)+len(container.Resources.Requests) > 0 { + // Limits without Requests implies Requests that match. + reloader.Resources.Limits = corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("5m"), + corev1.ResourceMemory: resource.MustParse("16Mi"), + } + } + + // When resources are explicitly set, overwrite the above. + if inCluster.Spec.Proxy.PGBouncer.Sidecars != nil && + inCluster.Spec.Proxy.PGBouncer.Sidecars.PGBouncerConfig != nil && + inCluster.Spec.Proxy.PGBouncer.Sidecars.PGBouncerConfig.Resources != nil { + reloader.Resources = *inCluster.Spec.Proxy.PGBouncer.Sidecars.PGBouncerConfig.Resources + } + + outPod.Containers = []corev1.Container{container, reloader} + + // If the PGBouncerSidecars feature gate is enabled and custom pgBouncer + // sidecars are defined, add the defined container to the Pod. + if feature.Enabled(ctx, feature.PGBouncerSidecars) && + inCluster.Spec.Proxy.PGBouncer.Containers != nil { + outPod.Containers = append(outPod.Containers, inCluster.Spec.Proxy.PGBouncer.Containers...) + } + + outPod.Volumes = []corev1.Volume{configVolume} +} + +// PostgreSQL populates outHBAs with any records needed to run PgBouncer. +func PostgreSQL( + inCluster *v1beta1.PostgresCluster, + outHBAs *postgres.HBAs, +) { + if inCluster.Spec.Proxy == nil || inCluster.Spec.Proxy.PGBouncer == nil { + // PgBouncer is disabled; there is nothing to do. + return + } + + outHBAs.Mandatory = append(outHBAs.Mandatory, postgresqlHBAs()...) 
+} diff --git a/internal/pgbouncer/reconcile_test.go b/internal/pgbouncer/reconcile_test.go new file mode 100644 index 0000000000..a53de8cf64 --- /dev/null +++ b/internal/pgbouncer/reconcile_test.go @@ -0,0 +1,496 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgbouncer + +import ( + "context" + "testing" + + gocmp "github.com/google/go-cmp/cmp" + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/pki" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestConfigMap(t *testing.T) { + t.Parallel() + + cluster := new(v1beta1.PostgresCluster) + config := new(corev1.ConfigMap) + + t.Run("Disabled", func(t *testing.T) { + // Nothing happens when PgBouncer is disabled. + constant := config.DeepCopy() + ConfigMap(cluster, config) + assert.DeepEqual(t, constant, config) + }) + + cluster.Spec.Proxy = new(v1beta1.PostgresProxySpec) + cluster.Spec.Proxy.PGBouncer = new(v1beta1.PGBouncerPodSpec) + cluster.Default() + + ConfigMap(cluster, config) + + // The output of clusterINI should go into config. + data := clusterINI(cluster) + assert.DeepEqual(t, config.Data["pgbouncer.ini"], data) + + // No change when called again. + before := config.DeepCopy() + ConfigMap(cluster, config) + assert.DeepEqual(t, before, config) +} + +func TestSecret(t *testing.T) { + t.Parallel() + + ctx := context.Background() + cluster := new(v1beta1.PostgresCluster) + service := new(corev1.Service) + existing := new(corev1.Secret) + intent := new(corev1.Secret) + + root, err := pki.NewRootCertificateAuthority() + assert.NilError(t, err) + + t.Run("Disabled", func(t *testing.T) { + // Nothing happens when PgBouncer is disabled. + constant := intent.DeepCopy() + assert.NilError(t, Secret(ctx, cluster, root, existing, service, intent)) + assert.DeepEqual(t, constant, intent) + }) + + cluster.Spec.Proxy = new(v1beta1.PostgresProxySpec) + cluster.Spec.Proxy.PGBouncer = new(v1beta1.PGBouncerPodSpec) + cluster.Default() + + constant := existing.DeepCopy() + assert.NilError(t, Secret(ctx, cluster, root, existing, service, intent)) + assert.DeepEqual(t, constant, existing) + + // A password should be generated. + assert.Assert(t, len(intent.Data["pgbouncer-password"]) != 0) + assert.Assert(t, len(intent.Data["pgbouncer-verifier"]) != 0) + + // The output of authFileContents should go into intent. + assert.Assert(t, len(intent.Data["pgbouncer-users.txt"]) != 0) + + // Assuming the intent is written, no change when called again. 
+ existing.Data = intent.Data + before := intent.DeepCopy() + assert.NilError(t, Secret(ctx, cluster, root, existing, service, intent)) + assert.DeepEqual(t, before, intent) +} + +func TestPod(t *testing.T) { + t.Parallel() + + features := feature.NewGate() + ctx := feature.NewContext(context.Background(), features) + + cluster := new(v1beta1.PostgresCluster) + configMap := new(corev1.ConfigMap) + primaryCertificate := new(corev1.SecretProjection) + secret := new(corev1.Secret) + pod := new(corev1.PodSpec) + + call := func() { Pod(ctx, cluster, configMap, primaryCertificate, secret, pod) } + + t.Run("Disabled", func(t *testing.T) { + before := pod.DeepCopy() + call() + + // No change when PgBouncer is not requested in the spec. + assert.DeepEqual(t, before, pod) + }) + + t.Run("Defaults", func(t *testing.T) { + cluster.Spec.Proxy = new(v1beta1.PostgresProxySpec) + cluster.Spec.Proxy.PGBouncer = new(v1beta1.PGBouncerPodSpec) + cluster.Default() + + call() + + assert.Assert(t, cmp.MarshalMatches(pod, ` +containers: +- command: + - pgbouncer + - /etc/pgbouncer/~postgres-operator.ini + name: pgbouncer + ports: + - containerPort: 5432 + name: pgbouncer + protocol: TCP + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbouncer + name: pgbouncer-config + readOnly: true +- command: + - bash + - -ceu + - -- + - |- + monitor() { + exec {fd}<> <(:||:) + while read -r -t 5 -u "${fd}" ||:; do + if [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] && pkill -HUP --exact pgbouncer + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded configuration dated %y' "${directory}" + fi + done + }; export directory="$1"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgbouncer-config + - /etc/pgbouncer + name: pgbouncer-config + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbouncer + name: pgbouncer-config + readOnly: true +volumes: +- name: pgbouncer-config + projected: + sources: + - configMap: + items: + - key: pgbouncer-empty + path: pgbouncer.ini + - configMap: + items: + - key: pgbouncer.ini + path: ~postgres-operator.ini + - secret: + items: + - key: pgbouncer-users.txt + path: ~postgres-operator/users.txt + - secret: + items: + - key: pgbouncer-frontend.ca-roots + path: ~postgres-operator/frontend-ca.crt + - key: pgbouncer-frontend.key + path: ~postgres-operator/frontend-tls.key + - key: pgbouncer-frontend.crt + path: ~postgres-operator/frontend-tls.crt + - secret: + items: + - key: ca.crt + path: ~postgres-operator/backend-ca.crt + `)) + + // No change when called again. 
+ before := pod.DeepCopy() + call() + assert.DeepEqual(t, before, pod) + }) + + t.Run("Customizations", func(t *testing.T) { + cluster.Spec.ImagePullPolicy = corev1.PullAlways + cluster.Spec.Proxy.PGBouncer.Image = "image-town" + cluster.Spec.Proxy.PGBouncer.Resources.Requests = corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("100m"), + } + cluster.Spec.Proxy.PGBouncer.CustomTLSSecret = &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{Name: "tls-name"}, + Items: []corev1.KeyToPath{ + {Key: "k1", Path: "tls.crt"}, + {Key: "k2", Path: "tls.key"}, + }, + } + + call() + + assert.Assert(t, cmp.MarshalMatches(pod, ` +containers: +- command: + - pgbouncer + - /etc/pgbouncer/~postgres-operator.ini + image: image-town + imagePullPolicy: Always + name: pgbouncer + ports: + - containerPort: 5432 + name: pgbouncer + protocol: TCP + resources: + requests: + cpu: 100m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbouncer + name: pgbouncer-config + readOnly: true +- command: + - bash + - -ceu + - -- + - |- + monitor() { + exec {fd}<> <(:||:) + while read -r -t 5 -u "${fd}" ||:; do + if [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] && pkill -HUP --exact pgbouncer + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded configuration dated %y' "${directory}" + fi + done + }; export directory="$1"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgbouncer-config + - /etc/pgbouncer + image: image-town + imagePullPolicy: Always + name: pgbouncer-config + resources: + limits: + cpu: 5m + memory: 16Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbouncer + name: pgbouncer-config + readOnly: true +volumes: +- name: pgbouncer-config + projected: + sources: + - configMap: + items: + - key: pgbouncer-empty + path: pgbouncer.ini + - configMap: + items: + - key: pgbouncer.ini + path: ~postgres-operator.ini + - secret: + items: + - key: pgbouncer-users.txt + path: ~postgres-operator/users.txt + - secret: + items: + - key: k1 + path: ~postgres-operator/frontend-tls.crt + - key: k2 + path: ~postgres-operator/frontend-tls.key + name: tls-name + - secret: + items: + - key: ca.crt + path: ~postgres-operator/backend-ca.crt + `)) + }) + + t.Run("Sidecar customization", func(t *testing.T) { + cluster.Spec.Proxy.PGBouncer.Sidecars = &v1beta1.PGBouncerSidecars{ + PGBouncerConfig: &v1beta1.Sidecar{ + Resources: &corev1.ResourceRequirements{ + Requests: corev1.ResourceList{ + corev1.ResourceCPU: resource.MustParse("200m"), + }, + }, + }, + } + + call() + + assert.Assert(t, cmp.MarshalMatches(pod, ` +containers: +- command: + - pgbouncer + - /etc/pgbouncer/~postgres-operator.ini + image: image-town + imagePullPolicy: Always + name: pgbouncer + ports: + - containerPort: 5432 + name: pgbouncer + protocol: TCP + resources: + requests: + cpu: 100m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbouncer + name: pgbouncer-config + readOnly: true +- command: + - bash + - -ceu + - -- + - |- + monitor() { + exec {fd}<> 
<(:||:) + while read -r -t 5 -u "${fd}" ||:; do + if [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] && pkill -HUP --exact pgbouncer + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded configuration dated %y' "${directory}" + fi + done + }; export directory="$1"; export -f monitor; exec -a "$0" bash -ceu monitor + - pgbouncer-config + - /etc/pgbouncer + image: image-town + imagePullPolicy: Always + name: pgbouncer-config + resources: + requests: + cpu: 200m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /etc/pgbouncer + name: pgbouncer-config + readOnly: true +volumes: +- name: pgbouncer-config + projected: + sources: + - configMap: + items: + - key: pgbouncer-empty + path: pgbouncer.ini + - configMap: + items: + - key: pgbouncer.ini + path: ~postgres-operator.ini + - secret: + items: + - key: pgbouncer-users.txt + path: ~postgres-operator/users.txt + - secret: + items: + - key: k1 + path: ~postgres-operator/frontend-tls.crt + - key: k2 + path: ~postgres-operator/frontend-tls.key + name: tls-name + - secret: + items: + - key: ca.crt + path: ~postgres-operator/backend-ca.crt + `)) + }) + + t.Run("WithCustomSidecarContainer", func(t *testing.T) { + cluster.Spec.Proxy.PGBouncer.Containers = []corev1.Container{ + {Name: "customsidecar1"}, + } + + t.Run("SidecarNotEnabled", func(t *testing.T) { + + call() + assert.Equal(t, len(pod.Containers), 2, "expected 2 containers in Pod, got %d", len(pod.Containers)) + }) + + t.Run("SidecarEnabled", func(t *testing.T) { + assert.NilError(t, features.SetFromMap(map[string]bool{ + feature.PGBouncerSidecars: true, + })) + call() + + assert.Equal(t, len(pod.Containers), 3, "expected 3 containers in Pod, got %d", len(pod.Containers)) + + var found bool + for i := range pod.Containers { + if pod.Containers[i].Name == "customsidecar1" { + found = true + break + } + } + assert.Assert(t, found, "expected custom sidecar 'customsidecar1', but container not found") + }) + }) +} + +func TestPostgreSQL(t *testing.T) { + t.Parallel() + + cluster := new(v1beta1.PostgresCluster) + hbas := new(postgres.HBAs) + + t.Run("Disabled", func(t *testing.T) { + PostgreSQL(cluster, hbas) + + // No change when PgBouncer is not requested in the spec. + assert.DeepEqual(t, hbas, new(postgres.HBAs)) + }) + + t.Run("Enabled", func(t *testing.T) { + cluster.Spec.Proxy = new(v1beta1.PostgresProxySpec) + cluster.Spec.Proxy.PGBouncer = new(v1beta1.PGBouncerPodSpec) + cluster.Default() + + PostgreSQL(cluster, hbas) + + assert.DeepEqual(t, hbas, + &postgres.HBAs{ + Mandatory: postgresqlHBAs(), + }, + // postgres.HostBasedAuthentication has unexported fields. Call String() to compare. + gocmp.Transformer("", postgres.HostBasedAuthentication.String)) + }) +} diff --git a/internal/pgmonitor/exporter.go b/internal/pgmonitor/exporter.go new file mode 100644 index 0000000000..9d7a1fc3c6 --- /dev/null +++ b/internal/pgmonitor/exporter.go @@ -0,0 +1,183 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgmonitor + +import ( + "context" + "fmt" + "os" + "strings" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + ExporterPort = int32(9187) + + // TODO: With the current implementation of the crunchy-postgres-exporter + // it makes sense to hard-code the database. When moving away from the + // crunchy-postgres-exporter start.sh script we should re-evaluate always + // setting the exporter database to `postgres`. + ExporterDB = "postgres" + + // The exporter connects to all databases over loopback using a password. + // Kubernetes guarantees localhost resolves to loopback: + // https://kubernetes.io/docs/concepts/cluster-administration/networking/ + // https://releases.k8s.io/v1.21.0/pkg/kubelet/kubelet_pods.go#L343 + ExporterHost = "localhost" +) + +// postgres_exporter command flags +var ( + ExporterWebConfigFileFlag = "--web.config.file=/web-config/web-config.yml" + ExporterDeactivateStatBGWriterFlag = "--no-collector.stat_bgwriter" +) + +// Defaults for certain values used in queries.yml +// TODO(dsessler7): make these values configurable via spec +var DefaultValuesForQueries = map[string]string{ + "PGBACKREST_INFO_THROTTLE_MINUTES": "10", + "PG_STAT_STATEMENTS_LIMIT": "20", + "PG_STAT_STATEMENTS_THROTTLE_MINUTES": "-1", +} + +// GenerateDefaultExporterQueries generates the default queries used by exporter +func GenerateDefaultExporterQueries(ctx context.Context, cluster *v1beta1.PostgresCluster) string { + log := logging.FromContext(ctx) + var queries string + baseQueries := []string{"backrest", "global", "global_dbsize", "per_db", "nodemx"} + queriesConfigDir := GetQueriesConfigDir(ctx) + + // TODO: When we add pgbouncer support we will do something like the following: + // if pgbouncerEnabled() { + // baseQueries = append(baseQueries, "pgbouncer") + // } + + for _, queryType := range baseQueries { + queriesContents, err := os.ReadFile(fmt.Sprintf("%s/queries_%s.yml", queriesConfigDir, queryType)) + if err != nil { + // log an error, but continue to next iteration + log.Error(err, fmt.Sprintf("Query file queries_%s.yml does not exist (it should)...", queryType)) + continue + } + queries += string(queriesContents) + "\n" + } + + // Add general queries for specific postgres version + queriesGeneral, err := os.ReadFile(fmt.Sprintf("%s/pg%d/queries_general.yml", queriesConfigDir, cluster.Spec.PostgresVersion)) + if err != nil { + // log an error, but continue + log.Error(err, fmt.Sprintf("Query file %s/pg%d/queries_general.yml does not exist (it should)...", queriesConfigDir, cluster.Spec.PostgresVersion)) + } else { + queries += string(queriesGeneral) + "\n" + } + + // Add pg_stat_statement queries for specific postgres version + queriesPgStatStatements, err := os.ReadFile(fmt.Sprintf("%s/pg%d/queries_pg_stat_statements.yml", queriesConfigDir, cluster.Spec.PostgresVersion)) + if err != nil { + // log an error, but continue + log.Error(err, fmt.Sprintf("Query file %s/pg%d/queries_pg_stat_statements.yml not loaded.", queriesConfigDir, cluster.Spec.PostgresVersion)) + } else { + queries += string(queriesPgStatStatements) + "\n" + } + + // If postgres version >= 12, add pg_stat_statements_reset queries + if cluster.Spec.PostgresVersion >= 12 { + queriesPgStatStatementsReset, err := os.ReadFile(fmt.Sprintf("%s/pg%d/queries_pg_stat_statements_reset_info.yml", queriesConfigDir, cluster.Spec.PostgresVersion)) + if err 
!= nil { + // log an error, but continue + log.Error(err, fmt.Sprintf("Query file %s/pg%d/queries_pg_stat_statements_reset_info.yml not loaded.", queriesConfigDir, cluster.Spec.PostgresVersion)) + } else { + queries += string(queriesPgStatStatementsReset) + "\n" + } + } + + // Find and replace default values in queries + for k, v := range DefaultValuesForQueries { + queries = strings.ReplaceAll(queries, fmt.Sprintf("#%s#", k), v) + } + + // TODO: Add ability to exclude certain user-specified queries + + return queries +} + +// ExporterStartCommand generates an entrypoint that will create a master queries file and +// start the postgres_exporter. It will repeat those steps if it notices a change in +// the source queries files. +func ExporterStartCommand(builtinCollectors bool, commandFlags ...string) []string { + script := []string{ + // Older images do not have the command on the PATH. + `PATH="$PATH:$(echo /opt/cpm/bin/postgres_exporter-*)"`, + + // Set up temporary file to hold postgres_exporter process id + `POSTGRES_EXPORTER_PIDFILE=/tmp/postgres_exporter.pid`, + + `postgres_exporter_flags=(`, + `'--extend.query-path=/tmp/queries.yml'`, + fmt.Sprintf(`'--web.listen-address=:%d'`, ExporterPort), + `"$@")`, + } + + // Append flags that disable built-in collectors. Find flags in the help + // output and return them with "--[no-]" replaced by "--no-" or "--". + if !builtinCollectors { + script = append(script, + `postgres_exporter_flags+=($(`, + `postgres_exporter --help 2>&1 | while read -r w _; do case "${w}" in`, + `'--[no-]collector.'*) echo "--no-${w#*-]}";;`, + `'--[no-]disable'*'metrics') echo "--${w#*-]}";;`, + `esac; done))`, + ) + } + + script = append(script, + // declare function that will combine custom queries file and default + // queries and start the postgres_exporter + `start_postgres_exporter() {`, + ` cat /conf/* > /tmp/queries.yml`, + ` echo "Starting postgres_exporter with the following flags..."`, + ` echo "${postgres_exporter_flags[@]}"`, + ` postgres_exporter "${postgres_exporter_flags[@]}" &`, + ` echo $! > $POSTGRES_EXPORTER_PIDFILE`, + `}`, + + // run function to combine queries files and start postgres_exporter + `start_postgres_exporter`, + + // Create a file descriptor with a no-op process that will not get + // cleaned up + `exec {fd}<> <(:||:)`, + + // Set up loop. Use read's timeout setting instead of sleep, + // which uses up a lot of memory + `while read -r -t 3 -u "${fd}" ||:; do`, + + // If either directories' modify time is newer than our file descriptor's, + // something must have changed, so kill the postgres_exporter + ` if ([ "/conf" -nt "/proc/self/fd/${fd}" ] || [ "/opt/crunchy/password" -nt "/proc/self/fd/${fd}" ]) \`, + ` && kill $(head -1 ${POSTGRES_EXPORTER_PIDFILE?});`, + ` then`, + // When something changes we want to get rid of the old file descriptor, get a fresh one + // and restart the loop + ` echo "Something changed..."`, + ` exec {fd}>&- && exec {fd}<> <(:||:)`, + ` stat --format='Latest queries file dated %y' "/conf"`, + ` stat --format='Latest password file dated %y' "/opt/crunchy/password"`, + ` fi`, + + // If postgres_exporter is not running, restart it + // Use the recorded pid as a proxy for checking if postgres_exporter is running + ` if [[ ! -e /proc/$(head -1 ${POSTGRES_EXPORTER_PIDFILE?}) ]] ; then`, + ` start_postgres_exporter`, + ` fi`, + `done`, + ) + + return append([]string{ + "bash", "-ceu", "--", strings.Join(script, "\n"), "postgres_exporter_watcher", + }, commandFlags...) 
+} diff --git a/internal/pgmonitor/exporter_test.go b/internal/pgmonitor/exporter_test.go new file mode 100644 index 0000000000..5ba14e0993 --- /dev/null +++ b/internal/pgmonitor/exporter_test.go @@ -0,0 +1,90 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgmonitor + +import ( + "context" + "os" + "strings" + "testing" + + "gotest.tools/v3/assert" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestGenerateDefaultExporterQueries(t *testing.T) { + if os.Getenv("QUERIES_CONFIG_DIR") == "" { + t.Skip("QUERIES_CONFIG_DIR must be set") + } + + ctx := context.Background() + cluster := &v1beta1.PostgresCluster{} + + t.Run("PG<=11", func(t *testing.T) { + cluster.Spec.PostgresVersion = 11 + queries := GenerateDefaultExporterQueries(ctx, cluster) + assert.Assert(t, !strings.Contains(queries, "ccp_pg_stat_statements_reset"), + "Queries contain 'ccp_pg_stat_statements_reset' query when they should not.") + }) + + t.Run("PG>=12", func(t *testing.T) { + cluster.Spec.PostgresVersion = 12 + queries := GenerateDefaultExporterQueries(ctx, cluster) + assert.Assert(t, strings.Contains(queries, "ccp_pg_stat_statements_reset"), + "Queries do not contain 'ccp_pg_stat_statements_reset' query when they should.") + }) +} + +func TestExporterStartCommand(t *testing.T) { + for _, tt := range []struct { + Name string + Collectors bool + Flags []string + Expect func(t *testing.T, command []string, script string) + }{ + { + Name: "NoCollectorsNoFlags", + Expect: func(t *testing.T, _ []string, script string) { + assert.Assert(t, cmp.Contains(script, "--[no-]collector")) + }, + }, + { + Name: "WithCollectorsNoFlags", + Collectors: true, + Expect: func(t *testing.T, _ []string, script string) { + assert.Assert(t, !strings.Contains(script, "collector")) + }, + }, + { + Name: "MultipleFlags", + Flags: []string{"--firstTestFlag", "--secondTestFlag"}, + Expect: func(t *testing.T, command []string, _ string) { + assert.DeepEqual(t, command[4:], []string{"postgres_exporter_watcher", "--firstTestFlag", "--secondTestFlag"}) + }, + }, + } { + t.Run(tt.Name, func(t *testing.T) { + command := ExporterStartCommand(tt.Collectors, tt.Flags...) + assert.DeepEqual(t, command[:3], []string{"bash", "-ceu", "--"}) + assert.Assert(t, len(command) > 3) + script := command[3] + + assert.Assert(t, cmp.Contains(script, "'--extend.query-path=/tmp/queries.yml'")) + assert.Assert(t, cmp.Contains(script, "'--web.listen-address=:9187'")) + + tt.Expect(t, command, script) + + t.Run("PrettyYAML", func(t *testing.T) { + b, err := yaml.Marshal(script) + assert.NilError(t, err) + assert.Assert(t, strings.HasPrefix(string(b), `|`), + "expected literal block scalar, got:\n%s", b) + }) + }) + } +} diff --git a/internal/pgmonitor/postgres.go b/internal/pgmonitor/postgres.go new file mode 100644 index 0000000000..8aed164a18 --- /dev/null +++ b/internal/pgmonitor/postgres.go @@ -0,0 +1,138 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgmonitor + +import ( + "context" + "strings" + + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // MonitoringUser is a Postgres user created by pgMonitor configuration + MonitoringUser = "ccp_monitoring" +) + +// PostgreSQLHBAs provides the Postgres HBA rules for allowing the monitoring +// exporter to be accessible +func PostgreSQLHBAs(inCluster *v1beta1.PostgresCluster, outHBAs *postgres.HBAs) { + if ExporterEnabled(inCluster) { + // Limit the monitoring user to local connections using SCRAM. + outHBAs.Mandatory = append(outHBAs.Mandatory, + *postgres.NewHBA().TCP().User(MonitoringUser).Method("scram-sha-256").Network("127.0.0.0/8"), + *postgres.NewHBA().TCP().User(MonitoringUser).Method("scram-sha-256").Network("::1/128"), + *postgres.NewHBA().TCP().User(MonitoringUser).Method("reject")) + } +} + +// PostgreSQLParameters provides additional required configuration parameters +// that Postgres needs to support monitoring +func PostgreSQLParameters(inCluster *v1beta1.PostgresCluster, outParameters *postgres.Parameters) { + if ExporterEnabled(inCluster) { + // Exporter expects that shared_preload_libraries are installed + // pg_stat_statements: https://access.crunchydata.com/documentation/pgmonitor/latest/exporter/ + // pgnodemx: https://github.com/CrunchyData/pgnodemx + outParameters.Mandatory.AppendToList("shared_preload_libraries", "pg_stat_statements", "pgnodemx") + outParameters.Mandatory.Add("pgnodemx.kdapi_path", + postgres.DownwardAPIVolumeMount().MountPath) + } +} + +// DisableExporterInPostgreSQL disables the exporter configuration in PostgreSQL. +// Currently the exporter is disabled by removing login permissions for the +// monitoring user. +// TODO: evaluate other uninstall/removal options +func DisableExporterInPostgreSQL(ctx context.Context, exec postgres.Executor) error { + log := logging.FromContext(ctx) + + stdout, stderr, err := exec.Exec(ctx, strings.NewReader(` + SELECT pg_catalog.format('ALTER ROLE %I NOLOGIN', :'username') + WHERE EXISTS (SELECT 1 FROM pg_catalog.pg_roles WHERE rolname = :'username') + \gexec`), + map[string]string{ + "username": MonitoringUser, + }) + + log.V(1).Info("monitoring user disabled", "stdout", stdout, "stderr", stderr) + + return err +} + +// EnableExporterInPostgreSQL runs SQL setup commands in `database` to enable +// the exporter to retrieve metrics. pgMonitor objects are created and expected +// extensions are installed. We also ensure that the monitoring user has the +// current password and can login. +func EnableExporterInPostgreSQL(ctx context.Context, exec postgres.Executor, + monitoringSecret *corev1.Secret, database, setup string) error { + log := logging.FromContext(ctx) + + stdout, stderr, err := exec.ExecInAllDatabases(ctx, + strings.Join([]string{ + // Quiet NOTICE messages from IF EXISTS statements. 
+			// - https://www.postgresql.org/docs/current/runtime-config-client.html
+			`SET client_min_messages = WARNING;`,
+
+			// Exporter expects the extension(s) to be installed in all databases
+			// pg_stat_statements: https://access.crunchydata.com/documentation/pgmonitor/latest/exporter/
+			"CREATE EXTENSION IF NOT EXISTS pg_stat_statements;",
+
+			// Run idempotent update
+			"ALTER EXTENSION pg_stat_statements UPDATE;",
+		}, "\n"),
+		map[string]string{
+			"ON_ERROR_STOP": "on", // Abort when any one statement fails.
+			"QUIET":         "on", // Do not print successful commands to stdout.
+		},
+	)
+
+	log.V(1).Info("applied pgMonitor objects", "database", "current and future databases", "stdout", stdout, "stderr", stderr)
+
+	// NOTE: Setup is run last to ensure that the setup SQL is used in the hash
+	if err == nil {
+		stdout, stderr, err = exec.ExecInDatabasesFromQuery(ctx,
+			`SELECT :'database'`,
+			strings.Join([]string{
+				// Quiet NOTICE messages from IF EXISTS statements.
+				// - https://www.postgresql.org/docs/current/runtime-config-client.html
+				`SET client_min_messages = WARNING;`,
+
+				// Setup.sql file from the exporter image. The SQL is specific
+				// to the PostgreSQL version
+				setup,
+
+				// pgnodemx: https://github.com/CrunchyData/pgnodemx
+				// The `monitor` schema is hard-coded in the setup SQL files
+				// from pgMonitor configuration
+				// https://github.com/CrunchyData/pgmonitor/blob/master/postgres_exporter/common/queries_nodemx.yml
+				"CREATE EXTENSION IF NOT EXISTS pgnodemx WITH SCHEMA monitor;",
+
+				// Run idempotent update
+				"ALTER EXTENSION pgnodemx UPDATE;",
+
+				// ccp_monitoring user is created in Setup.sql without a
+				// password; update the password and ensure that the ROLE
+				// can log in to the database
+				`ALTER ROLE :"username" LOGIN PASSWORD :'verifier';`,
+			}, "\n"),
+			map[string]string{
+				"database": database,
+				"username": MonitoringUser,
+				"verifier": string(monitoringSecret.Data["verifier"]),
+
+				"ON_ERROR_STOP": "on", // Abort when any one statement fails.
+				"QUIET":         "on", // Do not print successful commands to stdout.
+			},
+		)
+
+		log.V(1).Info("applied pgMonitor objects", "database", database, "stdout", stdout, "stderr", stderr)
+	}
+
+	return err
+}
diff --git a/internal/pgmonitor/postgres_test.go b/internal/pgmonitor/postgres_test.go
new file mode 100644
index 0000000000..655fa936ae
--- /dev/null
+++ b/internal/pgmonitor/postgres_test.go
@@ -0,0 +1,90 @@
+// Copyright 2021 - 2024 Crunchy Data Solutions, Inc.
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgmonitor + +import ( + "strings" + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/postgres" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestPostgreSQLHBA(t *testing.T) { + t.Run("ExporterDisabled", func(t *testing.T) { + inCluster := &v1beta1.PostgresCluster{} + outHBAs := postgres.HBAs{} + PostgreSQLHBAs(inCluster, &outHBAs) + assert.Equal(t, len(outHBAs.Mandatory), 0) + }) + + t.Run("ExporterEnabled", func(t *testing.T) { + inCluster := &v1beta1.PostgresCluster{} + inCluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: "image", + }, + }, + } + + outHBAs := postgres.HBAs{} + PostgreSQLHBAs(inCluster, &outHBAs) + + assert.Equal(t, len(outHBAs.Mandatory), 3) + assert.Equal(t, outHBAs.Mandatory[0].String(), `host all "ccp_monitoring" "127.0.0.0/8" scram-sha-256`) + assert.Equal(t, outHBAs.Mandatory[1].String(), `host all "ccp_monitoring" "::1/128" scram-sha-256`) + assert.Equal(t, outHBAs.Mandatory[2].String(), `host all "ccp_monitoring" all reject`) + }) +} + +func TestPostgreSQLParameters(t *testing.T) { + t.Run("ExporterDisabled", func(t *testing.T) { + inCluster := &v1beta1.PostgresCluster{} + outParameters := postgres.NewParameters() + PostgreSQLParameters(inCluster, &outParameters) + assert.Assert(t, !outParameters.Mandatory.Has("shared_preload_libraries")) + }) + + t.Run("ExporterEnabled", func(t *testing.T) { + inCluster := &v1beta1.PostgresCluster{} + inCluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: "image", + }, + }, + } + outParameters := postgres.NewParameters() + + PostgreSQLParameters(inCluster, &outParameters) + libs, found := outParameters.Mandatory.Get("shared_preload_libraries") + assert.Assert(t, found) + assert.Assert(t, strings.Contains(libs, "pg_stat_statements")) + assert.Assert(t, strings.Contains(libs, "pgnodemx")) + }) + + t.Run("SharedPreloadLibraries Defined", func(t *testing.T) { + inCluster := &v1beta1.PostgresCluster{} + inCluster.Spec.Monitoring = &v1beta1.MonitoringSpec{ + PGMonitor: &v1beta1.PGMonitorSpec{ + Exporter: &v1beta1.ExporterSpec{ + Image: "image", + }, + }, + } + outParameters := postgres.NewParameters() + outParameters.Mandatory.Add("shared_preload_libraries", "daisy") + + PostgreSQLParameters(inCluster, &outParameters) + libs, found := outParameters.Mandatory.Get("shared_preload_libraries") + assert.Assert(t, found) + assert.Assert(t, strings.Contains(libs, "pg_stat_statements")) + assert.Assert(t, strings.Contains(libs, "pgnodemx")) + assert.Assert(t, strings.Contains(libs, "daisy")) + }) +} diff --git a/internal/pgmonitor/util.go b/internal/pgmonitor/util.go new file mode 100644 index 0000000000..f5606ccd08 --- /dev/null +++ b/internal/pgmonitor/util.go @@ -0,0 +1,40 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package pgmonitor + +import ( + "context" + "os" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func GetQueriesConfigDir(ctx context.Context) string { + log := logging.FromContext(ctx) + // The QUERIES_CONFIG_DIR environment variable can be used to tell postgres-operator where to + // find the setup.sql and queries.yml files when running the postgres-operator binary locally + if queriesConfigDir := os.Getenv("QUERIES_CONFIG_DIR"); queriesConfigDir != "" { + log.Info("Directory for setup.sql and queries files set by QUERIES_CONFIG_DIR env var. " + + "This should only be used when running the postgres-operator binary locally.") + return queriesConfigDir + } + + return "/opt/crunchy/conf" +} + +// ExporterEnabled returns true if the monitoring exporter is enabled +func ExporterEnabled(cluster *v1beta1.PostgresCluster) bool { + if cluster.Spec.Monitoring == nil { + return false + } + if cluster.Spec.Monitoring.PGMonitor == nil { + return false + } + if cluster.Spec.Monitoring.PGMonitor.Exporter == nil { + return false + } + return true +} diff --git a/internal/pgmonitor/util_test.go b/internal/pgmonitor/util_test.go new file mode 100644 index 0000000000..8d16d74bae --- /dev/null +++ b/internal/pgmonitor/util_test.go @@ -0,0 +1,28 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pgmonitor + +import ( + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestExporterEnabled(t *testing.T) { + cluster := &v1beta1.PostgresCluster{} + assert.Assert(t, !ExporterEnabled(cluster)) + + cluster.Spec.Monitoring = &v1beta1.MonitoringSpec{} + assert.Assert(t, !ExporterEnabled(cluster)) + + cluster.Spec.Monitoring.PGMonitor = &v1beta1.PGMonitorSpec{} + assert.Assert(t, !ExporterEnabled(cluster)) + + cluster.Spec.Monitoring.PGMonitor.Exporter = &v1beta1.ExporterSpec{} + assert.Assert(t, ExporterEnabled(cluster)) + +} diff --git a/internal/pki/common.go b/internal/pki/common.go new file mode 100644 index 0000000000..fbe9421f8b --- /dev/null +++ b/internal/pki/common.go @@ -0,0 +1,95 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pki + +import ( + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/x509" + "crypto/x509/pkix" + "math/big" + "time" +) + +// certificateSignatureAlgorithm is ECDSA with SHA-384, the recommended +// signature algorithm with the P-256 curve. +const certificateSignatureAlgorithm = x509.ECDSAWithSHA384 + +// currentTime returns the current local time. It is a variable so it can be +// replaced during testing. +var currentTime = time.Now + +// generateKey returns a random ECDSA key using a P-256 curve. This curve is +// roughly equivalent to an RSA 3072-bit key but requires less bits to achieve +// the equivalent cryptographic strength. Additionally, ECDSA is FIPS 140-2 +// compliant. +func generateKey() (*ecdsa.PrivateKey, error) { + return ecdsa.GenerateKey(elliptic.P256(), rand.Reader) +} + +// generateSerialNumber returns a random 128-bit integer. 
+func generateSerialNumber() (*big.Int, error) { + return rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128)) +} + +func generateLeafCertificate( + signer *x509.Certificate, signerPrivate *ecdsa.PrivateKey, + signeePublic *ecdsa.PublicKey, serialNumber *big.Int, + commonName string, dnsNames []string, +) (*x509.Certificate, error) { + const leafExpiration = time.Hour * 24 * 365 + const leafStartValid = time.Hour * -1 + + now := currentTime() + template := &x509.Certificate{ + BasicConstraintsValid: true, + DNSNames: dnsNames, + KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment, + NotBefore: now.Add(leafStartValid), + NotAfter: now.Add(leafExpiration), + SerialNumber: serialNumber, + SignatureAlgorithm: certificateSignatureAlgorithm, + Subject: pkix.Name{ + CommonName: commonName, + }, + } + + bytes, err := x509.CreateCertificate(rand.Reader, template, signer, + signeePublic, signerPrivate) + + parsed, _ := x509.ParseCertificate(bytes) + return parsed, err +} + +func generateRootCertificate( + privateKey *ecdsa.PrivateKey, serialNumber *big.Int, +) (*x509.Certificate, error) { + const rootCommonName = "postgres-operator-ca" + const rootExpiration = time.Hour * 24 * 365 * 10 + const rootStartValid = time.Hour * -1 + + now := currentTime() + template := &x509.Certificate{ + BasicConstraintsValid: true, + IsCA: true, + KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageCRLSign, + MaxPathLenZero: true, // there are no intermediate certificates + NotBefore: now.Add(rootStartValid), + NotAfter: now.Add(rootExpiration), + SerialNumber: serialNumber, + SignatureAlgorithm: certificateSignatureAlgorithm, + Subject: pkix.Name{ + CommonName: rootCommonName, + }, + } + + // A root certificate is self-signed, so pass in the template twice. + bytes, err := x509.CreateCertificate(rand.Reader, template, template, + privateKey.Public(), privateKey) + + parsed, _ := x509.ParseCertificate(bytes) + return parsed, err +} diff --git a/internal/pki/doc.go b/internal/pki/doc.go new file mode 100644 index 0000000000..71f8c0a1bc --- /dev/null +++ b/internal/pki/doc.go @@ -0,0 +1,13 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +// Package pki provides types and functions to support the public key +// infrastructure of the Postgres Operator. It enforces a two layer system +// of certificate authorities and certificates. +// +// NewRootCertificateAuthority() creates a new root CA. +// GenerateLeafCertificate() creates a new leaf certificate. +// +// Certificate and PrivateKey are primitives that can be marshaled. +package pki diff --git a/internal/pki/encoding.go b/internal/pki/encoding.go new file mode 100644 index 0000000000..2d2cd851e3 --- /dev/null +++ b/internal/pki/encoding.go @@ -0,0 +1,95 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pki + +import ( + "crypto/ecdsa" + "crypto/x509" + "encoding" + "encoding/pem" + "fmt" +) + +const ( + // pemLabelCertificate is the textual encoding label for an X.509 certificate + // according to RFC 7468. See https://tools.ietf.org/html/rfc7468. + pemLabelCertificate = "CERTIFICATE" + + // pemLabelECDSAKey is the textual encoding label for an elliptic curve private key + // according to RFC 5915. See https://tools.ietf.org/html/rfc5915. 
+ pemLabelECDSAKey = "EC PRIVATE KEY" +) + +var ( + _ encoding.TextMarshaler = Certificate{} + _ encoding.TextMarshaler = (*Certificate)(nil) + _ encoding.TextUnmarshaler = (*Certificate)(nil) +) + +// MarshalText returns a PEM encoding of c that OpenSSL understands. +func (c Certificate) MarshalText() ([]byte, error) { + if c.x509 == nil || len(c.x509.Raw) == 0 { + _, err := x509.ParseCertificate(nil) + return nil, err + } + + return pem.EncodeToMemory(&pem.Block{ + Type: pemLabelCertificate, + Bytes: c.x509.Raw, + }), nil +} + +// UnmarshalText populates c from its PEM encoding. +func (c *Certificate) UnmarshalText(data []byte) error { + block, _ := pem.Decode(data) + + if block == nil || block.Type != pemLabelCertificate { + return fmt.Errorf("not a PEM-encoded certificate") + } + + parsed, err := x509.ParseCertificate(block.Bytes) + if err == nil { + c.x509 = parsed + } + return err +} + +var ( + _ encoding.TextMarshaler = PrivateKey{} + _ encoding.TextMarshaler = (*PrivateKey)(nil) + _ encoding.TextUnmarshaler = (*PrivateKey)(nil) +) + +// MarshalText returns a PEM encoding of k that OpenSSL understands. +func (k PrivateKey) MarshalText() ([]byte, error) { + if k.ecdsa == nil { + k.ecdsa = new(ecdsa.PrivateKey) + } + + der, err := x509.MarshalECPrivateKey(k.ecdsa) + if err != nil { + return nil, err + } + + return pem.EncodeToMemory(&pem.Block{ + Type: pemLabelECDSAKey, + Bytes: der, + }), nil +} + +// UnmarshalText populates k from its PEM encoding. +func (k *PrivateKey) UnmarshalText(data []byte) error { + block, _ := pem.Decode(data) + + if block == nil || block.Type != pemLabelECDSAKey { + return fmt.Errorf("not a PEM-encoded private key") + } + + key, err := x509.ParseECPrivateKey(block.Bytes) + if err == nil { + k.ecdsa = key + } + return err +} diff --git a/internal/pki/encoding_test.go b/internal/pki/encoding_test.go new file mode 100644 index 0000000000..cdf7c0de5a --- /dev/null +++ b/internal/pki/encoding_test.go @@ -0,0 +1,183 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pki + +import ( + "bytes" + "os" + "os/exec" + "path/filepath" + "strings" + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/testing/require" +) + +func TestCertificateTextMarshaling(t *testing.T) { + t.Run("Zero", func(t *testing.T) { + // Zero cannot marshal. + _, err := Certificate{}.MarshalText() + assert.ErrorContains(t, err, "malformed") + + // Empty cannot unmarshal. + var sink Certificate + assert.ErrorContains(t, sink.UnmarshalText(nil), "PEM-encoded") + assert.ErrorContains(t, sink.UnmarshalText([]byte{}), "PEM-encoded") + }) + + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + cert := root.Certificate + txt, err := cert.MarshalText() + assert.NilError(t, err) + assert.Assert(t, bytes.HasPrefix(txt, []byte("-----BEGIN CERTIFICATE-----\n")), "got %q", txt) + assert.Assert(t, bytes.HasSuffix(txt, []byte("\n-----END CERTIFICATE-----\n")), "got %q", txt) + + t.Run("RoundTrip", func(t *testing.T) { + var sink Certificate + assert.NilError(t, sink.UnmarshalText(txt)) + assert.DeepEqual(t, cert, sink) + }) + + t.Run("Bundle", func(t *testing.T) { + other, _ := NewRootCertificateAuthority() + otherText, err := other.Certificate.MarshalText() + assert.NilError(t, err) + + bundle := bytes.Join([][]byte{txt, otherText}, nil) + + // Only the first certificate of a bundle is parsed. 
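// That behavior comes from encoding/pem: pem.Decode returns only the first
// PEM block it finds along with the remaining bytes, and UnmarshalText
// discards the remainder. A minimal sketch of that underlying call
// (variable names here are illustrative):
//
//	first, rest := pem.Decode(bundle) // first decodes to cert
//	_ = rest                          // the second certificate is ignored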
+ var sink Certificate + assert.NilError(t, sink.UnmarshalText(bundle)) + assert.DeepEqual(t, cert, sink) + }) + + t.Run("EncodedEmpty", func(t *testing.T) { + txt := []byte("-----BEGIN CERTIFICATE-----\n\n-----END CERTIFICATE-----\n") + + var sink Certificate + assert.ErrorContains(t, sink.UnmarshalText(txt), "malformed") + }) + + t.Run("EncodedGarbage", func(t *testing.T) { + txt := []byte("-----BEGIN CERTIFICATE-----\nasdfasdf\n-----END CERTIFICATE-----\n") + + var sink Certificate + assert.ErrorContains(t, sink.UnmarshalText(txt), "malformed") + }) + + t.Run("ReadByOpenSSL", func(t *testing.T) { + openssl := require.OpenSSL(t) + dir := t.TempDir() + + certFile := filepath.Join(dir, "cert.pem") + certBytes, err := cert.MarshalText() + assert.NilError(t, err) + assert.NilError(t, os.WriteFile(certFile, certBytes, 0o600)) + + // The "openssl x509" command parses X.509 certificates. + cmd := exec.Command(openssl, "x509", + "-in", certFile, "-inform", "PEM", "-noout", "-text") + + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + }) +} + +func TestPrivateKeyTextMarshaling(t *testing.T) { + t.Run("Zero", func(t *testing.T) { + // Zero cannot marshal. + _, err := PrivateKey{}.MarshalText() + assert.ErrorContains(t, err, "unknown") + + // Empty cannot unmarshal. + var sink PrivateKey + assert.ErrorContains(t, sink.UnmarshalText(nil), "PEM-encoded") + assert.ErrorContains(t, sink.UnmarshalText([]byte{}), "PEM-encoded") + }) + + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + key := root.PrivateKey + txt, err := key.MarshalText() + assert.NilError(t, err) + assert.Assert(t, bytes.HasPrefix(txt, []byte("-----BEGIN EC PRIVATE KEY-----\n")), "got %q", txt) + assert.Assert(t, bytes.HasSuffix(txt, []byte("\n-----END EC PRIVATE KEY-----\n")), "got %q", txt) + + t.Run("RoundTrip", func(t *testing.T) { + var sink PrivateKey + assert.NilError(t, sink.UnmarshalText(txt)) + assert.DeepEqual(t, key, sink) + }) + + t.Run("Bundle", func(t *testing.T) { + other, _ := NewRootCertificateAuthority() + otherText, err := other.PrivateKey.MarshalText() + assert.NilError(t, err) + + bundle := bytes.Join([][]byte{txt, otherText}, nil) + + // Only the first key of a bundle is parsed. + var sink PrivateKey + assert.NilError(t, sink.UnmarshalText(bundle)) + assert.DeepEqual(t, key, sink) + }) + + t.Run("EncodedEmpty", func(t *testing.T) { + txt := []byte("-----BEGIN EC PRIVATE KEY-----\n\n-----END EC PRIVATE KEY-----\n") + + var sink PrivateKey + assert.ErrorContains(t, sink.UnmarshalText(txt), "asn1") + }) + + t.Run("EncodedGarbage", func(t *testing.T) { + txt := []byte("-----BEGIN EC PRIVATE KEY-----\nasdfasdf\n-----END EC PRIVATE KEY-----\n") + + var sink PrivateKey + assert.ErrorContains(t, sink.UnmarshalText(txt), "asn1") + }) + + t.Run("ReadByOpenSSL", func(t *testing.T) { + openssl := require.OpenSSL(t) + dir := t.TempDir() + + keyFile := filepath.Join(dir, "key.pem") + keyBytes, err := key.MarshalText() + assert.NilError(t, err) + assert.NilError(t, os.WriteFile(keyFile, keyBytes, 0o600)) + + // The "openssl pkey" command processes public and private keys. 
+ cmd := exec.Command(openssl, "pkey", + "-in", keyFile, "-inform", "PEM", "-noout", "-text") + + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + + assert.Assert(t, + bytes.Contains(output, []byte("Private-Key:")), + "expected valid private key, got:\n%s", output) + + t.Run("Check", func(t *testing.T) { + output, _ := exec.Command(openssl, "pkey", "-help").CombinedOutput() + if !strings.Contains(string(output), "-check") { + t.Skip(`requires "-check" flag`) + } + + cmd := exec.Command(openssl, "pkey", + "-check", "-in", keyFile, "-inform", "PEM", "-noout", "-text") + + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + + assert.Assert(t, + bytes.Contains(output, []byte("is valid")), + "expected valid private key, got:\n%s", output) + }) + }) +} diff --git a/internal/pki/pki.go b/internal/pki/pki.go new file mode 100644 index 0000000000..7048810654 --- /dev/null +++ b/internal/pki/pki.go @@ -0,0 +1,220 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pki + +import ( + "crypto/ecdsa" + "crypto/x509" + "math/big" + "time" +) + +const renewalRatio = 3 + +// Certificate represents an X.509 certificate that conforms to the Internet +// PKI Profile, RFC 5280. +type Certificate struct{ x509 *x509.Certificate } + +// PrivateKey represents the private key of a Certificate. +type PrivateKey struct{ ecdsa *ecdsa.PrivateKey } + +// Equal reports whether c and other have the same value. +func (c Certificate) Equal(other Certificate) bool { + return c.x509.Equal(other.x509) +} + +// CommonName returns a copy of the certificate common name (ASN.1 OID 2.5.4.3). +func (c Certificate) CommonName() string { + if c.x509 == nil { + return "" + } + return c.x509.Subject.CommonName +} + +// DNSNames returns a copy of the certificate subject alternative names +// (ASN.1 OID 2.5.29.17) that are DNS names. +func (c Certificate) DNSNames() []string { + if c.x509 == nil || len(c.x509.DNSNames) == 0 { + return nil + } + return append([]string{}, c.x509.DNSNames...) +} + +// hasSubject checks that c has these values in its subject. +func (c Certificate) hasSubject(commonName string, dnsNames []string) bool { + ok := c.x509 != nil && + c.x509.Subject.CommonName == commonName && + len(c.x509.DNSNames) == len(dnsNames) + + for i := range dnsNames { + ok = ok && c.x509.DNSNames[i] == dnsNames[i] + } + + return ok +} + +// Equal reports whether k and other have the same value. +func (k PrivateKey) Equal(other PrivateKey) bool { + if k.ecdsa == nil || other.ecdsa == nil { + return k.ecdsa == other.ecdsa + } + return k.ecdsa.Equal(other.ecdsa) +} + +// LeafCertificate is a certificate and private key pair that can be validated +// by RootCertificateAuthority. +type LeafCertificate struct { + Certificate Certificate + PrivateKey PrivateKey +} + +// RootCertificateAuthority is a certificate and private key pair that can +// generate other certificates. +type RootCertificateAuthority struct { + Certificate Certificate + PrivateKey PrivateKey +} + +// NewRootCertificateAuthority generates a new key and self-signed certificate +// for issuing other certificates. 
+func NewRootCertificateAuthority() (*RootCertificateAuthority, error) { + var root RootCertificateAuthority + var serial *big.Int + + key, err := generateKey() + if err == nil { + serial, err = generateSerialNumber() + } + if err == nil { + root.PrivateKey.ecdsa = key + root.Certificate.x509, err = generateRootCertificate(key, serial) + } + + return &root, err +} + +// RootIsValid checks if root is valid according to this package's policies. +func RootIsValid(root *RootCertificateAuthority) bool { + if root == nil || root.Certificate.x509 == nil { + return false + } + + trusted := x509.NewCertPool() + trusted.AddCert(root.Certificate.x509) + + // Verify the certificate expiration, basic constraints, key usages, and + // critical extensions. Trust the certificate as an authority so it is not + // compared to system roots or sent to the platform certificate verifier. + _, err := root.Certificate.x509.Verify(x509.VerifyOptions{ + Roots: trusted, + }) + + // Its expiration, key usages, and critical extensions are good. + ok := err == nil + + // It is an authority with the Subject Key Identifier extension. + // The "crypto/x509" package adds the extension automatically since Go 1.15. + // - https://tools.ietf.org/html/rfc5280#section-4.2.1.2 + // - https://go.dev/doc/go1.15#crypto/x509 + ok = ok && + root.Certificate.x509.BasicConstraintsValid && + root.Certificate.x509.IsCA && + len(root.Certificate.x509.SubjectKeyId) > 0 + + // It is signed by this private key. + ok = ok && + root.PrivateKey.ecdsa != nil && + root.PrivateKey.ecdsa.PublicKey.Equal(root.Certificate.x509.PublicKey) + + return ok +} + +// GenerateLeafCertificate generates a new key and certificate signed by root. +func (root *RootCertificateAuthority) GenerateLeafCertificate( + commonName string, dnsNames []string, +) (*LeafCertificate, error) { + var leaf LeafCertificate + var serial *big.Int + + key, err := generateKey() + if err == nil { + serial, err = generateSerialNumber() + } + if err == nil { + leaf.PrivateKey.ecdsa = key + leaf.Certificate.x509, err = generateLeafCertificate( + root.Certificate.x509, root.PrivateKey.ecdsa, &key.PublicKey, serial, + commonName, dnsNames) + } + + return &leaf, err +} + +// leafIsValid checks if leaf is valid according to this package's policies and +// is signed by root. +func (root *RootCertificateAuthority) leafIsValid(leaf *LeafCertificate) bool { + if root == nil || root.Certificate.x509 == nil { + return false + } + if leaf == nil || leaf.Certificate.x509 == nil { + return false + } + + trusted := x509.NewCertPool() + trusted.AddCert(root.Certificate.x509) + + // Go 1.10 enforces name constraints for all names in the certificate. + // Go 1.15 does not enforce name constraints on the CommonName field. + // - https://go.dev/doc/go1.10#crypto/x509 + // - https://go.dev/doc/go1.15#commonname + _, err := leaf.Certificate.x509.Verify(x509.VerifyOptions{ + Roots: trusted, + }) + + // Its expiration, name constraints, key usages, and critical extensions are good. + ok := err == nil + + // It is not an authority. + ok = ok && + leaf.Certificate.x509.BasicConstraintsValid && + !leaf.Certificate.x509.IsCA + + // It is signed by this private key. 
+ ok = ok && + leaf.PrivateKey.ecdsa != nil && + leaf.PrivateKey.ecdsa.PublicKey.Equal(leaf.Certificate.x509.PublicKey) + + // It is not yet past the "renewal by" time, + // as defined by the before and after times of the certificate's expiration + // and the default ratio + ok = ok && isBeforeRenewalTime(leaf.Certificate.x509.NotBefore, + leaf.Certificate.x509.NotAfter) + + return ok +} + +// isBeforeRenewalTime checks if the result of `currentTime` +// is after the default renewal time of +// 1/3rds before the certificate's expiry +func isBeforeRenewalTime(before, after time.Time) bool { + renewalDuration := after.Sub(before) / renewalRatio + renewalTime := after.Add(-1 * renewalDuration) + return currentTime().Before(renewalTime) +} + +// RegenerateLeafWhenNecessary returns leaf when it is valid according to this +// package's policies, signed by root, and has commonName and dnsNames in its +// subject. Otherwise, it returns a new key and certificate signed by root. +func (root *RootCertificateAuthority) RegenerateLeafWhenNecessary( + leaf *LeafCertificate, commonName string, dnsNames []string, +) (*LeafCertificate, error) { + ok := root.leafIsValid(leaf) && + leaf.Certificate.hasSubject(commonName, dnsNames) + + if ok { + return leaf, nil + } + return root.GenerateLeafCertificate(commonName, dnsNames) +} diff --git a/internal/pki/pki_test.go b/internal/pki/pki_test.go new file mode 100644 index 0000000000..cd13896450 --- /dev/null +++ b/internal/pki/pki_test.go @@ -0,0 +1,522 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package pki + +import ( + "crypto/ecdsa" + "crypto/x509" + "os" + "os/exec" + "path/filepath" + "strings" + "testing" + "time" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/testing/require" +) + +type StringSet map[string]struct{} + +func (s StringSet) Has(item string) bool { _, ok := s[item]; return ok } +func (s StringSet) Insert(item string) { s[item] = struct{}{} } + +func TestCertificateCommonName(t *testing.T) { + zero := Certificate{} + assert.Assert(t, zero.CommonName() == "") +} + +func TestCertificateDNSNames(t *testing.T) { + zero := Certificate{} + assert.Assert(t, zero.DNSNames() == nil) +} + +func TestCertificateHasSubject(t *testing.T) { + zero := Certificate{} + + // The zero value has no subject. + for _, cn := range []string{"", "any"} { + for _, dns := range [][]string{nil, {}, {"any"}} { + assert.Assert(t, !zero.hasSubject(cn, dns), "for (%q, %q)", cn, dns) + } + } +} + +func TestCertificateEqual(t *testing.T) { + zero := Certificate{} + assert.Assert(t, zero.Equal(zero)) + + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + assert.Assert(t, root.Certificate.Equal(root.Certificate)) + + assert.Assert(t, !root.Certificate.Equal(zero)) + assert.Assert(t, !zero.Equal(root.Certificate)) + + other, err := NewRootCertificateAuthority() + assert.NilError(t, err) + assert.Assert(t, !root.Certificate.Equal(other.Certificate)) + + // DeepEqual calls the Equal method, so no cmp.Option are necessary. 
+ assert.DeepEqual(t, zero, zero) + assert.DeepEqual(t, root.Certificate, root.Certificate) +} + +func TestPrivateKeyEqual(t *testing.T) { + zero := PrivateKey{} + assert.Assert(t, zero.Equal(zero)) + + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + assert.Assert(t, root.PrivateKey.Equal(root.PrivateKey)) + + assert.Assert(t, !root.PrivateKey.Equal(zero)) + assert.Assert(t, !zero.Equal(root.PrivateKey)) + + other, err := NewRootCertificateAuthority() + assert.NilError(t, err) + assert.Assert(t, !root.PrivateKey.Equal(other.PrivateKey)) + + // DeepEqual calls the Equal method, so no cmp.Option are necessary. + assert.DeepEqual(t, zero, zero) + assert.DeepEqual(t, root.PrivateKey, root.PrivateKey) +} + +func TestRootCertificateAuthority(t *testing.T) { + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + assert.Assert(t, root != nil) + + cert := root.Certificate.x509 + assert.Assert(t, RootIsValid(root), "got %#v", cert) + + assert.DeepEqual(t, cert.Issuer, cert.Subject) // self-signed + assert.Assert(t, cert.BasicConstraintsValid && cert.IsCA) // authority + assert.Assert(t, time.Now().After(cert.NotBefore), "early, got %v", cert.NotBefore) + assert.Assert(t, time.Now().Before(cert.NotAfter), "expired, got %v", cert.NotAfter) + + assert.Equal(t, cert.MaxPathLen, 0) + assert.Equal(t, cert.PublicKeyAlgorithm, x509.ECDSA) + assert.Equal(t, cert.SignatureAlgorithm, x509.ECDSAWithSHA384) + assert.Equal(t, cert.Subject.CommonName, "postgres-operator-ca") + assert.Equal(t, cert.KeyUsage, x509.KeyUsageCertSign|x509.KeyUsageCRLSign) + + assert.Assert(t, cert.DNSNames == nil) + assert.Assert(t, cert.EmailAddresses == nil) + assert.Assert(t, cert.IPAddresses == nil) + assert.Assert(t, cert.URIs == nil) + + // The Subject Key Identifier extension is necessary on CAs. + // The "crypto/x509" package adds it automatically since Go 1.15. + // - https://tools.ietf.org/html/rfc5280#section-4.2.1.2 + // - https://go.dev/doc/go1.15#crypto/x509 + assert.Assert(t, len(cert.SubjectKeyId) > 0) + + // The Subject field must be populated on CAs. + // - https://tools.ietf.org/html/rfc5280#section-4.1.2.6 + assert.Assert(t, len(cert.Subject.Names) > 0) + + root2, err := NewRootCertificateAuthority() + assert.NilError(t, err) + assert.Assert(t, root2 != nil) + + cert2 := root2.Certificate.x509 + assert.Assert(t, RootIsValid(root2), "got %#v", cert2) + + assert.Assert(t, cert2.SerialNumber.Cmp(cert.SerialNumber) != 0, "new serial") + assert.Assert(t, !cert2.PublicKey.(*ecdsa.PublicKey).Equal(cert.PublicKey), "new key") + + // The root certificate cannot be verified independently by OpenSSL because + // it is self-signed. OpenSSL does perform some checks when it is part of + // a proper chain in [TestLeafCertificate]. 
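// For context, a hypothetical caller outside these tests would typically pair
// the root with leaf certificates along these lines (the common name and DNS
// name below are illustrative only, and errors are elided):
//
//	root, _ := NewRootCertificateAuthority()
//	leaf, _ := root.GenerateLeafCertificate("instance", []string{"instance.ns.svc"})
//
//	// Later, keep the existing leaf or replace it when it nears expiration
//	// or its subject changes.
//	leaf, _ = root.RegenerateLeafWhenNecessary(leaf, "instance", []string{"instance.ns.svc"})
//
//	certPEM, _ := leaf.Certificate.MarshalText()
//	keyPEM, _ := leaf.PrivateKey.MarshalText()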
+} + +func TestRootIsInvalid(t *testing.T) { + t.Run("NoCertificate", func(t *testing.T) { + assert.Assert(t, !RootIsValid(nil)) + assert.Assert(t, !RootIsValid(&RootCertificateAuthority{})) + + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + root.Certificate = Certificate{} + assert.Assert(t, !RootIsValid(root)) + }) + + t.Run("NoPrivateKey", func(t *testing.T) { + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + root.PrivateKey = PrivateKey{} + assert.Assert(t, !RootIsValid(root)) + }) + + t.Run("WrongPrivateKey", func(t *testing.T) { + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + other, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + root.PrivateKey = other.PrivateKey + assert.Assert(t, !RootIsValid(root)) + }) + + t.Run("NotAuthority", func(t *testing.T) { + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + leaf, err := root.GenerateLeafCertificate("", nil) + assert.NilError(t, err) + + assert.Assert(t, !RootIsValid((*RootCertificateAuthority)(leaf))) + }) + + t.Run("TooEarly", func(t *testing.T) { + original := currentTime + t.Cleanup(func() { currentTime = original }) + + currentTime = func() time.Time { + return time.Now().Add(time.Hour * 24) // tomorrow + } + + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + assert.Assert(t, !RootIsValid(root)) + }) + + t.Run("Expired", func(t *testing.T) { + original := currentTime + t.Cleanup(func() { currentTime = original }) + + currentTime = func() time.Time { + return time.Date(2010, time.January, 1, 0, 0, 0, 0, time.Local) + } + + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + assert.Assert(t, !RootIsValid(root)) + }) +} + +func TestLeafCertificate(t *testing.T) { + serials := StringSet{} + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + for _, tt := range []struct { + test string + commonName string + dnsNames []string + }{ + { + test: "OnlyCommonName", commonName: "some-cn", + }, + { + test: "OnlyDNSNames", dnsNames: []string{"local-name", "sub.domain"}, + }, + } { + t.Run(tt.test, func(t *testing.T) { + leaf, err := root.GenerateLeafCertificate(tt.commonName, tt.dnsNames) + assert.NilError(t, err) + assert.Assert(t, leaf != nil) + + cert := leaf.Certificate.x509 + assert.Assert(t, root.leafIsValid(leaf), "got %#v", cert) + + number := cert.SerialNumber.String() + assert.Assert(t, !serials.Has(number)) + serials.Insert(number) + + assert.Equal(t, cert.Issuer.CommonName, "postgres-operator-ca") + assert.Assert(t, cert.BasicConstraintsValid && !cert.IsCA) + assert.Assert(t, time.Now().After(cert.NotBefore), "early, got %v", cert.NotBefore) + assert.Assert(t, time.Now().Before(cert.NotAfter), "expired, got %v", cert.NotAfter) + + assert.Equal(t, cert.PublicKeyAlgorithm, x509.ECDSA) + assert.Equal(t, cert.SignatureAlgorithm, x509.ECDSAWithSHA384) + assert.Equal(t, cert.KeyUsage, x509.KeyUsageDigitalSignature|x509.KeyUsageKeyEncipherment) + + assert.Equal(t, cert.Subject.CommonName, tt.commonName) + assert.DeepEqual(t, cert.DNSNames, tt.dnsNames) + assert.Assert(t, cert.EmailAddresses == nil) + assert.Assert(t, cert.IPAddresses == nil) + assert.Assert(t, cert.URIs == nil) + + // CAs must include the Authority Key Identifier on new certificates. + // The "crypto/x509" package adds it automatically since Go 1.15. 
+ // - https://tools.ietf.org/html/rfc5280#section-4.2.1.1 + // - https://go.dev/doc/go1.15#crypto/x509 + assert.DeepEqual(t, + leaf.Certificate.x509.AuthorityKeyId, + root.Certificate.x509.SubjectKeyId) + + // CAs must include their entire Subject on new certificates. + // - https://tools.ietf.org/html/rfc5280#section-4.1.2.6 + assert.DeepEqual(t, + leaf.Certificate.x509.Issuer, + root.Certificate.x509.Subject) + + t.Run("OpenSSLVerify", func(t *testing.T) { + openssl := require.OpenSSL(t) + + t.Run("Basic", func(t *testing.T) { + basicOpenSSLVerify(t, openssl, root.Certificate, leaf.Certificate) + }) + + t.Run("Strict", func(t *testing.T) { + strictOpenSSLVerify(t, openssl, root.Certificate, leaf.Certificate) + }) + }) + + t.Run("Subject", func(t *testing.T) { + assert.Equal(t, + leaf.Certificate.CommonName(), tt.commonName) + assert.DeepEqual(t, + leaf.Certificate.DNSNames(), tt.dnsNames) + assert.Assert(t, + leaf.Certificate.hasSubject(tt.commonName, tt.dnsNames)) + + for _, other := range []struct { + test string + commonName string + dnsNames []string + }{ + { + test: "DifferentCommonName", + commonName: "other", + dnsNames: tt.dnsNames, + }, + { + test: "DifferentDNSNames", + commonName: tt.commonName, + dnsNames: []string{"other"}, + }, + { + test: "DNSNameSubset", + commonName: tt.commonName, + dnsNames: []string{"local-name"}, + }, + } { + assert.Assert(t, + !leaf.Certificate.hasSubject(other.commonName, other.dnsNames)) + } + }) + }) + } +} + +func TestLeafIsInvalid(t *testing.T) { + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + t.Run("ZeroRoot", func(t *testing.T) { + zero := RootCertificateAuthority{} + assert.Assert(t, !zero.leafIsValid(nil)) + + leaf, err := root.GenerateLeafCertificate("", nil) + assert.NilError(t, err) + + assert.Assert(t, !zero.leafIsValid(leaf)) + }) + + t.Run("NoCertificate", func(t *testing.T) { + assert.Assert(t, !root.leafIsValid(nil)) + assert.Assert(t, !root.leafIsValid(&LeafCertificate{})) + + leaf, err := root.GenerateLeafCertificate("", nil) + assert.NilError(t, err) + + leaf.Certificate = Certificate{} + assert.Assert(t, !root.leafIsValid(leaf)) + }) + + t.Run("NoPrivateKey", func(t *testing.T) { + leaf, err := root.GenerateLeafCertificate("", nil) + assert.NilError(t, err) + + leaf.PrivateKey = PrivateKey{} + assert.Assert(t, !root.leafIsValid(leaf)) + }) + + t.Run("WrongPrivateKey", func(t *testing.T) { + leaf, err := root.GenerateLeafCertificate("", nil) + assert.NilError(t, err) + + other, err := root.GenerateLeafCertificate("", nil) + assert.NilError(t, err) + + leaf.PrivateKey = other.PrivateKey + assert.Assert(t, !root.leafIsValid(leaf)) + }) + + t.Run("IsAuthority", func(t *testing.T) { + assert.Assert(t, !root.leafIsValid((*LeafCertificate)(root))) + }) + + t.Run("TooEarly", func(t *testing.T) { + original := currentTime + t.Cleanup(func() { currentTime = original }) + + currentTime = func() time.Time { + return time.Now().Add(time.Hour * 24) // tomorrow + } + + leaf, err := root.GenerateLeafCertificate("", nil) + assert.NilError(t, err) + + assert.Assert(t, !root.leafIsValid(leaf)) + }) + + t.Run("PastRenewalTime", func(t *testing.T) { + // Generate a cert with the default valid times, + // e.g., 1 hour before now until 1 year from now + leaf, err := root.GenerateLeafCertificate("", nil) + assert.NilError(t, err) + + // set the time now to be over 2/3rds of a year for checking + original := currentTime + t.Cleanup(func() { currentTime = original }) + + currentTime = func() time.Time { + return 
time.Now().Add(time.Hour * 24 * 330) + } + + assert.Assert(t, !root.leafIsValid(leaf)) + }) + + t.Run("Expired", func(t *testing.T) { + original := currentTime + t.Cleanup(func() { currentTime = original }) + + currentTime = func() time.Time { + return time.Date(2010, time.January, 1, 0, 0, 0, 0, time.Local) + } + + leaf, err := root.GenerateLeafCertificate("", nil) + assert.NilError(t, err) + + assert.Assert(t, !root.leafIsValid(leaf)) + }) +} + +func TestIsBeforeRenewalTime(t *testing.T) { + oneHourAgo := time.Now().Add(-1 * time.Hour) + twoHoursInTheFuture := time.Now().Add(2 * time.Hour) + + assert.Assert(t, isBeforeRenewalTime(oneHourAgo, twoHoursInTheFuture)) + + sixHoursAgo := time.Now().Add(-6 * time.Hour) + assert.Assert(t, !isBeforeRenewalTime(sixHoursAgo, twoHoursInTheFuture)) +} + +func TestRegenerateLeaf(t *testing.T) { + root, err := NewRootCertificateAuthority() + assert.NilError(t, err) + + before, err := root.GenerateLeafCertificate("before", nil) + assert.NilError(t, err) + + // Leaf is the same when the subject is the same. + same, err := root.RegenerateLeafWhenNecessary(before, "before", nil) + assert.NilError(t, err) + assert.DeepEqual(t, same, before) + + after, err := root.RegenerateLeafWhenNecessary(before, "after", nil) + assert.NilError(t, err) + assert.DeepEqual(t, same, before) // Argument does not change. + + assert.Assert(t, after.Certificate.hasSubject("after", nil)) + assert.Assert(t, !after.Certificate.Equal(before.Certificate)) +} + +func basicOpenSSLVerify(t *testing.T, openssl string, root, leaf Certificate) { + verify := func(t testing.TB, args ...string) { + t.Helper() + // #nosec G204 -- args from this test + cmd := exec.Command(openssl, append([]string{"verify"}, args...)...) + + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + } + + dir := t.TempDir() + + rootFile := filepath.Join(dir, "root.crt") + rootBytes, err := root.MarshalText() + assert.NilError(t, err) + assert.NilError(t, os.WriteFile(rootFile, rootBytes, 0o600)) + + // The root certificate cannot be verified independently because it is self-signed. + // It is checked below by being the specified CA. + + leafFile := filepath.Join(dir, "leaf.crt") + leafBytes, err := leaf.MarshalText() + assert.NilError(t, err) + assert.NilError(t, os.WriteFile(leafFile, leafBytes, 0o600)) + + // Older versions of the "openssl verify" command cannot properly verify + // a certificate chain that contains intermediates. When the only flag + // available is "-CAfile", intermediates must be bundled there and are + // *implicitly trusted*. The [strictOpenSSLVerify] function is able to + // verify the chain properly. + // - https://mail.python.org/pipermail/cryptography-dev/2016-August/000676.html + + // TODO(cbandy): When we generate intermediate certificates, verify them + // independently then bundle them with the root to verify the leaf. 
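// Each verify() call below is equivalent to running, for example:
//
//	openssl verify -CAfile root.crt leaf.crt
//	openssl verify -CAfile root.crt -purpose sslclient leaf.crt
//	openssl verify -CAfile root.crt -purpose sslserver leaf.crt
//
// where root.crt and leaf.crt are the files written above.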
+ + verify(t, "-CAfile", rootFile, leafFile) + verify(t, "-CAfile", rootFile, "-purpose", "sslclient", leafFile) + verify(t, "-CAfile", rootFile, "-purpose", "sslserver", leafFile) +} + +func strictOpenSSLVerify(t *testing.T, openssl string, root, leaf Certificate) { + output, _ := exec.Command(openssl, "verify", "-help").CombinedOutput() + if !strings.Contains(string(output), "-x509_strict") { + t.Skip(`requires "-x509_strict" flag`) + } + if !strings.Contains(string(output), "-no-CAfile") { + t.Skip(`requires a flag to ignore system certificates`) + } + + verify := func(t testing.TB, args ...string) { + t.Helper() + // #nosec G204 -- args from this test + cmd := exec.Command(openssl, append([]string{"verify", + // Do not use the default trusted CAs. + "-no-CAfile", "-no-CApath", + // Disable "non-compliant workarounds for broken certificates". + "-x509_strict", + }, args...)...) + + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + } + + dir := t.TempDir() + + rootFile := filepath.Join(dir, "root.crt") + rootBytes, err := root.MarshalText() + assert.NilError(t, err) + assert.NilError(t, os.WriteFile(rootFile, rootBytes, 0o600)) + + // The root certificate cannot be verified independently because it is self-signed. + // Some checks are performed when it is a "trusted" certificate below. + + leafFile := filepath.Join(dir, "leaf.crt") + leafBytes, err := leaf.MarshalText() + assert.NilError(t, err) + assert.NilError(t, os.WriteFile(leafFile, leafBytes, 0o600)) + + // TODO(cbandy): When we generate intermediate certificates, verify them + // independently then pass them via "-untrusted" to verify the leaf. + + verify(t, "-trusted", rootFile, leafFile) + verify(t, "-trusted", rootFile, "-purpose", "sslclient", leafFile) + verify(t, "-trusted", rootFile, "-purpose", "sslserver", leafFile) +} diff --git a/internal/postgis/postgis.go b/internal/postgis/postgis.go new file mode 100644 index 0000000000..f54da0dd93 --- /dev/null +++ b/internal/postgis/postgis.go @@ -0,0 +1,42 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgis + +import ( + "context" + "strings" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/postgres" +) + +// EnableInPostgreSQL installs triggers for the following extensions into every database: +// - postgis +// - postgis_topology +// - fuzzystrmatch +// - postgis_tiger_geocoder +func EnableInPostgreSQL(ctx context.Context, exec postgres.Executor) error { + log := logging.FromContext(ctx) + + stdout, stderr, err := exec.ExecInAllDatabases(ctx, + strings.Join([]string{ + // Quiet NOTICE messages from IF NOT EXISTS statements. + // - https://www.postgresql.org/docs/current/runtime-config-client.html + `SET client_min_messages = WARNING;`, + + `CREATE EXTENSION IF NOT EXISTS postgis;`, + `CREATE EXTENSION IF NOT EXISTS postgis_topology;`, + `CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;`, + `CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder;`, + }, "\n"), + map[string]string{ + "ON_ERROR_STOP": "on", // Abort when any one statement fails. + "QUIET": "on", // Do not print successful statements to stdout. 
+ }) + + log.V(1).Info("enabled PostGIS and related extensions", "stdout", stdout, "stderr", stderr) + + return err +} diff --git a/internal/postgis/postgis_test.go b/internal/postgis/postgis_test.go new file mode 100644 index 0000000000..5f604abc90 --- /dev/null +++ b/internal/postgis/postgis_test.go @@ -0,0 +1,42 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgis + +import ( + "context" + "errors" + "io" + "strings" + "testing" + + "gotest.tools/v3/assert" +) + +func TestEnableInPostgreSQL(t *testing.T) { + expected := errors.New("whoops") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + + assert.Assert(t, strings.Contains(strings.Join(command, "\n"), + `SELECT datname FROM pg_catalog.pg_database`, + ), "expected all databases and templates") + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), `SET client_min_messages = WARNING; +CREATE EXTENSION IF NOT EXISTS postgis; +CREATE EXTENSION IF NOT EXISTS postgis_topology; +CREATE EXTENSION IF NOT EXISTS fuzzystrmatch; +CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder;`) + + return expected + } + + ctx := context.Background() + assert.Equal(t, expected, EnableInPostgreSQL(ctx, exec)) +} diff --git a/internal/postgres/config.go b/internal/postgres/config.go new file mode 100644 index 0000000000..ce1acde3fb --- /dev/null +++ b/internal/postgres/config.go @@ -0,0 +1,427 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "context" + "fmt" + "strings" + + corev1 "k8s.io/api/core/v1" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + // bashHalt is a Bash function that prints its arguments to stderr then + // exits with a non-zero status. It uses the exit status of the prior + // command if that was not zero. + bashHalt = `halt() { local rc=$?; >&2 echo "$@"; exit "${rc/#0/1}"; }` + + // bashPermissions is a Bash function that prints the permissions of a file + // or directory and all its parent directories, except the root directory. + bashPermissions = `permissions() {` + + ` while [[ -n "$1" ]]; do set "${1%/*}" "$@"; done; shift;` + + ` stat -Lc '%A %4u %4g %n' "$@";` + + ` }` + + // bashRecreateDirectory is a Bash function that moves the contents of an + // existing directory into a newly created directory of the same name. + bashRecreateDirectory = ` +recreate() ( + local tmp; tmp=$(mktemp -d -p "${1%/*}"); GLOBIGNORE='.:..'; set -x + chmod "$2" "${tmp}"; mv "$1"/* "${tmp}"; rmdir "$1"; mv "${tmp}" "$1" +) +` + + // bashSafeLink is a Bash function that moves an existing file or directory + // and replaces it with a symbolic link. + bashSafeLink = ` +safelink() ( + local desired="$1" name="$2" current + current=$(realpath "${name}") + if [[ "${current}" == "${desired}" ]]; then return; fi + set -x; mv --no-target-directory "${current}" "${desired}" + ln --no-dereference --force --symbolic "${desired}" "${name}" +) +` + + // dataMountPath is where to mount the main data volume. 
+ dataMountPath = "/pgdata" + + // dataMountPath is where to mount the main data volume. + tablespaceMountPath = "/tablespaces" + + // walMountPath is where to mount the optional WAL volume. + walMountPath = "/pgwal" + + // downwardAPIPath is where to mount the downwardAPI volume. + downwardAPIPath = "/etc/database-containerinfo" + + // SocketDirectory is where to bind and connect to UNIX sockets. + SocketDirectory = "/tmp/postgres" + + // ReplicationUser is the PostgreSQL role that will be created by Patroni + // for streaming replication and for `pg_rewind`. + ReplicationUser = "_crunchyrepl" + + // configMountPath is where to mount additional config files + configMountPath = "/etc/postgres" +) + +// ConfigDirectory returns the absolute path to $PGDATA for cluster. +// - https://www.postgresql.org/docs/current/runtime-config-file-locations.html +func ConfigDirectory(cluster *v1beta1.PostgresCluster) string { + return DataDirectory(cluster) +} + +// DataDirectory returns the absolute path to the "data_directory" of cluster. +// - https://www.postgresql.org/docs/current/runtime-config-file-locations.html +func DataDirectory(cluster *v1beta1.PostgresCluster) string { + return fmt.Sprintf("%s/pg%d", dataMountPath, cluster.Spec.PostgresVersion) +} + +// WALDirectory returns the absolute path to the directory where an instance +// stores its WAL files. +// - https://www.postgresql.org/docs/current/wal.html +func WALDirectory( + cluster *v1beta1.PostgresCluster, instance *v1beta1.PostgresInstanceSetSpec, +) string { + return fmt.Sprintf("%s/pg%d_wal", WALStorage(instance), cluster.Spec.PostgresVersion) +} + +// WALStorage returns the absolute path to the disk where an instance stores its +// WAL files. Use [WALDirectory] for the exact directory that Postgres uses. +func WALStorage(instance *v1beta1.PostgresInstanceSetSpec) string { + if instance.WALVolumeClaimSpec != nil { + return walMountPath + } + // When no WAL volume is specified, store WAL files on the main data volume. + return dataMountPath +} + +// Environment returns the environment variables required to invoke PostgreSQL +// utilities. +func Environment(cluster *v1beta1.PostgresCluster) []corev1.EnvVar { + return []corev1.EnvVar{ + // - https://www.postgresql.org/docs/current/reference-server.html + { + Name: "PGDATA", + Value: ConfigDirectory(cluster), + }, + + // - https://www.postgresql.org/docs/current/libpq-envars.html + { + Name: "PGHOST", + Value: SocketDirectory, + }, + { + Name: "PGPORT", + Value: fmt.Sprint(*cluster.Spec.Port), + }, + // Setting the KRB5_CONFIG for kerberos + // - https://web.mit.edu/kerberos/krb5-current/doc/admin/conf_files/krb5_conf.html + { + Name: "KRB5_CONFIG", + Value: configMountPath + "/krb5.conf", + }, + // In testing it was determined that we need to set this env var for the replay cache + // otherwise it defaults to the read-only location `/var/tmp/` + // - https://web.mit.edu/kerberos/krb5-current/doc/basic/rcache_def.html#replay-cache-types + { + Name: "KRB5RCACHEDIR", + Value: "/tmp", + }, + // This allows a custom CA certificate to be mounted for Postgres LDAP + // authentication via spec.config.files. + // - https://wiki.postgresql.org/wiki/LDAP_Authentication_against_AD + // + // When setting the TLS_CACERT for LDAP as an environment variable, 'LDAP' + // must be appended as a prefix. + // - https://www.openldap.org/software/man.cgi?query=ldap.conf + // + // Testing with LDAPTLS_CACERTDIR did not work as expected during testing. 
+ { + Name: "LDAPTLS_CACERT", + Value: configMountPath + "/ldap/ca.crt", + }, + } +} + +// reloadCommand returns an entrypoint that convinces PostgreSQL to reload +// certificate files when they change. The process will appear as name in `ps` +// and `top`. +func reloadCommand(name string) []string { + // Use a Bash loop to periodically check the mtime of the mounted + // certificate volume. When it changes, copy the replication certificate, + // signal PostgreSQL, and print the observed timestamp. + // + // PostgreSQL v10 reads its server certificate files during reload (SIGHUP). + // - https://www.postgresql.org/docs/current/ssl-tcp.html#SSL-SERVER-FILES + // - https://www.postgresql.org/docs/current/app-postgres.html + // + // PostgreSQL reads its replication credentials every time it opens a + // replication connection. It does not need to be signaled when the + // certificate contents change. + // + // The copy is necessary because Kubernetes sets g+r when fsGroup is enabled, + // but PostgreSQL requires client keys to not be readable by other users. + // - https://www.postgresql.org/docs/current/libpq-ssl.html + // - https://issue.k8s.io/57923 + // + // Coreutils `sleep` uses a lot of memory, so the following opens a file + // descriptor and uses the timeout of the builtin `read` to wait. That same + // descriptor gets closed and reopened to use the builtin `[ -nt` to check + // mtimes. + // - https://unix.stackexchange.com/a/407383 + script := fmt.Sprintf(` +# Parameters for curl when managing autogrow annotation. +APISERVER="https://kubernetes.default.svc" +SERVICEACCOUNT="/var/run/secrets/kubernetes.io/serviceaccount" +NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) +TOKEN=$(cat ${SERVICEACCOUNT}/token) +CACERT=${SERVICEACCOUNT}/ca.crt + +declare -r directory=%q +exec {fd}<> <(:||:) +while read -r -t 5 -u "${fd}" ||:; do + # Manage replication certificate. + if [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] && + install -D --mode=0600 -t %q "${directory}"/{%s,%s,%s} && + pkill -HUP --exact --parent=1 postgres + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded certificates dated %%y' "${directory}" + fi + + # Manage autogrow annotation. + # Return size in Mebibytes. + size=$(df --human-readable --block-size=M /pgdata | awk 'FNR == 2 {print $2}') + use=$(df --human-readable /pgdata | awk 'FNR == 2 {print $5}') + sizeInt="${size//M/}" + # Use the sed punctuation class, because the shell will not accept the percent sign in an expansion. + useInt=$(echo $use | sed 's/[[:punct:]]//g') + triggerExpansion="$((useInt > 75))" + if [ $triggerExpansion -eq 1 ]; then + newSize="$(((sizeInt / 2)+sizeInt))" + newSizeMi="${newSize}Mi" + d='[{"op": "add", "path": "/metadata/annotations/suggested-pgdata-pvc-size", "value": "'"$newSizeMi"'"}]' + curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -XPATCH "${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods/${HOSTNAME}?fieldManager=kubectl-annotate" -H "Content-Type: application/json-patch+json" --data "$d" + fi +done +`, + naming.CertMountPath, + naming.ReplicationTmp, + naming.ReplicationCertPath, + naming.ReplicationPrivateKeyPath, + naming.ReplicationCACertPath, + ) + + // Elide the above script from `ps` and `top` by wrapping it in a function + // and calling that. + wrapper := `monitor() {` + script + `}; export -f monitor; exec -a "$0" bash -ceu monitor` + + return []string{"bash", "-ceu", "--", wrapper, name} +} + +// startupCommand returns an entrypoint that prepares the filesystem for +// PostgreSQL. 
+func startupCommand( + ctx context.Context, + cluster *v1beta1.PostgresCluster, instance *v1beta1.PostgresInstanceSetSpec, +) []string { + version := fmt.Sprint(cluster.Spec.PostgresVersion) + walDir := WALDirectory(cluster, instance) + + // If the user requests tablespaces, we want to make sure the directories exist with the + // correct owner and permissions. + tablespaceCmd := "" + if feature.Enabled(ctx, feature.TablespaceVolumes) { + // This command checks if a dir exists and if not, creates it; + // if the dir does exist, then we `recreate` it to make sure the owner is correct; + // if the dir exists with the wrong owner and is not writeable, we error. + // This is the same behavior we use for the main PGDATA directory. + // Note: Postgres requires the tablespace directory to be "an existing, empty directory + // that is owned by the PostgreSQL operating system user." + // - https://www.postgresql.org/docs/current/manage-ag-tablespaces.html + // However, unlike the PGDATA directory, Postgres will _not_ error out + // if the permissions are wrong on the tablespace directory. + // Instead, when a tablespace is created in Postgres, Postgres will `chmod` the + // tablespace directory to match permissions on the PGDATA directory (either 700 or 750). + // Postgres setting the tablespace directory permissions: + // - https://git.postgresql.org/gitweb/?p=postgresql.git;f=src/backend/commands/tablespace.c;hb=REL_14_0#l600 + // Postgres choosing directory permissions: + // - https://git.postgresql.org/gitweb/?p=postgresql.git;f=src/common/file_perm.c;hb=REL_14_0#l27 + // Note: This permission change seems to happen only when the tablespace is created in Postgres. + // If the user manually `chmod`'ed the directory after the creation of the tablespace, Postgres + // would not attempt to change the directory permissions. + // Note: as noted below, we mount the tablespace directory to the mountpoint `/tablespaces/NAME`, + // and so we add the subdirectory `data` in order to set the permissions. + checkInstallRecreateCmd := strings.Join([]string{ + `if [[ ! -e "${tablespace_dir}" || -O "${tablespace_dir}" ]]; then`, + `install --directory --mode=0700 "${tablespace_dir}"`, + `elif [[ -w "${tablespace_dir}" && -g "${tablespace_dir}" ]]; then`, + `recreate "${tablespace_dir}" '0700'`, + `else (halt Permissions!); fi ||`, + `halt "$(permissions "${tablespace_dir}" ||:)"`, + }, "\n") + + for _, tablespace := range instance.TablespaceVolumes { + // The path for tablespaces volumes is /tablespaces/NAME/data + // -- the `data` path is added so that we can arrange the permissions. + tablespaceCmd = tablespaceCmd + "\ntablespace_dir=/tablespaces/" + tablespace.Name + "/data" + "\n" + + checkInstallRecreateCmd + } + } + + pg_rewind_override := "" + if config.FetchKeyCommand(&cluster.Spec) != "" { + // Quoting "EOF" disables parameter substitution during write. + // - https://tldp.org/LDP/abs/html/here-docs.html#EX71C + pg_rewind_override = `cat << "EOF" > /tmp/pg_rewind_tde.sh +#!/bin/sh +pg_rewind -K "$(postgres -C encryption_key_command)" "$@" +EOF +chmod +x /tmp/pg_rewind_tde.sh +` + } + + args := []string{version, walDir, naming.PGBackRestPGDataLogPath} + script := strings.Join([]string{ + `declare -r expected_major_version="$1" pgwal_directory="$2" pgbrLog_directory="$3"`, + + // Function to print the permissions of a file or directory and its parents. + bashPermissions, + + // Function to print a message to stderr then exit non-zero. 
+ bashHalt, + + // Function to log values in a basic structured format. + `results() { printf '::postgres-operator: %s::%s\n' "$@"; }`, + + // Function to change the owner of an existing directory. + strings.TrimSpace(bashRecreateDirectory), + + // Function to change a directory symlink while keeping the directory contents. + strings.TrimSpace(bashSafeLink), + + // Log the effective user ID and all the group IDs. + `echo Initializing ...`, + `results 'uid' "$(id -u ||:)" 'gid' "$(id -G ||:)"`, + + // The pgbackrest spool path should be co-located with wal. If a wal volume exists, symlink the spool-path to it. + `if [[ "${pgwal_directory}" == *"pgwal/"* ]] && [[ ! -d "/pgwal/pgbackrest-spool" ]];then rm -rf "/pgdata/pgbackrest-spool" && mkdir -p "/pgwal/pgbackrest-spool" && ln --force --symbolic "/pgwal/pgbackrest-spool" "/pgdata/pgbackrest-spool";fi`, + // When a pgwal volume is removed, the symlink will be broken; force pgbackrest to recreate spool-path. + `if [[ ! -e "/pgdata/pgbackrest-spool" ]];then rm -rf /pgdata/pgbackrest-spool;fi`, + + // Abort when the PostgreSQL version installed in the image does not + // match the cluster spec. + `results 'postgres path' "$(command -v postgres ||:)"`, + `results 'postgres version' "${postgres_version:=$(postgres --version ||:)}"`, + `[[ "${postgres_version}" =~ ") ${expected_major_version}"($|[^0-9]) ]] ||`, + `halt Expected PostgreSQL version "${expected_major_version}"`, + + // Abort when the configured data directory is not $PGDATA. + // - https://www.postgresql.org/docs/current/runtime-config-file-locations.html + `results 'config directory' "${PGDATA:?}"`, + `postgres_data_directory=$([[ -d "${PGDATA}" ]] && postgres -C data_directory || echo "${PGDATA}")`, + `results 'data directory' "${postgres_data_directory}"`, + `[[ "${postgres_data_directory}" == "${PGDATA}" ]] ||`, + `halt Expected matching config and data directories`, + + // Determine if the data directory has been prepared for bootstrapping the cluster + `bootstrap_dir="${postgres_data_directory}_bootstrap"`, + `[[ -d "${bootstrap_dir}" ]] && results 'bootstrap directory' "${bootstrap_dir}"`, + `[[ -d "${bootstrap_dir}" ]] && postgres_data_directory="${bootstrap_dir}"`, + + // PostgreSQL requires its directory to be writable by only itself. + // Pod "securityContext.fsGroup" sets g+w on directories for *some* + // storage providers. Ensure the current user owns the directory, and + // remove group permissions. + // - https://www.postgresql.org/docs/current/creating-cluster.html + // - https://git.postgresql.org/gitweb/?p=postgresql.git;f=src/backend/postmaster/postmaster.c;hb=REL_10_0#l1522 + // - https://git.postgresql.org/gitweb/?p=postgresql.git;f=src/backend/utils/init/miscinit.c;hb=REL_14_0#l349 + // - https://issue.k8s.io/93802#issuecomment-717646167 + // + // When the directory does not exist, create it with the correct permissions. + // When the directory has the correct owner, set the correct permissions. + `if [[ ! -e "${postgres_data_directory}" || -O "${postgres_data_directory}" ]]; then`, + `install --directory --mode=0700 "${postgres_data_directory}"`, + // + // The directory exists but its owner is wrong. When it is writable, + // the set-group-ID bit indicates that "fsGroup" probably ran on its + // contents making them safe to use. In this case, we can make a new + // directory (owned by this user) and refill it. 
+ `elif [[ -w "${postgres_data_directory}" && -g "${postgres_data_directory}" ]]; then`, + `recreate "${postgres_data_directory}" '0700'`, + // + // The directory exists, its owner is wrong, and it is not writable. + `else (halt Permissions!); fi ||`, + `halt "$(permissions "${postgres_data_directory}" ||:)"`, + + // Create the pgBackRest log directory. + `results 'pgBackRest log directory' "${pgbrLog_directory}"`, + `install --directory --mode=0775 "${pgbrLog_directory}" ||`, + `halt "$(permissions "${pgbrLog_directory}" ||:)"`, + + // Copy replication client certificate files + // from the /pgconf/tls/replication directory to the /tmp/replication directory in order + // to set proper file permissions. This is required because the group permission settings + // applied via the defaultMode option are not honored as expected, resulting in incorrect + // group read permissions. + // See https://github.com/kubernetes/kubernetes/issues/57923 + // TODO(tjmoore4): remove this implementation when/if defaultMode permissions are set as + // expected for the mounted volume. + fmt.Sprintf(`install -D --mode=0600 -t %q %q/{%s,%s,%s}`, + naming.ReplicationTmp, naming.CertMountPath+naming.ReplicationDirectory, + naming.ReplicationCert, naming.ReplicationPrivateKey, + naming.ReplicationCACert), + + // Add the pg_rewind wrapper script, if TDE is enabled. + pg_rewind_override, + + tablespaceCmd, + // When the data directory is empty, there's nothing more to do. + `[[ -f "${postgres_data_directory}/PG_VERSION" ]] || exit 0`, + + // Abort when the data directory is not empty and its version does not + // match the cluster spec. + `results 'data version' "${postgres_data_version:=$(< "${postgres_data_directory}/PG_VERSION")}"`, + `[[ "${postgres_data_version}" == "${expected_major_version}" ]] ||`, + `halt Expected PostgreSQL data version "${expected_major_version}"`, + + // For a restore from datasource: + // Patroni will complain if there's no `postgresql.conf` file + // and PGDATA may be missing that file if this is a restored database + // where the conf file was kept elsewhere. + `[[ ! -f "${postgres_data_directory}/postgresql.conf" ]] &&`, + `touch "${postgres_data_directory}/postgresql.conf"`, + + // Safely move the WAL directory onto the intended volume. PostgreSQL + // always writes WAL files in the "pg_wal" directory inside the data + // directory. The recommended way to relocate it is with a symbolic + // link. `initdb` and `pg_basebackup` have a `--waldir` flag that does + // the same. + // - https://www.postgresql.org/docs/current/wal-internals.html + // - https://git.postgresql.org/gitweb/?p=postgresql.git;f=src/bin/initdb/initdb.c;hb=REL_13_0#l2718 + // - https://git.postgresql.org/gitweb/?p=postgresql.git;f=src/bin/pg_basebackup/pg_basebackup.c;hb=REL_13_0#l2621 + `safelink "${pgwal_directory}" "${postgres_data_directory}/pg_wal"`, + `results 'wal directory' "$(realpath "${postgres_data_directory}/pg_wal" ||:)"`, + + // Early versions of PGO create replicas with a recovery signal file. + // Patroni also creates a standby signal file before starting Postgres, + // causing Postgres to remove only one, the standby. Remove the extra + // signal file now, if it exists, and let Patroni manage the standby + // signal file instead. + // - https://git.postgresql.org/gitweb/?p=postgresql.git;f=src/backend/access/transam/xlog.c;hb=REL_12_0#l5318 + // TODO(cbandy): Remove this after 5.0 is EOL. 
+ `rm -f "${postgres_data_directory}/recovery.signal"`, + }, "\n") + + return append([]string{"bash", "-ceu", "--", script, "startup"}, args...) +} diff --git a/internal/postgres/config_test.go b/internal/postgres/config_test.go new file mode 100644 index 0000000000..cd4c92d185 --- /dev/null +++ b/internal/postgres/config_test.go @@ -0,0 +1,508 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "bytes" + "context" + "errors" + "fmt" + "os" + "os/exec" + "path/filepath" + "strings" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestConfigDirectory(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.PostgresVersion = 11 + + assert.Equal(t, ConfigDirectory(cluster), "/pgdata/pg11") +} + +func TestDataDirectory(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.PostgresVersion = 12 + + assert.Equal(t, DataDirectory(cluster), "/pgdata/pg12") +} + +func TestWALDirectory(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.PostgresVersion = 13 + + // without WAL volume + instance := new(v1beta1.PostgresInstanceSetSpec) + assert.Equal(t, WALDirectory(cluster, instance), "/pgdata/pg13_wal") + + // with WAL volume + instance.WALVolumeClaimSpec = new(corev1.PersistentVolumeClaimSpec) + assert.Equal(t, WALDirectory(cluster, instance), "/pgwal/pg13_wal") +} + +func TestBashHalt(t *testing.T) { + t.Run("NoPipeline", func(t *testing.T) { + cmd := exec.Command("bash") + cmd.Args = append(cmd.Args, "-c", "--", bashHalt+`; halt ab cd e`) + + var exit *exec.ExitError + stdout, err := cmd.Output() + assert.Assert(t, errors.As(err, &exit)) + assert.Equal(t, string(stdout), "", "expected no stdout") + assert.Equal(t, string(exit.Stderr), "ab cd e\n") + assert.Equal(t, exit.ExitCode(), 1) + }) + + t.Run("PipelineZeroStatus", func(t *testing.T) { + cmd := exec.Command("bash") + cmd.Args = append(cmd.Args, "-c", "--", bashHalt+`; true && halt message`) + + var exit *exec.ExitError + stdout, err := cmd.Output() + assert.Assert(t, errors.As(err, &exit)) + assert.Equal(t, string(stdout), "", "expected no stdout") + assert.Equal(t, string(exit.Stderr), "message\n") + assert.Equal(t, exit.ExitCode(), 1) + }) + + t.Run("PipelineNonZeroStatus", func(t *testing.T) { + cmd := exec.Command("bash") + cmd.Args = append(cmd.Args, "-c", "--", bashHalt+`; (exit 99) || halt $'multi\nline'`) + + var exit *exec.ExitError + stdout, err := cmd.Output() + assert.Assert(t, errors.As(err, &exit)) + assert.Equal(t, string(stdout), "", "expected no stdout") + assert.Equal(t, string(exit.Stderr), "multi\nline\n") + assert.Equal(t, exit.ExitCode(), 99) + }) + + t.Run("Subshell", func(t *testing.T) { + cmd := exec.Command("bash") + cmd.Args = append(cmd.Args, "-c", "--", bashHalt+`; (halt 'err') || echo 'after'`) + + stderr := new(bytes.Buffer) + cmd.Stderr = stderr + + stdout, err := cmd.Output() + assert.NilError(t, err) + assert.Equal(t, string(stdout), "after\n") + assert.Equal(t, stderr.String(), "err\n") + assert.Equal(t, cmd.ProcessState.ExitCode(), 0) + }) +} + +func TestBashPermissions(t *testing.T) { + // macOS `stat` takes different arguments than BusyBox and GNU coreutils. 
+ if output, err := exec.Command("stat", "--help").CombinedOutput(); err != nil { + t.Skip(`requires "stat" executable`) + } else if !strings.Contains(string(output), "%A") { + t.Skip(`requires "stat" with access format sequence`) + } + + dir := t.TempDir() + assert.NilError(t, os.Mkdir(filepath.Join(dir, "sub"), 0o751)) + assert.NilError(t, os.Chmod(filepath.Join(dir, "sub"), 0o751)) + assert.NilError(t, os.WriteFile(filepath.Join(dir, "sub", "fn"), nil, 0o624)) // #nosec G306 OK permissions for a temp dir in a test + assert.NilError(t, os.Chmod(filepath.Join(dir, "sub", "fn"), 0o624)) + + cmd := exec.Command("bash") + cmd.Args = append(cmd.Args, "-c", "--", + bashPermissions+`; permissions "$@"`, "-", + filepath.Join(dir, "sub", "fn")) + + stdout, err := cmd.Output() + assert.NilError(t, err) + assert.Assert(t, cmp.Regexp(``+ + `drwxr-x--x\s+\d+\s+\d+\s+[^ ]+/sub\n`+ + `-rw--w-r--\s+\d+\s+\d+\s+[^ ]+/sub/fn\n`+ + `$`, string(stdout))) +} + +func TestBashRecreateDirectory(t *testing.T) { + // macOS `stat` takes different arguments than BusyBox and GNU coreutils. + if output, err := exec.Command("stat", "--help").CombinedOutput(); err != nil { + t.Skip(`requires "stat" executable`) + } else if !strings.Contains(string(output), "%a") { + t.Skip(`requires "stat" with access format sequence`) + } + + dir := t.TempDir() + assert.NilError(t, os.Mkdir(filepath.Join(dir, "d"), 0o755)) + assert.NilError(t, os.WriteFile(filepath.Join(dir, "d", ".hidden"), nil, 0o644)) // #nosec G306 OK permissions for a temp dir in a test + assert.NilError(t, os.WriteFile(filepath.Join(dir, "d", "file"), nil, 0o644)) // #nosec G306 OK permissions for a temp dir in a test + + stat := func(args ...string) string { + cmd := exec.Command("stat", "-c", "%i %#a %N") + cmd.Args = append(cmd.Args, args...) + out, err := cmd.CombinedOutput() + + t.Helper() + assert.NilError(t, err, string(out)) + return string(out) + } + + var before, after struct{ d, f, dInode, dPerms string } + + before.d = stat(filepath.Join(dir, "d")) + before.f = stat( + filepath.Join(dir, "d", ".hidden"), + filepath.Join(dir, "d", "file"), + ) + + cmd := exec.Command("bash") + cmd.Args = append(cmd.Args, "-ceu", "--", + bashRecreateDirectory+` recreate "$@"`, "-", + filepath.Join(dir, "d"), "0740") + // The assertion below expects alphabetically sorted filenames. + // Set an empty environment to always use the default/standard locale. + cmd.Env = []string{} + output, err := cmd.CombinedOutput() + assert.NilError(t, err, string(output)) + assert.Assert(t, cmp.Regexp(`^`+ + `[+] chmod 0740 [^ ]+/tmp.[^ /]+\n`+ + `[+] mv [^ ]+/d/.hidden [^ ]+/d/file [^ ]+/tmp.[^ /]+\n`+ + `[+] rmdir [^ ]+/d\n`+ + `[+] mv [^ ]+/tmp.[^ /]+ [^ ]+/d\n`+ + `$`, string(output))) + + after.d = stat(filepath.Join(dir, "d")) + after.f = stat( + filepath.Join(dir, "d", ".hidden"), + filepath.Join(dir, "d", "file"), + ) + + _, err = fmt.Sscan(before.d, &before.dInode, &before.dPerms) + assert.NilError(t, err) + _, err = fmt.Sscan(after.d, &after.dInode, &after.dPerms) + assert.NilError(t, err) + + // New directory is new. + assert.Assert(t, after.dInode != before.dInode) + + // New directory has the requested permissions. + assert.Equal(t, after.dPerms, "0740") + + // Files are in the new directory and unchanged. + assert.DeepEqual(t, after.f, before.f) +} + +func TestBashSafeLink(t *testing.T) { + // macOS `mv` takes different arguments than GNU coreutils. 
+ if output, err := exec.Command("mv", "--help").CombinedOutput(); err != nil { + t.Skip(`requires "mv" executable`) + } else if !strings.Contains(string(output), "no-target-directory") { + t.Skip(`requires "mv" that overwrites a directory symlink`) + } + + // execute calls the bash function with args. + execute := func(args ...string) (string, error) { + cmd := exec.Command("bash") + cmd.Args = append(cmd.Args, "-ceu", "--", bashSafeLink+`safelink "$@"`, "-") + cmd.Args = append(cmd.Args, args...) + output, err := cmd.CombinedOutput() + return string(output), err + } + + t.Run("CurrentIsFullDirectory", func(t *testing.T) { + // setupDirectory creates a non-empty directory. + setupDirectory := func(t testing.TB) (root, current string) { + t.Helper() + root = t.TempDir() + current = filepath.Join(root, "original") + assert.NilError(t, os.MkdirAll(current, 0o700)) + file, err := os.Create(filepath.Join(current, "original.file")) + assert.NilError(t, err) + assert.NilError(t, file.Close()) + return + } + + // assertSetupContents ensures that directory contents match setupDirectory. + assertSetupContents := func(t testing.TB, directory string) { + t.Helper() + entries, err := os.ReadDir(directory) + assert.NilError(t, err) + assert.Equal(t, len(entries), 1) + assert.Equal(t, entries[0].Name(), "original.file") + } + + // This situation is unexpected and succeeds. + t.Run("DesiredIsEmptyDirectory", func(t *testing.T) { + root, current := setupDirectory(t) + + // desired is an empty directory. + desired := filepath.Join(root, "desired") + assert.NilError(t, os.MkdirAll(desired, 0o700)) + + output, err := execute(desired, current) + assert.NilError(t, err, "\n%s", output) + + result, err := os.Readlink(current) + assert.NilError(t, err, "expected symlink") + assert.Equal(t, result, desired) + assertSetupContents(t, desired) + }) + + // This situation is unexpected and aborts. + t.Run("DesiredIsFullDirectory", func(t *testing.T) { + root, current := setupDirectory(t) + + // desired is a non-empty directory. + desired := filepath.Join(root, "desired") + assert.NilError(t, os.MkdirAll(desired, 0o700)) + file, err := os.Create(filepath.Join(desired, "existing.file")) + assert.NilError(t, err) + assert.NilError(t, file.Close()) + + // The function should fail and leave the original directory alone. + output, err := execute(desired, current) + assert.ErrorContains(t, err, "exit status 1") + assert.Assert(t, strings.Contains(output, "cannot"), "\n%v", output) + assertSetupContents(t, current) + }) + + // This situation is unexpected and aborts. + t.Run("DesiredIsFile", func(t *testing.T) { + root, current := setupDirectory(t) + + // desired is an empty file. + desired := filepath.Join(root, "desired") + file, err := os.Create(desired) + assert.NilError(t, err) + assert.NilError(t, file.Close()) + + // The function should fail and leave the original directory alone. + output, err := execute(desired, current) + assert.ErrorContains(t, err, "exit status 1") + assert.Assert(t, strings.Contains(output, "cannot"), "\n%v", output) + assertSetupContents(t, current) + }) + + // This covers a legacy WAL directory that is still inside the data directory. + t.Run("DesiredIsMissing", func(t *testing.T) { + root, current := setupDirectory(t) + + // desired does not exist. 
+ desired := filepath.Join(root, "desired") + + output, err := execute(desired, current) + assert.NilError(t, err, "\n%s", output) + + result, err := os.Readlink(current) + assert.NilError(t, err, "expected symlink") + assert.Equal(t, result, desired) + assertSetupContents(t, desired) + }) + }) + + t.Run("CurrentIsFile", func(t *testing.T) { + // setupFile creates an non-empty file. + setupFile := func(t testing.TB) (root, current string) { + t.Helper() + root = t.TempDir() + current = filepath.Join(root, "original") + assert.NilError(t, os.WriteFile(current, []byte(`treasure`), 0o600)) + return + } + + // assertSetupContents ensures that file contents match setupFile. + assertSetupContents := func(t testing.TB, file string) { + t.Helper() + content, err := os.ReadFile(file) + assert.NilError(t, err) + assert.Equal(t, string(content), `treasure`) + } + + // This is situation is unexpected and aborts. + t.Run("DesiredIsEmptyDirectory", func(t *testing.T) { + root, current := setupFile(t) + + // desired is an empty directory. + desired := filepath.Join(root, "desired") + assert.NilError(t, os.MkdirAll(desired, 0o700)) + + // The function should fail and leave the original directory alone. + output, err := execute(desired, current) + assert.ErrorContains(t, err, "exit status 1") + assert.Assert(t, strings.Contains(output, "cannot"), "\n%v", output) + assertSetupContents(t, current) + }) + + // This situation is unexpected and succeeds. + t.Run("DesiredIsFile", func(t *testing.T) { + root, current := setupFile(t) + + // desired is an empty file. + desired := filepath.Join(root, "desired") + file, err := os.Create(desired) + assert.NilError(t, err) + assert.NilError(t, file.Close()) + + output, err := execute(desired, current) + assert.NilError(t, err, "\n%s", output) + + result, err := os.Readlink(current) + assert.NilError(t, err, "expected symlink") + assert.Equal(t, result, desired) + assertSetupContents(t, desired) + }) + + // This situation is normal and succeeds. + t.Run("DesiredIsMissing", func(t *testing.T) { + root, current := setupFile(t) + + // desired does not exist. + desired := filepath.Join(root, "desired") + + output, err := execute(desired, current) + assert.NilError(t, err, "\n%s", output) + + result, err := os.Readlink(current) + assert.NilError(t, err, "expected symlink") + assert.Equal(t, result, desired) + assertSetupContents(t, desired) + }) + }) + + // This is the steady state and must be a successful no-op. + t.Run("CurrentIsLinkToDesired", func(t *testing.T) { + root := t.TempDir() + + // current is a non-empty directory. + current := filepath.Join(root, "original") + assert.NilError(t, os.MkdirAll(current, 0o700)) + file, err := os.Create(filepath.Join(current, "original.file")) + assert.NilError(t, err) + assert.NilError(t, file.Close()) + symlink := filepath.Join(root, "symlink") + assert.NilError(t, os.Symlink(current, symlink)) + + output, err := execute(current, symlink) + assert.NilError(t, err, "\n%s", output) + + result, err := os.Readlink(symlink) + assert.NilError(t, err, "expected symlink") + assert.Equal(t, result, current) + + entries, err := os.ReadDir(current) + assert.NilError(t, err) + assert.Equal(t, len(entries), 1) + assert.Equal(t, entries[0].Name(), "original.file") + }) + + // This covers a WAL directory that is a symbolic link. + t.Run("CurrentIsLinkToExisting", func(t *testing.T) { + root := t.TempDir() + + // desired does not exist. + desired := filepath.Join(root, "desired") + + // current is a non-empty directory. 
+ current := filepath.Join(root, "original") + assert.NilError(t, os.MkdirAll(current, 0o700)) + file, err := os.Create(filepath.Join(current, "original.file")) + assert.NilError(t, err) + assert.NilError(t, file.Close()) + symlink := filepath.Join(root, "symlink") + assert.NilError(t, os.Symlink(current, symlink)) + + output, err := execute(desired, symlink) + assert.NilError(t, err, "\n%s", output) + + result, err := os.Readlink(symlink) + assert.NilError(t, err, "expected symlink") + assert.Equal(t, result, desired) + + entries, err := os.ReadDir(desired) + assert.NilError(t, err) + assert.Equal(t, len(entries), 1) + assert.Equal(t, entries[0].Name(), "original.file") + }) + + // This is situation is unexpected and aborts. + t.Run("CurrentIsLinkToMissing", func(t *testing.T) { + root := t.TempDir() + + // desired does not exist. + desired := filepath.Join(root, "desired") + + // current does not exist. + current := filepath.Join(root, "original") + symlink := filepath.Join(root, "symlink") + assert.NilError(t, os.Symlink(current, symlink)) + + // The function should fail and leave the symlink alone. + output, err := execute(desired, symlink) + assert.ErrorContains(t, err, "exit status 1") + assert.Assert(t, strings.Contains(output, "cannot"), "\n%v", output) + + result, err := os.Readlink(symlink) + assert.NilError(t, err, "expected symlink") + assert.Equal(t, result, current) + }) +} + +func TestStartupCommand(t *testing.T) { + shellcheck := require.ShellCheck(t) + t.Parallel() + + cluster := new(v1beta1.PostgresCluster) + cluster.Spec.PostgresVersion = 13 + instance := new(v1beta1.PostgresInstanceSetSpec) + + ctx := context.Background() + command := startupCommand(ctx, cluster, instance) + + // Expect a bash command with an inline script. + assert.DeepEqual(t, command[:3], []string{"bash", "-ceu", "--"}) + assert.Assert(t, len(command) > 3) + script := command[3] + + // Write out that inline script. + dir := t.TempDir() + file := filepath.Join(dir, "script.bash") + assert.NilError(t, os.WriteFile(file, []byte(script), 0o600)) + + // Expect shellcheck to be happy. + cmd := exec.Command(shellcheck, "--enable=all", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + + t.Run("PrettyYAML", func(t *testing.T) { + b, err := yaml.Marshal(script) + assert.NilError(t, err) + assert.Assert(t, strings.HasPrefix(string(b), `|`), + "expected literal block scalar, got:\n%s", b) + }) + + t.Run("EnableTDE", func(t *testing.T) { + + cluster.Spec.Patroni = &v1beta1.PatroniSpec{ + DynamicConfiguration: map[string]any{ + "postgresql": map[string]any{ + "parameters": map[string]any{ + "encryption_key_command": "echo test", + }, + }, + }, + } + command := startupCommand(ctx, cluster, instance) + assert.Assert(t, len(command) > 3) + assert.Assert(t, strings.Contains(command[3], `cat << "EOF" > /tmp/pg_rewind_tde.sh +#!/bin/sh +pg_rewind -K "$(postgres -C encryption_key_command)" "$@" +EOF +chmod +x /tmp/pg_rewind_tde.sh`)) + }) +} diff --git a/internal/postgres/databases.go b/internal/postgres/databases.go new file mode 100644 index 0000000000..0d70170527 --- /dev/null +++ b/internal/postgres/databases.go @@ -0,0 +1,72 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "bytes" + "context" + "encoding/json" + + "github.com/crunchydata/postgres-operator/internal/logging" +) + +// CreateDatabasesInPostgreSQL calls exec to create databases that do not exist +// in PostgreSQL. 
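+//
+// A minimal usage sketch (the database names here are hypothetical, and exec
+// is any Executor, e.g. one backed by exec.Cmd as in ExampleExecutor_execCmd):
+//
+//	err := CreateDatabasesInPostgreSQL(ctx, exec, []string{"app", "hippo"})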
+func CreateDatabasesInPostgreSQL( + ctx context.Context, exec Executor, databases []string, +) error { + log := logging.FromContext(ctx) + + var err error + var sql bytes.Buffer + + // Prevent unexpected dereferences by emptying "search_path". The "pg_catalog" + // schema is still searched, and only temporary objects can be created. + // - https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SEARCH-PATH + _, _ = sql.WriteString(`SET search_path TO '';`) + + // Fill a temporary table with the JSON of the database specifications. + // "\copy" reads from subsequent lines until the special line "\.". + // - https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-META-COMMANDS-COPY + _, _ = sql.WriteString(` +CREATE TEMPORARY TABLE input (id serial, data json); +\copy input (data) from stdin with (format text) +`) + + encoder := json.NewEncoder(&sql) + encoder.SetEscapeHTML(false) + + for i := range databases { + if err == nil { + err = encoder.Encode(map[string]any{ + "database": databases[i], + }) + } + } + _, _ = sql.WriteString(`\.` + "\n") + + // Create databases that do not already exist. + // - https://www.postgresql.org/docs/current/sql-createdatabase.html + _, _ = sql.WriteString(` +SELECT pg_catalog.format('CREATE DATABASE %I', + pg_catalog.json_extract_path_text(input.data, 'database')) + FROM input + WHERE NOT EXISTS ( + SELECT 1 FROM pg_catalog.pg_database + WHERE datname = pg_catalog.json_extract_path_text(input.data, 'database')) + ORDER BY input.id +\gexec +`) + + stdout, stderr, err := exec.Exec(ctx, &sql, + map[string]string{ + "ON_ERROR_STOP": "on", // Abort when any one statement fails. + "QUIET": "on", // Do not print successful statements to stdout. + }) + + log.V(1).Info("created PostgreSQL databases", "stdout", stdout, "stderr", stderr) + + return err +} diff --git a/internal/postgres/databases_test.go b/internal/postgres/databases_test.go new file mode 100644 index 0000000000..e025e86788 --- /dev/null +++ b/internal/postgres/databases_test.go @@ -0,0 +1,92 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "context" + "errors" + "io" + "strings" + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" +) + +func TestCreateDatabasesInPostgreSQL(t *testing.T) { + ctx := context.Background() + + t.Run("Arguments", func(t *testing.T) { + expected := errors.New("pass-through") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + return expected + } + + assert.Equal(t, expected, CreateDatabasesInPostgreSQL(ctx, exec, nil)) + }) + + t.Run("Empty", func(t *testing.T) { + calls := 0 + exec := func( + _ context.Context, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + calls++ + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), strings.TrimLeft(` +SET search_path TO ''; +CREATE TEMPORARY TABLE input (id serial, data json); +\copy input (data) from stdin with (format text) +\. 
+ +SELECT pg_catalog.format('CREATE DATABASE %I', + pg_catalog.json_extract_path_text(input.data, 'database')) + FROM input + WHERE NOT EXISTS ( + SELECT 1 FROM pg_catalog.pg_database + WHERE datname = pg_catalog.json_extract_path_text(input.data, 'database')) + ORDER BY input.id +\gexec +`, "\n")) + return nil + } + + assert.NilError(t, CreateDatabasesInPostgreSQL(ctx, exec, nil)) + assert.Equal(t, calls, 1) + + assert.NilError(t, CreateDatabasesInPostgreSQL(ctx, exec, []string{})) + assert.Equal(t, calls, 2) + }) + + t.Run("Full", func(t *testing.T) { + calls := 0 + exec := func( + _ context.Context, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + calls++ + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Assert(t, cmp.Contains(string(b), ` +\copy input (data) from stdin with (format text) +{"database":"white space"} +{"database":"eXaCtLy"} +\. +`)) + return nil + } + + assert.NilError(t, CreateDatabasesInPostgreSQL(ctx, exec, + []string{"white space", "eXaCtLy"}, + )) + assert.Equal(t, calls, 1) + }) +} diff --git a/internal/postgres/doc.go b/internal/postgres/doc.go index 974cb7c8df..bd616b5916 100644 --- a/internal/postgres/doc.go +++ b/internal/postgres/doc.go @@ -1,20 +1,8 @@ -// package postgres is a collection of resources that interact with PostgreSQL -// or provide functionality that makes it easier for other resources to interact -// with PostgreSQL +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 +// Package postgres is a collection of resources that interact with PostgreSQL +// or provide functionality that makes it easier for other resources to interact +// with PostgreSQL. package postgres - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ diff --git a/internal/postgres/exec.go b/internal/postgres/exec.go new file mode 100644 index 0000000000..a846a8aa57 --- /dev/null +++ b/internal/postgres/exec.go @@ -0,0 +1,103 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "bytes" + "context" + "io" + "sort" + "strings" +) + +// Executor provides methods for calling "psql". +type Executor func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, +) error + +// Exec uses "psql" to execute sql. The sql statement(s) are passed via stdin +// and may contain psql variables that are assigned from the variables map. +// - https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-VARIABLES +func (exec Executor) Exec( + ctx context.Context, sql io.Reader, variables map[string]string, +) (string, string, error) { + // Convert variables into `psql` arguments. + args := make([]string, 0, len(variables)) + for k, v := range variables { + args = append(args, "--set="+k+"="+v) + } + + // The map iteration above is nondeterministic. Sort the arguments so that + // calls to exec are deterministic. 
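+ // For example (illustrative values only), {"QUIET": "on", "ON_ERROR_STOP": "on"}
+ // always yields ["--set=ON_ERROR_STOP=on", "--set=QUIET=on"].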
+ // - https://golang.org/ref/spec#For_range + sort.Strings(args) + + // Execute `psql` without reading config files nor prompting for a password. + var stdout, stderr bytes.Buffer + err := exec(ctx, sql, &stdout, &stderr, + append([]string{"psql", "-Xw", "--file=-"}, args...)...) + return stdout.String(), stderr.String(), err +} + +// ExecInAllDatabases uses "bash" and "psql" to execute sql in every database +// that allows connections, including templates. The sql command(s) may contain +// psql variables that are assigned from the variables map. +// - https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-VARIABLES +func (exec Executor) ExecInAllDatabases( + ctx context.Context, sql string, variables map[string]string, +) (string, string, error) { + const databases = "" + + // Prevent unexpected dereferences by emptying "search_path". + // The "pg_catalog" schema is still searched. + // - https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SEARCH-PATH + `SET search_path = '';` + + + // Return the names of databases that allow connections, including + // "template1". Exclude "template0" to ensure it is never manipulated. + // - https://www.postgresql.org/docs/current/managing-databases.html + `SELECT datname FROM pg_catalog.pg_database` + + ` WHERE datallowconn AND datname NOT IN ('template0')` + + return exec.ExecInDatabasesFromQuery(ctx, databases, sql, variables) +} + +// ExecInDatabasesFromQuery uses "bash" and "psql" to execute sql in every +// database returned by the databases query. The sql statement(s) may contain +// psql variables that are assigned from the variables map. +// - https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-VARIABLES +func (exec Executor) ExecInDatabasesFromQuery( + ctx context.Context, databases, sql string, variables map[string]string, +) (string, string, error) { + // Use a Bash loop to call `psql` multiple times. The query to run in every + // database is passed via standard input while the database query is passed + // as the first argument. Remaining arguments are passed through to `psql`. + stdin := strings.NewReader(sql) + args := []string{databases} + for k, v := range variables { + args = append(args, "--set="+k+"="+v) + } + + // The map iteration above is nondeterministic. Sort the variable arguments + // so that calls to exec are deterministic. + // - https://golang.org/ref/spec#For_range + sort.Strings(args[1:]) + + const script = ` +sql_target=$(< /dev/stdin) +sql_databases="$1" +shift 1 + +databases=$(psql "$@" -Xw -Aqt --file=- <<< "${sql_databases}") +while IFS= read -r database; do + PGDATABASE="${database}" psql "$@" -Xw --file=- <<< "${sql_target}" +done <<< "${databases}" +` + + // Execute the script with some error handling enabled. + var stdout, stderr bytes.Buffer + err := exec(ctx, stdin, &stdout, &stderr, + append([]string{"bash", "-ceu", "--", script, "-"}, args...)...) + return stdout.String(), stderr.String(), err +} diff --git a/internal/postgres/exec_test.go b/internal/postgres/exec_test.go new file mode 100644 index 0000000000..df9b862577 --- /dev/null +++ b/internal/postgres/exec_test.go @@ -0,0 +1,194 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "context" + "errors" + "io" + "os" + "os/exec" + "path/filepath" + "strings" + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/testing/require" +) + +// This example demonstrates how Executor can work with exec.Cmd. +func ExampleExecutor_execCmd() { + _ = Executor(func( + ctx context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + // #nosec G204 Nothing calls the function defined in this example. + cmd := exec.CommandContext(ctx, command[0], command[1:]...) + cmd.Stdin, cmd.Stdout, cmd.Stderr = stdin, stdout, stderr + return cmd.Run() + }) +} + +func TestExecutorExec(t *testing.T) { + expected := errors.New("pass-through") + fn := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), `statements; to run;`) + + assert.DeepEqual(t, command, []string{ + "psql", "-Xw", "--file=-", + "--set=CASE=sEnSiTiVe", + "--set=different=vars", + "--set=lots=of", + }) + + _, _ = io.WriteString(stdout, "some stdout") + _, _ = io.WriteString(stderr, "and stderr") + return expected + } + + stdout, stderr, err := Executor(fn).Exec( + context.Background(), + strings.NewReader(`statements; to run;`), + map[string]string{ + "lots": "of", + "different": "vars", + "CASE": "sEnSiTiVe", + }) + + assert.Equal(t, expected, err, "expected function to be called") + assert.Equal(t, stdout, "some stdout") + assert.Equal(t, stderr, "and stderr") +} + +func TestExecutorExecInAllDatabases(t *testing.T) { + expected := errors.New("exact") + fn := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), `the; stuff;`) + + assert.DeepEqual(t, command, []string{ + "bash", "-ceu", "--", ` +sql_target=$(< /dev/stdin) +sql_databases="$1" +shift 1 + +databases=$(psql "$@" -Xw -Aqt --file=- <<< "${sql_databases}") +while IFS= read -r database; do + PGDATABASE="${database}" psql "$@" -Xw --file=- <<< "${sql_target}" +done <<< "${databases}" +`, + "-", + `SET search_path = '';SELECT datname FROM pg_catalog.pg_database WHERE datallowconn AND datname NOT IN ('template0')`, + "--set=CASE=sEnSiTiVe", + "--set=different=vars", + "--set=lots=of", + }) + + _, _ = io.WriteString(stdout, "some stdout") + _, _ = io.WriteString(stderr, "and stderr") + return expected + } + + stdout, stderr, err := Executor(fn).ExecInAllDatabases( + context.Background(), + `the; stuff;`, + map[string]string{ + "lots": "of", + "different": "vars", + "CASE": "sEnSiTiVe", + }) + + assert.Equal(t, expected, err, "expected function to be called") + assert.Equal(t, stdout, "some stdout") + assert.Equal(t, stderr, "and stderr") +} + +func TestExecutorExecInDatabasesFromQuery(t *testing.T) { + expected := errors.New("splat") + fn := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), `statements; to run;`) + + assert.DeepEqual(t, command, []string{ + "bash", "-ceu", "--", ` +sql_target=$(< /dev/stdin) +sql_databases="$1" +shift 1 + +databases=$(psql "$@" -Xw -Aqt --file=- <<< "${sql_databases}") +while IFS= read -r database; do + PGDATABASE="${database}" psql "$@" -Xw --file=- <<< "${sql_target}" +done <<< "${databases}" +`, + 
"-", + `db query`, + "--set=CASE=sEnSiTiVe", + "--set=different=vars", + "--set=lots=of", + }) + + // Use the PGDATABASE environment variable to ensure the value is not + // interpreted as a connection string. + // + // > $ psql -Xw -d 'host=127.0.0.1' + // > psql: error: fe_sendauth: no password supplied + // > + // > $ PGDATABASE='host=127.0.0.1' psql -Xw + // > psql: error: FATAL: database "host=127.0.0.1" does not exist + // + // TODO(cbandy): Create a test that actually runs psql. + assert.Assert(t, strings.Contains(command[3], `PGDATABASE="${database}" psql`)) + + _, _ = io.WriteString(stdout, "some stdout") + _, _ = io.WriteString(stderr, "and stderr") + return expected + } + + stdout, stderr, err := Executor(fn).ExecInDatabasesFromQuery( + context.Background(), `db query`, `statements; to run;`, map[string]string{ + "lots": "of", + "different": "vars", + "CASE": "sEnSiTiVe", + }) + + assert.Equal(t, expected, err, "expected function to be called") + assert.Equal(t, stdout, "some stdout") + assert.Equal(t, stderr, "and stderr") + + t.Run("ShellCheck", func(t *testing.T) { + shellcheck := require.ShellCheck(t) + + _, _, _ = Executor(func( + _ context.Context, _ io.Reader, _, _ io.Writer, command ...string, + ) error { + // Expect a bash command with an inline script. + assert.DeepEqual(t, command[:3], []string{"bash", "-ceu", "--"}) + assert.Assert(t, len(command) > 3) + script := command[3] + + // Write out that inline script. + dir := t.TempDir() + file := filepath.Join(dir, "script.bash") + assert.NilError(t, os.WriteFile(file, []byte(script), 0o600)) + + // Expect shellcheck to be happy. + cmd := exec.Command(shellcheck, "--enable=all", file) + output, err := cmd.CombinedOutput() + assert.NilError(t, err, "%q\n%s", cmd.Args, output) + + return nil + }).ExecInDatabasesFromQuery(context.Background(), "", "", nil) + }) +} diff --git a/internal/postgres/hba.go b/internal/postgres/hba.go new file mode 100644 index 0000000000..d9b5ce2680 --- /dev/null +++ b/internal/postgres/hba.go @@ -0,0 +1,159 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "fmt" + "strings" +) + +// NewHBAs returns HostBasedAuthentication records required by this package. +func NewHBAs() HBAs { + return HBAs{ + Mandatory: []HostBasedAuthentication{ + // The "postgres" superuser must always be able to connect locally. + *NewHBA().Local().User("postgres").Method("peer"), + + // The replication user must always connect over TLS using certificate + // authentication. Patroni also connects to the "postgres" database + // when calling `pg_rewind`. + // - https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-AUTHENTICATION + *NewHBA().TLS().User(ReplicationUser).Method("cert").Replication(), + *NewHBA().TLS().User(ReplicationUser).Method("cert").Database("postgres"), + *NewHBA().TCP().User(ReplicationUser).Method("reject"), + }, + + Default: []HostBasedAuthentication{ + // Allow TLS connections to any database using passwords. The "md5" + // authentication method automatically verifies passwords encrypted + // using either MD5 or SCRAM-SHA-256. + // - https://www.postgresql.org/docs/current/auth-password.html + *NewHBA().TLS().Method("md5"), + }, + } +} + +// HBAs is a pairing of HostBasedAuthentication records. +type HBAs struct{ Mandatory, Default []HostBasedAuthentication } + +// HostBasedAuthentication represents a single record for pg_hba.conf. 
+// - https://www.postgresql.org/docs/current/auth-pg-hba-conf.html +type HostBasedAuthentication struct { + origin, database, user, address, method, options string +} + +// NewHBA returns an HBA record that matches all databases, networks, and users. +func NewHBA() *HostBasedAuthentication { + return new(HostBasedAuthentication).AllDatabases().AllNetworks().AllUsers() +} + +func (HostBasedAuthentication) quote(value string) string { + return `"` + strings.ReplaceAll(value, `"`, `""`) + `"` +} + +// AllDatabases makes hba match connections made to any database. +func (hba *HostBasedAuthentication) AllDatabases() *HostBasedAuthentication { + hba.database = "all" + return hba +} + +// AllNetworks makes hba match connection attempts made from any IP address. +func (hba *HostBasedAuthentication) AllNetworks() *HostBasedAuthentication { + hba.address = "all" + return hba +} + +// AllUsers makes hba match connections made by any user. +func (hba *HostBasedAuthentication) AllUsers() *HostBasedAuthentication { + hba.user = "all" + return hba +} + +// Database makes hba match connections made to a specific database. +func (hba *HostBasedAuthentication) Database(name string) *HostBasedAuthentication { + hba.database = hba.quote(name) + return hba +} + +// Local makes hba match connection attempts using Unix-domain sockets. +func (hba *HostBasedAuthentication) Local() *HostBasedAuthentication { + hba.origin = "local" + return hba +} + +// Method specifies the authentication method to use when a connection matches hba. +func (hba *HostBasedAuthentication) Method(name string) *HostBasedAuthentication { + hba.method = name + return hba +} + +// Network makes hba match connection attempts from a block of IP addresses in CIDR notation. +func (hba *HostBasedAuthentication) Network(block string) *HostBasedAuthentication { + hba.address = hba.quote(block) + return hba +} + +// NoSSL makes hba match connection attempts made over TCP/IP without SSL. +func (hba *HostBasedAuthentication) NoSSL() *HostBasedAuthentication { + hba.origin = "hostnossl" + return hba +} + +// Options specifies any options for the authentication method. +func (hba *HostBasedAuthentication) Options(opts map[string]string) *HostBasedAuthentication { + hba.options = "" + for k, v := range opts { + hba.options = fmt.Sprintf("%s %s=%s", hba.options, k, hba.quote(v)) + } + return hba +} + +// Replication makes hba match physical replication connections. +func (hba *HostBasedAuthentication) Replication() *HostBasedAuthentication { + hba.database = "replication" + return hba +} + +// Role makes hba match connections by users that are members of a specific role. +func (hba *HostBasedAuthentication) Role(name string) *HostBasedAuthentication { + hba.user = "+" + hba.quote(name) + return hba +} + +// SameNetwork makes hba match connection attempts from IP addresses in any +// subnet to which the server is directly connected. +func (hba *HostBasedAuthentication) SameNetwork() *HostBasedAuthentication { + hba.address = "samenet" + return hba +} + +// TLS makes hba match connection attempts made using TCP/IP with TLS. +func (hba *HostBasedAuthentication) TLS() *HostBasedAuthentication { + hba.origin = "hostssl" + return hba +} + +// TCP makes hba match connection attempts made using TCP/IP, with or without SSL. +func (hba *HostBasedAuthentication) TCP() *HostBasedAuthentication { + hba.origin = "host" + return hba +} + +// User makes hba match connections by a specific user. 
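+//
+// Combined with the other builder methods above, a usage sketch (the database
+// and user names are hypothetical; the quoting matches String below):
+//
+//	NewHBA().TLS().Database("app").User("app").Method("scram-sha-256").String()
+//	// hostssl "app" "app" all scram-sha-256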
+func (hba *HostBasedAuthentication) User(name string) *HostBasedAuthentication { + hba.user = hba.quote(name) + return hba +} + +// String returns hba formatted for the pg_hba.conf file without a newline. +func (hba HostBasedAuthentication) String() string { + if hba.origin == "local" { + return strings.TrimSpace(fmt.Sprintf("local %s %s %s %s", + hba.database, hba.user, hba.method, hba.options)) + } + + return strings.TrimSpace(fmt.Sprintf("%s %s %s %s %s %s", + hba.origin, hba.database, hba.user, hba.address, hba.method, hba.options)) +} diff --git a/internal/postgres/hba_test.go b/internal/postgres/hba_test.go new file mode 100644 index 0000000000..9744479fdd --- /dev/null +++ b/internal/postgres/hba_test.go @@ -0,0 +1,62 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "strings" + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" +) + +func TestNewHBAs(t *testing.T) { + matches := func(actual []HostBasedAuthentication, expected string) cmp.Comparison { + printed := make([]string, len(actual)) + for i := range actual { + printed[i] = actual[i].String() + } + + parsed := strings.Split(strings.Trim(expected, "\t\n"), "\n") + for i := range parsed { + parsed[i] = strings.Join(strings.Fields(parsed[i]), " ") + } + + return cmp.DeepEqual(printed, parsed) + } + + hba := NewHBAs() + assert.Assert(t, matches(hba.Mandatory, ` +local all "postgres" peer +hostssl replication "_crunchyrepl" all cert +hostssl "postgres" "_crunchyrepl" all cert +host all "_crunchyrepl" all reject + `)) + assert.Assert(t, matches(hba.Default, ` +hostssl all all all md5 + `)) +} + +func TestHostBasedAuthentication(t *testing.T) { + assert.Equal(t, `local all "postgres" peer`, + NewHBA().Local().User("postgres").Method("peer").String()) + + assert.Equal(t, `host all all "::1/128" trust`, + NewHBA().TCP().Network("::1/128").Method("trust").String()) + + assert.Equal(t, `host replication "KD6-3.7" samenet scram-sha-256`, + NewHBA().TCP().SameNetwork().Replication(). + User("KD6-3.7").Method("scram-sha-256"). + String()) + + assert.Equal(t, `hostssl "data" +"admin" all md5 clientcert="verify-ca"`, + NewHBA().TLS().Database("data").Role("admin"). + Method("md5").Options(map[string]string{"clientcert": "verify-ca"}). + String()) + + assert.Equal(t, `hostnossl all all all reject`, + NewHBA().NoSSL().Method("reject").String()) +} diff --git a/internal/postgres/huge_pages.go b/internal/postgres/huge_pages.go new file mode 100644 index 0000000000..ee13c0d11b --- /dev/null +++ b/internal/postgres/huge_pages.go @@ -0,0 +1,43 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "strings" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// This function looks for a valid huge_pages resource request. If it finds one, +// it sets the PostgreSQL parameter "huge_pages" to "try". If it doesn't find +// one, it sets "huge_pages" to "off". 
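+//
+// An illustrative sketch of the outcome (the quantity is hypothetical; the
+// resource name is the standard Kubernetes huge pages limit):
+//
+//	limits: hugepages-2Mi: 16Mi    -> huge_pages = "try"
+//	no hugepages-* limits present  -> huge_pages = "off"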
+func SetHugePages(cluster *v1beta1.PostgresCluster, pgParameters *Parameters) { + if HugePagesRequested(cluster) { + pgParameters.Default.Add("huge_pages", "try") + } else { + pgParameters.Default.Add("huge_pages", "off") + } +} + +// This helper function checks to see if a huge_pages value greater than zero has +// been set in any of the PostgresCluster's instances' resource specs +func HugePagesRequested(cluster *v1beta1.PostgresCluster) bool { + for _, instance := range cluster.Spec.InstanceSets { + for resourceName := range instance.Resources.Limits { + if strings.HasPrefix(resourceName.String(), corev1.ResourceHugePagesPrefix) { + resourceQuantity := instance.Resources.Limits.Name(resourceName, resource.BinarySI) + + if resourceQuantity != nil && resourceQuantity.Value() > 0 { + return true + } + } + } + } + + return false +} diff --git a/internal/postgres/huge_pages_test.go b/internal/postgres/huge_pages_test.go new file mode 100644 index 0000000000..58a6a6aa57 --- /dev/null +++ b/internal/postgres/huge_pages_test.go @@ -0,0 +1,98 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestSetHugePages(t *testing.T) { + t.Run("hugepages not set at all", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "test-instance1", + Replicas: initialize.Int32(1), + Resources: corev1.ResourceRequirements{ + Limits: corev1.ResourceList{}, + }, + }} + + pgParameters := NewParameters() + SetHugePages(cluster, &pgParameters) + + assert.Equal(t, pgParameters.Default.Has("huge_pages"), true) + assert.Equal(t, pgParameters.Default.Value("huge_pages"), "off") + }) + + t.Run("hugepages quantity not set", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + + emptyQuantity, _ := resource.ParseQuantity("") + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "test-instance1", + Replicas: initialize.Int32(1), + Resources: corev1.ResourceRequirements{ + Limits: corev1.ResourceList{ + corev1.ResourceHugePagesPrefix + "2Mi": emptyQuantity, + }, + }, + }} + + pgParameters := NewParameters() + SetHugePages(cluster, &pgParameters) + + assert.Equal(t, pgParameters.Default.Has("huge_pages"), true) + assert.Equal(t, pgParameters.Default.Value("huge_pages"), "off") + }) + + t.Run("hugepages set to zero", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "test-instance1", + Replicas: initialize.Int32(1), + Resources: corev1.ResourceRequirements{ + Limits: corev1.ResourceList{ + corev1.ResourceHugePagesPrefix + "2Mi": resource.MustParse("0Mi"), + }, + }, + }} + + pgParameters := NewParameters() + SetHugePages(cluster, &pgParameters) + + assert.Equal(t, pgParameters.Default.Has("huge_pages"), true) + assert.Equal(t, pgParameters.Default.Value("huge_pages"), "off") + }) + + t.Run("hugepages set correctly", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + + cluster.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{{ + Name: "test-instance1", + Replicas: initialize.Int32(1), + Resources: corev1.ResourceRequirements{ + Limits: 
corev1.ResourceList{ + corev1.ResourceHugePagesPrefix + "2Mi": resource.MustParse("16Mi"), + }, + }, + }} + + pgParameters := NewParameters() + SetHugePages(cluster, &pgParameters) + + assert.Equal(t, pgParameters.Default.Has("huge_pages"), true) + assert.Equal(t, pgParameters.Default.Value("huge_pages"), "try") + }) + +} diff --git a/internal/postgres/iana.go b/internal/postgres/iana.go new file mode 100644 index 0000000000..4392b549f1 --- /dev/null +++ b/internal/postgres/iana.go @@ -0,0 +1,16 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +// The protocol used by PostgreSQL is registered with the Internet Assigned +// Numbers Authority (IANA). +// - https://www.iana.org/assignments/service-names-port-numbers +const ( + // IANAPortNumber is the port assigned to PostgreSQL at the IANA. + IANAPortNumber = 5432 + + // IANAServiceName is the name of the PostgreSQL protocol at the IANA. + IANAServiceName = "postgresql" +) diff --git a/internal/postgres/parameters.go b/internal/postgres/parameters.go new file mode 100644 index 0000000000..434d9fd1dd --- /dev/null +++ b/internal/postgres/parameters.go @@ -0,0 +1,126 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "strings" +) + +// NewParameters returns ParameterSets required by this package. +func NewParameters() Parameters { + parameters := Parameters{ + Mandatory: NewParameterSet(), + Default: NewParameterSet(), + } + + // Use UNIX domain sockets for local connections. + // PostgreSQL must be restarted when changing this value. + parameters.Mandatory.Add("unix_socket_directories", SocketDirectory) + + // Enable logical replication in addition to streaming and WAL archiving. + // PostgreSQL must be restarted when changing this value. + // - https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-LEVEL + // - https://www.postgresql.org/docs/current/runtime-config-replication.html + // - https://www.postgresql.org/docs/current/logical-replication.html + parameters.Mandatory.Add("wal_level", "logical") + + // Always enable SSL/TLS. + // PostgreSQL must be reloaded when changing this value. + // - https://www.postgresql.org/docs/current/ssl-tcp.html + parameters.Mandatory.Add("ssl", "on") + parameters.Mandatory.Add("ssl_cert_file", "/pgconf/tls/tls.crt") + parameters.Mandatory.Add("ssl_key_file", "/pgconf/tls/tls.key") + parameters.Mandatory.Add("ssl_ca_file", "/pgconf/tls/ca.crt") + + // Just-in-Time compilation can degrade performance unexpectedly. Allow + // users to enable it for appropriate workloads. + // - https://www.postgresql.org/docs/current/jit.html + parameters.Default.Add("jit", "off") + + // SCRAM-SHA-256 is preferred over MD5, but allow users to disable it when + // necessary. PostgreSQL 10 is the first to support SCRAM-SHA-256, and + // PostgreSQL 14 makes it the default. + // - https://www.postgresql.org/docs/current/auth-password.html + parameters.Default.Add("password_encryption", "scram-sha-256") + + return parameters +} + +// Parameters is a pairing of ParameterSets. +type Parameters struct{ Mandatory, Default *ParameterSet } + +// ParameterSet is a collection of PostgreSQL parameters. +// - https://www.postgresql.org/docs/current/config-setting.html +type ParameterSet struct { + values map[string]string +} + +// NewParameterSet returns an empty ParameterSet. 
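+//
+// A brief usage sketch (the parameter name and values are illustrative only):
+//
+//	ps := NewParameterSet()
+//	ps.Add("shared_preload_libraries", "pg_stat_statements")
+//	ps.AppendToList("shared_preload_libraries", "pgaudit")
+//	ps.Value("shared_preload_libraries") // "pg_stat_statements,pgaudit"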
+func NewParameterSet() *ParameterSet { + return &ParameterSet{ + values: make(map[string]string), + } +} + +// AsMap returns a copy of ps as a map. +func (ps ParameterSet) AsMap() map[string]string { + out := make(map[string]string, len(ps.values)) + for name, value := range ps.values { + out[name] = value + } + return out +} + +// DeepCopy returns a copy of ps. +func (ps *ParameterSet) DeepCopy() (out *ParameterSet) { + return &ParameterSet{ + values: ps.AsMap(), + } +} + +// Add sets parameter name to value. +func (ps *ParameterSet) Add(name, value string) { + ps.values[ps.normalize(name)] = value +} + +// AppendToList adds each value to the right-hand side of parameter name +// as a comma-separated list without quoting. +func (ps *ParameterSet) AppendToList(name string, value ...string) { + result := ps.Value(name) + + if len(value) > 0 { + if len(result) > 0 { + result += "," + strings.Join(value, ",") + } else { + result = strings.Join(value, ",") + } + } + + ps.Add(name, result) +} + +// Get returns the value of parameter name and whether or not it was present in ps. +func (ps ParameterSet) Get(name string) (string, bool) { + value, ok := ps.values[ps.normalize(name)] + return value, ok +} + +// Has returns whether or not parameter name is present in ps. +func (ps ParameterSet) Has(name string) bool { + _, ok := ps.Get(name) + return ok +} + +func (ParameterSet) normalize(name string) string { + // All parameter names are case-insensitive. + // -- https://www.postgresql.org/docs/current/config-setting.html + return strings.ToLower(name) +} + +// Value returns empty string or the value of parameter name if it is present in ps. +func (ps ParameterSet) Value(name string) string { + value, _ := ps.Get(name) + return value +} diff --git a/internal/postgres/parameters_test.go b/internal/postgres/parameters_test.go new file mode 100644 index 0000000000..c6228d7958 --- /dev/null +++ b/internal/postgres/parameters_test.go @@ -0,0 +1,82 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "testing" + + "gotest.tools/v3/assert" +) + +func TestNewParameters(t *testing.T) { + parameters := NewParameters() + + assert.DeepEqual(t, parameters.Mandatory.AsMap(), map[string]string{ + "ssl": "on", + "ssl_ca_file": "/pgconf/tls/ca.crt", + "ssl_cert_file": "/pgconf/tls/tls.crt", + "ssl_key_file": "/pgconf/tls/tls.key", + + "unix_socket_directories": "/tmp/postgres", + + "wal_level": "logical", + }) + assert.DeepEqual(t, parameters.Default.AsMap(), map[string]string{ + "jit": "off", + + "password_encryption": "scram-sha-256", + }) +} + +func TestParameterSet(t *testing.T) { + ps := NewParameterSet() + + ps.Add("x", "y") + assert.Assert(t, ps.Has("X")) + assert.Equal(t, ps.Value("x"), "y") + + v, ok := ps.Get("X") + assert.Assert(t, ok) + assert.Equal(t, v, "y") + + ps.Add("X", "z") + assert.Equal(t, ps.Value("x"), "z") + + ps.Add("abc", "j'l") + assert.DeepEqual(t, ps.AsMap(), map[string]string{ + "abc": "j'l", + "x": "z", + }) + + ps2 := ps.DeepCopy() + assert.Assert(t, ps2.Has("abc")) + assert.Equal(t, ps2.Value("x"), ps.Value("x")) + + ps2.Add("x", "n") + assert.Assert(t, ps2.Value("x") != ps.Value("x")) +} + +func TestParameterSetAppendToList(t *testing.T) { + ps := NewParameterSet() + + ps.AppendToList("empty") + assert.Assert(t, ps.Has("empty")) + assert.Equal(t, ps.Value("empty"), "") + + ps.AppendToList("empty") + assert.Equal(t, ps.Value("empty"), "", "expected no change") + + ps.AppendToList("full", "a") + assert.Equal(t, ps.Value("full"), "a") + + ps.AppendToList("full", "b") + assert.Equal(t, ps.Value("full"), "a,b") + + ps.AppendToList("full") + assert.Equal(t, ps.Value("full"), "a,b", "expected no change") + + ps.AppendToList("full", "a", "cd", `"e"`) + assert.Equal(t, ps.Value("full"), `a,b,a,cd,"e"`) +} diff --git a/internal/postgres/password/doc.go b/internal/postgres/password/doc.go index 6ea6563873..eef7ed7db2 100644 --- a/internal/postgres/password/doc.go +++ b/internal/postgres/password/doc.go @@ -1,19 +1,7 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + // package password lets one create the appropriate password hashes and // verifiers that are used for adding the information into PostgreSQL - package password - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ diff --git a/internal/postgres/password/md5.go b/internal/postgres/password/md5.go index ae93d2cc56..884dfb655e 100644 --- a/internal/postgres/password/md5.go +++ b/internal/postgres/password/md5.go @@ -1,33 +1,19 @@ -package password - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ +package password import ( + + // #nosec G501 "crypto/md5" "errors" "fmt" ) -// md5Prefix is used as a prefix to the hashed password, i.e. "md5[a-f0-9]{32}" -const md5Prefix = "md5" - -var ( - // ErrMD5PasswordInvalid is returned when the password attributes are invalid - ErrMD5PasswordInvalid = errors.New(`invalid password attributes. must provide "username" and "password"`) -) +// ErrMD5PasswordInvalid is returned when the password attributes are invalid +var ErrMD5PasswordInvalid = errors.New(`invalid password attributes. must provide "username" and "password"`) // MD5Password implements the PostgresPassword interface for hashing passwords // using the PostgreSQL MD5 method @@ -45,6 +31,7 @@ func (m *MD5Password) Build() (string, error) { plaintext := []byte(m.password + m.username) // finish the transformation by getting the string value of the MD5 hash and // encoding it in hexadecimal for PostgreSQL, appending "md5" to the front + // #nosec G401 return fmt.Sprintf("md5%x", md5.Sum(plaintext)), nil } diff --git a/internal/postgres/password/md5_test.go b/internal/postgres/password/md5_test.go index c77c8abf43..80cb7742d6 100644 --- a/internal/postgres/password/md5_test.go +++ b/internal/postgres/password/md5_test.go @@ -1,19 +1,8 @@ -package password - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ +package password import ( "fmt" @@ -38,7 +27,6 @@ func TestMD5Build(t *testing.T) { } hash, err := md5.Build() - if err != nil { t.Error(err) } diff --git a/internal/postgres/password/password.go b/internal/postgres/password/password.go index b70112a4c3..337282cc74 100644 --- a/internal/postgres/password/password.go +++ b/internal/postgres/password/password.go @@ -1,19 +1,8 @@ -package password - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ +package password import ( "errors" @@ -31,10 +20,8 @@ const ( SCRAM ) -var ( - // ErrPasswordType is returned when a password type does not exist - ErrPasswordType = errors.New("password type does not exist") -) +// ErrPasswordType is returned when a password type does not exist +var ErrPasswordType = errors.New("password type does not exist") // PostgresPassword is the interface that defines the methods required to build // a password for PostgreSQL in a desired format (e.g. MD5) diff --git a/internal/postgres/password/password_test.go b/internal/postgres/password/password_test.go index b9b7094dbc..3401dec4ac 100644 --- a/internal/postgres/password/password_test.go +++ b/internal/postgres/password/password_test.go @@ -1,21 +1,11 @@ -package password - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ +package password import ( + "errors" "testing" ) @@ -27,7 +17,6 @@ func TestNewPostgresPassword(t *testing.T) { passwordType := MD5 postgresPassword, err := NewPostgresPassword(passwordType, username, password) - if err != nil { t.Error(err) } @@ -49,7 +38,6 @@ func TestNewPostgresPassword(t *testing.T) { passwordType := SCRAM postgresPassword, err := NewPostgresPassword(passwordType, username, password) - if err != nil { t.Error(err) } @@ -66,7 +54,7 @@ func TestNewPostgresPassword(t *testing.T) { t.Run("invalid", func(t *testing.T) { passwordType := PasswordType(-1) - if _, err := NewPostgresPassword(passwordType, username, password); err != ErrPasswordType { + if _, err := NewPostgresPassword(passwordType, username, password); !errors.Is(err, ErrPasswordType) { t.Errorf("expected error: %q", err.Error()) } }) diff --git a/internal/postgres/password/scram.go b/internal/postgres/password/scram.go index aa6eee3df8..8264cd87a0 100644 --- a/internal/postgres/password/scram.go +++ b/internal/postgres/password/scram.go @@ -1,19 +1,8 @@ -package password - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ +package password import ( "crypto/hmac" @@ -26,7 +15,7 @@ import ( "unicode" "unicode/utf8" - "github.com/xdg/stringprep" + "github.com/xdg-go/stringprep" "golang.org/x/crypto/pbkdf2" ) @@ -37,7 +26,7 @@ import ( // // where: // DIGEST = SCRAM-SHA-256 (only value for now in PostgreSQL) -// ITERATIONS = the number of iteratiosn to use for PBKDF2 +// ITERATIONS = the number of iterations to use for PBKDF2 // SALT = the salt used as part of the PBKDF2, stored in base64 // STORED_KEY = the hash of the client key, stored in base64 // SERVER_KEY = the hash of the server key @@ -96,7 +85,6 @@ type SCRAMPassword struct { func (s *SCRAMPassword) Build() (string, error) { // get a generated salt salt, err := s.generateSalt(s.SaltLength) - if err != nil { return "", err } @@ -163,10 +151,10 @@ func (s *SCRAMPassword) isASCII() bool { // using SCRAM. It differs from RFC 4013 in that it returns the original, // unmodified password when: // -// - the input is not valid UTF-8 -// - the output would be empty -// - the output would contain prohibited characters -// - the output would contain ambiguous bidirectional characters +// - the input is not valid UTF-8 +// - the output would be empty +// - the output would contain prohibited characters +// - the output would contain ambiguous bidirectional characters // // See: // @@ -180,12 +168,13 @@ func (s *SCRAMPassword) saslPrep() string { // perform SASLprep on the password. if the SASLprep fails or returns an // empty string, return the original password - // Otherwise return the clean pasword - if cleanedPassword, err := stringprep.SASLprep.Prepare(s.password); cleanedPassword == "" || err != nil { + // Otherwise return the clean password + cleanedPassword, err := stringprep.SASLprep.Prepare(s.password) + if cleanedPassword == "" || err != nil { return s.password - } else { - return cleanedPassword } + + return cleanedPassword } // NewSCRAMPassword constructs a new SCRAMPassword struct with sane defaults @@ -198,7 +187,7 @@ func NewSCRAMPassword(password string) *SCRAMPassword { } } -// scramGenerateSalt generates aseries of cryptographic bytes of a specified +// scramGenerateSalt generates a series of cryptographic bytes of a specified // length for purposes of SCRAM. must be at least 1 func scramGenerateSalt(length int) ([]byte, error) { // length must be at least one diff --git a/internal/postgres/password/scram_test.go b/internal/postgres/password/scram_test.go index 6de92bb17c..0552e519b7 100644 --- a/internal/postgres/password/scram_test.go +++ b/internal/postgres/password/scram_test.go @@ -1,19 +1,8 @@ -package password - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ +package password import ( "bytes" @@ -54,7 +43,6 @@ func TestScramGenerateSalt(t *testing.T) { for _, saltLength := range saltLengths { t.Run(fmt.Sprintf("salt length %d", saltLength), func(t *testing.T) { salt, err := scramGenerateSalt(saltLength) - if err != nil { t.Error(err) } @@ -71,7 +59,6 @@ func TestScramGenerateSalt(t *testing.T) { for _, saltLength := range saltLengths { t.Run(fmt.Sprintf("salt length %d", saltLength), func(t *testing.T) { - if _, err := scramGenerateSalt(saltLength); err == nil { t.Errorf("error expected for salt length of %d", saltLength) } @@ -82,7 +69,6 @@ func TestScramGenerateSalt(t *testing.T) { func TestSCRAMBuild(t *testing.T) { t.Run("scram-sha-256", func(t *testing.T) { - t.Run("valid", func(t *testing.T) { // check a few different password combinations. note: the salt is kept the // same so we can get a reproducible result @@ -97,14 +83,13 @@ func TestSCRAMBuild(t *testing.T) { return []byte("h1pp0p4rty2020"), nil } - // a crednetial is valid if it generates the specified md5 hash + // a credential is valid if it generates the specified md5 hash for _, credentials := range credentialList { t.Run(credentials[0], func(t *testing.T) { scram := NewSCRAMPassword(credentials[0]) scram.generateSalt = mockGenerateSalt hash, err := scram.Build() - if err != nil { t.Error(err) } @@ -152,7 +137,7 @@ func TestSCRAMHash(t *testing.T) { expected, _ := hex.DecodeString("877cc977e7b033e10d6e0b0d666da1f463bc51b1de48869250a0347ec1b2b8b3") actual := scram.hash(sha256.New, []byte("hippo")) - if bytes.Compare(expected, actual) != 0 { + if !bytes.Equal(expected, actual) { t.Errorf("expected: %x actual %x", expected, actual) } }) @@ -164,7 +149,7 @@ func TestSCRAMHMAC(t *testing.T) { expected, _ := hex.DecodeString("ac9872eb21043142c3bf073c9fa4caf9553940750ef7b85116905aaa456a2d07") actual := scram.hmac(sha256.New, []byte("hippo"), []byte("datalake")) - if bytes.Compare(expected, actual) != 0 { + if !bytes.Equal(expected, actual) { t.Errorf("expected: %x actual %x", expected, actual) } }) diff --git a/internal/postgres/reconcile.go b/internal/postgres/reconcile.go new file mode 100644 index 0000000000..344f91dd9f --- /dev/null +++ b/internal/postgres/reconcile.go @@ -0,0 +1,301 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "context" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + + "github.com/crunchydata/postgres-operator/internal/config" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +var ( + oneMillicore = resource.MustParse("1m") + oneMebibyte = resource.MustParse("1Mi") +) + +// DataVolumeMount returns the name and mount path of the PostgreSQL data volume. +func DataVolumeMount() corev1.VolumeMount { + return corev1.VolumeMount{Name: "postgres-data", MountPath: dataMountPath} +} + +// TablespaceVolumeMount returns the name and mount path of the PostgreSQL tablespace data volume. +func TablespaceVolumeMount(tablespaceName string) corev1.VolumeMount { + return corev1.VolumeMount{Name: "tablespace-" + tablespaceName, MountPath: tablespaceMountPath + "/" + tablespaceName} +} + +// WALVolumeMount returns the name and mount path of the PostgreSQL WAL volume. 
+func WALVolumeMount() corev1.VolumeMount { + return corev1.VolumeMount{Name: "postgres-wal", MountPath: walMountPath} +} + +// DownwardAPIVolumeMount returns the name and mount path of the DownwardAPI volume. +func DownwardAPIVolumeMount() corev1.VolumeMount { + return corev1.VolumeMount{ + Name: "database-containerinfo", + MountPath: downwardAPIPath, + ReadOnly: true, + } +} + +// AdditionalConfigVolumeMount returns the name and mount path of the additional config files. +func AdditionalConfigVolumeMount() corev1.VolumeMount { + return corev1.VolumeMount{ + Name: "postgres-config", + MountPath: configMountPath, + ReadOnly: true, + } +} + +// InstancePod initializes outInstancePod with the database container and the +// volumes needed by PostgreSQL. +func InstancePod(ctx context.Context, + inCluster *v1beta1.PostgresCluster, + inInstanceSpec *v1beta1.PostgresInstanceSetSpec, + inClusterCertificates, inClientCertificates *corev1.SecretProjection, + inDataVolume, inWALVolume *corev1.PersistentVolumeClaim, + inTablespaceVolumes []*corev1.PersistentVolumeClaim, + outInstancePod *corev1.PodSpec, +) { + certVolumeMount := corev1.VolumeMount{ + Name: naming.CertVolume, + MountPath: naming.CertMountPath, + ReadOnly: true, + } + certVolume := corev1.Volume{ + Name: certVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + Projected: &corev1.ProjectedVolumeSource{ + // PostgreSQL expects client certificate keys to not be readable + // by any other user. + // - https://www.postgresql.org/docs/current/libpq-ssl.html + DefaultMode: initialize.Int32(0o600), + Sources: []corev1.VolumeProjection{ + {Secret: inClusterCertificates}, + {Secret: inClientCertificates}, + }, + }, + }, + } + + dataVolumeMount := DataVolumeMount() + dataVolume := corev1.Volume{ + Name: dataVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: inDataVolume.Name, + ReadOnly: false, + }, + }, + } + + downwardAPIVolumeMount := DownwardAPIVolumeMount() + downwardAPIVolume := corev1.Volume{ + Name: downwardAPIVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + DownwardAPI: &corev1.DownwardAPIVolumeSource{ + // The paths defined in Items (cpu_limit, cpu_request, etc.) 
+ // are hard coded in the pgnodemx queries defined by + // pgMonitor configuration (queries_nodemx.yml) + // https://github.com/CrunchyData/pgmonitor/blob/master/postgres_exporter/common/queries_nodemx.yml + Items: []corev1.DownwardAPIVolumeFile{{ + Path: "cpu_limit", + ResourceFieldRef: &corev1.ResourceFieldSelector{ + ContainerName: naming.ContainerDatabase, + Resource: "limits.cpu", + Divisor: oneMillicore, + }, + }, { + Path: "cpu_request", + ResourceFieldRef: &corev1.ResourceFieldSelector{ + ContainerName: naming.ContainerDatabase, + Resource: "requests.cpu", + Divisor: oneMillicore, + }, + }, { + Path: "mem_limit", + ResourceFieldRef: &corev1.ResourceFieldSelector{ + ContainerName: naming.ContainerDatabase, + Resource: "limits.memory", + Divisor: oneMebibyte, + }, + }, { + Path: "mem_request", + ResourceFieldRef: &corev1.ResourceFieldSelector{ + ContainerName: naming.ContainerDatabase, + Resource: "requests.memory", + Divisor: oneMebibyte, + }, + }, { + Path: "labels", + FieldRef: &corev1.ObjectFieldSelector{ + APIVersion: corev1.SchemeGroupVersion.Version, + FieldPath: "metadata.labels", + }, + }, { + Path: "annotations", + FieldRef: &corev1.ObjectFieldSelector{ + APIVersion: corev1.SchemeGroupVersion.Version, + FieldPath: "metadata.annotations", + }, + }}, + }, + }, + } + + container := corev1.Container{ + Name: naming.ContainerDatabase, + + // Patroni will set the command and probes. + + Env: Environment(inCluster), + Image: config.PostgresContainerImage(inCluster), + ImagePullPolicy: inCluster.Spec.ImagePullPolicy, + Resources: inInstanceSpec.Resources, + + Ports: []corev1.ContainerPort{{ + Name: naming.PortPostgreSQL, + ContainerPort: *inCluster.Spec.Port, + Protocol: corev1.ProtocolTCP, + }}, + + SecurityContext: initialize.RestrictedSecurityContext(), + VolumeMounts: []corev1.VolumeMount{ + certVolumeMount, + dataVolumeMount, + downwardAPIVolumeMount, + }, + } + + reloader := corev1.Container{ + Name: naming.ContainerClientCertCopy, + + Command: reloadCommand(naming.ContainerClientCertCopy), + + Image: container.Image, + ImagePullPolicy: container.ImagePullPolicy, + SecurityContext: initialize.RestrictedSecurityContext(), + + VolumeMounts: []corev1.VolumeMount{certVolumeMount, dataVolumeMount}, + } + + if inInstanceSpec.Sidecars != nil && + inInstanceSpec.Sidecars.ReplicaCertCopy != nil && + inInstanceSpec.Sidecars.ReplicaCertCopy.Resources != nil { + reloader.Resources = *inInstanceSpec.Sidecars.ReplicaCertCopy.Resources + } + + startup := corev1.Container{ + Name: naming.ContainerPostgresStartup, + + Command: startupCommand(ctx, inCluster, inInstanceSpec), + Env: Environment(inCluster), + + Image: container.Image, + ImagePullPolicy: container.ImagePullPolicy, + Resources: container.Resources, + SecurityContext: initialize.RestrictedSecurityContext(), + + VolumeMounts: []corev1.VolumeMount{certVolumeMount, dataVolumeMount}, + } + + outInstancePod.Volumes = []corev1.Volume{ + certVolume, + dataVolume, + downwardAPIVolume, + } + + // If `TablespaceVolumes` FeatureGate is enabled, `inTablespaceVolumes` may not be nil. 
+ // In that case, add any tablespace volumes to the pod, and + // add volumeMounts to the database and startup containers + for _, vol := range inTablespaceVolumes { + tablespaceVolumeMount := TablespaceVolumeMount(vol.Labels[naming.LabelData]) + tablespaceVolume := corev1.Volume{ + Name: tablespaceVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: vol.Name, + ReadOnly: false, + }, + }, + } + outInstancePod.Volumes = append(outInstancePod.Volumes, tablespaceVolume) + container.VolumeMounts = append(container.VolumeMounts, tablespaceVolumeMount) + startup.VolumeMounts = append(startup.VolumeMounts, tablespaceVolumeMount) + } + + if len(inCluster.Spec.Config.Files) != 0 { + additionalConfigVolumeMount := AdditionalConfigVolumeMount() + additionalConfigVolume := corev1.Volume{Name: additionalConfigVolumeMount.Name} + additionalConfigVolume.Projected = &corev1.ProjectedVolumeSource{ + Sources: append([]corev1.VolumeProjection{}, inCluster.Spec.Config.Files...), + } + container.VolumeMounts = append(container.VolumeMounts, additionalConfigVolumeMount) + outInstancePod.Volumes = append(outInstancePod.Volumes, additionalConfigVolume) + } + + // Mount the WAL PVC whenever it exists. The startup command will move WAL + // files to or from this volume according to inInstanceSpec. + if inWALVolume != nil { + walVolumeMount := WALVolumeMount() + walVolume := corev1.Volume{ + Name: walVolumeMount.Name, + VolumeSource: corev1.VolumeSource{ + PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ + ClaimName: inWALVolume.Name, + ReadOnly: false, + }, + }, + } + + container.VolumeMounts = append(container.VolumeMounts, walVolumeMount) + startup.VolumeMounts = append(startup.VolumeMounts, walVolumeMount) + outInstancePod.Volumes = append(outInstancePod.Volumes, walVolume) + } + + outInstancePod.Containers = []corev1.Container{container, reloader} + + // If the InstanceSidecars feature gate is enabled and instance sidecars are + // defined, add the defined container to the Pod. + if feature.Enabled(ctx, feature.InstanceSidecars) && + inInstanceSpec.Containers != nil { + outInstancePod.Containers = append(outInstancePod.Containers, inInstanceSpec.Containers...) + } + + outInstancePod.InitContainers = []corev1.Container{startup} +} + +// PodSecurityContext returns a v1.PodSecurityContext for cluster that can write +// to PersistentVolumes. +func PodSecurityContext(cluster *v1beta1.PostgresCluster) *corev1.PodSecurityContext { + podSecurityContext := initialize.PodSecurityContext() + + // Use the specified supplementary groups except for root. The CRD has + // similar validation, but we should never emit a PodSpec with that group. + // - https://docs.k8s.io/concepts/security/pod-security-standards/ + for i := range cluster.Spec.SupplementalGroups { + if gid := cluster.Spec.SupplementalGroups[i]; gid > 0 { + podSecurityContext.SupplementalGroups = append(podSecurityContext.SupplementalGroups, gid) + } + } + + // OpenShift assigns a filesystem group based on a SecurityContextConstraint. + // Otherwise, set a filesystem group so PostgreSQL can write to files + // regardless of the UID or GID of a container. 
+ // - https://cloud.redhat.com/blog/a-guide-to-openshift-and-uids + // - https://docs.k8s.io/tasks/configure-pod-container/security-context/ + // - https://docs.openshift.com/container-platform/4.8/authentication/managing-security-context-constraints.html + if cluster.Spec.OpenShift == nil || !*cluster.Spec.OpenShift { + podSecurityContext.FSGroup = initialize.Int64(26) + } + + return podSecurityContext +} diff --git a/internal/postgres/reconcile_test.go b/internal/postgres/reconcile_test.go new file mode 100644 index 0000000000..138b5c7b3e --- /dev/null +++ b/internal/postgres/reconcile_test.go @@ -0,0 +1,753 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "context" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/initialize" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestDataVolumeMount(t *testing.T) { + mount := DataVolumeMount() + + assert.DeepEqual(t, mount, corev1.VolumeMount{ + Name: "postgres-data", + MountPath: "/pgdata", + ReadOnly: false, + }) +} + +func TestWALVolumeMount(t *testing.T) { + mount := WALVolumeMount() + + assert.DeepEqual(t, mount, corev1.VolumeMount{ + Name: "postgres-wal", + MountPath: "/pgwal", + ReadOnly: false, + }) +} + +func TestDownwardAPIVolumeMount(t *testing.T) { + mount := DownwardAPIVolumeMount() + + assert.DeepEqual(t, mount, corev1.VolumeMount{ + Name: "database-containerinfo", + MountPath: "/etc/database-containerinfo", + ReadOnly: true, + }) +} + +func TestTablespaceVolumeMount(t *testing.T) { + mount := TablespaceVolumeMount("trial") + + assert.DeepEqual(t, mount, corev1.VolumeMount{ + Name: "tablespace-trial", + MountPath: "/tablespaces/trial", + ReadOnly: false, + }) +} + +func TestInstancePod(t *testing.T) { + t.Parallel() + + ctx := context.Background() + cluster := new(v1beta1.PostgresCluster) + cluster.Default() + cluster.Spec.ImagePullPolicy = corev1.PullAlways + cluster.Spec.PostgresVersion = 11 + + dataVolume := new(corev1.PersistentVolumeClaim) + dataVolume.Name = "datavol" + + instance := new(v1beta1.PostgresInstanceSetSpec) + instance.Resources.Requests = corev1.ResourceList{"cpu": resource.MustParse("9m")} + instance.Sidecars = &v1beta1.InstanceSidecars{ + ReplicaCertCopy: &v1beta1.Sidecar{ + Resources: &corev1.ResourceRequirements{ + Requests: corev1.ResourceList{"cpu": resource.MustParse("21m")}, + }, + }, + } + + serverSecretProjection := &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{Name: "srv-secret"}, + Items: []corev1.KeyToPath{ + { + Key: naming.ReplicationCert, + Path: naming.ReplicationCert, + }, + { + Key: naming.ReplicationPrivateKey, + Path: naming.ReplicationPrivateKey, + }, + { + Key: naming.ReplicationCACert, + Path: naming.ReplicationCACert, + }, + }, + } + + clientSecretProjection := &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{Name: "repl-secret"}, + Items: []corev1.KeyToPath{ + { + Key: naming.ReplicationCert, + Path: naming.ReplicationCertPath, + }, + { + Key: naming.ReplicationPrivateKey, + Path: naming.ReplicationPrivateKeyPath, + }, + }, + } + + // without WAL volume nor WAL volume spec + pod := 
new(corev1.PodSpec) + InstancePod(ctx, cluster, instance, + serverSecretProjection, clientSecretProjection, dataVolume, nil, nil, pod) + + assert.Assert(t, cmp.MarshalMatches(pod, ` +containers: +- env: + - name: PGDATA + value: /pgdata/pg11 + - name: PGHOST + value: /tmp/postgres + - name: PGPORT + value: "5432" + - name: KRB5_CONFIG + value: /etc/postgres/krb5.conf + - name: KRB5RCACHEDIR + value: /tmp + - name: LDAPTLS_CACERT + value: /etc/postgres/ldap/ca.crt + imagePullPolicy: Always + name: database + ports: + - containerPort: 5432 + name: postgres + protocol: TCP + resources: + requests: + cpu: 9m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /pgconf/tls + name: cert-volume + readOnly: true + - mountPath: /pgdata + name: postgres-data + - mountPath: /etc/database-containerinfo + name: database-containerinfo + readOnly: true +- command: + - bash + - -ceu + - -- + - |- + monitor() { + # Parameters for curl when managing autogrow annotation. + APISERVER="https://kubernetes.default.svc" + SERVICEACCOUNT="/var/run/secrets/kubernetes.io/serviceaccount" + NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace) + TOKEN=$(cat ${SERVICEACCOUNT}/token) + CACERT=${SERVICEACCOUNT}/ca.crt + + declare -r directory="/pgconf/tls" + exec {fd}<> <(:||:) + while read -r -t 5 -u "${fd}" ||:; do + # Manage replication certificate. + if [[ "${directory}" -nt "/proc/self/fd/${fd}" ]] && + install -D --mode=0600 -t "/tmp/replication" "${directory}"/{replication/tls.crt,replication/tls.key,replication/ca.crt} && + pkill -HUP --exact --parent=1 postgres + then + exec {fd}>&- && exec {fd}<> <(:||:) + stat --format='Loaded certificates dated %y' "${directory}" + fi + + # Manage autogrow annotation. + # Return size in Mebibytes. + size=$(df --human-readable --block-size=M /pgdata | awk 'FNR == 2 {print $2}') + use=$(df --human-readable /pgdata | awk 'FNR == 2 {print $5}') + sizeInt="${size//M/}" + # Use the sed punctuation class, because the shell will not accept the percent sign in an expansion. 
+ useInt=$(echo $use | sed 's/[[:punct:]]//g') + triggerExpansion="$((useInt > 75))" + if [ $triggerExpansion -eq 1 ]; then + newSize="$(((sizeInt / 2)+sizeInt))" + newSizeMi="${newSize}Mi" + d='[{"op": "add", "path": "/metadata/annotations/suggested-pgdata-pvc-size", "value": "'"$newSizeMi"'"}]' + curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -XPATCH "${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods/${HOSTNAME}?fieldManager=kubectl-annotate" -H "Content-Type: application/json-patch+json" --data "$d" + fi + done + }; export -f monitor; exec -a "$0" bash -ceu monitor + - replication-cert-copy + imagePullPolicy: Always + name: replication-cert-copy + resources: + requests: + cpu: 21m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /pgconf/tls + name: cert-volume + readOnly: true + - mountPath: /pgdata + name: postgres-data +initContainers: +- command: + - bash + - -ceu + - -- + - |- + declare -r expected_major_version="$1" pgwal_directory="$2" pgbrLog_directory="$3" + permissions() { while [[ -n "$1" ]]; do set "${1%/*}" "$@"; done; shift; stat -Lc '%A %4u %4g %n' "$@"; } + halt() { local rc=$?; >&2 echo "$@"; exit "${rc/#0/1}"; } + results() { printf '::postgres-operator: %s::%s\n' "$@"; } + recreate() ( + local tmp; tmp=$(mktemp -d -p "${1%/*}"); GLOBIGNORE='.:..'; set -x + chmod "$2" "${tmp}"; mv "$1"/* "${tmp}"; rmdir "$1"; mv "${tmp}" "$1" + ) + safelink() ( + local desired="$1" name="$2" current + current=$(realpath "${name}") + if [[ "${current}" == "${desired}" ]]; then return; fi + set -x; mv --no-target-directory "${current}" "${desired}" + ln --no-dereference --force --symbolic "${desired}" "${name}" + ) + echo Initializing ... + results 'uid' "$(id -u ||:)" 'gid' "$(id -G ||:)" + if [[ "${pgwal_directory}" == *"pgwal/"* ]] && [[ ! -d "/pgwal/pgbackrest-spool" ]];then rm -rf "/pgdata/pgbackrest-spool" && mkdir -p "/pgwal/pgbackrest-spool" && ln --force --symbolic "/pgwal/pgbackrest-spool" "/pgdata/pgbackrest-spool";fi + if [[ ! -e "/pgdata/pgbackrest-spool" ]];then rm -rf /pgdata/pgbackrest-spool;fi + results 'postgres path' "$(command -v postgres ||:)" + results 'postgres version' "${postgres_version:=$(postgres --version ||:)}" + [[ "${postgres_version}" =~ ") ${expected_major_version}"($|[^0-9]) ]] || + halt Expected PostgreSQL version "${expected_major_version}" + results 'config directory' "${PGDATA:?}" + postgres_data_directory=$([[ -d "${PGDATA}" ]] && postgres -C data_directory || echo "${PGDATA}") + results 'data directory' "${postgres_data_directory}" + [[ "${postgres_data_directory}" == "${PGDATA}" ]] || + halt Expected matching config and data directories + bootstrap_dir="${postgres_data_directory}_bootstrap" + [[ -d "${bootstrap_dir}" ]] && results 'bootstrap directory' "${bootstrap_dir}" + [[ -d "${bootstrap_dir}" ]] && postgres_data_directory="${bootstrap_dir}" + if [[ ! 
-e "${postgres_data_directory}" || -O "${postgres_data_directory}" ]]; then + install --directory --mode=0700 "${postgres_data_directory}" + elif [[ -w "${postgres_data_directory}" && -g "${postgres_data_directory}" ]]; then + recreate "${postgres_data_directory}" '0700' + else (halt Permissions!); fi || + halt "$(permissions "${postgres_data_directory}" ||:)" + results 'pgBackRest log directory' "${pgbrLog_directory}" + install --directory --mode=0775 "${pgbrLog_directory}" || + halt "$(permissions "${pgbrLog_directory}" ||:)" + install -D --mode=0600 -t "/tmp/replication" "/pgconf/tls/replication"/{tls.crt,tls.key,ca.crt} + + + [[ -f "${postgres_data_directory}/PG_VERSION" ]] || exit 0 + results 'data version' "${postgres_data_version:=$(< "${postgres_data_directory}/PG_VERSION")}" + [[ "${postgres_data_version}" == "${expected_major_version}" ]] || + halt Expected PostgreSQL data version "${expected_major_version}" + [[ ! -f "${postgres_data_directory}/postgresql.conf" ]] && + touch "${postgres_data_directory}/postgresql.conf" + safelink "${pgwal_directory}" "${postgres_data_directory}/pg_wal" + results 'wal directory' "$(realpath "${postgres_data_directory}/pg_wal" ||:)" + rm -f "${postgres_data_directory}/recovery.signal" + - startup + - "11" + - /pgdata/pg11_wal + - /pgdata/pgbackrest/log + env: + - name: PGDATA + value: /pgdata/pg11 + - name: PGHOST + value: /tmp/postgres + - name: PGPORT + value: "5432" + - name: KRB5_CONFIG + value: /etc/postgres/krb5.conf + - name: KRB5RCACHEDIR + value: /tmp + - name: LDAPTLS_CACERT + value: /etc/postgres/ldap/ca.crt + imagePullPolicy: Always + name: postgres-startup + resources: + requests: + cpu: 9m + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + volumeMounts: + - mountPath: /pgconf/tls + name: cert-volume + readOnly: true + - mountPath: /pgdata + name: postgres-data +volumes: +- name: cert-volume + projected: + defaultMode: 384 + sources: + - secret: + items: + - key: tls.crt + path: tls.crt + - key: tls.key + path: tls.key + - key: ca.crt + path: ca.crt + name: srv-secret + - secret: + items: + - key: tls.crt + path: replication/tls.crt + - key: tls.key + path: replication/tls.key + name: repl-secret +- name: postgres-data + persistentVolumeClaim: + claimName: datavol +- downwardAPI: + items: + - path: cpu_limit + resourceFieldRef: + containerName: database + divisor: 1m + resource: limits.cpu + - path: cpu_request + resourceFieldRef: + containerName: database + divisor: 1m + resource: requests.cpu + - path: mem_limit + resourceFieldRef: + containerName: database + divisor: 1Mi + resource: limits.memory + - path: mem_request + resourceFieldRef: + containerName: database + divisor: 1Mi + resource: requests.memory + - fieldRef: + apiVersion: v1 + fieldPath: metadata.labels + path: labels + - fieldRef: + apiVersion: v1 + fieldPath: metadata.annotations + path: annotations + name: database-containerinfo + `)) + + t.Run("WithWALVolumeWithoutWALVolumeSpec", func(t *testing.T) { + walVolume := new(corev1.PersistentVolumeClaim) + walVolume.Name = "walvol" + + pod := new(corev1.PodSpec) + InstancePod(ctx, cluster, instance, + serverSecretProjection, clientSecretProjection, dataVolume, walVolume, nil, pod) + + assert.Assert(t, len(pod.Containers) > 0) + assert.Assert(t, len(pod.InitContainers) > 0) + + // Container has all mountPaths, including downwardAPI + assert.Assert(t, 
cmp.MarshalMatches(pod.Containers[0].VolumeMounts, ` +- mountPath: /pgconf/tls + name: cert-volume + readOnly: true +- mountPath: /pgdata + name: postgres-data +- mountPath: /etc/database-containerinfo + name: database-containerinfo + readOnly: true +- mountPath: /pgwal + name: postgres-wal`), "expected WAL and downwardAPI mounts in %q container", pod.Containers[0].Name) + + // InitContainer has all mountPaths, except downwardAPI + assert.Assert(t, cmp.MarshalMatches(pod.InitContainers[0].VolumeMounts, ` +- mountPath: /pgconf/tls + name: cert-volume + readOnly: true +- mountPath: /pgdata + name: postgres-data +- mountPath: /pgwal + name: postgres-wal`), "expected WAL mount, no downwardAPI mount in %q container", pod.InitContainers[0].Name) + + assert.Assert(t, cmp.MarshalMatches(pod.Volumes, ` +- name: cert-volume + projected: + defaultMode: 384 + sources: + - secret: + items: + - key: tls.crt + path: tls.crt + - key: tls.key + path: tls.key + - key: ca.crt + path: ca.crt + name: srv-secret + - secret: + items: + - key: tls.crt + path: replication/tls.crt + - key: tls.key + path: replication/tls.key + name: repl-secret +- name: postgres-data + persistentVolumeClaim: + claimName: datavol +- downwardAPI: + items: + - path: cpu_limit + resourceFieldRef: + containerName: database + divisor: 1m + resource: limits.cpu + - path: cpu_request + resourceFieldRef: + containerName: database + divisor: 1m + resource: requests.cpu + - path: mem_limit + resourceFieldRef: + containerName: database + divisor: 1Mi + resource: limits.memory + - path: mem_request + resourceFieldRef: + containerName: database + divisor: 1Mi + resource: requests.memory + - fieldRef: + apiVersion: v1 + fieldPath: metadata.labels + path: labels + - fieldRef: + apiVersion: v1 + fieldPath: metadata.annotations + path: annotations + name: database-containerinfo +- name: postgres-wal + persistentVolumeClaim: + claimName: walvol + `), "expected WAL volume") + + // Startup moves WAL files to data volume. 
+ assert.DeepEqual(t, pod.InitContainers[0].Command[4:], + []string{"startup", "11", "/pgdata/pg11_wal", "/pgdata/pgbackrest/log"}) + }) + + t.Run("WithAdditionalConfigFiles", func(t *testing.T) { + clusterWithConfig := cluster.DeepCopy() + clusterWithConfig.Spec.Config.Files = []corev1.VolumeProjection{ + { + Secret: &corev1.SecretProjection{ + LocalObjectReference: corev1.LocalObjectReference{ + Name: "keytab", + }, + }, + }, + } + + pod := new(corev1.PodSpec) + InstancePod(ctx, clusterWithConfig, instance, + serverSecretProjection, clientSecretProjection, dataVolume, nil, nil, pod) + + assert.Assert(t, len(pod.Containers) > 0) + assert.Assert(t, len(pod.InitContainers) > 0) + + // Container has all mountPaths, including downwardAPI, + // and the postgres-config + assert.Assert(t, cmp.MarshalMatches(pod.Containers[0].VolumeMounts, ` +- mountPath: /pgconf/tls + name: cert-volume + readOnly: true +- mountPath: /pgdata + name: postgres-data +- mountPath: /etc/database-containerinfo + name: database-containerinfo + readOnly: true +- mountPath: /etc/postgres + name: postgres-config + readOnly: true`), "expected WAL and downwardAPI mounts in %q container", pod.Containers[0].Name) + + // InitContainer has all mountPaths, except downwardAPI and additionalConfig + assert.Assert(t, cmp.MarshalMatches(pod.InitContainers[0].VolumeMounts, ` +- mountPath: /pgconf/tls + name: cert-volume + readOnly: true +- mountPath: /pgdata + name: postgres-data`), "expected WAL mount, no downwardAPI mount in %q container", pod.InitContainers[0].Name) + }) + + t.Run("WithCustomSidecarContainer", func(t *testing.T) { + sidecarInstance := new(v1beta1.PostgresInstanceSetSpec) + sidecarInstance.Containers = []corev1.Container{ + {Name: "customsidecar1"}, + } + + t.Run("SidecarNotEnabled", func(t *testing.T) { + InstancePod(ctx, cluster, sidecarInstance, + serverSecretProjection, clientSecretProjection, dataVolume, nil, nil, pod) + + assert.Equal(t, len(pod.Containers), 2, "expected 2 containers in Pod, got %d", len(pod.Containers)) + }) + + t.Run("SidecarEnabled", func(t *testing.T) { + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.InstanceSidecars: true, + })) + ctx := feature.NewContext(ctx, gate) + + InstancePod(ctx, cluster, sidecarInstance, + serverSecretProjection, clientSecretProjection, dataVolume, nil, nil, pod) + + assert.Equal(t, len(pod.Containers), 3, "expected 3 containers in Pod, got %d", len(pod.Containers)) + + var found bool + for i := range pod.Containers { + if pod.Containers[i].Name == "customsidecar1" { + found = true + break + } + } + assert.Assert(t, found, "expected custom sidecar 'customsidecar1', but container not found") + }) + }) + + t.Run("WithTablespaces", func(t *testing.T) { + clusterWithTablespaces := cluster.DeepCopy() + clusterWithTablespaces.Spec.InstanceSets = []v1beta1.PostgresInstanceSetSpec{ + { + TablespaceVolumes: []v1beta1.TablespaceVolume{ + {Name: "trial"}, + {Name: "castle"}, + }, + }, + } + + tablespaceVolume1 := new(corev1.PersistentVolumeClaim) + tablespaceVolume1.Labels = map[string]string{ + "postgres-operator.crunchydata.com/data": "castle", + } + tablespaceVolume2 := new(corev1.PersistentVolumeClaim) + tablespaceVolume2.Labels = map[string]string{ + "postgres-operator.crunchydata.com/data": "trial", + } + tablespaceVolumes := []*corev1.PersistentVolumeClaim{tablespaceVolume1, tablespaceVolume2} + + InstancePod(ctx, cluster, instance, + serverSecretProjection, clientSecretProjection, dataVolume, nil, tablespaceVolumes, pod) + 
+ assert.Assert(t, cmp.MarshalMatches(pod.Containers[0].VolumeMounts, ` +- mountPath: /pgconf/tls + name: cert-volume + readOnly: true +- mountPath: /pgdata + name: postgres-data +- mountPath: /etc/database-containerinfo + name: database-containerinfo + readOnly: true +- mountPath: /tablespaces/castle + name: tablespace-castle +- mountPath: /tablespaces/trial + name: tablespace-trial`), "expected tablespace mount(s) in %q container", pod.Containers[0].Name) + + // InitContainer has all mountPaths, except downwardAPI and additionalConfig + assert.Assert(t, cmp.MarshalMatches(pod.InitContainers[0].VolumeMounts, ` +- mountPath: /pgconf/tls + name: cert-volume + readOnly: true +- mountPath: /pgdata + name: postgres-data +- mountPath: /tablespaces/castle + name: tablespace-castle +- mountPath: /tablespaces/trial + name: tablespace-trial`), "expected tablespace mount(s) in %q container", pod.InitContainers[0].Name) + }) + + t.Run("WithWALVolumeWithWALVolumeSpec", func(t *testing.T) { + walVolume := new(corev1.PersistentVolumeClaim) + walVolume.Name = "walvol" + + instance := new(v1beta1.PostgresInstanceSetSpec) + instance.WALVolumeClaimSpec = new(corev1.PersistentVolumeClaimSpec) + + pod := new(corev1.PodSpec) + InstancePod(ctx, cluster, instance, + serverSecretProjection, clientSecretProjection, dataVolume, walVolume, nil, pod) + + assert.Assert(t, len(pod.Containers) > 0) + assert.Assert(t, len(pod.InitContainers) > 0) + + assert.Assert(t, cmp.MarshalMatches(pod.Containers[0].VolumeMounts, ` +- mountPath: /pgconf/tls + name: cert-volume + readOnly: true +- mountPath: /pgdata + name: postgres-data +- mountPath: /etc/database-containerinfo + name: database-containerinfo + readOnly: true +- mountPath: /pgwal + name: postgres-wal`), "expected WAL and downwardAPI mounts in %q container", pod.Containers[0].Name) + + assert.Assert(t, cmp.MarshalMatches(pod.InitContainers[0].VolumeMounts, ` +- mountPath: /pgconf/tls + name: cert-volume + readOnly: true +- mountPath: /pgdata + name: postgres-data +- mountPath: /pgwal + name: postgres-wal`), "expected WAL mount, no downwardAPI mount in %q container", pod.InitContainers[0].Name) + + assert.Assert(t, cmp.MarshalMatches(pod.Volumes, ` +- name: cert-volume + projected: + defaultMode: 384 + sources: + - secret: + items: + - key: tls.crt + path: tls.crt + - key: tls.key + path: tls.key + - key: ca.crt + path: ca.crt + name: srv-secret + - secret: + items: + - key: tls.crt + path: replication/tls.crt + - key: tls.key + path: replication/tls.key + name: repl-secret +- name: postgres-data + persistentVolumeClaim: + claimName: datavol +- downwardAPI: + items: + - path: cpu_limit + resourceFieldRef: + containerName: database + divisor: 1m + resource: limits.cpu + - path: cpu_request + resourceFieldRef: + containerName: database + divisor: 1m + resource: requests.cpu + - path: mem_limit + resourceFieldRef: + containerName: database + divisor: 1Mi + resource: limits.memory + - path: mem_request + resourceFieldRef: + containerName: database + divisor: 1Mi + resource: requests.memory + - fieldRef: + apiVersion: v1 + fieldPath: metadata.labels + path: labels + - fieldRef: + apiVersion: v1 + fieldPath: metadata.annotations + path: annotations + name: database-containerinfo +- name: postgres-wal + persistentVolumeClaim: + claimName: walvol + `), "expected WAL volume") + + // Startup moves WAL files to WAL volume. 
+ assert.DeepEqual(t, pod.InitContainers[0].Command[4:], + []string{"startup", "11", "/pgwal/pg11_wal", "/pgdata/pgbackrest/log"}) + }) +} + +func TestPodSecurityContext(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + cluster.Default() + + assert.Assert(t, cmp.MarshalMatches(PodSecurityContext(cluster), ` +fsGroup: 26 +fsGroupChangePolicy: OnRootMismatch + `)) + + cluster.Spec.OpenShift = initialize.Bool(true) + assert.Assert(t, cmp.MarshalMatches(PodSecurityContext(cluster), ` +fsGroupChangePolicy: OnRootMismatch + `)) + + cluster.Spec.SupplementalGroups = []int64{} + assert.Assert(t, cmp.MarshalMatches(PodSecurityContext(cluster), ` +fsGroupChangePolicy: OnRootMismatch + `)) + + cluster.Spec.SupplementalGroups = []int64{999, 65000} + assert.Assert(t, cmp.MarshalMatches(PodSecurityContext(cluster), ` +fsGroupChangePolicy: OnRootMismatch +supplementalGroups: +- 999 +- 65000 + `)) + + *cluster.Spec.OpenShift = false + assert.Assert(t, cmp.MarshalMatches(PodSecurityContext(cluster), ` +fsGroup: 26 +fsGroupChangePolicy: OnRootMismatch +supplementalGroups: +- 999 +- 65000 + `)) + + t.Run("NoRootGID", func(t *testing.T) { + cluster.Spec.SupplementalGroups = []int64{999, 0, 100, 0} + assert.DeepEqual(t, []int64{999, 100}, PodSecurityContext(cluster).SupplementalGroups) + + cluster.Spec.SupplementalGroups = []int64{0} + assert.Assert(t, PodSecurityContext(cluster).SupplementalGroups == nil) + }) +} diff --git a/internal/postgres/users.go b/internal/postgres/users.go new file mode 100644 index 0000000000..be8785a4e5 --- /dev/null +++ b/internal/postgres/users.go @@ -0,0 +1,241 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "bytes" + "context" + "encoding/json" + "strings" + + pg_query "github.com/pganalyze/pg_query_go/v5" + + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +var RESERVED_SCHEMA_NAMES = map[string]bool{ + "public": true, // This is here for documentation; Postgres will reject a role named `public` as reserved + "pgbouncer": true, + "monitor": true, +} + +func sanitizeAlterRoleOptions(options string) string { + const AlterRolePrefix = `ALTER ROLE "any" WITH ` + + // Parse the options and discard them completely when incoherent. + parsed, err := pg_query.Parse(AlterRolePrefix + options) + if err != nil || len(parsed.GetStmts()) != 1 { + return "" + } + + // Rebuild the options list without invalid options. TODO(go1.21) TODO(slices) + orig := parsed.GetStmts()[0].GetStmt().GetAlterRoleStmt().GetOptions() + next := make([]*pg_query.Node, 0, len(orig)) + for i, option := range orig { + if strings.EqualFold(option.GetDefElem().GetDefname(), "password") { + continue + } + next = append(next, orig[i]) + } + if len(next) > 0 { + parsed.GetStmts()[0].GetStmt().GetAlterRoleStmt().Options = next + } else { + return "" + } + + // Turn the modified statement back into SQL and remove the ALTER ROLE portion. + sql, _ := pg_query.Deparse(parsed) + return strings.TrimPrefix(sql, AlterRolePrefix) +} + +// WriteUsersInPostgreSQL calls exec to create users that do not exist in +// PostgreSQL. Once they exist, it updates their options and passwords and +// grants them access to their specified databases. The databases must already +// exist. 
+func WriteUsersInPostgreSQL( + ctx context.Context, cluster *v1beta1.PostgresCluster, exec Executor, + users []v1beta1.PostgresUserSpec, verifiers map[string]string, +) error { + log := logging.FromContext(ctx) + + var err error + var sql bytes.Buffer + + // Prevent unexpected dereferences by emptying "search_path". The "pg_catalog" + // schema is still searched, and only temporary objects can be created. + // - https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SEARCH-PATH + _, _ = sql.WriteString(`SET search_path TO '';`) + + // Fill a temporary table with the JSON of the user specifications. + // "\copy" reads from subsequent lines until the special line "\.". + // - https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-META-COMMANDS-COPY + _, _ = sql.WriteString(` +CREATE TEMPORARY TABLE input (id serial, data json); +\copy input (data) from stdin with (format text) +`) + encoder := json.NewEncoder(&sql) + encoder.SetEscapeHTML(false) + + for i := range users { + spec := users[i] + + databases := spec.Databases + options := sanitizeAlterRoleOptions(spec.Options) + + // The "postgres" user must always be a superuser that can login to + // the "postgres" database. + if spec.Name == "postgres" { + databases = append(databases[:0:0], "postgres") + options = `LOGIN SUPERUSER` + } + + if err == nil { + err = encoder.Encode(map[string]any{ + "databases": databases, + "options": options, + "username": spec.Name, + "verifier": verifiers[string(spec.Name)], + }) + } + } + _, _ = sql.WriteString(`\.` + "\n") + + // Create the following objects in a transaction so that permissions are + // correct before any other session sees them. + // - https://www.postgresql.org/docs/current/ddl-priv.html + _, _ = sql.WriteString(`BEGIN;`) + + // Create users that do not already exist. Permissions are granted later. + // Roles created this way automatically have the LOGIN option. + // - https://www.postgresql.org/docs/current/sql-createuser.html + _, _ = sql.WriteString(` +SELECT pg_catalog.format('CREATE USER %I', + pg_catalog.json_extract_path_text(input.data, 'username')) + FROM input + WHERE NOT EXISTS ( + SELECT 1 FROM pg_catalog.pg_roles + WHERE rolname = pg_catalog.json_extract_path_text(input.data, 'username')) + ORDER BY input.id +\gexec +`) + + // Set any options from the specification. Validation ensures that the value + // does not contain semicolons. + // - https://www.postgresql.org/docs/current/sql-alterrole.html + _, _ = sql.WriteString(` +SELECT pg_catalog.format('ALTER ROLE %I WITH %s PASSWORD %L', + pg_catalog.json_extract_path_text(input.data, 'username'), + pg_catalog.json_extract_path_text(input.data, 'options'), + pg_catalog.json_extract_path_text(input.data, 'verifier')) + FROM input ORDER BY input.id +\gexec +`) + + // Grant access to any specified databases. + // - https://www.postgresql.org/docs/current/sql-grant.html + _, _ = sql.WriteString(` +SELECT pg_catalog.format('GRANT ALL PRIVILEGES ON DATABASE %I TO %I', + pg_catalog.json_array_elements_text( + pg_catalog.json_extract_path( + pg_catalog.json_strip_nulls(input.data), 'databases')), + pg_catalog.json_extract_path_text(input.data, 'username')) + FROM input ORDER BY input.id +\gexec +`) + + // Commit (finish) the transaction. + _, _ = sql.WriteString(`COMMIT;`) + + stdout, stderr, err := exec.Exec(ctx, &sql, + map[string]string{ + "ON_ERROR_STOP": "on", // Abort when any one statement fails. + "QUIET": "on", // Do not print successful statements to stdout. 
+ }) + + log.V(1).Info("wrote PostgreSQL users", "stdout", stdout, "stderr", stderr) + + // The operator will attempt to write schemas for the users in the spec if + // * the feature gate is enabled and + // * the cluster is annotated. + if feature.Enabled(ctx, feature.AutoCreateUserSchema) { + autoCreateUserSchemaAnnotationValue, annotationExists := cluster.Annotations[naming.AutoCreateUserSchemaAnnotation] + if annotationExists && strings.EqualFold(autoCreateUserSchemaAnnotationValue, "true") { + log.V(1).Info("Writing schemas for users.") + err = WriteUsersSchemasInPostgreSQL(ctx, exec, users) + } + } + + return err +} + +// WriteUsersSchemasInPostgreSQL will create a schema for each user in each database that user has access to +func WriteUsersSchemasInPostgreSQL(ctx context.Context, exec Executor, + users []v1beta1.PostgresUserSpec) error { + + log := logging.FromContext(ctx) + + var err error + var stdout string + var stderr string + + for i := range users { + spec := users[i] + + // We skip if the user has the name of a reserved schema + if RESERVED_SCHEMA_NAMES[string(spec.Name)] { + log.V(1).Info("Skipping schema creation for user with reserved name", + "name", string(spec.Name)) + continue + } + + // We skip if the user has no databases + if len(spec.Databases) == 0 { + continue + } + + var sql bytes.Buffer + + // Prevent unexpected dereferences by emptying "search_path". The "pg_catalog" + // schema is still searched, and only temporary objects can be created. + // - https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SEARCH-PATH + _, _ = sql.WriteString(`SET search_path TO '';`) + + _, _ = sql.WriteString(`SELECT * FROM json_array_elements_text(:'databases');`) + + databases, _ := json.Marshal(spec.Databases) + + stdout, stderr, err = exec.ExecInDatabasesFromQuery(ctx, + sql.String(), + strings.Join([]string{ + // Quiet NOTICE messages from IF EXISTS statements. + // - https://www.postgresql.org/docs/current/runtime-config-client.html + `SET client_min_messages = WARNING;`, + + // Creates a schema named after and owned by the user + // - https://www.postgresql.org/docs/current/ddl-schemas.html + // - https://www.postgresql.org/docs/current/sql-createschema.html + + // We create a schema named after the user because + // the PG search_path does not need to be updated, + // since search_path defaults to "$user", public. + // - https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH + `CREATE SCHEMA IF NOT EXISTS :"username" AUTHORIZATION :"username";`, + }, "\n"), + map[string]string{ + "databases": string(databases), + "username": string(spec.Name), + + "ON_ERROR_STOP": "on", // Abort when any one statement fails. + "QUIET": "on", // Do not print successful commands to stdout. + }, + ) + + log.V(1).Info("wrote PostgreSQL schemas", "stdout", stdout, "stderr", stderr) + } + return err +} diff --git a/internal/postgres/users_test.go b/internal/postgres/users_test.go new file mode 100644 index 0000000000..141175c78e --- /dev/null +++ b/internal/postgres/users_test.go @@ -0,0 +1,237 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package postgres + +import ( + "context" + "errors" + "io" + "regexp" + "strings" + "testing" + + "gotest.tools/v3/assert" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestSanitizeAlterRoleOptions(t *testing.T) { + assert.Equal(t, sanitizeAlterRoleOptions(""), "") + assert.Equal(t, sanitizeAlterRoleOptions(" login other stuff"), "", + "expected non-options to be removed") + + t.Run("RemovesPassword", func(t *testing.T) { + assert.Equal(t, sanitizeAlterRoleOptions("password 'anything'"), "") + assert.Equal(t, sanitizeAlterRoleOptions("password $wild$ dollar quoting $wild$ login"), "LOGIN") + assert.Equal(t, sanitizeAlterRoleOptions(" login password '' replication "), "LOGIN REPLICATION") + }) + + t.Run("RemovesComments", func(t *testing.T) { + assert.Equal(t, sanitizeAlterRoleOptions("login -- asdf"), "LOGIN") + assert.Equal(t, sanitizeAlterRoleOptions("login /*"), "") + assert.Equal(t, sanitizeAlterRoleOptions("login /* createdb */ createrole"), "LOGIN CREATEROLE") + }) +} + +func TestWriteUsersInPostgreSQL(t *testing.T) { + ctx := context.Background() + + t.Run("Arguments", func(t *testing.T) { + expected := errors.New("pass-through") + exec := func( + _ context.Context, stdin io.Reader, stdout, stderr io.Writer, command ...string, + ) error { + assert.Assert(t, stdout != nil, "should capture stdout") + assert.Assert(t, stderr != nil, "should capture stderr") + return expected + } + + cluster := new(v1beta1.PostgresCluster) + assert.Equal(t, expected, WriteUsersInPostgreSQL(ctx, cluster, exec, nil, nil)) + }) + + t.Run("Empty", func(t *testing.T) { + calls := 0 + exec := func( + _ context.Context, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + calls++ + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Equal(t, string(b), strings.TrimSpace(` +SET search_path TO ''; +CREATE TEMPORARY TABLE input (id serial, data json); +\copy input (data) from stdin with (format text) +\. 
+BEGIN; +SELECT pg_catalog.format('CREATE USER %I', + pg_catalog.json_extract_path_text(input.data, 'username')) + FROM input + WHERE NOT EXISTS ( + SELECT 1 FROM pg_catalog.pg_roles + WHERE rolname = pg_catalog.json_extract_path_text(input.data, 'username')) + ORDER BY input.id +\gexec + +SELECT pg_catalog.format('ALTER ROLE %I WITH %s PASSWORD %L', + pg_catalog.json_extract_path_text(input.data, 'username'), + pg_catalog.json_extract_path_text(input.data, 'options'), + pg_catalog.json_extract_path_text(input.data, 'verifier')) + FROM input ORDER BY input.id +\gexec + +SELECT pg_catalog.format('GRANT ALL PRIVILEGES ON DATABASE %I TO %I', + pg_catalog.json_array_elements_text( + pg_catalog.json_extract_path( + pg_catalog.json_strip_nulls(input.data), 'databases')), + pg_catalog.json_extract_path_text(input.data, 'username')) + FROM input ORDER BY input.id +\gexec +COMMIT;`)) + return nil + } + + cluster := new(v1beta1.PostgresCluster) + assert.NilError(t, WriteUsersInPostgreSQL(ctx, cluster, exec, nil, nil)) + assert.Equal(t, calls, 1) + + assert.NilError(t, WriteUsersInPostgreSQL(ctx, cluster, exec, []v1beta1.PostgresUserSpec{}, nil)) + assert.Equal(t, calls, 2) + + assert.NilError(t, WriteUsersInPostgreSQL(ctx, cluster, exec, nil, map[string]string{})) + assert.Equal(t, calls, 3) + }) + + t.Run("OptionalFields", func(t *testing.T) { + cluster := new(v1beta1.PostgresCluster) + calls := 0 + exec := func( + _ context.Context, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + calls++ + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Assert(t, cmp.Contains(string(b), ` +\copy input (data) from stdin with (format text) +{"databases":["db1"],"options":"","username":"user-no-options","verifier":""} +{"databases":null,"options":"CREATEDB CREATEROLE","username":"user-no-databases","verifier":""} +{"databases":null,"options":"","username":"user-with-verifier","verifier":"some$verifier"} +{"databases":null,"options":"LOGIN","username":"user-invalid-options","verifier":""} +\. +`)) + return nil + } + + assert.NilError(t, WriteUsersInPostgreSQL(ctx, cluster, exec, + []v1beta1.PostgresUserSpec{ + { + Name: "user-no-options", + Databases: []v1beta1.PostgresIdentifier{"db1"}, + }, + { + Name: "user-no-databases", + Options: "createdb createrole", + }, + { + Name: "user-with-verifier", + }, + { + Name: "user-invalid-options", + Options: "login password 'doot' --", + }, + }, + map[string]string{ + "no-user": "ignored", + "user-with-verifier": "some$verifier", + }, + )) + assert.Equal(t, calls, 1) + }) + + t.Run("PostgresSuperuser", func(t *testing.T) { + calls := 0 + cluster := new(v1beta1.PostgresCluster) + exec := func( + _ context.Context, stdin io.Reader, _, _ io.Writer, command ...string, + ) error { + calls++ + + b, err := io.ReadAll(stdin) + assert.NilError(t, err) + assert.Assert(t, cmp.Contains(string(b), ` +\copy input (data) from stdin with (format text) +{"databases":["postgres"],"options":"LOGIN SUPERUSER","username":"postgres","verifier":"allowed"} +\. 
+`))
+			return nil
+		}
+
+		assert.NilError(t, WriteUsersInPostgreSQL(ctx, cluster, exec,
+			[]v1beta1.PostgresUserSpec{
+				{
+					Name:      "postgres",
+					Databases: []v1beta1.PostgresIdentifier{"all", "ignored"},
+					Options:   "NOLOGIN CONNECTION LIMIT 0",
+				},
+			},
+			map[string]string{
+				"postgres": "allowed",
+			},
+		))
+		assert.Equal(t, calls, 1)
+	})
+}
+
+func TestWriteUsersSchemasInPostgreSQL(t *testing.T) {
+	ctx := context.Background()
+
+	t.Run("Mixed users", func(t *testing.T) {
+		calls := 0
+		exec := func(
+			_ context.Context, stdin io.Reader, _, _ io.Writer, command ...string,
+		) error {
+			calls++
+
+			b, err := io.ReadAll(stdin)
+			assert.NilError(t, err)
+
+			// The command strings will contain either of two possibilities, depending on the user called.
+			commands := strings.Join(command, ",")
+			re := regexp.MustCompile("--set=databases=\\[\"db1\"\\],--set=username=user-single-db|--set=databases=\\[\"db1\",\"db2\"\\],--set=username=user-multi-db")
+			assert.Assert(t, cmp.Regexp(re, commands))
+
+			assert.Assert(t, cmp.Contains(string(b), `CREATE SCHEMA IF NOT EXISTS :"username" AUTHORIZATION :"username";`))
+			return nil
+		}
+
+		assert.NilError(t, WriteUsersSchemasInPostgreSQL(ctx, exec,
+			[]v1beta1.PostgresUserSpec{
+				{
+					Name:      "user-single-db",
+					Databases: []v1beta1.PostgresIdentifier{"db1"},
+				},
+				{
+					Name: "user-no-databases",
+				},
+				{
+					Name:      "user-multi-dbs",
+					Databases: []v1beta1.PostgresIdentifier{"db1", "db2"},
+				},
+				{
+					Name:      "public",
+					Databases: []v1beta1.PostgresIdentifier{"db3"},
+				},
+			},
+		))
+		// The spec.users has four elements, but two will be skipped:
+		// * the user with the reserved name `public`
+		// * the user with 0 databases
+		assert.Equal(t, calls, 2)
+	})
+
+}
diff --git a/internal/postgres/wal.md b/internal/postgres/wal.md
new file mode 100644
index 0000000000..afb094c20e
--- /dev/null
+++ b/internal/postgres/wal.md
@@ -0,0 +1,57 @@
+
+
+PostgreSQL commits transactions by storing changes in its [write-ahead log][WAL].
+The contents of the log are applied to data files (containing tables and indexes)
+later as part of a checkpoint.
+
+The way WAL files are accessed and utilized often differs from that of data
+files. In high-performance situations, it can be desirable to put WAL files on
+storage with different performance or durability characteristics.
+
+[WAL]: https://www.postgresql.org/docs/current/wal.html
+
+
+PostgresCluster has a field that specifies how to store PostgreSQL data files
+and an optional field for how to store PostgreSQL WAL files. When a WAL volume
+is specified, the PostgresCluster controller reconciles one "pgwal" PVC per
+instance in the instance set.
+
+## Starting with a WAL volume
+
+When a PostgresCluster is created with a WAL volume specified, the `--waldir`
+argument to `initdb` ensures that WAL files are written to the WAL volume. When
+creating a replica (e.g. scaling up) the `--waldir` argument to `pg_basebackup`
+does the same. The way pgBackRest handles this depends on the contents of the
+backup, but when creating a replica the `--link-map=pg_wal` argument does the
+same.
+
+## Adding a WAL volume
+
+It is possible to specify a WAL volume on PostgresCluster after it has already
+bootstrapped, has data, etc. In this case, the WAL PVC is reconciled and mounted
+as usual and an init container moves any existing WAL files while PostgreSQL is
+stopped. These are changes to the instance PodTemplate and go through the normal
+rollout procedure.
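+
+As a minimal sketch (not a complete or authoritative example), a PostgresCluster
+that requests a separate WAL volume for an instance set might look like the
+following. The `walVolumeClaimSpec` field name follows the v1beta1 instance set
+API used elsewhere in this changeset; the cluster name, Postgres version,
+storage sizes, and access modes are placeholders only.
+
+```yaml
+apiVersion: postgres-operator.crunchydata.com/v1beta1
+kind: PostgresCluster
+metadata:
+  name: hippo
+spec:
+  postgresVersion: 16
+  instances:
+    - name: instance1
+      dataVolumeClaimSpec:
+        accessModes: ["ReadWriteOnce"]
+        resources:
+          requests:
+            storage: 1Gi
+      # Omitting walVolumeClaimSpec keeps WAL files on the data volume;
+      # adding or removing it later goes through the rollouts described
+      # in this document.
+      walVolumeClaimSpec:
+        accessModes: ["ReadWriteOnce"]
+        resources:
+          requests:
+            storage: 1Gi
+```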
+ +## Removing a WAL volume + +It is possible to remove the specification of a WAL volume on PostgresCluster +after it has already bootstrapped, has data, etc. In this case, a series of +rollouts moves WAL files off the volume then unmounts and deletes the PVC. + +First, the command of the init container is adjusted to match the PostgresCluster +spec -- WAL files belong *off* the WAL volume. The WAL PVC continues to exist +and remains mounted in the PodSpec. This change to the PodTemplate is rolled +out, allowing the init container to move WAL files off the WAL volume while +PostgreSQL is stopped. + +When the PostgreSQL container of an instance Pod starts running, the +PostgresCluster controller examines the WAL directory inside its volume. When +the WAL files are safely off the WAL volume, it deletes the WAL PVC and removes +it from the PodSpec. This change to the PodTemplate goes through the normal +rollout procedure. + diff --git a/internal/registration/interface.go b/internal/registration/interface.go new file mode 100644 index 0000000000..578a064e2b --- /dev/null +++ b/internal/registration/interface.go @@ -0,0 +1,67 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package registration + +import ( + "fmt" + "os" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +type Registration interface { + // Required returns true when registration is required but the token is missing or invalid. + Required(record.EventRecorder, client.Object, *[]metav1.Condition) bool +} + +var URL = os.Getenv("REGISTRATION_URL") + +func SetAdvanceWarning(recorder record.EventRecorder, object client.Object, conditions *[]metav1.Condition) { + recorder.Eventf(object, corev1.EventTypeWarning, "Register Soon", + "Crunchy Postgres for Kubernetes requires registration for upgrades."+ + " Register now to be ready for your next upgrade. See %s for details.", URL) + + meta.SetStatusCondition(conditions, metav1.Condition{ + Type: v1beta1.Registered, + Status: metav1.ConditionFalse, + Reason: "TokenRequired", + Message: fmt.Sprintf( + "Crunchy Postgres for Kubernetes requires registration for upgrades."+ + " Register now to be ready for your next upgrade. See %s for details.", URL), + ObservedGeneration: object.GetGeneration(), + }) +} + +func SetRequiredWarning(recorder record.EventRecorder, object client.Object, conditions *[]metav1.Condition) { + recorder.Eventf(object, corev1.EventTypeWarning, "Registration Required", + "Crunchy Postgres for Kubernetes requires registration for upgrades."+ + " Register now to be ready for your next upgrade. See %s for details.", URL) + + meta.SetStatusCondition(conditions, metav1.Condition{ + Type: v1beta1.Registered, + Status: metav1.ConditionFalse, + Reason: "TokenRequired", + Message: fmt.Sprintf( + "Crunchy Postgres for Kubernetes requires registration for upgrades."+ + " Upgrade suspended. 
See %s for details.", URL), + ObservedGeneration: object.GetGeneration(), + }) +} + +func emitFailedWarning(recorder record.EventRecorder, object client.Object) { + recorder.Eventf(object, corev1.EventTypeWarning, "Token Authentication Failed", + "See %s for details.", URL) +} + +func emitVerifiedEvent(recorder record.EventRecorder, object client.Object) { + recorder.Event(object, corev1.EventTypeNormal, "Token Verified", + "Thank you for registering your installation of Crunchy Postgres for Kubernetes.") +} diff --git a/internal/registration/runner.go b/internal/registration/runner.go new file mode 100644 index 0000000000..0d607e1e94 --- /dev/null +++ b/internal/registration/runner.go @@ -0,0 +1,191 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package registration + +import ( + "context" + "crypto/rsa" + "errors" + "os" + "strings" + "sync" + "time" + + "github.com/golang-jwt/jwt/v5" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" + + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// Runner implements [Registration] by loading and validating the token at a +// fixed path. Its methods are safe to call concurrently. +type Runner struct { + changed func() + enabled bool + publicKey *rsa.PublicKey + refresh time.Duration + tokenPath string + + token struct { + sync.RWMutex + Exists bool `json:"-"` + + jwt.RegisteredClaims + Iteration int `json:"itr"` + } +} + +// Runner implements [Registration] and [manager.Runnable]. +var _ Registration = (*Runner)(nil) +var _ manager.Runnable = (*Runner)(nil) + +// NewRunner creates a [Runner] that periodically checks the validity of the +// token at tokenPath. It calls changed when the validity of the token changes. +func NewRunner(publicKey, tokenPath string, changed func()) (*Runner, error) { + runner := &Runner{ + changed: changed, + refresh: time.Minute, + tokenPath: tokenPath, + } + + var err error + switch { + case publicKey != "" && tokenPath != "": + if !strings.HasPrefix(strings.TrimSpace(publicKey), "-") { + publicKey = "-----BEGIN -----\n" + publicKey + "\n-----END -----" + } + + runner.enabled = true + runner.publicKey, err = jwt.ParseRSAPublicKeyFromPEM([]byte(publicKey)) + + case publicKey == "" && tokenPath != "": + err = errors.New("registration: missing public key") + + case publicKey != "" && tokenPath == "": + err = errors.New("registration: missing token path") + } + + return runner, err +} + +// CheckToken loads and verifies the configured token, returning an error when +// the file exists but cannot be verified, and +// returning the token if it can be verified. +// NOTE(upgradecheck): return the token/nil so that we can use the token +// in upgradecheck; currently a refresh of the token will cause a restart of the pod +// meaning that the token used in upgradecheck is always the current token. 
+// But if the restart behavior changes, we might drop the token return in main.go +// and change upgradecheck to retrieve the token itself +func (r *Runner) CheckToken() (*jwt.Token, error) { + data, errFile := os.ReadFile(r.tokenPath) + key := func(*jwt.Token) (any, error) { return r.publicKey, nil } + + // Assume [jwt] and [os] functions could do something unexpected; use defer + // to safely write to the token. + r.token.Lock() + defer r.token.Unlock() + + token, errToken := jwt.ParseWithClaims(string(data), &r.token, key, + jwt.WithExpirationRequired(), + jwt.WithValidMethods([]string{"RS256"}), + ) + + // The error from [os.ReadFile] indicates whether a token file exists. + r.token.Exists = !os.IsNotExist(errFile) + + // Reset most claims if there is any problem loading, parsing, validating, or + // verifying the token file. + if errFile != nil || errToken != nil { + r.token.RegisteredClaims = jwt.RegisteredClaims{} + } + + switch { + case !r.enabled || !r.token.Exists: + return nil, nil + case errFile != nil: + return nil, errFile + default: + return token, errToken + } +} + +func (r *Runner) state() (failed, required bool) { + // Assume [time] functions could do something unexpected; use defer to safely + // read the token. + r.token.RLock() + defer r.token.RUnlock() + + failed = r.token.Exists && r.token.ExpiresAt == nil + required = r.enabled && + (!r.token.Exists || failed || r.token.ExpiresAt.Before(time.Now())) + return +} + +// Required returns true when registration is required but the token is missing or invalid. +func (r *Runner) Required( + recorder record.EventRecorder, object client.Object, conditions *[]metav1.Condition, +) bool { + failed, required := r.state() + + if r.enabled && failed { + emitFailedWarning(recorder, object) + } + + if !required && conditions != nil { + before := len(*conditions) + meta.RemoveStatusCondition(conditions, v1beta1.Registered) + meta.RemoveStatusCondition(conditions, "RegistrationRequired") + meta.RemoveStatusCondition(conditions, "TokenRequired") + found := len(*conditions) != before + + if r.enabled && found { + emitVerifiedEvent(recorder, object) + } + } + + return required +} + +// NeedLeaderElection returns true so that r runs only on the single +// [manager.Manager] that is elected leader in the Kubernetes namespace. +func (r *Runner) NeedLeaderElection() bool { return true } + +// Start watches for a mounted registration token when enabled. It blocks +// until ctx is cancelled. +func (r *Runner) Start(ctx context.Context) error { + var ticks <-chan time.Time + + if r.enabled { + ticker := time.NewTicker(r.refresh) + defer ticker.Stop() + ticks = ticker.C + } + + log := logging.FromContext(ctx).WithValues("controller", "registration") + + for { + select { + case <-ticks: + _, before := r.state() + if _, err := r.CheckToken(); err != nil { + log.Error(err, "Unable to validate token") + } + if _, after := r.state(); before != after && r.changed != nil { + r.changed() + } + case <-ctx.Done(): + // https://github.com/kubernetes-sigs/controller-runtime/issues/1927 + if errors.Is(ctx.Err(), context.Canceled) { + return nil + } + return ctx.Err() + } + } +} diff --git a/internal/registration/runner_test.go b/internal/registration/runner_test.go new file mode 100644 index 0000000000..8e75848986 --- /dev/null +++ b/internal/registration/runner_test.go @@ -0,0 +1,574 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package registration + +import ( + "context" + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "encoding/pem" + "os" + "path/filepath" + "strings" + "testing" + "time" + + "github.com/golang-jwt/jwt/v5" + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/kubernetes/scheme" + "sigs.k8s.io/controller-runtime/pkg/manager" + + "github.com/crunchydata/postgres-operator/internal/testing/events" +) + +func TestNewRunner(t *testing.T) { + t.Parallel() + + key, err := rsa.GenerateKey(rand.Reader, 2048) + assert.NilError(t, err) + + der, err := x509.MarshalPKIXPublicKey(&key.PublicKey) + assert.NilError(t, err) + + public := pem.EncodeToMemory(&pem.Block{Bytes: der}) + assert.Assert(t, len(public) != 0) + + t.Run("Disabled", func(t *testing.T) { + runner, err := NewRunner("", "", nil) + assert.NilError(t, err) + assert.Assert(t, runner != nil) + assert.Assert(t, !runner.enabled) + }) + + t.Run("ConfiguredCorrectly", func(t *testing.T) { + runner, err := NewRunner(string(public), "any", nil) + assert.NilError(t, err) + assert.Assert(t, runner != nil) + assert.Assert(t, runner.enabled) + + t.Run("ExtraLines", func(t *testing.T) { + input := "\n\n" + strings.ReplaceAll(string(public), "\n", "\n\n") + "\n\n" + + runner, err := NewRunner(input, "any", nil) + assert.NilError(t, err) + assert.Assert(t, runner != nil) + assert.Assert(t, runner.enabled) + }) + + t.Run("WithoutPEMBoundaries", func(t *testing.T) { + lines := strings.Split(strings.TrimSpace(string(public)), "\n") + lines = lines[1 : len(lines)-1] + + for _, input := range []string{ + strings.Join(lines, ""), // single line + strings.Join(lines, "\n"), // multi-line + "\n\n" + strings.Join(lines, "\n\n") + "\n\n", // extra lines + } { + runner, err := NewRunner(input, "any", nil) + assert.NilError(t, err) + assert.Assert(t, runner != nil) + assert.Assert(t, runner.enabled) + } + }) + }) + + t.Run("ConfiguredIncorrectly", func(t *testing.T) { + for _, tt := range []struct { + key, path, msg string + }{ + {msg: "public key", key: "", path: "any"}, + {msg: "token path", key: "bad", path: ""}, + {msg: "invalid key", key: "bad", path: "any"}, + {msg: "token path", key: string(public), path: ""}, + } { + _, err := NewRunner(tt.key, tt.path, nil) + assert.ErrorContains(t, err, tt.msg, "(key=%q, path=%q)", tt.key, tt.path) + } + }) +} + +func TestRunnerCheckToken(t *testing.T) { + t.Parallel() + + dir := t.TempDir() + key, err := rsa.GenerateKey(rand.Reader, 2048) + assert.NilError(t, err) + + t.Run("SafeToCallDisabled", func(t *testing.T) { + r := Runner{enabled: false} + _, err := r.CheckToken() + assert.NilError(t, err) + }) + + t.Run("FileMissing", func(t *testing.T) { + r := Runner{enabled: true, tokenPath: filepath.Join(dir, "nope")} + _, err := r.CheckToken() + assert.NilError(t, err) + }) + + t.Run("FileUnreadable", func(t *testing.T) { + r := Runner{enabled: true, tokenPath: filepath.Join(dir, "nope")} + assert.NilError(t, os.WriteFile(r.tokenPath, nil, 0o200)) // Writeable + + _, err := r.CheckToken() + assert.ErrorContains(t, err, "permission") + assert.Assert(t, r.token.ExpiresAt == nil) + }) + + t.Run("FileEmpty", func(t *testing.T) { + r := Runner{enabled: true, tokenPath: filepath.Join(dir, "empty")} + assert.NilError(t, os.WriteFile(r.tokenPath, nil, 0o400)) // Readable + + _, err := r.CheckToken() + assert.ErrorContains(t, err, "malformed") + assert.Assert(t, r.token.ExpiresAt == nil) + }) + + t.Run("WrongAlgorithm", 
func(t *testing.T) { + r := Runner{ + enabled: true, + publicKey: &key.PublicKey, + tokenPath: filepath.Join(dir, "hs256"), + } + + // Maliciously treating an RSA public key as an HMAC secret. + // - https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/ + public, err := x509.MarshalPKIXPublicKey(r.publicKey) + assert.NilError(t, err) + data, err := jwt.New(jwt.SigningMethodHS256).SignedString(public) + assert.NilError(t, err) + assert.NilError(t, os.WriteFile(r.tokenPath, []byte(data), 0o400)) // Readable + + _, err = r.CheckToken() + assert.Assert(t, err != nil, "HMAC algorithm should be rejected") + assert.Assert(t, r.token.ExpiresAt == nil) + }) + + t.Run("MissingExpiration", func(t *testing.T) { + r := Runner{ + enabled: true, + publicKey: &key.PublicKey, + tokenPath: filepath.Join(dir, "no-claims"), + } + + data, err := jwt.New(jwt.SigningMethodRS256).SignedString(key) + assert.NilError(t, err) + assert.NilError(t, os.WriteFile(r.tokenPath, []byte(data), 0o400)) // Readable + + _, err = r.CheckToken() + assert.ErrorContains(t, err, "exp claim is required") + assert.Assert(t, r.token.ExpiresAt == nil) + }) + + t.Run("ExpiredToken", func(t *testing.T) { + r := Runner{ + enabled: true, + publicKey: &key.PublicKey, + tokenPath: filepath.Join(dir, "expired"), + } + + data, err := jwt.NewWithClaims(jwt.SigningMethodRS256, jwt.MapClaims{ + "exp": jwt.NewNumericDate(time.Date(2020, 1, 1, 1, 1, 1, 1, time.UTC)), + }).SignedString(key) + assert.NilError(t, err) + assert.NilError(t, os.WriteFile(r.tokenPath, []byte(data), 0o400)) // Readable + + _, err = r.CheckToken() + assert.ErrorContains(t, err, "is expired") + assert.Assert(t, r.token.ExpiresAt == nil) + }) + + t.Run("ValidToken", func(t *testing.T) { + r := Runner{ + enabled: true, + publicKey: &key.PublicKey, + tokenPath: filepath.Join(dir, "valid"), + } + + expiration := jwt.NewNumericDate(time.Now().Add(time.Hour)) + data, err := jwt.NewWithClaims(jwt.SigningMethodRS256, jwt.MapClaims{ + "exp": expiration, + }).SignedString(key) + assert.NilError(t, err) + assert.NilError(t, os.WriteFile(r.tokenPath, []byte(data), 0o400)) // Readable + + token, err := r.CheckToken() + assert.NilError(t, err) + assert.Assert(t, r.token.ExpiresAt != nil) + assert.Assert(t, token.Valid) + exp, err := token.Claims.GetExpirationTime() + assert.NilError(t, err) + assert.Equal(t, exp.Time, expiration.Time) + }) +} + +func TestRunnerLeaderElectionRunnable(t *testing.T) { + var runner manager.LeaderElectionRunnable = &Runner{} + + assert.Assert(t, runner.NeedLeaderElection()) +} + +func TestRunnerRequiredConditions(t *testing.T) { + t.Parallel() + + t.Run("RegistrationDisabled", func(t *testing.T) { + r := Runner{enabled: false} + + for _, tt := range []struct { + before, after []metav1.Condition + }{ + { + before: []metav1.Condition{}, + after: []metav1.Condition{}, + }, + { + before: []metav1.Condition{{Type: "ExistingOther"}}, + after: []metav1.Condition{{Type: "ExistingOther"}}, + }, + { + before: []metav1.Condition{{Type: "Registered"}, {Type: "ExistingOther"}}, + after: []metav1.Condition{{Type: "ExistingOther"}}, + }, + { + before: []metav1.Condition{ + {Type: "Registered"}, + {Type: "ExistingOther"}, + {Type: "RegistrationRequired"}, + }, + after: []metav1.Condition{{Type: "ExistingOther"}}, + }, + { + before: []metav1.Condition{{Type: "TokenRequired"}}, + after: []metav1.Condition{}, + }, + } { + for _, exists := range []bool{false, true} { + for _, expires := range []time.Time{ + time.Now().Add(time.Hour), + 
time.Now().Add(-time.Hour), + } { + r.token.Exists = exists + r.token.ExpiresAt = jwt.NewNumericDate(expires) + + conditions := append([]metav1.Condition{}, tt.before...) + discard := new(events.Recorder) + object := &corev1.ConfigMap{} + + result := r.Required(discard, object, &conditions) + + assert.Equal(t, result, false, "expected registration not required") + assert.DeepEqual(t, conditions, tt.after) + } + } + } + }) + + t.Run("RegistrationRequired", func(t *testing.T) { + r := Runner{enabled: true} + + for _, tt := range []struct { + exists bool + expires time.Time + before []metav1.Condition + }{ + { + exists: false, expires: time.Now().Add(time.Hour), + before: []metav1.Condition{{Type: "Registered"}, {Type: "ExistingOther"}}, + }, + { + exists: false, expires: time.Now().Add(-time.Hour), + before: []metav1.Condition{{Type: "Registered"}, {Type: "ExistingOther"}}, + }, + { + exists: true, expires: time.Now().Add(-time.Hour), + before: []metav1.Condition{{Type: "Registered"}, {Type: "ExistingOther"}}, + }, + } { + r.token.Exists = tt.exists + r.token.ExpiresAt = jwt.NewNumericDate(tt.expires) + + conditions := append([]metav1.Condition{}, tt.before...) + discard := new(events.Recorder) + object := &corev1.ConfigMap{} + + result := r.Required(discard, object, &conditions) + + assert.Equal(t, result, true, "expected registration required") + assert.DeepEqual(t, conditions, tt.before) + } + }) + + t.Run("Registered", func(t *testing.T) { + r := Runner{} + r.token.Exists = true + r.token.ExpiresAt = jwt.NewNumericDate(time.Now().Add(time.Hour)) + + for _, tt := range []struct { + before, after []metav1.Condition + }{ + { + before: []metav1.Condition{}, + after: []metav1.Condition{}, + }, + { + before: []metav1.Condition{{Type: "ExistingOther"}}, + after: []metav1.Condition{{Type: "ExistingOther"}}, + }, + { + before: []metav1.Condition{{Type: "Registered"}, {Type: "ExistingOther"}}, + after: []metav1.Condition{{Type: "ExistingOther"}}, + }, + { + before: []metav1.Condition{ + {Type: "Registered"}, + {Type: "ExistingOther"}, + {Type: "RegistrationRequired"}, + }, + after: []metav1.Condition{{Type: "ExistingOther"}}, + }, + { + before: []metav1.Condition{{Type: "TokenRequired"}}, + after: []metav1.Condition{}, + }, + } { + for _, enabled := range []bool{false, true} { + r.enabled = enabled + + conditions := append([]metav1.Condition{}, tt.before...) + discard := new(events.Recorder) + object := &corev1.ConfigMap{} + + result := r.Required(discard, object, &conditions) + + assert.Equal(t, result, false, "expected registration not required") + assert.DeepEqual(t, conditions, tt.after) + } + } + }) +} + +func TestRunnerRequiredEvents(t *testing.T) { + t.Parallel() + + t.Run("RegistrationDisabled", func(t *testing.T) { + r := Runner{enabled: false} + + for _, tt := range []struct { + before []metav1.Condition + }{ + { + before: []metav1.Condition{}, + }, + { + before: []metav1.Condition{{Type: "ExistingOther"}}, + }, + { + before: []metav1.Condition{{Type: "Registered"}, {Type: "ExistingOther"}}, + }, + } { + for _, exists := range []bool{false, true} { + for _, expires := range []time.Time{ + time.Now().Add(time.Hour), + time.Now().Add(-time.Hour), + } { + r.token.Exists = exists + r.token.ExpiresAt = jwt.NewNumericDate(expires) + + conditions := append([]metav1.Condition{}, tt.before...) 
+ object := &corev1.ConfigMap{} + recorder := events.NewRecorder(t, scheme.Scheme) + + result := r.Required(recorder, object, &conditions) + + assert.Equal(t, result, false, "expected registration not required") + assert.Equal(t, len(recorder.Events), 0, "expected no events") + } + } + } + }) + + t.Run("RegistrationRequired", func(t *testing.T) { + r := Runner{enabled: true} + + t.Run("MissingToken", func(t *testing.T) { + r.token.Exists = false + + for _, tt := range []struct { + before []metav1.Condition + }{ + { + before: []metav1.Condition{}, + }, + { + before: []metav1.Condition{{Type: "ExistingOther"}}, + }, + { + before: []metav1.Condition{{Type: "Registered"}, {Type: "ExistingOther"}}, + }, + } { + conditions := append([]metav1.Condition{}, tt.before...) + object := &corev1.ConfigMap{} + recorder := events.NewRecorder(t, scheme.Scheme) + + result := r.Required(recorder, object, &conditions) + + assert.Equal(t, result, true, "expected registration required") + assert.Equal(t, len(recorder.Events), 0, "expected no events") + } + }) + + t.Run("InvalidToken", func(t *testing.T) { + r.token.Exists = true + r.token.ExpiresAt = nil + + for _, tt := range []struct { + before []metav1.Condition + }{ + { + before: []metav1.Condition{}, + }, + { + before: []metav1.Condition{{Type: "ExistingOther"}}, + }, + { + before: []metav1.Condition{{Type: "Registered"}, {Type: "ExistingOther"}}, + }, + } { + conditions := append([]metav1.Condition{}, tt.before...) + object := &corev1.ConfigMap{} + recorder := events.NewRecorder(t, scheme.Scheme) + + result := r.Required(recorder, object, &conditions) + + assert.Equal(t, result, true, "expected registration required") + assert.Equal(t, len(recorder.Events), 1, "expected one event") + assert.Equal(t, recorder.Events[0].Type, "Warning") + assert.Equal(t, recorder.Events[0].Reason, "Token Authentication Failed") + } + }) + }) + + t.Run("Registered", func(t *testing.T) { + r := Runner{} + r.token.Exists = true + r.token.ExpiresAt = jwt.NewNumericDate(time.Now().Add(time.Hour)) + + t.Run("AlwaysRegistered", func(t *testing.T) { + // No prior registration conditions + for _, tt := range []struct { + before []metav1.Condition + }{ + { + before: []metav1.Condition{}, + }, + { + before: []metav1.Condition{{Type: "ExistingOther"}}, + }, + } { + for _, enabled := range []bool{false, true} { + r.enabled = enabled + + conditions := append([]metav1.Condition{}, tt.before...) + object := &corev1.ConfigMap{} + recorder := events.NewRecorder(t, scheme.Scheme) + + result := r.Required(recorder, object, &conditions) + + assert.Equal(t, result, false, "expected registration not required") + assert.Equal(t, len(recorder.Events), 0, "expected no events") + } + } + }) + + t.Run("PreviouslyUnregistered", func(t *testing.T) { + r.enabled = true + + // One or more prior registration conditions + for _, tt := range []struct { + before []metav1.Condition + }{ + { + before: []metav1.Condition{{Type: "Registered"}, {Type: "ExistingOther"}}, + }, + { + before: []metav1.Condition{ + {Type: "Registered"}, + {Type: "ExistingOther"}, + {Type: "RegistrationRequired"}, + }, + }, + { + before: []metav1.Condition{{Type: "TokenRequired"}}, + }, + } { + conditions := append([]metav1.Condition{}, tt.before...) 
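+				// Required removes the stale registration conditions here, which is what
+				// produces the single "Token Verified" event asserted below.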
+ object := &corev1.ConfigMap{} + recorder := events.NewRecorder(t, scheme.Scheme) + + result := r.Required(recorder, object, &conditions) + + assert.Equal(t, result, false, "expected registration not required") + assert.Equal(t, len(recorder.Events), 1, "expected one event") + assert.Equal(t, recorder.Events[0].Type, "Normal") + assert.Equal(t, recorder.Events[0].Reason, "Token Verified") + } + }) + }) +} + +func TestRunnerStart(t *testing.T) { + t.Parallel() + + dir := t.TempDir() + key, err := rsa.GenerateKey(rand.Reader, 2048) + assert.NilError(t, err) + + token, err := jwt.NewWithClaims(jwt.SigningMethodRS256, jwt.MapClaims{ + "exp": jwt.NewNumericDate(time.Now().Add(time.Hour)), + }).SignedString(key) + assert.NilError(t, err) + + t.Run("DisabledDoesNothing", func(t *testing.T) { + runner := &Runner{ + enabled: false, + refresh: time.Nanosecond, + } + + ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond) + defer cancel() + + assert.ErrorIs(t, runner.Start(ctx), context.DeadlineExceeded, + "expected it to block until context is canceled") + }) + + t.Run("WithCallback", func(t *testing.T) { + called := false + runner := &Runner{ + changed: func() { called = true }, + enabled: true, + publicKey: &key.PublicKey, + refresh: time.Second, + tokenPath: filepath.Join(dir, "token"), + } + + // Begin with an invalid token. + assert.NilError(t, os.WriteFile(runner.tokenPath, nil, 0o600)) + _, err = runner.CheckToken() + assert.Assert(t, err != nil) + + // Replace it with a valid token. + assert.NilError(t, os.WriteFile(runner.tokenPath, []byte(token), 0o600)) + + // Run with a timeout that exceeds the refresh interval. + ctx, cancel := context.WithTimeout(context.Background(), runner.refresh*3/2) + defer cancel() + + assert.ErrorIs(t, runner.Start(ctx), context.DeadlineExceeded) + assert.Assert(t, called, "expected a call back") + }) +} diff --git a/internal/registration/testing.go b/internal/registration/testing.go new file mode 100644 index 0000000000..1418f6d2d3 --- /dev/null +++ b/internal/registration/testing.go @@ -0,0 +1,21 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package registration + +import ( + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/client-go/tools/record" + "sigs.k8s.io/controller-runtime/pkg/client" +) + +// NOTE: This type can go away following https://go.dev/issue/47487. + +type RegistrationFunc func(record.EventRecorder, client.Object, *[]metav1.Condition) bool + +func (fn RegistrationFunc) Required(rec record.EventRecorder, obj client.Object, conds *[]metav1.Condition) bool { + return fn(rec, obj, conds) +} + +var _ Registration = RegistrationFunc(nil) diff --git a/internal/testing/cmp/cmp.go b/internal/testing/cmp/cmp.go new file mode 100644 index 0000000000..265a598064 --- /dev/null +++ b/internal/testing/cmp/cmp.go @@ -0,0 +1,67 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package cmp + +import ( + "strings" + + gocmp "github.com/google/go-cmp/cmp" + gotest "gotest.tools/v3/assert/cmp" + "sigs.k8s.io/yaml" +) + +type Comparison = gotest.Comparison + +// Contains succeeds if item is in collection. The collection may be a string, +// map, slice, or array. See [gotest.tools/v3/assert/cmp.Contains]. When either +// item or collection is a multi-line string, the failure message contains a +// multi-line report of the differences. 
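+//
+// For example, a failing cmp.Contains("alpha\nbeta\n", "beta\ngamma\n") reports
+// a go-cmp diff of the two strings rather than a single-line message.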
+func Contains(collection, item any) Comparison { + cString, cStringOK := collection.(string) + iString, iStringOK := item.(string) + + if cStringOK && iStringOK { + if strings.Contains(cString, "\n") || strings.Contains(iString, "\n") { + return func() gotest.Result { + if strings.Contains(cString, iString) { + return gotest.ResultSuccess + } + return gotest.ResultFailureTemplate(` +--- {{ with callArg 0 }}{{ formatNode . }}{{else}}←{{end}} string does not contain ++++ {{ with callArg 1 }}{{ formatNode . }}{{else}}→{{end}} substring +{{ .Data.diff }}`, + map[string]any{ + "diff": gocmp.Diff(collection, item), + }) + } + } + } + + return gotest.Contains(collection, item) +} + +// DeepEqual compares two values using [github.com/google/go-cmp/cmp] and +// succeeds if the values are equal. The comparison can be customized using +// comparison Options. See [github.com/google/go-cmp/cmp.Option] constructors +// and [github.com/google/go-cmp/cmp/cmpopts]. +func DeepEqual(x, y any, opts ...gocmp.Option) Comparison { + return gotest.DeepEqual(x, y, opts...) +} + +// MarshalMatches converts actual to YAML and compares that to expected. +func MarshalMatches(actual any, expected string) Comparison { + b, err := yaml.Marshal(actual) + if err != nil { + return func() gotest.Result { return gotest.ResultFromError(err) } + } + return gotest.DeepEqual(string(b), strings.Trim(expected, "\t\n")+"\n") +} + +// Regexp succeeds if value contains any match of the regular expression re. +// The regular expression may be a *regexp.Regexp or a string that is a valid +// regexp pattern. +func Regexp(re any, value string) Comparison { + return gotest.Regexp(re, value) +} diff --git a/internal/testing/events/recorder.go b/internal/testing/events/recorder.go new file mode 100644 index 0000000000..23c03a4c40 --- /dev/null +++ b/internal/testing/events/recorder.go @@ -0,0 +1,99 @@ +// Copyright 2022 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package events + +import ( + "fmt" + "testing" + "time" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + eventsv1 "k8s.io/api/events/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/client-go/tools/record" + "k8s.io/client-go/tools/record/util" + "k8s.io/client-go/tools/reference" +) + +// Recorder implements the interface for the deprecated v1.Event API. +// The zero value discards events. +// - https://pkg.go.dev/k8s.io/client-go@v0.24.1/tools/record#EventRecorder +type Recorder struct { + Events []eventsv1.Event + + // eventf signature is intended to match the recorder for the events/v1 API. + // - https://pkg.go.dev/k8s.io/client-go@v0.24.1/tools/events#EventRecorder + eventf func(regarding, related runtime.Object, eventtype, reason, action, note string, args ...any) +} + +// NewRecorder returns an EventRecorder for the deprecated v1.Event API. +func NewRecorder(t testing.TB, scheme *runtime.Scheme) *Recorder { + t.Helper() + + var recorder Recorder + + // Construct an events/v1.Event and store it. This is a copy of the upstream + // implementation except that t.Error is called rather than klog. + // - https://releases.k8s.io/v1.24.1/staging/src/k8s.io/client-go/tools/events/event_recorder.go#L43-L92 + recorder.eventf = func(regarding, related runtime.Object, eventtype, reason, action, note string, args ...any) { + t.Helper() + + timestamp := metav1.MicroTime{Time: time.Now()} + message := fmt.Sprintf(note, args...) 
+ + refRegarding, err := reference.GetReference(scheme, regarding) + assert.Check(t, err, "Could not construct reference to: '%#v'", regarding) + + var refRelated *corev1.ObjectReference + if related != nil { + refRelated, err = reference.GetReference(scheme, related) + assert.Check(t, err, "Could not construct reference to: '%#v'", related) + } + + assert.Check(t, util.ValidateEventType(eventtype), "Unsupported event type: '%v'", eventtype) + + namespace := refRegarding.Namespace + if namespace == "" { + namespace = metav1.NamespaceDefault + } + + recorder.Events = append(recorder.Events, eventsv1.Event{ + ObjectMeta: metav1.ObjectMeta{ + Name: fmt.Sprintf("%v.%x", refRegarding.Name, timestamp.UnixNano()), + Namespace: namespace, + }, + EventTime: timestamp, + Series: nil, + ReportingController: t.Name(), + ReportingInstance: t.Name() + "-{hostname}", + Action: action, + Reason: reason, + Regarding: *refRegarding, + Related: refRelated, + Note: message, + Type: eventtype, + }) + } + + return &recorder +} + +var _ record.EventRecorder = (*Recorder)(nil) + +func (*Recorder) AnnotatedEventf(object runtime.Object, annotations map[string]string, eventtype, reason, messageFmt string, args ...any) { + panic("DEPRECATED: do not use AnnotatedEventf") +} +func (r *Recorder) Event(object runtime.Object, eventtype, reason, message string) { + if r.eventf != nil { + r.eventf(object, nil, eventtype, reason, "", message) + } +} +func (r *Recorder) Eventf(object runtime.Object, eventtype, reason, messageFmt string, args ...any) { + if r.eventf != nil { + r.eventf(object, nil, eventtype, reason, "", messageFmt, args...) + } +} diff --git a/internal/testing/require/exec.go b/internal/testing/require/exec.go new file mode 100644 index 0000000000..c182e84996 --- /dev/null +++ b/internal/testing/require/exec.go @@ -0,0 +1,68 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package require + +import ( + "os/exec" + "sync" + "testing" + + "gotest.tools/v3/assert" +) + +// Flake8 returns the path to the "flake8" executable or calls t.Skip. +func Flake8(t testing.TB) string { t.Helper(); return flake8(t) } + +var flake8 = executable("flake8", "--version") + +// OpenSSL returns the path to the "openssl" executable or calls t.Skip. +func OpenSSL(t testing.TB) string { t.Helper(); return openssl(t) } + +var openssl = executable("openssl", "version", "-a") + +// ShellCheck returns the path to the "shellcheck" executable or calls t.Skip. +func ShellCheck(t testing.TB) string { t.Helper(); return shellcheck(t) } + +var shellcheck = executable("shellcheck", "--version") + +// executable builds a function that returns the full path to name. +// The function (1) locates name or calls t.Skip, (2) runs that with args, +// (3) calls t.Log with the output, and (4) calls t.Fatal if it exits non-zero. +func executable(name string, args ...string) func(testing.TB) string { + var result func(testing.TB) string + var once sync.Once + + return func(t testing.TB) string { + t.Helper() + once.Do(func() { + path, err := exec.LookPath(name) + cmd := exec.Command(path, args...) // #nosec G204 -- args from init() + + if err != nil { + result = func(t testing.TB) string { + t.Helper() + t.Skipf("requires %q executable", name) + return "" + } + } else if info, err := cmd.CombinedOutput(); err != nil { + result = func(t testing.TB) string { + t.Helper() + // Let the "assert" package inspect and format the error. + // Show what was executed and anything it printed as well. 
+ // This always calls t.Fatal because err is not nil here. + assert.NilError(t, err, "%q\n%s", cmd.Args, info) + return "" + } + } else { + result = func(t testing.TB) string { + t.Helper() + t.Logf("using %q\n%s", path, info) + return path + } + } + }) + return result(t) + } +} diff --git a/internal/testing/require/kubernetes.go b/internal/testing/require/kubernetes.go new file mode 100644 index 0000000000..df21bca058 --- /dev/null +++ b/internal/testing/require/kubernetes.go @@ -0,0 +1,167 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package require + +import ( + "context" + "os" + "path/filepath" + goruntime "runtime" + "strings" + "sync" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "k8s.io/client-go/rest" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/envtest" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" +) + +// https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest#pkg-constants +var envtestVarsSet = os.Getenv("KUBEBUILDER_ASSETS") != "" || + strings.EqualFold(os.Getenv("USE_EXISTING_CLUSTER"), "true") + +// EnvTest returns an unstarted Environment with crds. It calls t.Skip when +// the "KUBEBUILDER_ASSETS" and "USE_EXISTING_CLUSTER" environment variables +// are unset. +func EnvTest(t testing.TB, crds envtest.CRDInstallOptions) *envtest.Environment { + t.Helper() + + if !envtestVarsSet { + t.SkipNow() + } + + return &envtest.Environment{ + CRDInstallOptions: crds, + Scheme: crds.Scheme, + } +} + +var kubernetes struct { + sync.Mutex + + // Count references to the started Environment. + count int + env *envtest.Environment +} + +// Kubernetes starts or connects to a Kubernetes API and returns a client that uses it. +// When starting a local API, the client is a member of the "system:masters" group. +// +// It calls t.Fatal when something fails. It stops the local API using t.Cleanup. +// It calls t.Skip when the "KUBEBUILDER_ASSETS" and "USE_EXISTING_CLUSTER" environment +// variables are unset. +// +// Tests that call t.Parallel might share the same local API. Call t.Parallel after this +// function to ensure they share. +func Kubernetes(t testing.TB) client.Client { + t.Helper() + _, cc := kubernetes3(t) + return cc +} + +// Kubernetes2 is the same as [Kubernetes] but also returns a copy of the client +// configuration. +func Kubernetes2(t testing.TB) (*rest.Config, client.Client) { + t.Helper() + env, cc := kubernetes3(t) + return rest.CopyConfig(env.Config), cc +} + +func kubernetes3(t testing.TB) (*envtest.Environment, client.Client) { + t.Helper() + + if !envtestVarsSet { + t.SkipNow() + } + + frames := func() *goruntime.Frames { + var pcs [5]uintptr + n := goruntime.Callers(2, pcs[:]) + return goruntime.CallersFrames(pcs[0:n]) + }() + + // Calculate the project directory as reported by [goruntime.CallersFrames]. + frame, ok := frames.Next() + self := frame.File + root := strings.TrimSuffix(self, + filepath.Join("internal", "testing", "require", "kubernetes.go")) + + // Find the first caller that is not in this file. + for ok && frame.File == self { + frame, ok = frames.Next() + } + caller := frame.File + + // Calculate the project directory path relative to the caller. 
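+	// The CRD paths joined to this base below then resolve correctly no matter
+	// which package's test invoked this helper.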
+ base, err := filepath.Rel(filepath.Dir(caller), root) + assert.NilError(t, err) + + kubernetes.Lock() + defer kubernetes.Unlock() + + if kubernetes.env == nil { + env := EnvTest(t, envtest.CRDInstallOptions{ + ErrorIfPathMissing: true, + Paths: []string{ + filepath.Join(base, "config", "crd", "bases"), + filepath.Join(base, "hack", "tools", "external-snapshotter", "client", "config", "crd"), + }, + Scheme: runtime.Scheme, + }) + + _, err := env.Start() + assert.NilError(t, err) + + kubernetes.env = env + } + + kubernetes.count++ + + t.Cleanup(func() { + kubernetes.Lock() + defer kubernetes.Unlock() + + kubernetes.count-- + + if kubernetes.count == 0 { + assert.Check(t, kubernetes.env.Stop()) + kubernetes.env = nil + } + }) + + cc, err := client.New(kubernetes.env.Config, client.Options{ + Scheme: kubernetes.env.Scheme, + }) + assert.NilError(t, err) + + return kubernetes.env, cc +} + +// Namespace creates a random namespace that is deleted by t.Cleanup. It calls +// t.Fatal when creation fails. The caller may delete the namespace at any time. +func Namespace(t testing.TB, cc client.Client) *corev1.Namespace { + t.Helper() + + // Remove / that shows up when running a sub-test + // TestSomeThing/test_some_specific_thing + name, _, _ := strings.Cut(t.Name(), "/") + + ns := &corev1.Namespace{} + ns.GenerateName = "postgres-operator-test-" + ns.Labels = map[string]string{"postgres-operator-test": name} + + ctx := context.Background() + assert.NilError(t, cc.Create(ctx, ns)) + + t.Cleanup(func() { + assert.Check(t, client.IgnoreNotFound(cc.Delete(ctx, ns))) + }) + + return ns +} diff --git a/internal/testing/require/parallel.go b/internal/testing/require/parallel.go new file mode 100644 index 0000000000..4fbdf42284 --- /dev/null +++ b/internal/testing/require/parallel.go @@ -0,0 +1,26 @@ +// Copyright 2022 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package require + +import ( + "sync" + "testing" +) + +var capacity sync.Mutex + +// ParallelCapacity calls t.Parallel then waits for needed capacity. There is +// no wait when needed is zero. +func ParallelCapacity(t *testing.T, needed int) { + t.Helper() + t.Parallel() + + if needed > 0 { + // Assume capacity of one; allow only one caller at a time. + // TODO: actually track how much capacity is available. 
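+		// The shared mutex is held until this test (and its subtests) finish,
+		// so capacity-needing tests run one at a time.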
+ capacity.Lock() + t.Cleanup(capacity.Unlock) + } +} diff --git a/internal/testing/token_invalid b/internal/testing/token_invalid new file mode 100644 index 0000000000..1e4622430a --- /dev/null +++ b/internal/testing/token_invalid @@ -0,0 +1 @@ +eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJDUEsiLCJzdWIiOiJwb2ludC5vZi5jb250YWN0QGNvbXBhbnkuY29tIiwiaXNzIjoiQ3J1bmNoeSBEYXRhIiwiZXhwIjoxNzI3NDUxOTM1LCJuYmYiOjE1MTYyMzkwMjIsImlhdCI6MTUxNjIzOTAyMn0.I2RBGvpHV4GKoWD5TaM89ToEFBhNdSYovyNlYp-PbEmSTTGLc_Wa3cKujahSYtlfwlZ6gSPKVE5U4IPAv7kzO8C74zoX-9_5GpHxGyBBDLL2XLglRmuTO_W5bheuFzrCq9A7HIi-kjKTk_DRvep1dhdooHqFzZQiAxxDa_U-zCkUAByo1cWd-Z2k51VZp1TUzAYSId6rDclIBc7QSi2HrMsdh3IeXZQs4dPhjemf09l6vVIT94sdqj774t6kTawUJhTdGVrZ_ad8ar3YxCpWGZzB3oSo62K7QEGWp9KCqTebP-LAF8glkpwi8H4HWiUcXo4bfANXPXe9Z0Oziau69Q+ diff --git a/internal/testing/token_rsa_key.pub b/internal/testing/token_rsa_key.pub new file mode 100644 index 0000000000..e548f1cef5 --- /dev/null +++ b/internal/testing/token_rsa_key.pub @@ -0,0 +1,9 @@ +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAu1SU1LfVLPHCozMxH2Mo +4lgOEePzNm0tRgeLezV6ffAt0gunVTLw7onLRnrq0/IzW7yWR7QkrmBL7jTKEn5u ++qKhbwKfBstIs+bMY2Zkp18gnTxKLxoS2tFczGkPLPgizskuemMghRniWaoLcyeh +kd3qqGElvW/VDL5AaWTg0nLVkjRo9z+40RQzuVaE8AkAFmxZzow3x+VJYKdjykkJ +0iT9wCS0DRTXu269V264Vf/3jvredZiKRkgwlL9xNAwxXFg0x/XFw005UWVRIkdg +cKWTjpBP2dPwVZ4WWC+9aGVd+Gyn1o0CLelf4rEjGoXbAAEgAqeGUxrcIlbjXfbc +mwIDAQAB +-----END PUBLIC KEY----- diff --git a/internal/testing/token_valid b/internal/testing/token_valid new file mode 100644 index 0000000000..6982d38829 --- /dev/null +++ b/internal/testing/token_valid @@ -0,0 +1 @@ +eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJDUEsiLCJzdWIiOiJwb2ludC5vZi5jb250YWN0QGNvbXBhbnkuY29tIiwiaXNzIjoiQ3J1bmNoeSBEYXRhIiwiZXhwIjoxNzI3NDUxOTM1LCJuYmYiOjE1MTYyMzkwMjIsImlhdCI6MTUxNjIzOTAyMn0.I2RBGvpHV4GKoWD5TaM89ToEFBhNdSYovyNlYp-PbEmSTTGLc_Wa3cKujahSYtlfwlZ6gSPKVE5U4IPAv7kzO8C74zoX-9_5GpHxGyBBDLL2XLglRmuTO_W5bheuFzrCq9A7HIi-kjKTk_DRvep1dhdooHqFzZQiAxxDa_U-zCkUAByo1cWd-Z2k51VZp1TUzAYSId6rDclIBc7QSi2HrMsdh3IeXZQs4dPhjemf09l6vVIT94sdqj774t6kTawUJhTdGVrZ_ad8ar3YxCpWGZzB3oSo62K7QEGWp9KCqTebP-LAF8glkpwi8H4HWiUcXo4bfANXPXe9Z0Oziau69Q diff --git a/internal/testing/validation/postgrescluster_test.go b/internal/testing/validation/postgrescluster_test.go new file mode 100644 index 0000000000..e71ff22b2e --- /dev/null +++ b/internal/testing/validation/postgrescluster_test.go @@ -0,0 +1,125 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package validation + +import ( + "context" + "fmt" + "testing" + + "gotest.tools/v3/assert" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/yaml" + + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestPostgresUserOptions(t *testing.T) { + ctx := context.Background() + cc := require.Kubernetes(t) + t.Parallel() + + namespace := require.Namespace(t, cc) + base := v1beta1.NewPostgresCluster() + + // Start with a bunch of required fields. 
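+	// Each subtest below mutates a DeepCopy of this minimal, schema-valid spec.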
+ assert.NilError(t, yaml.Unmarshal([]byte(`{ + postgresVersion: 16, + backups: { + pgbackrest: { + repos: [{ name: repo1 }], + }, + }, + instances: [{ + dataVolumeClaimSpec: { + accessModes: [ReadWriteOnce], + resources: { requests: { storage: 1Mi } }, + }, + }], + }`), &base.Spec)) + + base.Namespace = namespace.Name + base.Name = "postgres-user-options" + + assert.NilError(t, cc.Create(ctx, base.DeepCopy(), client.DryRunAll), + "expected this base cluster to be valid") + + // See [internal/controller/postgrescluster.TestValidatePostgresUsers] + + t.Run("NoComments", func(t *testing.T) { + cluster := base.DeepCopy() + cluster.Spec.Users = []v1beta1.PostgresUserSpec{ + {Name: "dashes", Options: "ANY -- comment"}, + {Name: "block-open", Options: "/* asdf"}, + {Name: "block-close", Options: " qw */ rt"}, + } + + err := cc.Create(ctx, cluster, client.DryRunAll) + assert.Assert(t, apierrors.IsInvalid(err)) + assert.ErrorContains(t, err, "cannot contain comments") + + //nolint:errorlint // This is a test, and a panic is unlikely. + status := err.(apierrors.APIStatus).Status() + assert.Assert(t, status.Details != nil) + assert.Equal(t, len(status.Details.Causes), 3) + + for i, cause := range status.Details.Causes { + assert.Equal(t, cause.Field, fmt.Sprintf("spec.users[%d].options", i)) + assert.Assert(t, cmp.Contains(cause.Message, "cannot contain comments")) + } + }) + + t.Run("NoPassword", func(t *testing.T) { + cluster := base.DeepCopy() + cluster.Spec.Users = []v1beta1.PostgresUserSpec{ + {Name: "uppercase", Options: "SUPERUSER PASSWORD ''"}, + {Name: "lowercase", Options: "password 'asdf'"}, + } + + err := cc.Create(ctx, cluster, client.DryRunAll) + assert.Assert(t, apierrors.IsInvalid(err)) + assert.ErrorContains(t, err, "cannot assign password") + + //nolint:errorlint // This is a test, and a panic is unlikely. + status := err.(apierrors.APIStatus).Status() + assert.Assert(t, status.Details != nil) + assert.Equal(t, len(status.Details.Causes), 2) + + for i, cause := range status.Details.Causes { + assert.Equal(t, cause.Field, fmt.Sprintf("spec.users[%d].options", i)) + assert.Assert(t, cmp.Contains(cause.Message, "cannot assign password")) + } + }) + + t.Run("NoTerminators", func(t *testing.T) { + cluster := base.DeepCopy() + cluster.Spec.Users = []v1beta1.PostgresUserSpec{ + {Name: "semicolon", Options: "some ;where"}, + } + + err := cc.Create(ctx, cluster, client.DryRunAll) + assert.Assert(t, apierrors.IsInvalid(err)) + assert.ErrorContains(t, err, "should match") + + //nolint:errorlint // This is a test, and a panic is unlikely. + status := err.(apierrors.APIStatus).Status() + assert.Assert(t, status.Details != nil) + assert.Equal(t, len(status.Details.Causes), 1) + assert.Equal(t, status.Details.Causes[0].Field, "spec.users[0].options") + }) + + t.Run("Valid", func(t *testing.T) { + cluster := base.DeepCopy() + cluster.Spec.Users = []v1beta1.PostgresUserSpec{ + {Name: "normal", Options: "CREATEDB valid until '2006-01-02'"}, + {Name: "very-full", Options: "NOSUPERUSER NOCREATEDB NOCREATEROLE NOINHERIT NOLOGIN NOREPLICATION NOBYPASSRLS CONNECTION LIMIT 5"}, + } + + assert.NilError(t, cc.Create(ctx, cluster, client.DryRunAll)) + }) +} diff --git a/internal/tlsutil/primitives.go b/internal/tlsutil/primitives.go deleted file mode 100644 index 03fb73f744..0000000000 --- a/internal/tlsutil/primitives.go +++ /dev/null @@ -1,109 +0,0 @@ -package tlsutil - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. 
-Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "crypto/rand" - "crypto/rsa" - "crypto/x509" - "encoding/pem" - "errors" - "io" - "io/ioutil" - "math" - "math/big" - "time" -) - -const ( - rsaKeySize = 2048 - duration365d = time.Hour * 24 * 365 -) - -// newPrivateKey returns randomly generated RSA private key. -func NewPrivateKey() (*rsa.PrivateKey, error) { - return rsa.GenerateKey(rand.Reader, rsaKeySize) -} - -// encodePrivateKeyPEM encodes the given private key pem and returns bytes (base64). -func EncodePrivateKeyPEM(key *rsa.PrivateKey) []byte { - return pem.EncodeToMemory(&pem.Block{ - Type: "RSA PRIVATE KEY", - Bytes: x509.MarshalPKCS1PrivateKey(key), - }) -} - -// encodeCertificatePEM encodes the given certificate pem and returns bytes (base64). -func EncodeCertificatePEM(cert *x509.Certificate) []byte { - return pem.EncodeToMemory(&pem.Block{ - Type: "CERTIFICATE", - Bytes: cert.Raw, - }) -} - -// parsePEMEncodedCert parses a certificate from the given pemdata -func ParsePEMEncodedCert(pemdata []byte) (*x509.Certificate, error) { - decoded, _ := pem.Decode(pemdata) - if decoded == nil { - return nil, errors.New("no PEM data found") - } - return x509.ParseCertificate(decoded.Bytes) -} - -// parsePEMEncodedPrivateKey parses a private key from given pemdata -func ParsePEMEncodedPrivateKey(pemdata []byte) (*rsa.PrivateKey, error) { - decoded, _ := pem.Decode(pemdata) - if decoded == nil { - return nil, errors.New("no PEM data found") - } - return x509.ParsePKCS1PrivateKey(decoded.Bytes) -} - -// newSelfSignedCACertificate returns a self-signed CA certificate based on given configuration and private key. -// The certificate has one-year lease. -func NewSelfSignedCACertificate(key *rsa.PrivateKey) (*x509.Certificate, error) { - serial, err := rand.Int(rand.Reader, new(big.Int).SetInt64(math.MaxInt64)) - if err != nil { - return nil, err - } - now := time.Now() - tmpl := x509.Certificate{ - SerialNumber: serial, - NotBefore: now.UTC(), - NotAfter: now.Add(duration365d).UTC(), - KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign, - BasicConstraintsValid: true, - IsCA: true, - } - certDERBytes, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, key.Public(), key) - if err != nil { - return nil, err - } - return x509.ParseCertificate(certDERBytes) -} - -// ExtendTrust extends the provided certpool with the PEM-encoded certificates -// presented by certSource. If reading from certSource produces an error -// the base pool remains unmodified -func ExtendTrust(base *x509.CertPool, certSource io.Reader) error { - certs, err := ioutil.ReadAll(certSource) - if err != nil { - return err - } - base.AppendCertsFromPEM(certs) - - return nil -} diff --git a/internal/tlsutil/primitives_test.go b/internal/tlsutil/primitives_test.go deleted file mode 100644 index 22676e9fbc..0000000000 --- a/internal/tlsutil/primitives_test.go +++ /dev/null @@ -1,166 +0,0 @@ -package tlsutil - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. 
-Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "bytes" - "crypto/rsa" - "crypto/tls" - "crypto/x509" - "encoding/base64" - "fmt" - "io/ioutil" - "net/http" - "net/http/httptest" - "testing" -) - -func TestKeyPEMSymmetry(t *testing.T) { - oldKey, err := NewPrivateKey() - if err != nil { - t.Fatalf("unable to generate new key - %s", err) - } - - pemKey := EncodePrivateKeyPEM(oldKey) - newKey, err := ParsePEMEncodedPrivateKey(pemKey) - if err != nil { - t.Fatalf("unable to parse pem key - %s", err) - } - - t.Log(base64.StdEncoding.EncodeToString(pemKey)) - - if !keysEq(oldKey, newKey) { - t.Fatal("Decoded key did not match its input source") - } -} - -func TestCertPEMSymmetry(t *testing.T) { - privKey, err := NewPrivateKey() - if err != nil { - t.Fatalf("unable to generate new key - %s", err) - } - - oldCert, err := NewSelfSignedCACertificate(privKey) - if err != nil { - t.Fatalf("unable to generate cert - %s", err) - } - - pemCert := EncodeCertificatePEM(oldCert) - - newCert, err := ParsePEMEncodedCert(pemCert) - if err != nil { - t.Fatalf("error decoding cert PEM - %s", err) - } - - if !oldCert.Equal(newCert) { - t.Fatal("decoded cert did not match its input source") - } -} - -func TestExtendedTrust(t *testing.T) { - expected := "You do that very well. It's as if i was looking in a mirror." 
- - // Create x509 certificate pair (key, cert) - key, err := NewPrivateKey() - if err != nil { - t.Fatalf("error creating private key - %s\n", err) - } - pemKey := EncodePrivateKeyPEM(key) - - cert, err := NewSelfSignedCACertificate(key) - if err != nil { - t.Fatalf("error creating cert - %s\n", err) - } - pemCert := EncodeCertificatePEM(cert) - - // Set up and start server - srv := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - fmt.Fprintln(w, expected) - })) - defer srv.Close() - - caTrust := x509.NewCertPool() - ExtendTrust(caTrust, bytes.NewReader(pemCert)) - - srv.TLS = &tls.Config{ - ServerName: "Stom", - ClientAuth: tls.RequireAndVerifyClientCert, - InsecureSkipVerify: true, // because self-signed, naturally - ClientCAs: caTrust, - MinVersion: tls.VersionTLS11, - } - srv.StartTLS() - - // Set up client - clientCert, err := tls.X509KeyPair(pemCert, pemKey) - if err != nil { - t.Fatalf("unable to prepare client cert - %s\n", err) - } - - client := srv.Client() - client.Transport = &http.Transport{ - TLSClientConfig: &tls.Config{ - Certificates: []tls.Certificate{ - clientCert, - }, - RootCAs: caTrust, - InsecureSkipVerify: true, // because self-signed, naturally - }, - } - - // Confirm server response - res, err := client.Get(srv.URL) - if err != nil { - t.Fatalf("error getting response - %s\n", err) - } - - body, err := ioutil.ReadAll(res.Body) - res.Body.Close() - if err != nil { - t.Fatalf("error reading response -%s\n", err) - } - - if recv := string(bytes.TrimSpace(body)); recv != expected { - t.Fatalf("expected [%s], got [%s] instead\n", expected, recv) - } -} - -func keysEq(a, b *rsa.PrivateKey) bool { - if a.E != b.E { - // PublicKey exponent different - return false - } - if a.N.Cmp(b.N) != 0 { - // PublicKey modulus different - return false - } - if a.D.Cmp(b.D) != 0 { - // PrivateKey exponent different - return false - } - if len(a.Primes) != len(b.Primes) { - // Prime factor difference (Tier 1) - return false - } - for i, aPrime := range a.Primes { - if aPrime.Cmp(b.Primes[i]) != 0 { - // Prime factor difference (Tier 2) - return false - } - } - - return true -} diff --git a/internal/upgradecheck/header.go b/internal/upgradecheck/header.go new file mode 100644 index 0000000000..a1d56ef442 --- /dev/null +++ b/internal/upgradecheck/header.go @@ -0,0 +1,219 @@ +// Copyright 2017 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package upgradecheck + +import ( + "context" + "encoding/json" + "net/http" + "os" + + googleuuid "github.com/google/uuid" + corev1 "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/util/uuid" + "k8s.io/client-go/discovery" + "k8s.io/client-go/rest" + crclient "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/crunchydata/postgres-operator/internal/controller/postgrescluster" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +const ( + clientHeader = "X-Crunchy-Client-Metadata" +) + +var ( + // Using apimachinery's UUID package, so our deployment UUID will be a string + deploymentID string +) + +// Extensible struct for client upgrade data +type clientUpgradeData struct { + BridgeClustersTotal int `json:"bridge_clusters_total"` + BuildSource string `json:"build_source"` + DeploymentID string `json:"deployment_id"` + FeatureGatesEnabled string `json:"feature_gates_enabled"` + IsOpenShift bool `json:"is_open_shift"` + KubernetesEnv string `json:"kubernetes_env"` + PGOClustersTotal int `json:"pgo_clusters_total"` + PGOInstaller string `json:"pgo_installer"` + PGOInstallerOrigin string `json:"pgo_installer_origin"` + PGOVersion string `json:"pgo_version"` + RegistrationToken string `json:"registration_token"` +} + +// generateHeader aggregates data and returns a struct of that data +// If any errors are encountered, it logs those errors and uses the default values +func generateHeader(ctx context.Context, cfg *rest.Config, crClient crclient.Client, + pgoVersion string, isOpenShift bool, registrationToken string) *clientUpgradeData { + + return &clientUpgradeData{ + BridgeClustersTotal: getBridgeClusters(ctx, crClient), + BuildSource: os.Getenv("BUILD_SOURCE"), + DeploymentID: ensureDeploymentID(ctx, crClient), + FeatureGatesEnabled: feature.ShowGates(ctx), + IsOpenShift: isOpenShift, + KubernetesEnv: getServerVersion(ctx, cfg), + PGOClustersTotal: getManagedClusters(ctx, crClient), + PGOInstaller: os.Getenv("PGO_INSTALLER"), + PGOInstallerOrigin: os.Getenv("PGO_INSTALLER_ORIGIN"), + PGOVersion: pgoVersion, + RegistrationToken: registrationToken, + } +} + +// ensureDeploymentID checks if the UUID exists in memory or in a ConfigMap +// If no UUID exists, ensureDeploymentID creates one and saves it in memory/as a ConfigMap +// Any errors encountered will be logged and the ID result will be what is in memory +func ensureDeploymentID(ctx context.Context, crClient crclient.Client) string { + // If there is no deploymentID in memory, generate one for possible use + if deploymentID == "" { + deploymentID = string(uuid.NewUUID()) + } + + cm := manageUpgradeCheckConfigMap(ctx, crClient, deploymentID) + + if cm != nil && cm.Data["deployment_id"] != "" { + deploymentID = cm.Data["deployment_id"] + } + + return deploymentID +} + +// manageUpgradeCheckConfigMap ensures a ConfigMap exists with a UUID +// If it doesn't exist, this creates it with the in-memory ID +// If it exists and it has a valid UUID, use that to replace the in-memory ID +// If it exists but the field is blank or mangled, we update the ConfigMap with the in-memory ID +func manageUpgradeCheckConfigMap(ctx context.Context, crClient crclient.Client, + currentID string) *corev1.ConfigMap { + + log := 
logging.FromContext(ctx) + upgradeCheckConfigMapMetadata := naming.UpgradeCheckConfigMap() + + cm := &corev1.ConfigMap{ + ObjectMeta: upgradeCheckConfigMapMetadata, + Data: map[string]string{"deployment_id": currentID}, + } + cm.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + + // If no namespace is set, then log this and skip trying to set the UUID in the ConfigMap + if upgradeCheckConfigMapMetadata.GetNamespace() == "" { + log.V(1).Info("upgrade check issue: namespace not set") + return cm + } + + retrievedCM := &corev1.ConfigMap{} + err := crClient.Get(ctx, naming.AsObjectKey(upgradeCheckConfigMapMetadata), retrievedCM) + + // If we get any error besides IsNotFound, log it, skip any ConfigMap steps, + // and use the in-memory deploymentID + if err != nil && !apierrors.IsNotFound(err) { + log.V(1).Info("upgrade check issue: error retrieving configmap", + "response", err.Error()) + return cm + } + + // If we get a ConfigMap with a "deployment_id", check if that UUID is valid + if retrievedCM.Data["deployment_id"] != "" { + _, parseErr := googleuuid.Parse(retrievedCM.Data["deployment_id"]) + // No error -- the ConfigMap has a valid deploymentID, so use that + if parseErr == nil { + cm.Data["deployment_id"] = retrievedCM.Data["deployment_id"] + } + } + + err = applyConfigMap(ctx, crClient, cm, postgrescluster.ControllerName) + if err != nil { + log.V(1).Info("upgrade check issue: could not apply configmap", + "response", err.Error()) + } + return cm +} + +// applyConfigMap is a focused version of the Reconciler.apply method, +// meant only to work with this ConfigMap +// It sends an apply patch to the Kubernetes API, with the fieldManager set to the deployment_id +// and the force parameter set to true. +// - https://docs.k8s.io/reference/using-api/server-side-apply/#managers +// - https://docs.k8s.io/reference/using-api/server-side-apply/#conflicts +func applyConfigMap(ctx context.Context, crClient crclient.Client, + object crclient.Object, owner string) error { + // Generate an apply-patch by comparing the object to its zero value. + zero := &corev1.ConfigMap{} + data, err := crclient.MergeFrom(zero).Data(object) + + if err == nil { + apply := crclient.RawPatch(crclient.Apply.Type(), data) + err = crClient.Patch(ctx, object, apply, + []crclient.PatchOption{crclient.ForceOwnership, crclient.FieldOwner(owner)}...) 
+ } + return err +} + +// getManagedClusters returns a count of postgres clusters managed by this PGO instance +// Any errors encountered will be logged and the count result will be 0 +func getManagedClusters(ctx context.Context, crClient crclient.Client) int { + var count int + clusters := &v1beta1.PostgresClusterList{} + err := crClient.List(ctx, clusters) + if err != nil { + log := logging.FromContext(ctx) + log.V(1).Info("upgrade check issue: could not count postgres clusters", + "response", err.Error()) + } else { + count = len(clusters.Items) + } + return count +} + +// getBridgeClusters returns a count of Bridge clusters managed by this PGO instance +// Any errors encountered will be logged and the count result will be 0 +func getBridgeClusters(ctx context.Context, crClient crclient.Client) int { + var count int + clusters := &v1beta1.CrunchyBridgeClusterList{} + err := crClient.List(ctx, clusters) + if err != nil { + log := logging.FromContext(ctx) + log.V(1).Info("upgrade check issue: could not count bridge clusters", + "response", err.Error()) + } else { + count = len(clusters.Items) + } + return count +} + +// getServerVersion returns the stringified server version (i.e., the same info `kubectl version` +// returns for the server) +// Any errors encountered will be logged and will return an empty string +func getServerVersion(ctx context.Context, cfg *rest.Config) string { + log := logging.FromContext(ctx) + discoveryClient, err := discovery.NewDiscoveryClientForConfig(cfg) + if err != nil { + log.V(1).Info("upgrade check issue: could not retrieve discovery client", + "response", err.Error()) + return "" + } + versionInfo, err := discoveryClient.ServerVersion() + if err != nil { + log.V(1).Info("upgrade check issue: could not retrieve server version", + "response", err.Error()) + return "" + } + return versionInfo.String() +} + +func addHeader(req *http.Request, upgradeInfo *clientUpgradeData) (*http.Request, error) { + marshaled, err := json.Marshal(upgradeInfo) + if err == nil { + upgradeInfoString := string(marshaled) + req.Header.Add(clientHeader, upgradeInfoString) + } + return req, err +} diff --git a/internal/upgradecheck/header_test.go b/internal/upgradecheck/header_test.go new file mode 100644 index 0000000000..c144e7629b --- /dev/null +++ b/internal/upgradecheck/header_test.go @@ -0,0 +1,611 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package upgradecheck + +import ( + "context" + "encoding/json" + "net/http" + "strings" + "testing" + + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/util/uuid" + "k8s.io/client-go/discovery" + + // Google Kubernetes Engine / Google Cloud Platform authentication provider + _ "k8s.io/client-go/plugin/pkg/client/auth/gcp" + "k8s.io/client-go/rest" + + "github.com/crunchydata/postgres-operator/internal/controller/postgrescluster" + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/naming" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" + "github.com/crunchydata/postgres-operator/internal/testing/require" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +func TestGenerateHeader(t *testing.T) { + setupDeploymentID(t) + ctx := context.Background() + cfg, cc := require.Kubernetes2(t) + setupNamespace(t, cc) + + dc, err := discovery.NewDiscoveryClientForConfig(cfg) + assert.NilError(t, err) + server, err := dc.ServerVersion() + assert.NilError(t, err) + + reconciler := postgrescluster.Reconciler{Client: cc} + + t.Setenv("PGO_INSTALLER", "test") + t.Setenv("PGO_INSTALLER_ORIGIN", "test-origin") + t.Setenv("BUILD_SOURCE", "developer") + + t.Run("error ensuring ID", func(t *testing.T) { + fakeClientWithOptionalError := &fakeClientWithError{ + cc, "patch error", + } + ctx, calls := setupLogCapture(ctx) + + res := generateHeader(ctx, cfg, fakeClientWithOptionalError, + "1.2.3", reconciler.IsOpenShift, "") + assert.Equal(t, len(*calls), 1) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: could not apply configmap`)) + assert.Equal(t, res.IsOpenShift, reconciler.IsOpenShift) + assert.Equal(t, deploymentID, res.DeploymentID) + pgoList := v1beta1.PostgresClusterList{} + err := cc.List(ctx, &pgoList) + assert.NilError(t, err) + assert.Equal(t, len(pgoList.Items), res.PGOClustersTotal) + bridgeList := v1beta1.CrunchyBridgeClusterList{} + err = cc.List(ctx, &bridgeList) + assert.NilError(t, err) + assert.Equal(t, len(bridgeList.Items), res.BridgeClustersTotal) + assert.Equal(t, "1.2.3", res.PGOVersion) + assert.Equal(t, server.String(), res.KubernetesEnv) + assert.Equal(t, "test", res.PGOInstaller) + assert.Equal(t, "test-origin", res.PGOInstallerOrigin) + assert.Equal(t, "developer", res.BuildSource) + }) + + t.Run("error getting cluster count", func(t *testing.T) { + fakeClientWithOptionalError := &fakeClientWithError{ + cc, "list error", + } + ctx, calls := setupLogCapture(ctx) + + res := generateHeader(ctx, cfg, fakeClientWithOptionalError, + "1.2.3", reconciler.IsOpenShift, "") + assert.Equal(t, len(*calls), 2) + // Aggregating the logs since we cannot determine which call will be first + callsAggregate := strings.Join(*calls, " ") + assert.Assert(t, cmp.Contains(callsAggregate, `upgrade check issue: could not count postgres clusters`)) + assert.Assert(t, cmp.Contains(callsAggregate, `upgrade check issue: could not count bridge clusters`)) + assert.Equal(t, res.IsOpenShift, reconciler.IsOpenShift) + assert.Equal(t, deploymentID, res.DeploymentID) + assert.Equal(t, 0, res.PGOClustersTotal) + assert.Equal(t, 0, res.BridgeClustersTotal) + assert.Equal(t, "1.2.3", res.PGOVersion) + assert.Equal(t, server.String(), res.KubernetesEnv) + assert.Equal(t, "test", res.PGOInstaller) + assert.Equal(t, "test-origin", res.PGOInstallerOrigin) + assert.Equal(t, "developer", res.BuildSource) 
+ }) + + t.Run("error getting server version info", func(t *testing.T) { + ctx, calls := setupLogCapture(ctx) + badcfg := &rest.Config{} + + res := generateHeader(ctx, badcfg, cc, + "1.2.3", reconciler.IsOpenShift, "") + assert.Equal(t, len(*calls), 1) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: could not retrieve server version`)) + assert.Equal(t, res.IsOpenShift, reconciler.IsOpenShift) + assert.Equal(t, deploymentID, res.DeploymentID) + pgoList := v1beta1.PostgresClusterList{} + err := cc.List(ctx, &pgoList) + assert.NilError(t, err) + assert.Equal(t, len(pgoList.Items), res.PGOClustersTotal) + assert.Equal(t, "1.2.3", res.PGOVersion) + assert.Equal(t, "", res.KubernetesEnv) + assert.Equal(t, "test", res.PGOInstaller) + assert.Equal(t, "test-origin", res.PGOInstallerOrigin) + assert.Equal(t, "developer", res.BuildSource) + }) + + t.Run("success", func(t *testing.T) { + ctx, calls := setupLogCapture(ctx) + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.TablespaceVolumes: true, + })) + ctx = feature.NewContext(ctx, gate) + + res := generateHeader(ctx, cfg, cc, + "1.2.3", reconciler.IsOpenShift, "") + assert.Equal(t, len(*calls), 0) + assert.Equal(t, res.IsOpenShift, reconciler.IsOpenShift) + assert.Equal(t, deploymentID, res.DeploymentID) + pgoList := v1beta1.PostgresClusterList{} + err := cc.List(ctx, &pgoList) + assert.NilError(t, err) + assert.Equal(t, len(pgoList.Items), res.PGOClustersTotal) + assert.Equal(t, "1.2.3", res.PGOVersion) + assert.Equal(t, server.String(), res.KubernetesEnv) + assert.Equal(t, "TablespaceVolumes=true", res.FeatureGatesEnabled) + assert.Equal(t, "test", res.PGOInstaller) + assert.Equal(t, "test-origin", res.PGOInstallerOrigin) + assert.Equal(t, "developer", res.BuildSource) + }) +} + +func TestEnsureID(t *testing.T) { + ctx := context.Background() + cc := require.Kubernetes(t) + setupNamespace(t, cc) + + t.Run("success, no id set in mem or configmap", func(t *testing.T) { + deploymentID = "" + oldID := deploymentID + ctx, calls := setupLogCapture(ctx) + + newID := ensureDeploymentID(ctx, cc) + assert.Equal(t, len(*calls), 0) + assert.Assert(t, newID != oldID) + assert.Assert(t, newID == deploymentID) + + cm := &corev1.ConfigMap{} + err := cc.Get(ctx, naming.AsObjectKey(naming.UpgradeCheckConfigMap()), cm) + assert.NilError(t, err) + assert.Equal(t, newID, cm.Data["deployment_id"]) + err = cc.Delete(ctx, cm) + assert.NilError(t, err) + }) + + t.Run("success, id set in mem, configmap created", func(t *testing.T) { + oldID := setupDeploymentID(t) + + cm := &corev1.ConfigMap{} + err := cc.Get(ctx, naming.AsObjectKey( + naming.UpgradeCheckConfigMap()), cm) + assert.Error(t, err, `configmaps "pgo-upgrade-check" not found`) + ctx, calls := setupLogCapture(ctx) + + newID := ensureDeploymentID(ctx, cc) + assert.Equal(t, len(*calls), 0) + assert.Assert(t, newID == oldID) + assert.Assert(t, newID == deploymentID) + + err = cc.Get(ctx, naming.AsObjectKey( + naming.UpgradeCheckConfigMap()), cm) + assert.NilError(t, err) + assert.Assert(t, deploymentID == cm.Data["deployment_id"]) + + err = cc.Delete(ctx, cm) + assert.NilError(t, err) + }) + + t.Run("success, id set in configmap, mem overwritten", func(t *testing.T) { + cm := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: map[string]string{ + "deployment_id": string(uuid.NewUUID()), + }, + } + err := cc.Create(ctx, cm) + assert.NilError(t, err) + + cmRetrieved := &corev1.ConfigMap{} + err = cc.Get(ctx, naming.AsObjectKey( + 
naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.NilError(t, err) + + oldID := setupDeploymentID(t) + ctx, calls := setupLogCapture(ctx) + newID := ensureDeploymentID(ctx, cc) + assert.Equal(t, len(*calls), 0) + assert.Assert(t, newID != oldID) + assert.Assert(t, newID == deploymentID) + assert.Assert(t, deploymentID == cmRetrieved.Data["deployment_id"]) + + err = cc.Delete(ctx, cm) + assert.NilError(t, err) + }) + + t.Run("configmap failed, no namespace given", func(t *testing.T) { + cm := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: map[string]string{ + "deployment_id": string(uuid.NewUUID()), + }, + } + err := cc.Create(ctx, cm) + assert.NilError(t, err) + + cmRetrieved := &corev1.ConfigMap{} + err = cc.Get(ctx, naming.AsObjectKey( + naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.NilError(t, err) + + oldID := setupDeploymentID(t) + ctx, calls := setupLogCapture(ctx) + t.Setenv("PGO_NAMESPACE", "") + + newID := ensureDeploymentID(ctx, cc) + assert.Equal(t, len(*calls), 1) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: namespace not set`)) + assert.Assert(t, newID == oldID) + assert.Assert(t, newID == deploymentID) + assert.Assert(t, deploymentID != cmRetrieved.Data["deployment_id"]) + err = cc.Delete(ctx, cm) + assert.NilError(t, err) + }) + + t.Run("configmap failed with not NotFound error, using preexisting ID", func(t *testing.T) { + fakeClientWithOptionalError := &fakeClientWithError{ + cc, "get error", + } + oldID := setupDeploymentID(t) + ctx, calls := setupLogCapture(ctx) + + newID := ensureDeploymentID(ctx, fakeClientWithOptionalError) + assert.Equal(t, len(*calls), 1) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: error retrieving configmap`)) + assert.Assert(t, newID == oldID) + assert.Assert(t, newID == deploymentID) + + cmRetrieved := &corev1.ConfigMap{} + err := cc.Get(ctx, naming.AsObjectKey( + naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.Error(t, err, `configmaps "pgo-upgrade-check" not found`) + }) + + t.Run("configmap failed to create, using preexisting ID", func(t *testing.T) { + fakeClientWithOptionalError := &fakeClientWithError{ + cc, "patch error", + } + oldID := setupDeploymentID(t) + + ctx, calls := setupLogCapture(ctx) + newID := ensureDeploymentID(ctx, fakeClientWithOptionalError) + assert.Equal(t, len(*calls), 1) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: could not apply configmap`)) + assert.Assert(t, newID == oldID) + assert.Assert(t, newID == deploymentID) + }) +} + +func TestManageUpgradeCheckConfigMap(t *testing.T) { + ctx := context.Background() + cc := require.Kubernetes(t) + setupNamespace(t, cc) + + t.Run("no namespace given", func(t *testing.T) { + ctx, calls := setupLogCapture(ctx) + t.Setenv("PGO_NAMESPACE", "") + + returnedCM := manageUpgradeCheckConfigMap(ctx, cc, "current-id") + assert.Equal(t, len(*calls), 1) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: namespace not set`)) + assert.Assert(t, returnedCM.Data["deployment_id"] == "current-id") + }) + + t.Run("configmap not found, created", func(t *testing.T) { + cmRetrieved := &corev1.ConfigMap{} + err := cc.Get(ctx, naming.AsObjectKey( + naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.Error(t, err, `configmaps "pgo-upgrade-check" not found`) + + ctx, calls := setupLogCapture(ctx) + returnedCM := manageUpgradeCheckConfigMap(ctx, cc, "current-id") + + assert.Equal(t, len(*calls), 0) + assert.Assert(t, returnedCM.Data["deployment_id"] == "current-id") + err 
= cc.Delete(ctx, returnedCM) + assert.NilError(t, err) + }) + + t.Run("configmap failed with not NotFound error", func(t *testing.T) { + fakeClientWithOptionalError := &fakeClientWithError{ + cc, "get error", + } + ctx, calls := setupLogCapture(ctx) + + returnedCM := manageUpgradeCheckConfigMap(ctx, fakeClientWithOptionalError, + "current-id") + assert.Equal(t, len(*calls), 1) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: error retrieving configmap`)) + assert.Assert(t, returnedCM.Data["deployment_id"] == "current-id") + }) + + t.Run("no deployment id in configmap", func(t *testing.T) { + cm := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: map[string]string{ + "wrong_field": string(uuid.NewUUID()), + }, + } + err := cc.Create(ctx, cm) + assert.NilError(t, err) + + cmRetrieved := &corev1.ConfigMap{} + err = cc.Get(ctx, naming.AsObjectKey( + naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.NilError(t, err) + + ctx, calls := setupLogCapture(ctx) + returnedCM := manageUpgradeCheckConfigMap(ctx, cc, "current-id") + assert.Equal(t, len(*calls), 0) + assert.Assert(t, returnedCM.Data["deployment_id"] == "current-id") + err = cc.Delete(ctx, cm) + assert.NilError(t, err) + }) + + t.Run("mangled deployment id", func(t *testing.T) { + cm := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: map[string]string{ + "deploymentid": string(uuid.NewUUID())[1:], + }, + } + err := cc.Create(ctx, cm) + assert.NilError(t, err) + + cmRetrieved := &corev1.ConfigMap{} + err = cc.Get(ctx, naming.AsObjectKey( + naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.NilError(t, err) + + ctx, calls := setupLogCapture(ctx) + returnedCM := manageUpgradeCheckConfigMap(ctx, cc, "current-id") + assert.Equal(t, len(*calls), 0) + assert.Assert(t, returnedCM.Data["deployment_id"] == "current-id") + err = cc.Delete(ctx, cm) + assert.NilError(t, err) + }) + + t.Run("good configmap with good id", func(t *testing.T) { + cm := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: map[string]string{ + "deployment_id": string(uuid.NewUUID()), + }, + } + err := cc.Create(ctx, cm) + assert.NilError(t, err) + + cmRetrieved := &corev1.ConfigMap{} + err = cc.Get(ctx, naming.AsObjectKey( + naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.NilError(t, err) + + ctx, calls := setupLogCapture(ctx) + returnedCM := manageUpgradeCheckConfigMap(ctx, cc, "current-id") + assert.Equal(t, len(*calls), 0) + assert.Assert(t, returnedCM.Data["deployment-id"] != "current-id") + err = cc.Delete(ctx, cm) + assert.NilError(t, err) + }) + + t.Run("configmap failed to create", func(t *testing.T) { + fakeClientWithOptionalError := &fakeClientWithError{ + cc, "patch error", + } + + ctx, calls := setupLogCapture(ctx) + returnedCM := manageUpgradeCheckConfigMap(ctx, fakeClientWithOptionalError, + "current-id") + assert.Equal(t, len(*calls), 1) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: could not apply configmap`)) + assert.Assert(t, returnedCM.Data["deployment_id"] == "current-id") + }) +} + +func TestApplyConfigMap(t *testing.T) { + ctx := context.Background() + cc := require.Kubernetes(t) + setupNamespace(t, cc) + + t.Run("successful create", func(t *testing.T) { + cmRetrieved := &corev1.ConfigMap{} + err := cc.Get(ctx, naming.AsObjectKey(naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.Error(t, err, `configmaps "pgo-upgrade-check" not found`) + + cm := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: 
map[string]string{ + "new_field": "new_value", + }, + } + cm.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + err = applyConfigMap(ctx, cc, cm, "test") + assert.NilError(t, err) + cmRetrieved = &corev1.ConfigMap{} + err = cc.Get(ctx, naming.AsObjectKey(naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.NilError(t, err) + assert.Equal(t, cm.Data["new_value"], cmRetrieved.Data["new_value"]) + err = cc.Delete(ctx, cm) + assert.NilError(t, err) + }) + + t.Run("successful update", func(t *testing.T) { + cm := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: map[string]string{ + "new_field": "old_value", + }, + } + cm.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + err := cc.Create(ctx, cm) + assert.NilError(t, err) + cmRetrieved := &corev1.ConfigMap{} + err = cc.Get(ctx, naming.AsObjectKey(naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.NilError(t, err) + + cm2 := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: map[string]string{ + "new_field": "new_value", + }, + } + cm2.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + err = applyConfigMap(ctx, cc, cm2, "test") + assert.NilError(t, err) + cmRetrieved = &corev1.ConfigMap{} + err = cc.Get(ctx, naming.AsObjectKey(naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.NilError(t, err) + assert.Equal(t, cm.Data["new_value"], cmRetrieved.Data["new_value"]) + err = cc.Delete(ctx, cm) + assert.NilError(t, err) + }) + + t.Run("successful nothing changed", func(t *testing.T) { + cm := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: map[string]string{ + "new_field": "new_value", + }, + } + cm.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + err := cc.Create(ctx, cm) + assert.NilError(t, err) + cmRetrieved := &corev1.ConfigMap{} + err = cc.Get(ctx, naming.AsObjectKey(naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.NilError(t, err) + + cm2 := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: map[string]string{ + "new_field": "new_value", + }, + } + cm2.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + err = applyConfigMap(ctx, cc, cm2, "test") + assert.NilError(t, err) + cmRetrieved = &corev1.ConfigMap{} + err = cc.Get(ctx, naming.AsObjectKey(naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.NilError(t, err) + assert.Equal(t, cm.Data["new_value"], cmRetrieved.Data["new_value"]) + err = cc.Delete(ctx, cm) + assert.NilError(t, err) + }) + + t.Run("failure", func(t *testing.T) { + cmRetrieved := &corev1.ConfigMap{} + err := cc.Get(ctx, naming.AsObjectKey(naming.UpgradeCheckConfigMap()), cmRetrieved) + assert.Error(t, err, `configmaps "pgo-upgrade-check" not found`) + + cm := &corev1.ConfigMap{ + ObjectMeta: naming.UpgradeCheckConfigMap(), + Data: map[string]string{ + "new_field": "new_value", + }, + } + cm.SetGroupVersionKind(corev1.SchemeGroupVersion.WithKind("ConfigMap")) + fakeClientWithOptionalError := &fakeClientWithError{ + cc, "patch error", + } + + err = applyConfigMap(ctx, fakeClientWithOptionalError, cm, "test") + assert.Error(t, err, "patch error") + }) +} + +func TestGetManagedClusters(t *testing.T) { + ctx := context.Background() + + t.Run("success", func(t *testing.T) { + fakeClient := setupFakeClientWithPGOScheme(t, true) + ctx, calls := setupLogCapture(ctx) + count := getManagedClusters(ctx, fakeClient) + assert.Equal(t, len(*calls), 0) + assert.Assert(t, count == 2) + }) + + t.Run("list throw error", func(t 
*testing.T) { + fakeClientWithOptionalError := &fakeClientWithError{ + setupFakeClientWithPGOScheme(t, true), "list error", + } + ctx, calls := setupLogCapture(ctx) + count := getManagedClusters(ctx, fakeClientWithOptionalError) + assert.Assert(t, len(*calls) > 0) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: could not count postgres clusters`)) + assert.Assert(t, count == 0) + }) +} + +func TestGetBridgeClusters(t *testing.T) { + ctx := context.Background() + + t.Run("success", func(t *testing.T) { + fakeClient := setupFakeClientWithPGOScheme(t, true) + ctx, calls := setupLogCapture(ctx) + count := getBridgeClusters(ctx, fakeClient) + assert.Equal(t, len(*calls), 0) + assert.Assert(t, count == 2) + }) + + t.Run("list throw error", func(t *testing.T) { + fakeClientWithOptionalError := &fakeClientWithError{ + setupFakeClientWithPGOScheme(t, true), "list error", + } + ctx, calls := setupLogCapture(ctx) + count := getBridgeClusters(ctx, fakeClientWithOptionalError) + assert.Assert(t, len(*calls) > 0) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: could not count bridge clusters`)) + assert.Assert(t, count == 0) + }) +} + +func TestGetServerVersion(t *testing.T) { + t.Run("success", func(t *testing.T) { + expect, server := setupVersionServer(t, true) + ctx, calls := setupLogCapture(context.Background()) + + got := getServerVersion(ctx, &rest.Config{ + Host: server.URL, + }) + assert.Equal(t, len(*calls), 0) + assert.Equal(t, got, expect.String()) + }) + + t.Run("failure", func(t *testing.T) { + _, server := setupVersionServer(t, false) + ctx, calls := setupLogCapture(context.Background()) + + got := getServerVersion(ctx, &rest.Config{ + Host: server.URL, + }) + assert.Equal(t, len(*calls), 1) + assert.Assert(t, cmp.Contains((*calls)[0], `upgrade check issue: could not retrieve server version`)) + assert.Equal(t, got, "") + }) +} + +func TestAddHeader(t *testing.T) { + t.Run("successful", func(t *testing.T) { + req := &http.Request{ + Header: http.Header{}, + } + versionString := "1.2.3" + upgradeInfo := &clientUpgradeData{ + PGOVersion: versionString, + } + + result, err := addHeader(req, upgradeInfo) + assert.NilError(t, err) + header := result.Header[clientHeader] + + passedThroughData := &clientUpgradeData{} + err = json.Unmarshal([]byte(header[0]), passedThroughData) + assert.NilError(t, err) + + assert.Equal(t, passedThroughData.PGOVersion, "1.2.3") + // Failure to list clusters results in 0 returned + assert.Equal(t, passedThroughData.PGOClustersTotal, 0) + }) +} diff --git a/internal/upgradecheck/helpers_test.go b/internal/upgradecheck/helpers_test.go new file mode 100644 index 0000000000..63184184db --- /dev/null +++ b/internal/upgradecheck/helpers_test.go @@ -0,0 +1,179 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package upgradecheck + +import ( + "context" + "encoding/json" + "fmt" + "net/http" + "net/http/httptest" + "testing" + + "github.com/go-logr/logr/funcr" + "gotest.tools/v3/assert" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/uuid" + "k8s.io/apimachinery/pkg/version" + crclient "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/client/fake" + + "github.com/crunchydata/postgres-operator/internal/controller/runtime" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1" +) + +// fakeClientWithError is a controller runtime client and an error type to force +type fakeClientWithError struct { + crclient.Client + errorType string +} + +// Get returns the client.get OR an Error (`get error`) if the fakeClientWithError is set to error that way +func (f *fakeClientWithError) Get(ctx context.Context, key types.NamespacedName, obj crclient.Object, opts ...crclient.GetOption) error { + switch f.errorType { + case "get error": + return fmt.Errorf("get error") + default: + return f.Client.Get(ctx, key, obj, opts...) + } +} + +// Patch returns the client.get OR an Error (`patch error`) if the fakeClientWithError is set to error that way +// TODO: PatchType is not supported currently by fake +// - https://github.com/kubernetes/client-go/issues/970 +// Once that gets fixed, we can test without envtest +func (f *fakeClientWithError) Patch(ctx context.Context, obj crclient.Object, + patch crclient.Patch, opts ...crclient.PatchOption) error { + switch { + case f.errorType == "patch error": + return fmt.Errorf("patch error") + default: + return f.Client.Patch(ctx, obj, patch, opts...) + } +} + +// List returns the client.get OR an Error (`list error`) if the fakeClientWithError is set to error that way +func (f *fakeClientWithError) List(ctx context.Context, objList crclient.ObjectList, + opts ...crclient.ListOption) error { + switch f.errorType { + case "list error": + return fmt.Errorf("list error") + default: + return f.Client.List(ctx, objList, opts...) + } +} + +// setupDeploymentID returns a UUID +func setupDeploymentID(t *testing.T) string { + t.Helper() + deploymentID = string(uuid.NewUUID()) + return deploymentID +} + +// setupFakeClientWithPGOScheme returns a fake client with the PGO scheme added; +// if `includeCluster` is true, also adds some empty PostgresCluster and CrunchyBridgeCluster +// items to the client +func setupFakeClientWithPGOScheme(t *testing.T, includeCluster bool) crclient.Client { + t.Helper() + if includeCluster { + pc := &v1beta1.PostgresClusterList{ + Items: []v1beta1.PostgresCluster{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "hippo", + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "elephant", + }, + }, + }, + } + + bcl := &v1beta1.CrunchyBridgeClusterList{ + Items: []v1beta1.CrunchyBridgeCluster{ + { + ObjectMeta: metav1.ObjectMeta{ + Name: "hippo", + }, + }, + { + ObjectMeta: metav1.ObjectMeta{ + Name: "elephant", + }, + }, + }, + } + + return fake.NewClientBuilder(). + WithScheme(runtime.Scheme). + WithLists(pc, bcl). 
+ Build() + } + return fake.NewClientBuilder().WithScheme(runtime.Scheme).Build() +} + +// setupVersionServer sets up and tears down a server and version info for testing +func setupVersionServer(t *testing.T, works bool) (version.Info, *httptest.Server) { + t.Helper() + expect := version.Info{ + Major: "1", + Minor: "22", + GitCommit: "v1.22.2", + } + server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, + req *http.Request) { + if works { + output, _ := json.Marshal(expect) + w.Header().Set("Content-Type", "application/json") + w.WriteHeader(http.StatusOK) + // We don't need to check the error output from this + _, _ = w.Write(output) + } else { + w.WriteHeader(http.StatusBadRequest) + } + })) + t.Cleanup(server.Close) + + return expect, server +} + +// setupLogCapture captures the logs and keeps count of the logs captured +func setupLogCapture(ctx context.Context) (context.Context, *[]string) { + calls := []string{} + testlog := funcr.NewJSON(func(object string) { + calls = append(calls, object) + }, funcr.Options{ + Verbosity: 1, + }) + return logging.NewContext(ctx, testlog), &calls +} + +// setupNamespace creates a namespace that will be deleted by t.Cleanup. +// For upgradechecking, this namespace is set to `postgres-operator`, +// which sometimes is created by other parts of the testing apparatus, +// cf., the createnamespace call in `make check-envtest-existing`. +// When creation fails, it calls t.Fatal. The caller may delete the namespace +// at any time. +func setupNamespace(t testing.TB, cc crclient.Client) { + t.Helper() + ns := &corev1.Namespace{} + ns.Name = "postgres-operator" + ns.Labels = map[string]string{"postgres-operator-test": t.Name()} + + ctx := context.Background() + exists := &corev1.Namespace{} + assert.NilError(t, crclient.IgnoreNotFound( + cc.Get(ctx, crclient.ObjectKeyFromObject(ns), exists))) + if exists.Name != "" { + return + } + assert.NilError(t, cc.Create(ctx, ns)) + t.Cleanup(func() { assert.Check(t, crclient.IgnoreNotFound(cc.Delete(ctx, ns))) }) +} diff --git a/internal/upgradecheck/http.go b/internal/upgradecheck/http.go new file mode 100644 index 0000000000..71a3c465c0 --- /dev/null +++ b/internal/upgradecheck/http.go @@ -0,0 +1,201 @@ +// Copyright 2017 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package upgradecheck + +import ( + "context" + "fmt" + "io" + "net/http" + "time" + + "github.com/golang-jwt/jwt/v5" + "k8s.io/apimachinery/pkg/util/wait" + "k8s.io/client-go/rest" + crclient "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/manager" + + "github.com/crunchydata/postgres-operator/internal/logging" +) + +var ( + client HTTPClient + + // With these Backoff settings, wait.ExponentialBackoff will + // * use one second as the base time; + // * increase delays between calls by a power of 2 (1, 2, 4, etc.); + // * and retry four times. + // Note that there is no indeterminacy here since there is no Jitter set). + // With these parameters, the calls will occur at 0, 1, 3, and 7 seconds + // (i.e., at 1, 2, and 4 second delays for the retries). 
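+	// As a rough illustration of the resulting schedule, assuming Duration=1s,
+	// Factor=2, Steps=4, and no Jitter:
+	//
+	//	attempt 1: t=0s (immediate)
+	//	attempt 2: t=1s (after a 1 second delay)
+	//	attempt 3: t=3s (after a 2 second delay)
+	//	attempt 4: t=7s (after a 4 second delay)
+	//
+	// wait.ExponentialBackoff stops as soon as the condition func returns
+	// done=true (or an error), or once the four steps are exhausted.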
+ backoff = wait.Backoff{ + Duration: 1 * time.Second, + Factor: float64(2), + Steps: 4, + } +) + +const ( + // upgradeCheckURL can be set using the CHECK_FOR_UPGRADES_URL env var + upgradeCheckURL = "https://operator-maestro.crunchydata.com/pgo-versions" +) + +type HTTPClient interface { + Do(req *http.Request) (*http.Response, error) +} + +// Creating an interface for cache with WaitForCacheSync to allow easier mocking +type CacheWithWait interface { + WaitForCacheSync(ctx context.Context) bool +} + +func init() { + // Since we create this client once during startup, + // we want each connection to be fresh, hence the non-default transport + // with DisableKeepAlives set to true + // See https://github.com/golang/go/issues/43905 and https://github.com/golang/go/issues/23427 + // for discussion of problems with long-lived connections + client = &http.Client{ + Timeout: 5 * time.Second, + Transport: &http.Transport{ + DisableKeepAlives: true, + }, + } +} + +func checkForUpgrades(ctx context.Context, url, versionString string, backoff wait.Backoff, + crclient crclient.Client, cfg *rest.Config, + isOpenShift bool, registrationToken string) (message string, header string, err error) { + var headerPayloadStruct *clientUpgradeData + + // Prep request + req, err := http.NewRequest("GET", url, nil) + if err == nil { + // generateHeader always returns some sort of struct, using defaults/nil values + // in case some of the checks return errors + headerPayloadStruct = generateHeader(ctx, cfg, crclient, + versionString, isOpenShift, registrationToken) + req, err = addHeader(req, headerPayloadStruct) + } + + // wait.ExponentialBackoff will retry the func according to the backoff object until + // (a) func returns done as true or + // (b) the backoff settings are exhausted, + // i.e., the process hits the cap for time or the number of steps + // The anonymous function here sets certain preexisting variables (bodyBytes, err, status) + // which are then used by the surrounding `checkForUpgrades` function as part of the return + var bodyBytes []byte + var status int + + if err == nil { + _ = wait.ExponentialBackoff( + backoff, + func() (done bool, backoffErr error) { + var res *http.Response + res, err = client.Do(req) + + if err == nil { + defer res.Body.Close() + status = res.StatusCode + + // This is a very basic check, ignoring nuances around + // certain StatusCodes that should either prevent or impact retries + if status == http.StatusOK { + bodyBytes, err = io.ReadAll(res.Body) + return true, nil + } + } + + // Return false, nil to continue checking + return false, nil + }) + } + + // We received responses, but none of them were 200 OK. + if err == nil && status != http.StatusOK { + err = fmt.Errorf("received StatusCode %d", status) + } + + // TODO: Parse response and log info for user on potential upgrades + return string(bodyBytes), req.Header.Get(clientHeader), err +} + +type CheckForUpgradesScheduler struct { + Client crclient.Client + Config *rest.Config + + OpenShift bool + Refresh time.Duration + RegistrationToken string + URL, Version string +} + +// ManagedScheduler creates a [CheckForUpgradesScheduler] and adds it to m. +// NOTE(registration): This takes a token/nil parameter when the operator is started. +// Currently the operator restarts when the token is updated, +// so this token is always current; but if that restart behavior is changed, +// we will want the upgrade mechanism to instantiate its own registration runner +// or otherwise get the most recent token. 
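+//
+// A minimal usage sketch, assuming a controller-runtime manager `mgr`, a logr
+// logger `log`, and an already-parsed registration token (the real call site
+// in the operator's startup code may differ):
+//
+//	if err := upgradecheck.ManagedScheduler(mgr, isOpenShift, "", versionString, token); err != nil {
+//		log.Error(err, "unable to schedule upgrade check")
+//	}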
+func ManagedScheduler(m manager.Manager, openshift bool, + url, version string, registrationToken *jwt.Token) error { + if url == "" { + url = upgradeCheckURL + } + + var token string + if registrationToken != nil { + token = registrationToken.Raw + } + + return m.Add(&CheckForUpgradesScheduler{ + Client: m.GetClient(), + Config: m.GetConfig(), + OpenShift: openshift, + Refresh: 24 * time.Hour, + RegistrationToken: token, + URL: url, + Version: version, + }) +} + +// NeedLeaderElection returns true so that s runs only on the single +// [manager.Manager] that is elected leader in the Kubernetes cluster. +func (s *CheckForUpgradesScheduler) NeedLeaderElection() bool { return true } + +// Start checks for upgrades periodically. It blocks until ctx is cancelled. +func (s *CheckForUpgradesScheduler) Start(ctx context.Context) error { + s.check(ctx) + + ticker := time.NewTicker(s.Refresh) + defer ticker.Stop() + + for { + select { + case <-ticker.C: + s.check(ctx) + case <-ctx.Done(): + return ctx.Err() + } + } +} + +func (s *CheckForUpgradesScheduler) check(ctx context.Context) { + log := logging.FromContext(ctx) + + defer func() { + if v := recover(); v != nil { + log.V(1).Info("encountered panic in upgrade check", "response", v) + } + }() + + info, header, err := checkForUpgrades(ctx, + s.URL, s.Version, backoff, s.Client, s.Config, s.OpenShift, s.RegistrationToken) + + if err != nil { + log.V(1).Info("could not complete upgrade check", "response", err.Error()) + } else { + log.Info(info, clientHeader, header) + } +} diff --git a/internal/upgradecheck/http_test.go b/internal/upgradecheck/http_test.go new file mode 100644 index 0000000000..9535f942ea --- /dev/null +++ b/internal/upgradecheck/http_test.go @@ -0,0 +1,236 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package upgradecheck + +import ( + "context" + "encoding/json" + "errors" + "fmt" + "io" + "net/http" + "strings" + "testing" + "time" + + "github.com/go-logr/logr/funcr" + "gotest.tools/v3/assert" + "k8s.io/apimachinery/pkg/util/wait" + "k8s.io/client-go/rest" + "sigs.k8s.io/controller-runtime/pkg/manager" + + "github.com/crunchydata/postgres-operator/internal/feature" + "github.com/crunchydata/postgres-operator/internal/logging" + "github.com/crunchydata/postgres-operator/internal/testing/cmp" +) + +func init() { + client = &MockClient{Timeout: 1} + // set backoff to two steps, 1 second apart for testing + backoff = wait.Backoff{ + Duration: 1 * time.Second, + Factor: float64(1), + Steps: 2, + } +} + +type MockClient struct { + Timeout time.Duration +} + +var funcFoo func() (*http.Response, error) + +// Do is the mock request that will return a mock success +func (m *MockClient) Do(req *http.Request) (*http.Response, error) { + return funcFoo() +} + +func TestCheckForUpgrades(t *testing.T) { + fakeClient := setupFakeClientWithPGOScheme(t, true) + cfg := &rest.Config{} + + ctx := logging.NewContext(context.Background(), logging.Discard()) + gate := feature.NewGate() + assert.NilError(t, gate.SetFromMap(map[string]bool{ + feature.TablespaceVolumes: true, + })) + ctx = feature.NewContext(ctx, gate) + + // Pass *testing.T to allows the correct messages from the assert package + // in the event of certain failures. 
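+	// For reference, the header being unmarshaled below is the JSON-encoded
+	// clientUpgradeData; with these test inputs it looks roughly like
+	// (values illustrative, unasserted fields elided):
+	//
+	//	{"bridge_clusters_total":2, ..., "deployment_id":"<some-uuid>",
+	//	 "feature_gates_enabled":"TablespaceVolumes=true", ...,
+	//	 "pgo_clusters_total":2, ..., "pgo_version":"4.7.3",
+	//	 "registration_token":"speakFriend"}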
+ checkData := func(t *testing.T, header string) { + data := clientUpgradeData{} + err := json.Unmarshal([]byte(header), &data) + assert.NilError(t, err) + assert.Assert(t, data.DeploymentID != "") + assert.Equal(t, data.PGOVersion, "4.7.3") + assert.Equal(t, data.RegistrationToken, "speakFriend") + assert.Equal(t, data.BridgeClustersTotal, 2) + assert.Equal(t, data.PGOClustersTotal, 2) + assert.Equal(t, data.FeatureGatesEnabled, "TablespaceVolumes=true") + } + + t.Run("success", func(t *testing.T) { + // A successful call + funcFoo = func() (*http.Response, error) { + json := `{"pgo_versions":[{"tag":"v5.0.4"},{"tag":"v5.0.3"},{"tag":"v5.0.2"},{"tag":"v5.0.1"},{"tag":"v5.0.0"}]}` + return &http.Response{ + Body: io.NopCloser(strings.NewReader(json)), + StatusCode: http.StatusOK, + }, nil + } + + res, header, err := checkForUpgrades(ctx, "", "4.7.3", backoff, + fakeClient, cfg, false, "speakFriend") + assert.NilError(t, err) + assert.Equal(t, res, `{"pgo_versions":[{"tag":"v5.0.4"},{"tag":"v5.0.3"},{"tag":"v5.0.2"},{"tag":"v5.0.1"},{"tag":"v5.0.0"}]}`) + checkData(t, header) + }) + + t.Run("total failure, err sending", func(t *testing.T) { + var counter int + // A call returning errors + funcFoo = func() (*http.Response, error) { + counter++ + return &http.Response{}, errors.New("whoops") + } + + res, header, err := checkForUpgrades(ctx, "", "4.7.3", backoff, + fakeClient, cfg, false, "speakFriend") + // Two failed calls because of env var + assert.Equal(t, counter, 2) + assert.Equal(t, res, "") + assert.Equal(t, err.Error(), `whoops`) + checkData(t, header) + }) + + t.Run("total failure, bad StatusCode", func(t *testing.T) { + var counter int + // A call returning bad StatusCode + funcFoo = func() (*http.Response, error) { + counter++ + return &http.Response{ + Body: io.NopCloser(strings.NewReader("")), + StatusCode: http.StatusBadRequest, + }, nil + } + + res, header, err := checkForUpgrades(ctx, "", "4.7.3", backoff, + fakeClient, cfg, false, "speakFriend") + assert.Equal(t, res, "") + // Two failed calls because of env var + assert.Equal(t, counter, 2) + assert.Equal(t, err.Error(), `received StatusCode 400`) + checkData(t, header) + }) + + t.Run("one failure, then success", func(t *testing.T) { + var counter int + // A call returning bad StatusCode the first time + // and a successful response the second time + funcFoo = func() (*http.Response, error) { + if counter < 1 { + counter++ + return &http.Response{ + Body: io.NopCloser(strings.NewReader("")), + StatusCode: http.StatusBadRequest, + }, nil + } + counter++ + json := `{"pgo_versions":[{"tag":"v5.0.4"},{"tag":"v5.0.3"},{"tag":"v5.0.2"},{"tag":"v5.0.1"},{"tag":"v5.0.0"}]}` + return &http.Response{ + Body: io.NopCloser(strings.NewReader(json)), + StatusCode: http.StatusOK, + }, nil + } + + res, header, err := checkForUpgrades(ctx, "", "4.7.3", backoff, + fakeClient, cfg, false, "speakFriend") + assert.Equal(t, counter, 2) + assert.NilError(t, err) + assert.Equal(t, res, `{"pgo_versions":[{"tag":"v5.0.4"},{"tag":"v5.0.3"},{"tag":"v5.0.2"},{"tag":"v5.0.1"},{"tag":"v5.0.0"}]}`) + checkData(t, header) + }) +} + +// TODO(benjaminjb): Replace `fake` with envtest +func TestCheckForUpgradesScheduler(t *testing.T) { + fakeClient := setupFakeClientWithPGOScheme(t, false) + _, server := setupVersionServer(t, true) + defer server.Close() + cfg := &rest.Config{Host: server.URL} + + t.Run("panic from checkForUpgrades doesn't bubble up", func(t *testing.T) { + ctx := context.Background() + + // capture logs + var calls []string + ctx = 
logging.NewContext(ctx, funcr.NewJSON(func(object string) { + calls = append(calls, object) + }, funcr.Options{ + Verbosity: 1, + })) + + // A panicking call + funcFoo = func() (*http.Response, error) { + panic(fmt.Errorf("oh no!")) + } + + s := CheckForUpgradesScheduler{ + Client: fakeClient, + Config: cfg, + } + s.check(ctx) + + assert.Equal(t, len(calls), 2) + assert.Assert(t, cmp.Contains(calls[1], `encountered panic in upgrade check`)) + }) + + t.Run("successful log each loop, ticker works", func(t *testing.T) { + ctx := context.Background() + + // capture logs + var calls []string + ctx = logging.NewContext(ctx, funcr.NewJSON(func(object string) { + calls = append(calls, object) + }, funcr.Options{ + Verbosity: 1, + })) + + // A successful call + funcFoo = func() (*http.Response, error) { + json := `{"pgo_versions":[{"tag":"v5.0.4"},{"tag":"v5.0.3"},{"tag":"v5.0.2"},{"tag":"v5.0.1"},{"tag":"v5.0.0"}]}` + return &http.Response{ + Body: io.NopCloser(strings.NewReader(json)), + StatusCode: http.StatusOK, + }, nil + } + + // Set loop time to 1s and sleep for 2s before sending the done signal + ctx, cancel := context.WithTimeout(ctx, 2*time.Second) + defer cancel() + s := CheckForUpgradesScheduler{ + Client: fakeClient, + Config: cfg, + Refresh: 1 * time.Second, + } + assert.ErrorIs(t, context.DeadlineExceeded, s.Start(ctx)) + + // Sleeping leads to some non-deterministic results, but we expect at least 2 executions + // plus one log for the failure to apply the configmap + assert.Assert(t, len(calls) >= 4) + + assert.Assert(t, cmp.Contains(calls[1], `{\"pgo_versions\":[{\"tag\":\"v5.0.4\"},{\"tag\":\"v5.0.3\"},{\"tag\":\"v5.0.2\"},{\"tag\":\"v5.0.1\"},{\"tag\":\"v5.0.0\"}]}`)) + assert.Assert(t, cmp.Contains(calls[3], `{\"pgo_versions\":[{\"tag\":\"v5.0.4\"},{\"tag\":\"v5.0.3\"},{\"tag\":\"v5.0.2\"},{\"tag\":\"v5.0.1\"},{\"tag\":\"v5.0.0\"}]}`)) + }) +} + +func TestCheckForUpgradesSchedulerLeaderOnly(t *testing.T) { + // CheckForUpgradesScheduler should implement this interface. + var s manager.LeaderElectionRunnable = new(CheckForUpgradesScheduler) + + assert.Assert(t, s.NeedLeaderElection(), + "expected to only run on the leader") +} diff --git a/internal/util/backrest.go b/internal/util/backrest.go deleted file mode 100644 index 66e3a2dec6..0000000000 --- a/internal/util/backrest.go +++ /dev/null @@ -1,96 +0,0 @@ -package util - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "errors" - "fmt" - "strings" - - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" -) - -const ( - BackrestRepoDeploymentName = "%s-backrest-shared-repo" - BackrestRepoServiceName = "%s-backrest-shared-repo" - BackrestRepoPVCName = "%s-pgbr-repo" - BackrestRepoSecretName = "%s-backrest-repo-config" -) - -// defines the default repo1-path for pgBackRest for use when a specic path is not provided -// in the pgcluster CR. 
The '%s' format verb will be replaced with the cluster name when this -// variable is utilized -const defaultBackrestRepoPath = "/backrestrepo/%s-backrest-shared-repo" - -// ValidateBackrestStorageTypeOnBackupRestore checks to see if the pgbackrest storage type provided -// when performing either pgbackrest backup or restore is valid. This includes ensuring the value -// provided is a valid storage type (e.g. "s3" and/or "local"). This also includes ensuring the -// storage type specified (e.g. "s3" or "local") is enabled in the current cluster. And finally, -// validation is ocurring for a restore, the ensure only one storage type is selected. -func ValidateBackrestStorageTypeOnBackupRestore(newBackRestStorageType, - currentBackRestStorageType string, restore bool) error { - - if newBackRestStorageType != "" && !IsValidBackrestStorageType(newBackRestStorageType) { - return fmt.Errorf("Invalid value provided for pgBackRest storage type. The following "+ - "values are allowed: %s", "\""+strings.Join(crv1.BackrestStorageTypes, "\", \"")+"\"") - } else if newBackRestStorageType != "" && - strings.Contains(newBackRestStorageType, "s3") && - !strings.Contains(currentBackRestStorageType, "s3") { - return errors.New("Storage type 's3' not allowed. S3 storage is not enabled for " + - "pgBackRest in this cluster") - } else if (newBackRestStorageType == "" || - strings.Contains(newBackRestStorageType, "local")) && - (currentBackRestStorageType != "" && - !strings.Contains(currentBackRestStorageType, "local")) { - return errors.New("Storage type 'local' not allowed. Local storage is not enabled for " + - "pgBackRest in this cluster. If this cluster uses S3 storage only, specify 's3' " + - "for the pgBackRest storage type.") - } - - // storage type validation that is only applicable for restores - if restore && newBackRestStorageType != "" && - len(strings.Split(newBackRestStorageType, ",")) > 1 { - return fmt.Errorf("Multiple storage types cannot be selected cannot be select when "+ - "performing a restore. Please select one of the following: %s", - "\""+strings.Join(crv1.BackrestStorageTypes, "\", \"")+"\"") - } - - return nil -} - -// IsValidBackrestStorageType determines if the storage source string contains valid pgBackRest -// storage type values -func IsValidBackrestStorageType(storageType string) bool { - isValid := true - for _, storageType := range strings.Split(storageType, ",") { - if !IsStringOneOf(storageType, crv1.BackrestStorageTypes...) { - isValid = false - break - } - } - return isValid -} - -// GetPGBackRestRepoPath is responsible for determining the repo path setting (i.e. 'repo1-path' -// flag) for use by pgBackRest. If a specific repo path has been defined in the pgcluster CR, -// then that path will be returned. Otherwise a default path will be returned, which is generated -// using the 'defaultBackrestRepoPath' constant and the cluster name. -func GetPGBackRestRepoPath(cluster crv1.Pgcluster) string { - if cluster.Spec.BackrestRepoPath != "" { - return cluster.Spec.BackrestRepoPath - } - return fmt.Sprintf(defaultBackrestRepoPath, cluster.Name) -} diff --git a/internal/util/clone.go b/internal/util/clone.go deleted file mode 100644 index a1d563262c..0000000000 --- a/internal/util/clone.go +++ /dev/null @@ -1,101 +0,0 @@ -package util - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - - meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -const ( - // CloneParameterBackrestPVCSize is the parameter name for the Backrest PVC - // size parameter - CloneParameterBackrestPVCSize = "backrestPVCSize" - // CloneParameterEnableMetrics if set to true, enables metrics collection in - // a newly created cluster - CloneParameterEnableMetrics = "enableMetrics" - // CloneParameterPVCSize is the parameter name for the PVC parameter for - // primary and replicas - CloneParameterPVCSize = "pvcSize" -) - -// CloneTask allows you to create a Pgtask CRD with the appropriate options -type CloneTask struct { - BackrestPVCSize string - BackrestStorageSource string - EnableMetrics bool - PGOUser string - PVCSize string - SourceClusterName string - TargetClusterName string - TaskStepLabel string - TaskType string - Timestamp time.Time - WorkflowID string -} - -// newCloneTask returns a new instance of a Pgtask CRD -func (clone CloneTask) Create() *crv1.Pgtask { - // get the one-time gneerated task name - taskName := clone.taskName() - - // sigh...set a "boolean" for enabling metrics - enableMetrics := "false" - if clone.EnableMetrics { - enableMetrics = "true" - } - - return &crv1.Pgtask{ - ObjectMeta: meta_v1.ObjectMeta{ - Name: taskName, - Labels: map[string]string{ - config.LABEL_PG_CLUSTER: clone.TargetClusterName, - config.LABEL_PGOUSER: clone.PGOUser, - config.LABEL_PGO_CLONE: "true", - clone.TaskStepLabel: "true", - }, - }, - Spec: crv1.PgtaskSpec{ - Name: taskName, - TaskType: clone.TaskType, - Parameters: map[string]string{ - CloneParameterBackrestPVCSize: clone.BackrestPVCSize, - "backrestStorageType": clone.BackrestStorageSource, - CloneParameterEnableMetrics: enableMetrics, - CloneParameterPVCSize: clone.PVCSize, - "sourceClusterName": clone.SourceClusterName, - "targetClusterName": clone.TargetClusterName, - "taskName": taskName, - "timestamp": clone.Timestamp.Format(time.RFC3339), - crv1.PgtaskWorkflowID: clone.WorkflowID, - }, - }, - } -} - -// taskName generates the task name, which uses the "TaskType" and -// "TargetClusterName" properties, with a little bit of entropy -func (clone CloneTask) taskName() string { - // create a task name based on the step we are on in the process, with some - // entropy - uid := RandStringBytesRmndr(4) - return fmt.Sprintf("%s-%s-%s", clone.TaskType, clone.TargetClusterName, uid) -} diff --git a/internal/util/cluster.go b/internal/util/cluster.go deleted file mode 100644 index 91c7cfec95..0000000000 --- a/internal/util/cluster.go +++ /dev/null @@ -1,323 +0,0 @@ -package util - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "errors" - "fmt" - "strconv" - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -// BackrestRepoConfig represents the configuration required to created backrest repo secrets -type BackrestRepoConfig struct { - // BackrestS3CA is the byte string value of the CA that should be used for the - // S3 inerfacd pgBackRest repository - BackrestS3CA []byte - BackrestS3Key string - BackrestS3KeySecret string - ClusterName string - ClusterNamespace string - OperatorNamespace string -} - -// AWSS3Secret is a structured representation for providing an AWS S3 key and -// key secret -type AWSS3Secret struct { - AWSS3CA []byte - AWSS3Key string - AWSS3KeySecret string -} - -const ( - // DefaultGeneratedPasswordLength is the length of what a generated password - // is if it's not set in the pgo.yaml file, and to create some semblance of - // consistency - DefaultGeneratedPasswordLength = 24 - // DefaultPasswordValidUntilDays is the number of days until a PostgreSQL user's - // password expires. If it is not set in the pgo.yaml file, we will use a - // default of "0" which means that a password will never expire - DefaultPasswordValidUntilDays = 0 -) - -// values for the keys used to access the pgBackRest repository Secret -const ( - // three of these are exported, as they are used to help add the information - // into the templates. Say the last one 10 times fast - BackRestRepoSecretKeyAWSS3KeyAWSS3CACert = "aws-s3-ca.crt" - BackRestRepoSecretKeyAWSS3KeyAWSS3Key = "aws-s3-key" - BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret = "aws-s3-key-secret" - // the rest are private - backRestRepoSecretKeyAuthorizedKeys = "authorized_keys" - backRestRepoSecretKeySSHConfig = "config" - backRestRepoSecretKeySSHDConfig = "sshd_config" - backRestRepoSecretKeySSHPrivateKey = "id_ed25519" - backRestRepoSecretKeySSHHostPrivateKey = "ssh_host_ed25519_key" -) - -const ( - // SQLValidUntilAlways uses a special PostgreSQL value to ensure a password - // is always valid - SQLValidUntilAlways = "infinity" - // SQLValidUntilNever uses a special PostgreSQL value to ensure a password - // is never valid. This is exportable and used in other places - SQLValidUntilNever = "-infinity" - // sqlSetPasswordDefault is the SQL to update the password - // NOTE: this is safe from SQL injection as we explicitly add the inerpolated - // string as a MD5 hash or SCRAM verifier. 
And if you're not doing that, - // rethink your usage of this - // - // The escaping for SQL injections is handled in the SetPostgreSQLPassword - // function - sqlSetPasswordDefault = `ALTER ROLE %s PASSWORD %s;` -) - -var ( - // ErrMissingConfigAnnotation represents an error thrown when the 'config' annotation is found - // to be missing from the 'config' configMap created to store cluster-wide configuration - ErrMissingConfigAnnotation error = errors.New("'config' annotation missing from cluster " + - "configutation") -) - -var ( - // CmdStopPostgreSQL is the command used to stop a PostgreSQL instance, which - // uses the "fast" shutdown mode. This needs a data directory appended to it - cmdStopPostgreSQL = []string{"pg_ctl", "stop", - "-m", "fast", "-D", - } -) - -// CreateBackrestRepoSecrets creates the secrets required to manage the -// pgBackRest repo container -func CreateBackrestRepoSecrets(clientset kubernetes.Interface, - backrestRepoConfig BackrestRepoConfig) error { - - keys, err := NewPrivatePublicKeyPair() - if err != nil { - return err - } - - // Retrieve the S3/SSHD configuration files from secret - configs, err := clientset. - CoreV1().Secrets(backrestRepoConfig.OperatorNamespace). - Get("pgo-backrest-repo-config", metav1.GetOptions{}) - - if err != nil { - log.Error(err) - return err - } - - // if an S3 key has been provided via the request, then use key and key secret - // included in the request instead of the default credentials that are - // available in the Operator pgBackRest secret - backrestS3Key := []byte(backrestRepoConfig.BackrestS3Key) - - if backrestRepoConfig.BackrestS3Key == "" { - backrestS3Key = configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3Key] - } - - backrestS3KeySecret := []byte(backrestRepoConfig.BackrestS3KeySecret) - - if backrestRepoConfig.BackrestS3KeySecret == "" { - backrestS3KeySecret = configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret] - } - - // determine if there is a CA override provided, and if not, use the default - // from the configuration - caCert := backrestRepoConfig.BackrestS3CA - if len(caCert) == 0 { - caCert = configs.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3CACert] - } - - // set up the secret for the cluster that contains the pgBackRest information - secret := v1.Secret{ - ObjectMeta: metav1.ObjectMeta{ - Name: fmt.Sprintf("%s-%s", backrestRepoConfig.ClusterName, - config.LABEL_BACKREST_REPO_SECRET), - Labels: map[string]string{ - config.LABEL_VENDOR: config.LABEL_CRUNCHY, - config.LABEL_PG_CLUSTER: backrestRepoConfig.ClusterName, - config.LABEL_PGO_BACKREST_REPO: "true", - }, - }, - Data: map[string][]byte{ - BackRestRepoSecretKeyAWSS3KeyAWSS3CACert: caCert, - BackRestRepoSecretKeyAWSS3KeyAWSS3Key: backrestS3Key, - BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret: backrestS3KeySecret, - backRestRepoSecretKeyAuthorizedKeys: keys.Public, - backRestRepoSecretKeySSHConfig: configs.Data[backRestRepoSecretKeySSHConfig], - backRestRepoSecretKeySSHDConfig: configs.Data[backRestRepoSecretKeySSHDConfig], - backRestRepoSecretKeySSHPrivateKey: keys.Private, - backRestRepoSecretKeySSHHostPrivateKey: keys.Private, - }, - } - - _, err = clientset.CoreV1().Secrets(backrestRepoConfig.ClusterNamespace).Create(&secret) - if kubeapi.IsAlreadyExists(err) { - _, err = clientset.CoreV1().Secrets(backrestRepoConfig.ClusterNamespace).Update(&secret) - } - return err -} - -// IsAutofailEnabled - returns true if autofail label is set to true, false if not. 
-func IsAutofailEnabled(cluster *crv1.Pgcluster) bool { - - labels := cluster.ObjectMeta.Labels - failLabel := labels[config.LABEL_AUTOFAIL] - - log.Debugf("IsAutoFailEnabled: %s", failLabel) - - return failLabel == "true" -} - -// GeneratedPasswordValidUntilDays returns the value for the number of days that -// a password is valid for, which is used as part of PostgreSQL's VALID UNTIL -// directive on a user. It first determines if the user provided this value via -// a configuration file, and if not and/or the value is invalid, uses the -// default value -func GeneratedPasswordValidUntilDays(configuredValidUntilDays string) int { - // set the generated password length for random password generation - // note that "configuredPasswordLength" may be an empty string, and as such - // the below line could fail. That's ok though! as we have a default set up - validUntilDays, err := strconv.Atoi(configuredValidUntilDays) - - // if there is an error...set it to a default - if err != nil { - validUntilDays = DefaultPasswordValidUntilDays - } - - return validUntilDays -} - -// GetPrimaryPod gets the Pod of the primary PostgreSQL instance. If somehow -// the query gets multiple pods, then the first one in the list is returned -func GetPrimaryPod(clientset kubernetes.Interface, cluster *crv1.Pgcluster) (*v1.Pod, error) { - // set up the selector for the primary pod - selector := fmt.Sprintf("%s=%s,%s=%s", - config.LABEL_PG_CLUSTER, cluster.Spec.Name, config.LABEL_PGHA_ROLE, config.LABEL_PGHA_ROLE_PRIMARY) - namespace := cluster.Spec.Namespace - - // query the pods - pods, err := clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{LabelSelector: selector}) - - // if there is an error, log it and abort - if err != nil { - return nil, err - } - - // if no pods are retirn, then also raise an error - if len(pods.Items) == 0 { - err := errors.New(fmt.Sprintf("primary pod not found for selector [%s]", selector)) - return nil, err - } - - // Grab the first pod from the list as this is presumably the primary pod - pod := pods.Items[0] - return &pod, nil -} - -// GetS3CredsFromBackrestRepoSecret retrieves the AWS S3 credentials, i.e. the key and key -// secret, from a specific cluster's backrest repo secret -func GetS3CredsFromBackrestRepoSecret(clientset kubernetes.Interface, namespace, clusterName string) (AWSS3Secret, error) { - secretName := fmt.Sprintf("%s-%s", clusterName, config.LABEL_BACKREST_REPO_SECRET) - s3Secret := AWSS3Secret{} - - secret, err := clientset.CoreV1().Secrets(namespace).Get(secretName, metav1.GetOptions{}) - - if err != nil { - log.Error(err) - return s3Secret, err - } - - // get the S3 secret credentials out of the secret, and return - s3Secret.AWSS3CA = secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3CACert] - s3Secret.AWSS3Key = string(secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3Key]) - s3Secret.AWSS3KeySecret = string(secret.Data[BackRestRepoSecretKeyAWSS3KeyAWSS3KeySecret]) - - return s3Secret, nil -} - -// SetPostgreSQLPassword updates the password for a PostgreSQL role in the -// PostgreSQL cluster by executing into the primary Pod and changing it -// -// Note: it is recommended to pre-hash the password (e.g. md5, SCRAM) so that -// way the plaintext password is not logged anywhere. 
This also avoids potential -// SQL injections -func SetPostgreSQLPassword(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, port, username, password, sqlCustom string) error { - log.Debugf("set PostgreSQL password for user [%s]", username) - - // if custom SQL is not set, use the default SQL - sqlRaw := sqlCustom - - if sqlRaw == "" { - sqlRaw = sqlSetPasswordDefault - } - - // This is safe from SQL injection as we are using constants and a well defined - // string...well, as long as the function caller does this - sql := strings.NewReader(fmt.Sprintf(sqlRaw, - SQLQuoteIdentifier(username), SQLQuoteLiteral(password))) - cmd := []string{"psql", "-p", port} - - // exec into the pod to run the query - _, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, - cmd, "database", pod.Name, pod.ObjectMeta.Namespace, sql) - - // if there is an error executing the command, or output in stderr, - // log the error message and return - if err != nil { - log.Error(err) - return err - } else if stderr != "" { - log.Error(stderr) - return fmt.Errorf(stderr) - } - - return nil -} - -// StopPostgreSQLInstance issues a "fast" shutdown command to the PostgreSQL -// instance. This will immediately terminate any connections and safely shut -// down PostgreSQL so it does not have to start up in crash recovery mode -func StopPostgreSQLInstance(clientset kubernetes.Interface, restconfig *rest.Config, pod *v1.Pod, instanceName string) error { - log.Debugf("shutting down PostgreSQL on pod [%s]", pod.Name) - - // append the data directory, which is the name of the instance - cmd := cmdStopPostgreSQL - dataDirectory := fmt.Sprintf("%s/%s", config.VOLUME_POSTGRESQL_DATA_MOUNT_PATH, instanceName) - cmd = append(cmd, dataDirectory) - - // exec into the pod to execute the stop command - _, stderr, _ := kubeapi.ExecToPodThroughAPI(restconfig, clientset, - cmd, "database", pod.Name, pod.ObjectMeta.Namespace, nil) - - // if there is error output, assume this is an error and return - if stderr != "" { - return fmt.Errorf(stderr) - } - - return nil -} diff --git a/internal/util/failover.go b/internal/util/failover.go deleted file mode 100644 index 87dadf51cf..0000000000 --- a/internal/util/failover.go +++ /dev/null @@ -1,408 +0,0 @@ -package util - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "errors" - "fmt" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -// InstanceReplicationInfo is the user friendly information for the current -// status of key replication metrics for a PostgreSQL instance -type InstanceReplicationInfo struct { - Name string - Node string - ReplicationLag int - Status string - Timeline int - PendingRestart bool - Role string -} - -type ReplicationStatusRequest struct { - RESTConfig *rest.Config - Clientset kubernetes.Interface - Namespace string - ClusterName string -} - -type ReplicationStatusResponse struct { - Instances []InstanceReplicationInfo -} - -// instanceReplicationInfoJSON is the information returned from the request to -// the Patroni REST endpoint for info on the replication status of all the -// replicas -type instanceReplicationInfoJSON struct { - PodName string `json:"Member"` - Type string `json:"Role"` - ReplicationLag int `json:"Lag in MB"` - State string - Timeline int `json:"TL"` - PendingRestart string `json:"Pending restart"` -} - -// instanceInfo stores the name and node of a specific instance (primary or replica) within a -// PG cluster -type instanceInfo struct { - name string - node string -} - -const ( - // instanceReplicationInfoTypePrimary is the label used by Patroni to indicate that an instance - // is indeed a primary PostgreSQL instance - instanceReplicationInfoTypePrimary = "Leader" - // instanceReplicationInfoTypePrimaryStandby is the label used by Patroni to indicate that an - // instance is indeed a primary PostgreSQL instance, specifically within a standby cluster - instanceReplicationInfoTypePrimaryStandby = "Standby Leader" - // instanceRolePrimary indicates that an instance is a primary - instanceRolePrimary = "primary" - // instanceRoleReplica indicates that an instance is a replica - instanceRoleReplica = "replica" - // instanceRoleUnknown indicates that an instance is of an unknown typ - instanceRoleUnknown = "unknown" - // instanceStatusUnavailable indicates an instance is unavailable - instanceStatusUnavailable = "unavailable" -) - -var ( - // instanceInfoCommand is the command used to get information about the status - // and other statistics about the instances in a PostgreSQL cluster, e.g. 
- // replication lag - instanceInfoCommand = []string{"patronictl", "list", "-f", "json"} -) - -// GetPod determines the best target to fail to -func GetPod(clientset kubernetes.Interface, deploymentName, namespace string) (*v1.Pod, error) { - - var err error - - var pod *v1.Pod - var pods *v1.PodList - - selector := config.LABEL_DEPLOYMENT_NAME + "=" + deploymentName + "," + config.LABEL_PGHA_ROLE + "=replica" - pods, err = clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return pod, err - } - if len(pods.Items) != 1 { - return pod, errors.New("could not determine which pod to failover to") - } - - for _, v := range pods.Items { - pod = &v - } - - found := false - - //make sure the pod has a database container it it - for _, c := range pod.Spec.Containers { - if c.Name == "database" { - found = true - } - } - - if !found { - return pod, errors.New("could not find a database container in the pod") - } - - return pod, err -} - -// ReplicationStatus is responsible for retrieving and returning the replication -// information about the status of the replicas in a PostgreSQL cluster. It -// executes into a single replica pod and leverages the functionality of Patroni -// for getting the key metrics that are appropriate to help the user understand -// the current state of their replicas. -// -// Statistics include: the current node the replica is on, if it is up, the -// replication lag, etc. -// -// By default information is only returned for replicas within the cluster. However, -// if primary information is also needed, the inlcudePrimary flag can set set to true -// and primary information will will also be included in the ReplicationStatusResponse. -// -// Also by default we do not include any "busted" Pods, e.g. a Pod that is not -// in a happy phase. That Pod may be lacking a "role" label. From there, we zero -// out the statistics and apply an error -func ReplicationStatus(request ReplicationStatusRequest, includePrimary, includeBusted bool) (ReplicationStatusResponse, error) { - response := ReplicationStatusResponse{ - Instances: make([]InstanceReplicationInfo, 0), - } - - // Build up the selector. First, create the base, which restricts to the - // current cluster - // pg-cluster=clusterName,pgo-pg-database - selector := fmt.Sprintf("%s=%s,%s", - config.LABEL_PG_CLUSTER, request.ClusterName, config.LABEL_PG_DATABASE) - - // if we are not including the primary, determine if we are including busted - // replicas or not - if !includePrimary { - if includeBusted { - // include all Pods that identify as a database, but **not** a primary - // pg-cluster=clusterName,pgo-pg-database,role!=config.LABEL_PGHA_ROLE_PRIMARY - selector += fmt.Sprintf(",%s!=%s", config.LABEL_PGHA_ROLE, config.LABEL_PGHA_ROLE_PRIMARY) - } else { - // include all Pods that identify as a database and have a replica label - // pg-cluster=clusterName,pgo-pg-database,role=replica - selector += fmt.Sprintf(",%s=%s", config.LABEL_PGHA_ROLE, config.LABEL_PGHA_ROLE_REPLICA) - } - } - - log.Debugf(`searching for pods with "%s"`, selector) - pods, err := request.Clientset.CoreV1().Pods(request.Namespace).List(metav1.ListOptions{LabelSelector: selector}) - - // If there is an error trying to get the pods, return here. Allow the caller - // to handle the error - if err != nil { - return response, err - } - - // See how many replica instances were found. 
If none were found then return - log.Debugf(`replica pods found "%d"`, len(pods.Items)) - - if len(pods.Items) == 0 { - return response, err - } - - // We need to create a quick map of "pod name" => node name / instance name - // We will iterate through the pod list once to extract the name we refer to - // the specific instance as, as well as which node it is deployed on - instanceInfoMap := createInstanceInfoMap(pods) - - // Now get the statistics about the current state of the replicas, which we - // can delegate to Patroni vis-a-vis the information that it collects - // We can get the statistics about the current state of the managed instance - // From executing and running a command in the first active pod - var pod *v1.Pod - - for _, p := range pods.Items { - if p.Status.Phase == v1.PodRunning { - pod = &p - break - } - } - - // if no active Pod can be found, we can only assume that all of the instances - // are unavailable, and we should indicate as such - if pod == nil { - for _, p := range pods.Items { - // set up the instance that will be returned - instance := InstanceReplicationInfo{ - Name: instanceInfoMap[p.Name].name, - Node: instanceInfoMap[p.Name].node, - ReplicationLag: -1, - Role: instanceRoleUnknown, - Status: instanceStatusUnavailable, - Timeline: -1, - } - - // append this newly created instance to the list that will be returned - response.Instances = append(response.Instances, instance) - } - - return response, nil - } - - // Execute the command that will retrieve the replica information from Patroni - commandStdOut, _, err := kubeapi.ExecToPodThroughAPI( - request.RESTConfig, request.Clientset, instanceInfoCommand, - pod.Spec.Containers[0].Name, pod.Name, request.Namespace, nil) - - // if there is an error, return. We will log the error at a higher level - if err != nil { - return response, err - } - - // parse the JSON and plast it into instanceInfoList - var rawInstances []instanceReplicationInfoJSON - json.Unmarshal([]byte(commandStdOut), &rawInstances) - - log.Debugf("patroni instance info: %v", rawInstances) - - // We need to iterate through this list to format the information for the - // response - for _, rawInstance := range rawInstances { - var role string - - // skip the primary unless explicitly enabled - if !includePrimary && (rawInstance.Type == instanceReplicationInfoTypePrimary || - rawInstance.Type == instanceReplicationInfoTypePrimaryStandby) { - continue - } - - // if this is a busted instance and we are not including it, skip - if !includeBusted && rawInstance.State == "" { - continue - } - - // determine the role of the instnace - switch rawInstance.Type { - default: - role = instanceRoleReplica - case instanceReplicationInfoTypePrimary, instanceReplicationInfoTypePrimaryStandby: - role = instanceRolePrimary - } - - // set up the instance that will be returned - instance := InstanceReplicationInfo{ - ReplicationLag: rawInstance.ReplicationLag, - Status: rawInstance.State, - Timeline: rawInstance.Timeline, - Role: role, - Name: instanceInfoMap[rawInstance.PodName].name, - Node: instanceInfoMap[rawInstance.PodName].node, - PendingRestart: rawInstance.PendingRestart == "*", - } - - // update the instance info if the instance is busted - if rawInstance.State == "" { - instance.Status = instanceStatusUnavailable - instance.ReplicationLag = -1 - instance.Timeline = -1 - } - - // append this newly created instance to the list that will be returned - response.Instances = append(response.Instances, instance) - } - - // pass along the response for the 
requestor to process - return response, nil -} - -// ToggleAutoFailover enables or disables autofailover for a cluster. Disabling autofailover means "pausing" -// Patroni, which will result in Patroni stepping aside from managing the cluster. This will effectively cause -// Patroni to stop responding to failures or other database activities, e.g. it will not attempt to start the -// database when stopped to perform maintenance -func ToggleAutoFailover(clientset kubernetes.Interface, enable bool, pghaScope, namespace string) error { - - // find the "config" configMap created by Patroni - configMapName := pghaScope + "-config" - log.Debugf("setting autofailover to %t for cluster with pgha scope %s", enable, pghaScope) - - configMap, err := clientset.CoreV1().ConfigMaps(namespace).Get(configMapName, metav1.GetOptions{}) - if err != nil { - log.Error(err) - return err - } - - // return ErrMissingConfigAnnotation error if configMap is missing the "config" annotation. - // This allows for graceful handling of scenarios where a failover toggle is attempted - // (e.g. during cluster removal), but this annotation has not been created yet (e.g. due to - // a failed cluster bootstrap) - if _, ok := configMap.ObjectMeta.Annotations["config"]; !ok { - return ErrMissingConfigAnnotation - } - - configJSONStr := configMap.ObjectMeta.Annotations["config"] - - var configJSON map[string]interface{} - json.Unmarshal([]byte(configJSONStr), &configJSON) - - if !enable { - // disable autofail condition - disableFailover(clientset, configMap, configJSON, namespace) - } else { - // enable autofail - enableFailover(clientset, configMap, configJSON, namespace) - } - - return nil -} - -// createInstanceInfoMap creates a mapping between the pod names for the PostgreSQL -// pods in a cluster to the a struct containing the associated instance name and the -// Nodes that it runs on, all based upon the output from a Kubernetes API query -func createInstanceInfoMap(pods *v1.PodList) map[string]instanceInfo { - - instanceInfoMap := make(map[string]instanceInfo) - - // Iterate through each pod that is returned and get the mapping between the - // pod and the PostgreSQL instance name with node it is scheduled on - for _, pod := range pods.Items { - instanceInfoMap[pod.GetName()] = instanceInfo{ - name: pod.ObjectMeta.Labels[config.LABEL_DEPLOYMENT_NAME], - node: pod.Spec.NodeName, - } - } - - log.Debugf("instanceInfoMap: %v", instanceInfoMap) - - return instanceInfoMap -} - -// If "pause" is present in the config and set to "true", then it needs to be removed to enable -// failover. 
Otherwise, if "pause" isn't present in the config or if it has a value other than -// true, then assume autofail is enabled and do nothing (when Patroni see's an invalid value for -// "pause" it sets it to "true") -func enableFailover(clientset kubernetes.Interface, configMap *v1.ConfigMap, configJSON map[string]interface{}, - namespace string) error { - if _, ok := configJSON["pause"]; ok && configJSON["pause"] == true { - log.Debugf("updating pause key in configMap %s to enable autofailover", configMap.Name) - // disabled autofail by removing "pause" from the config - delete(configJSON, "pause") - configJSONFinalStr, err := json.Marshal(configJSON) - if err != nil { - return err - } - configMap.ObjectMeta.Annotations["config"] = string(configJSONFinalStr) - _, err = clientset.CoreV1().ConfigMaps(namespace).Update(configMap) - if err != nil { - return err - } - } else { - log.Debugf("autofailover already enabled according to the pause key (or lack thereof) in configMap %s", - configMap.Name) - } - return nil -} - -// If "pause" isn't present in the config then assume autofail is enabled and needs to be disabled -// by setting "pause" to true. Or if it is present and set to something other than "true" (e.g. -// "false" or "null"), then it also needs to be disabled by setting "pause" to true. -func disableFailover(clientset kubernetes.Interface, configMap *v1.ConfigMap, configJSON map[string]interface{}, - namespace string) error { - if _, ok := configJSON["pause"]; !ok || configJSON["pause"] != true { - log.Debugf("updating pause key in configMap %s to disable autofailover", configMap.Name) - // disable autofail by setting "pause" to true - configJSON["pause"] = true - configJSONFinalStr, err := json.Marshal(configJSON) - if err != nil { - return err - } - configMap.ObjectMeta.Annotations["config"] = string(configJSONFinalStr) - _, err = clientset.CoreV1().ConfigMaps(namespace).Update(configMap) - if err != nil { - return err - } - } else { - log.Debugf("autofailover already disabled according to the pause key in configMap %s", - configMap.Name) - } - return nil -} diff --git a/internal/util/pgbouncer.go b/internal/util/pgbouncer.go deleted file mode 100644 index 2fdd645126..0000000000 --- a/internal/util/pgbouncer.go +++ /dev/null @@ -1,62 +0,0 @@ -package util - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" - - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" -) - -// pgBouncerConfigMapFormat is the format used for the name of the config -// map associated with a pgBouncer cluster, and follows the pattern -// "-pgbouncer-cm" -const pgBouncerConfigMapFormat = "%s-pgbouncer-cm" - -// pgBouncerSecretFormat is the name of the Kubernetes Secret that pgBouncer -// uses that stores configuration and pgbouncer user information, and follows -// the format "-pgbouncer-secret" -const pgBouncerSecretFormat = "%s-pgbouncer-secret" - -// pgBouncerUserFileFormat is the format of what the pgBouncer user management -// file looks like, i.e. `"username" "password"`` -const pgBouncerUserFileFormat = `"%s" "%s"` - -// GeneratePgBouncerConfigMapName generates the name of the configmap file -// associated with the pgBouncer Deployment -func GeneratePgBouncerConfigMapName(clusterName string) string { - return fmt.Sprintf(pgBouncerConfigMapFormat, clusterName) -} - -// GeneratePgBouncerSecretName returns the name of the secret that contains -// information around a pgBouncer deployment -func GeneratePgBouncerSecretName(clusterName string) string { - return fmt.Sprintf(pgBouncerSecretFormat, clusterName) -} - -// GeneratePgBouncerUsersFileBytes generates the byte string that is -// used by the pgBouncer secret to authenticate a user into pgBouncer that is -// acting as the pgBouncer "service user" (aka PgBouncerUser). -// -// The format of this file is `"username "hashed-password"` -// -// where "hashed-password" is a MD5 or SCRAM hashed password -// -// This is ultimately moutned by the pgBouncer Pod via the secret -func GeneratePgBouncerUsersFileBytes(hashedPassword string) []byte { - data := fmt.Sprintf(pgBouncerUserFileFormat, crv1.PGUserPgBouncer, hashedPassword) - return []byte(data) -} diff --git a/internal/util/policy.go b/internal/util/policy.go deleted file mode 100644 index 11f5a1563e..0000000000 --- a/internal/util/policy.go +++ /dev/null @@ -1,208 +0,0 @@ -package util - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "errors" - "fmt" - "net/http" - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - - "io/ioutil" - - jsonpatch "github.com/evanphx/json-patch" - log "github.com/sirupsen/logrus" - kerrors "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/apimachinery/pkg/api/meta" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" -) - -// ExecPolicy execute a sql policy against a cluster -func ExecPolicy(clientset kubeapi.Interface, restconfig *rest.Config, namespace, policyName, serviceName, port string) error { - //fetch the policy sql - sql, err := GetPolicySQL(clientset, namespace, policyName) - - if err != nil { - return err - } - - // prepare the SQL string to be something that can be passed to a STDIN - // interface - stdin := strings.NewReader(sql) - - // now, we need to ensure we can get the Pod name of the primary PostgreSQL - // instance. Thname being passed in is actually the "serviceName" of the Pod - // We can isolate the exact Pod we want by using this (LABEL_SERVICE_NAME) and - // the LABEL_PGHA_ROLE labels - selector := fmt.Sprintf("%s=%s,%s=%s", - config.LABEL_SERVICE_NAME, serviceName, - config.LABEL_PGHA_ROLE, config.LABEL_PGHA_ROLE_PRIMARY) - - podList, err := clientset.CoreV1().Pods(namespace).List(metav1.ListOptions{LabelSelector: selector}) - - if err != nil { - return err - } else if len(podList.Items) != 1 { - msg := fmt.Sprintf("could not find the primary pod selector:[%s] pods returned:[%d]", - selector, len(podList.Items)) - - return errors.New(msg) - } - - // get the primary Pod - pod := podList.Items[0] - - // in the Pod spec, the first container is always the one with the PostgreSQL - // instnace. We can use that to build out our execution call - // - // But first, let's prepare the command that will execute the SQL. - // NOTE: this executes as the "postgres" user on the "postgres" database, - // because that is what the existing functionality does - // - // However, unlike the previous implementation, this will connect over a UNIX - // socket. There are certainly additional improvements that can be made, but - // this gets us closer to what we want to do - command := []string{ - "psql", - "-p", - port, - "postgres", - "postgres", - "-f", - "-", - } - - // execute the command! 
if it fails, return the error - if _, stderr, err := kubeapi.ExecToPodThroughAPI(restconfig, clientset, - command, pod.Spec.Containers[0].Name, pod.Name, namespace, stdin); err != nil || stderr != "" { - // log the error from the pod and stderr, but return the stderr - log.Error(err, stderr) - - return fmt.Errorf(stderr) - } - - return nil -} - -// GetPolicySQL returns the SQL string from a policy -func GetPolicySQL(clientset pgo.Interface, namespace, policyName string) (string, error) { - p, err := clientset.CrunchydataV1().Pgpolicies(namespace).Get(policyName, metav1.GetOptions{}) - if err == nil { - if p.Spec.URL != "" { - return readSQLFromURL(p.Spec.URL) - } - return p.Spec.SQL, err - } - - if kerrors.IsNotFound(err) { - log.Error("getPolicySQL policy not found using " + policyName + " in namespace " + namespace) - } - log.Error(err) - return "", err -} - -// readSQLFromURL returns the SQL string from a URL -func readSQLFromURL(urlstring string) (string, error) { - var bodyBytes []byte - response, err := http.Get(urlstring) - if err == nil { - bodyBytes, err = ioutil.ReadAll(response.Body) - defer response.Body.Close() - } - - if err != nil { - log.Error(err) - return "", err - } - - return string(bodyBytes), err - -} - -// ValidatePolicy tests to see if a policy exists -func ValidatePolicy(clientset pgo.Interface, namespace string, policyName string) error { - _, err := clientset.CrunchydataV1().Pgpolicies(namespace).Get(policyName, metav1.GetOptions{}) - if err == nil { - log.Debugf("pgpolicy %s was validated", policyName) - } else if kerrors.IsNotFound(err) { - log.Debugf("pgpolicy %s not found fail validation", policyName) - } else { - log.Error("error getting pgpolicy " + policyName + err.Error()) - } - return err -} - -// UpdatePolicyLabels ... -func UpdatePolicyLabels(clientset kubernetes.Interface, clusterName string, namespace string, newLabels map[string]string) error { - - deployment, err := clientset.AppsV1().Deployments(namespace).Get(clusterName, metav1.GetOptions{}) - if err != nil { - return err - } - - var patchBytes, newData, origData []byte - origData, err = json.Marshal(deployment) - if err != nil { - return err - } - - accessor, err2 := meta.Accessor(deployment) - if err2 != nil { - return err2 - } - - objLabels := accessor.GetLabels() - if objLabels == nil { - objLabels = make(map[string]string) - } - - //update the deployment labels - for key, value := range newLabels { - objLabels[key] = value - } - log.Debugf("updated labels are %v\n", objLabels) - - accessor.SetLabels(objLabels) - newData, err = json.Marshal(deployment) - if err != nil { - return err - } - - patchBytes, err = jsonpatch.CreateMergePatch(origData, newData) - createdPatch := err == nil - if err != nil { - return err - } - if createdPatch { - log.Debug("created merge patch") - } - - _, err = clientset.AppsV1().Deployments(namespace).Patch(clusterName, types.MergePatchType, patchBytes, "") - if err != nil { - log.Debug("error patching deployment " + err.Error()) - } - return err - -} diff --git a/internal/util/secrets.go b/internal/util/secrets.go index 1ef90e7bd6..82768c9386 100644 --- a/internal/util/secrets.go +++ b/internal/util/secrets.go @@ -1,239 +1,79 @@ -package util - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 +// Copyright 2017 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ +package util import ( "crypto/rand" - "fmt" + "io" "math/big" - "strconv" - "strings" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" ) -// UserSecretFormat follows the pattern of how the user information is stored, -// which is "--secret" -const UserSecretFormat = "%s-%s" + crv1.UserSecretSuffix - -// The following constants are used as a part of password generation. For more -// information on these selections, please consulting the ASCII man page -// (`man ascii`) +// The following constant is used as a part of password generation. const ( - // passwordCharLower is the lowest ASCII character to use for generating a - // password, which is 40 - passwordCharLower = 40 - // passwordCharUpper is the highest ASCII character to use for generating a - // password, which is 126 - passwordCharUpper = 126 + // DefaultGeneratedPasswordLength is the default length of what a generated + // password should be if it's not set elsewhere + DefaultGeneratedPasswordLength = 24 ) -// passwordCharSelector is a "big int" that we need to select the random ASCII -// character for the password. Since the random integer generator looks for -// values from [0,X), we need to force this to be [40,126] -var passwordCharSelector = big.NewInt(passwordCharUpper - passwordCharLower) - -// CreateSecret create the secret, user, and primary secrets -func CreateSecret(clientset kubernetes.Interface, db, secretName, username, password, namespace string) error { - - var enUsername = username - - secret := v1.Secret{} - - secret.Name = secretName - secret.ObjectMeta.Labels = make(map[string]string) - secret.ObjectMeta.Labels["pg-cluster"] = db - secret.ObjectMeta.Labels[config.LABEL_VENDOR] = config.LABEL_CRUNCHY - secret.Data = make(map[string][]byte) - secret.Data["username"] = []byte(enUsername) - secret.Data["password"] = []byte(password) - - _, err := clientset.CoreV1().Secrets(namespace).Create(&secret) - - return err +// accumulate gathers n bytes from f and returns them as a string. It returns +// an empty string when f returns an error. 
+func accumulate(n int, f func() (byte, error)) (string, error) { + result := make([]byte, n) -} - -// GeneratePassword generates a password of a given length out of the acceptable -// ASCII characters suitable for a password -func GeneratePassword(length int) (string, error) { - password := make([]byte, length) - - for i := 0; i < length; i++ { - char, err := rand.Int(rand.Reader, passwordCharSelector) - - // if there is an error generating the random integer, return - if err != nil { + for i := range result { + if b, err := f(); err == nil { + result[i] = b + } else { return "", err } - - password[i] = byte(passwordCharLower + char.Int64()) - } - - return string(password), nil -} - -// GeneratedPasswordLength returns the value for what the length of a -// randomly generated password should be. It first determines if the user -// provided this value via a configuration file, and if not and/or the value is -// invalid, uses the default value -func GeneratedPasswordLength(configuredPasswordLength string) int { - // set the generated password length for random password generation - // note that "configuredPasswordLength" may be an empty string, and as such - // the below line could fail. That's ok though! as we have a default set up - generatedPasswordLength, err := strconv.Atoi(configuredPasswordLength) - - // if there is an error...set it to a default - if err != nil { - generatedPasswordLength = DefaultGeneratedPasswordLength - } - - return generatedPasswordLength -} - -// GetPasswordFromSecret will fetch the password from a user secret -func GetPasswordFromSecret(clientset kubernetes.Interface, namespace, secretName string) (string, error) { - secret, err := clientset.CoreV1().Secrets(namespace).Get(secretName, metav1.GetOptions{}) - - if err != nil { - return "", err } - return string(secret.Data["password"][:]), nil -} - -// IsPostgreSQLUserSystemAccount determines whether or not this is a system -// PostgreSQL user account, as if this returns true, one likely may not want to -// allow a user to directly access the account -// Normalizes the lookup by downcasing it -func IsPostgreSQLUserSystemAccount(username string) bool { - // go look up and see if the username is in the map - _, found := crv1.PGUserSystemAccounts[strings.ToLower(username)] - return found -} - -// CloneClusterSecrets will copy the secrets from a cluster into the secrets of -// another cluster -type CloneClusterSecrets struct { - // any additional selectors that can be added to the query that is made - AdditionalSelectors []string - // The Kubernetes Clientset used to make API calls to Kubernetes` - ClientSet kubernetes.Interface - // The Namespace that the clusters are in - Namespace string - // The name of the PostgreSQL cluster that the secrets are originating from - SourceClusterName string - // The name of the PostgreSQL cluster that we are copying the secrets to - TargetClusterName string + return string(result), nil } -// Clone performs the actual clone of the secrets between PostgreSQL clusters -func (cs CloneClusterSecrets) Clone() error { - log.Debugf("clone secrets [%s] to [%s]", cs.SourceClusterName, cs.TargetClusterName) - - // initialize the selector, and add any additional options to it - selector := fmt.Sprintf("pg-cluster=%s", cs.SourceClusterName) - - for _, additionalSelector := range cs.AdditionalSelectors { - selector += fmt.Sprintf(",%s", additionalSelector) +// randomCharacter builds a function that returns random bytes from class. 
+func randomCharacter(random io.Reader, class string) func() (byte, error) { + if random == nil { + panic("requires a random source") } - - // get all the secrets that exist in the source PostgreSQL cluster - secrets, err := cs.ClientSet. - CoreV1().Secrets(cs.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - - // if this fails, log and return the error - if err != nil { - log.Error(err) - return err + if len(class) == 0 { + panic("class cannot be empty") } - // iterate through the existing secrets in the cluster, and copy them over - for _, s := range secrets.Items { - log.Debugf("found secret : %s", s.ObjectMeta.Name) - - secret := v1.Secret{} - - // create the secret name - secret.Name = strings.Replace(s.ObjectMeta.Name, cs.SourceClusterName, cs.TargetClusterName, 1) + size := big.NewInt(int64(len(class))) - // assign the labels - secret.ObjectMeta.Labels = map[string]string{ - "pg-cluster": cs.TargetClusterName, + return func() (byte, error) { + if i, err := rand.Int(random, size); err == nil { + return class[int(i.Int64())], nil + } else { + return 0, err } - // secret.ObjectMeta.Labels["pg-cluster"] = toCluster - - // copy over the secret - // secret.Data = make(map[string][]byte) - secret.Data = map[string][]byte{ - "username": s.Data["username"][:], - "password": s.Data["password"][:], - } - - // create the secret - cs.ClientSet.CoreV1().Secrets(cs.Namespace).Create(&secret) } - - return nil } -// CreateUserSecret will create a new secret holding a user credential -func CreateUserSecret(clientset kubernetes.Interface, clustername, username, password, namespace string) error { - secretName := fmt.Sprintf(UserSecretFormat, clustername, username) - - if err := CreateSecret(clientset, clustername, secretName, username, password, namespace); err != nil { - log.Error(err) - return err - } +var randomAlphaNumeric = randomCharacter(rand.Reader, ``+ + `ABCDEFGHIJKLMNOPQRSTUVWXYZ`+ + `abcdefghijklmnopqrstuvwxyz`+ + `0123456789`) - return nil +// GenerateAlphaNumericPassword returns a random alphanumeric string. +func GenerateAlphaNumericPassword(length int) (string, error) { + return accumulate(length, randomAlphaNumeric) } -// UpdateUserSecret updates a user secret with a new password. It follows the -// following method: -// -// 1. If the Secret exists, it updates the value of the Secret -// 2. If the Secret does not exist, it creates the secret -func UpdateUserSecret(clientset kubernetes.Interface, clustername, username, password, namespace string) error { - secretName := fmt.Sprintf(UserSecretFormat, clustername, username) - - // see if the secret already exists - secret, err := clientset.CoreV1().Secrets(namespace).Get(secretName, metav1.GetOptions{}) - - // if this returns an error and it's not the "not found" error, return - // However, if it is the "not found" error, treat this as creating the user - // secret - if err != nil { - if !kubeapi.IsNotFound(err) { - return err - } - - return CreateUserSecret(clientset, clustername, username, password, namespace) - } +// policyASCII is the list of acceptable characters from which to generate an +// ASCII password. 
+const policyASCII = `` + + `()*+,-./` + `:;<=>?@` + `[]^_` + `{|}` + + `ABCDEFGHIJKLMNOPQRSTUVWXYZ` + + `abcdefghijklmnopqrstuvwxyz` + + `0123456789` - // update the value of "password" - secret.Data["password"] = []byte(password) +var randomASCII = randomCharacter(rand.Reader, policyASCII) - _, err = clientset.CoreV1().Secrets(secret.Namespace).Update(secret) - return err +// GenerateASCIIPassword returns a random string of printable ASCII characters. +func GenerateASCIIPassword(length int) (string, error) { + return accumulate(length, randomASCII) } diff --git a/internal/util/secrets_test.go b/internal/util/secrets_test.go index 89cbcebac9..5d549ca89e 100644 --- a/internal/util/secrets_test.go +++ b/internal/util/secrets_test.go @@ -1,58 +1,140 @@ -package util - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ +package util import ( + "errors" "strings" "testing" + "testing/iotest" "unicode" + + "gotest.tools/v3/assert" + "gotest.tools/v3/assert/cmp" + "k8s.io/apimachinery/pkg/util/sets" ) -func TestGeneratePassword(t *testing.T) { - // different lengths - for _, length := range []int{1, 2, 3, 5, 20} { - password, err := GeneratePassword(length) - if err != nil { - t.Fatalf("expected no error, got %v", err) - } - if expected, actual := length, len(password); expected != actual { - t.Fatalf("expected length %v, got %v", expected, actual) - } - if i := strings.IndexFunc(password, unicode.IsPrint); i > 0 { - t.Fatalf("expected only printable characters, got %q in %q", password[i], password) - } - } +func TestAccumulate(t *testing.T) { + called := 0 + result, err := accumulate(10, func() (byte, error) { + called++ + return byte('A' + called), nil + }) - // random contents - previous := []string{} + assert.NilError(t, err) + assert.Equal(t, called, 10) + assert.Equal(t, result, "BCDEFGHIJK") + + t.Run("Error", func(t *testing.T) { + called := 0 + expected := errors.New("zap") + result, err := accumulate(10, func() (byte, error) { + called++ + if called < 5 { + return byte('A' + called), nil + } else { + return 'Z', expected + } + }) + assert.Equal(t, err, expected) + assert.Equal(t, called, 5, "expected an early return") + assert.Equal(t, result, "") + }) +} + +func TestGenerateAlphaNumericPassword(t *testing.T) { + for _, length := range []int{0, 1, 2, 3, 5, 20, 200} { + password, err := GenerateAlphaNumericPassword(length) + + assert.NilError(t, err) + assert.Equal(t, length, len(password)) + assert.Assert(t, cmp.Regexp(`^[A-Za-z0-9]*$`, password)) + } + + previous := sets.Set[string]{} for i := 0; i < 10; i++ { - password, err := GeneratePassword(5) - if err != nil { - t.Fatalf("expected no error, got %v", err) - } - if i := strings.IndexFunc(password, unicode.IsPrint); i > 0 { - t.Fatalf("expected only printable characters, got %q in %q", password[i], password) + password, err := GenerateAlphaNumericPassword(5) + + assert.NilError(t, err) + assert.Assert(t, 
cmp.Regexp(`^[A-Za-z0-9]{5}$`, password)) + + assert.Assert(t, !previous.Has(password), "%q generated twice", password) + previous.Insert(password) + } +} + +func TestGenerateASCIIPassword(t *testing.T) { + for _, length := range []int{0, 1, 2, 3, 5, 20, 200} { + password, err := GenerateASCIIPassword(length) + + assert.NilError(t, err) + assert.Equal(t, length, len(password)) + + // Check every rune in the string. See [TestPolicyASCII]. + for _, c := range password { + assert.Assert(t, strings.ContainsRune(policyASCII, c), "%q is not acceptable", c) } + } - for i := range previous { - if password == previous[i] { - t.Fatalf("expected passwords to not repeat, got %q after %q", password, previous) - } + previous := sets.Set[string]{} + for i := 0; i < 10; i++ { + password, err := GenerateASCIIPassword(5) + + assert.NilError(t, err) + assert.Equal(t, 5, len(password)) + + // Check every rune in the string. See [TestPolicyASCII]. + for _, c := range password { + assert.Assert(t, strings.ContainsRune(policyASCII, c), "%q is not acceptable", c) } - previous = append(previous, password) + + assert.Assert(t, !previous.Has(password), "%q generated twice", password) + previous.Insert(password) } } + +func TestPolicyASCII(t *testing.T) { + // [GenerateASCIIPassword] used to pick random characters by doing + // arithmetic on ASCII codepoints. It now uses a constant set of characters + // that satisfy the following properties. For more information on these + // selections, consult the ASCII man page, `man ascii`. + + // lower and upper are the lowest and highest ASCII characters to use. + const lower = 40 + const upper = 126 + + // exclude is a map of characters that we choose to exclude from + // the password to simplify usage in the shell. + const exclude = "`\\" + + count := map[rune]int{} + + // Check every rune in the string. + for _, c := range policyASCII { + assert.Assert(t, unicode.IsPrint(c), "%q is not printable", c) + assert.Assert(t, c <= unicode.MaxASCII, "%q is not ASCII", c) + assert.Assert(t, lower <= c && c < upper, "%q is outside the range", c) + assert.Assert(t, !strings.ContainsRune(exclude, c), "%q should be excluded", c) + + count[c]++ + assert.Assert(t, count[c] == 1, "%q occurs more than once", c) + } + + // Every acceptable byte is in the policy. + assert.Equal(t, len(policyASCII), upper-lower-len(exclude)) +} + +func TestRandomCharacter(t *testing.T) { + // The random source cannot be nil and the character class cannot be empty. + assert.Assert(t, cmp.Panics(func() { randomCharacter(nil, "") })) + assert.Assert(t, cmp.Panics(func() { randomCharacter(nil, "asdf") })) + assert.Assert(t, cmp.Panics(func() { randomCharacter(iotest.ErrReader(nil), "") })) + + // The function returns any error from the random source. + expected := errors.New("doot") + _, err := randomCharacter(iotest.ErrReader(expected), "asdf")() + assert.Equal(t, err, expected) +} diff --git a/internal/util/ssh.go b/internal/util/ssh.go deleted file mode 100644 index aa886bbca7..0000000000 --- a/internal/util/ssh.go +++ /dev/null @@ -1,145 +0,0 @@ -package util - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "crypto/ed25519" - "encoding/pem" - "math/rand" - - "golang.org/x/crypto/ssh" -) - -// SSHKey stores byte slices that represent private and public ssh keys -type SSHKey struct { - Private []byte - Public []byte -} - -// NewPrivatePublicKeyPair generates a an ed25519 ssh private and public key -func NewPrivatePublicKeyPair() (SSHKey, error) { - var keys SSHKey - - pub, priv, err := ed25519.GenerateKey(nil) - if err != nil { - return SSHKey{}, err - } - - keys.Public, err = newPublicKey(pub) - if err != nil { - return SSHKey{}, err - } - - keys.Private, err = newPrivateKey(priv) - if err != nil { - return SSHKey{}, err - } - - return keys, nil -} - -// newPublicKey generates a byte slice containing an public key that can be used -// to ssh. This key is based off of the ed25519.PublicKey type The function is -// only used by NewPrivatePublicKeyPair -func newPublicKey(key ed25519.PublicKey) ([]byte, error) { - pubKey, err := ssh.NewPublicKey(key) - if err != nil { - return nil, err - } - return ssh.MarshalAuthorizedKey(pubKey), nil -} - -// newPrivateKey generates a byte slice containing an OpenSSH private ssh key. -// This key is based off of the ed25519.PrivateKey type. The function is only -// used by NewPrivatePublicKeyPair -func newPrivateKey(key ed25519.PrivateKey) ([]byte, error) { - // The following link describes the private key format for OpenSSH. It - // oulines the structs that are used to generate the OpenSSH private key - // from the ed25519 private key - // https://anongit.mindrot.org/openssh.git/tree/PROTOCOL.key?h=V_8_1_P1 - - const authMagic = "openssh-key-v1" - const noneCipherBlockSize = 8 - - private := struct { - Check1 uint32 - Check2 uint32 - KeyType string - Public []byte - Private []byte - Comment string - Pad []byte `ssh:"rest"` - }{ - KeyType: ssh.KeyAlgoED25519, - Public: key.Public().(ed25519.PublicKey), - Private: key, - } - - // check fields should match to easily verify - // that a decryption was successful - private.Check1 = rand.Uint32() - private.Check2 = private.Check1 - - { - bsize := noneCipherBlockSize - plen := len(ssh.Marshal(private)) - private.Pad = make([]byte, bsize-(plen%bsize)) - } - - // The list of privatekey/comment pairs is padded with the - // bytes 1, 2, 3, ... until the total length is a multiple - // of the cipher block size. - for i := range private.Pad { - private.Pad[i] = byte(i) + 1 - } - - public := struct { - Keytype string - Public []byte - }{ - Keytype: ssh.KeyAlgoED25519, - Public: private.Public, - } - - // The overall key consists of a header, a list of public keys, and - // an encrypted list of matching private keys. 
- overall := struct { - CipherName string - KDFName string - KDFOpts string - NumKeys uint32 - PubKey []byte - PrivKeyBlock []byte - }{ - CipherName: "none", KDFName: "none", // unencrypted - NumKeys: 1, - PubKey: ssh.Marshal(public), - PrivKeyBlock: ssh.Marshal(private), - } - - pemBlock := &pem.Block{ - Type: "OPENSSH PRIVATE KEY", - Bytes: append(append([]byte(authMagic), 0), ssh.Marshal(overall)...), - } - - var privateKeyPEM bytes.Buffer - if err := pem.Encode(&privateKeyPEM, pemBlock); err != nil { - return nil, err - } - - return privateKeyPEM.Bytes(), nil -} diff --git a/internal/util/util.go b/internal/util/util.go index bff3802c63..72634ebbc6 100644 --- a/internal/util/util.go +++ b/internal/util/util.go @@ -1,235 +1,18 @@ -package util - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 +// Copyright 2017 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ +package util import ( - "encoding/json" - "errors" - "fmt" - "math/rand" "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - pgo "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - - jsonpatch "github.com/evanphx/json-patch" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/apimachinery/pkg/util/validation" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/rest" ) -const letterBytes = "abcdefghijklmnopqrstuvwxyz" - -// JSONPatchOperation represents the structure for a JSON patch operation -type JSONPatchOperation struct { - Op string `json:"op"` - Path string `json:"path"` - Value interface{} `json:"value"` -} - -func init() { - rand.Seed(time.Now().UnixNano()) - -} - -// ThingSpec is a json patch structure -type ThingSpec struct { - Op string `json:"op"` - Path string `json:"path"` - Value string `json:"value"` -} - -// Patch will patch a particular resource -func Patch(restclient rest.Interface, path string, value string, resource string, name string, namespace string) error { - things := make([]ThingSpec, 1) - things[0].Op = "replace" - things[0].Path = path - things[0].Value = value - - patchBytes, err4 := json.Marshal(things) - if err4 != nil { - log.Error("error in converting patch " + err4.Error()) - } - log.Debug(string(patchBytes)) - - _, err6 := restclient.Patch(types.JSONPatchType). - Namespace(namespace). - Resource(resource). - Name(name). - Body(patchBytes). - Do(). - Get() - - return err6 - -} - -// GetLabels ... 
-func GetLabels(name, clustername string, replica bool) string { - var output string - if replica { - output += fmt.Sprintf("\"primary\": \"%s\",\n", "false") - } - output += fmt.Sprintf("\"name\": \"%s\",\n", name) - output += fmt.Sprintf("\"pg-cluster\": \"%s\"\n", clustername) - return output -} - -//CurrentPrimaryUpdate prepares the needed data structures with the correct current primary value -//before passing them along to be patched into the current pgcluster CRD's annotations -func CurrentPrimaryUpdate(clientset pgo.Interface, cluster *crv1.Pgcluster, currentPrimary, namespace string) error { - //create a new map - metaLabels := make(map[string]string) - //copy the relevant values into the new map - for k, v := range cluster.ObjectMeta.Labels { - metaLabels[k] = v - } - //update this map with the new deployment label - metaLabels[config.LABEL_DEPLOYMENT_NAME] = currentPrimary - - //Update CRD with the current primary name and the new deployment to point to after the failover - if err := PatchClusterCRD(clientset, metaLabels, cluster, currentPrimary, namespace); err != nil { - log.Errorf("failoverlogic: could not patch pgcluster %s with the current primary", currentPrimary) - } - - return nil -} - -// PatchClusterCRD patches the pgcluster CRD with any updated labels, or an updated current -// primary annotation value. As this uses a JSON merge patch, it will only updates those -// values that are different between the old and new CRD values. -func PatchClusterCRD(clientset pgo.Interface, labelMap map[string]string, oldCrd *crv1.Pgcluster, currentPrimary, namespace string) error { - oldData, err := json.Marshal(oldCrd) - if err != nil { - return err - } - - // if there are no meta object lables on the current CRD, create a new map to hold them - if oldCrd.ObjectMeta.Labels == nil { - oldCrd.ObjectMeta.Labels = make(map[string]string) - } - - // if there are not any annotation on the current CRD, create a new map to hold them - if oldCrd.Annotations == nil { - oldCrd.Annotations = make(map[string]string) - } - // update our pgcluster annotation with the correct current primary value - oldCrd.Annotations[config.ANNOTATION_CURRENT_PRIMARY] = currentPrimary - oldCrd.Annotations[config.ANNOTATION_PRIMARY_DEPLOYMENT] = currentPrimary - - // update the stored primary storage value to match the current primary and deployment name - oldCrd.Spec.PrimaryStorage.Name = currentPrimary - - for k, v := range labelMap { - if len(validation.IsQualifiedName(k)) == 0 && len(validation.IsValidLabelValue(v)) == 0 { - oldCrd.ObjectMeta.Labels[k] = v - } else { - log.Debugf("user label %s:%s does not meet Kubernetes label requirements and will not be used to label "+ - "pgcluster %s", k, v, oldCrd.Spec.Name) - } - } - - var newData, patchBytes []byte - newData, err = json.Marshal(oldCrd) - if err != nil { - return err - } - - patchBytes, err = jsonpatch.CreateMergePatch(oldData, newData) - if err != nil { - return err - } - - log.Debug(string(patchBytes)) - - _, err6 := clientset.CrunchydataV1().Pgclusters(namespace).Patch(oldCrd.Spec.Name, types.MergePatchType, patchBytes) - - return err6 - -} - -// GetValueOrDefault checks whether the first value given is set. If it is, -// that value is returned. If not, the second, default value is returned instead -func GetValueOrDefault(value, defaultValue string) string { - if value != "" { - return value - } - return defaultValue -} - -// GetSecretPassword ... 
-func GetSecretPassword(clientset kubernetes.Interface, db, suffix, Namespace string) (string, error) { - - var err error - - selector := "pg-cluster=" + db - secrets, err := clientset. - CoreV1().Secrets(Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - return "", err - } - - log.Debugf("secrets for %s", db) - secretName := db + suffix - for _, s := range secrets.Items { - log.Debugf("secret : %s", s.ObjectMeta.Name) - if s.ObjectMeta.Name == secretName { - log.Debug("pgprimary password found") - return string(s.Data["password"][:]), err - } - } - - log.Error("primary secret not found for " + db) - return "", errors.New("primary secret not found for " + db) - -} - -// RandStringBytesRmndr ... -func RandStringBytesRmndr(n int) string { - b := make([]byte, n) - for i := range b { - b[i] = letterBytes[rand.Int63()%int64(len(letterBytes))] - } - return string(b) -} - -// IsStringOneOf tests to see string testVal is included in the list -// of strings provided using acceptedVals -func IsStringOneOf(testVal string, acceptedVals ...string) bool { - isOneOf := false - for _, val := range acceptedVals { - if testVal == val { - isOneOf = true - break - } - } - return isOneOf -} - // SQLQuoteIdentifier quotes an "identifier" (e.g. a table or a column name) to // be used as part of an SQL statement. // // Any double quotes in name will be escaped. The quoted identifier will be -// case sensitive when used in a query. If the input string contains a zero +// case-sensitive when used in a query. If the input string contains a zero // byte, the result will be truncated immediately before it. // // Implementation borrowed from lib/pq: https://github.com/lib/pq which is diff --git a/licenses/.gitignore b/licenses/.gitignore new file mode 100644 index 0000000000..72e8ffc0db --- /dev/null +++ b/licenses/.gitignore @@ -0,0 +1 @@ +* diff --git a/licenses/LICENSE.txt b/licenses/LICENSE.txt new file mode 100644 index 0000000000..e799dc3209 --- /dev/null +++ b/licenses/LICENSE.txt @@ -0,0 +1,194 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. 
+ + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2017 - 2024 Crunchy Data Solutions, Inc. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + + + diff --git a/licenses/github.com/PuerkitoBio/purell/LICENSE b/licenses/github.com/PuerkitoBio/purell/LICENSE deleted file mode 100644 index 4b9986dea7..0000000000 --- a/licenses/github.com/PuerkitoBio/purell/LICENSE +++ /dev/null @@ -1,12 +0,0 @@ -Copyright (c) 2012, Martin Angers -All rights reserved. - -Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. - -* Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/github.com/PuerkitoBio/urlesc/LICENSE b/licenses/github.com/PuerkitoBio/urlesc/LICENSE deleted file mode 100644 index 7448756763..0000000000 --- a/licenses/github.com/PuerkitoBio/urlesc/LICENSE +++ /dev/null @@ -1,27 +0,0 @@ -Copyright (c) 2012 The Go Authors. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/github.com/cpuguy83/go-md2man/LICENSE.md b/licenses/github.com/cpuguy83/go-md2man/LICENSE.md deleted file mode 100644 index 1cade6cef6..0000000000 --- a/licenses/github.com/cpuguy83/go-md2man/LICENSE.md +++ /dev/null @@ -1,21 +0,0 @@ -The MIT License (MIT) - -Copyright (c) 2014 Brian Goff - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/licenses/github.com/davecgh/go-spew/LICENSE b/licenses/github.com/davecgh/go-spew/LICENSE deleted file mode 100644 index bc52e96f2b..0000000000 --- a/licenses/github.com/davecgh/go-spew/LICENSE +++ /dev/null @@ -1,15 +0,0 @@ -ISC License - -Copyright (c) 2012-2016 Dave Collins - -Permission to use, copy, modify, and/or distribute this software for any -purpose with or without fee is hereby granted, provided that the above -copyright notice and this permission notice appear in all copies. - -THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES -WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR -ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF -OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. diff --git a/licenses/github.com/docker/spdystream/LICENSE b/licenses/github.com/docker/spdystream/LICENSE deleted file mode 100644 index 9e4bd4dbee..0000000000 --- a/licenses/github.com/docker/spdystream/LICENSE +++ /dev/null @@ -1,191 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. 
For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - Copyright 2014-2015 Docker, Inc. - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/github.com/emicklei/go-restful/LICENSE b/licenses/github.com/emicklei/go-restful/LICENSE deleted file mode 100644 index ece7ec61ef..0000000000 --- a/licenses/github.com/emicklei/go-restful/LICENSE +++ /dev/null @@ -1,22 +0,0 @@ -Copyright (c) 2012,2013 Ernest Micklei - -MIT License - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
\ No newline at end of file diff --git a/licenses/github.com/evanphx/json-patch/LICENSE b/licenses/github.com/evanphx/json-patch/LICENSE deleted file mode 100644 index 0eb9b72d84..0000000000 --- a/licenses/github.com/evanphx/json-patch/LICENSE +++ /dev/null @@ -1,25 +0,0 @@ -Copyright (c) 2014, Evan Phoenix -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. -* Redistributions in binary form must reproduce the above copyright notice - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. -* Neither the name of the Evan Phoenix nor the names of its contributors - may be used to endorse or promote products derived from this software - without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/github.com/fatih/color/LICENSE.md b/licenses/github.com/fatih/color/LICENSE.md deleted file mode 100644 index 25fdaf639d..0000000000 --- a/licenses/github.com/fatih/color/LICENSE.md +++ /dev/null @@ -1,20 +0,0 @@ -The MIT License (MIT) - -Copyright (c) 2013 Fatih Arslan - -Permission is hereby granted, free of charge, to any person obtaining a copy of -this software and associated documentation files (the "Software"), to deal in -the Software without restriction, including without limitation the rights to -use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of -the Software, and to permit persons to whom the Software is furnished to do so, -subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS -FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR -COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER -IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
diff --git a/licenses/github.com/ghodss/yaml/LICENSE b/licenses/github.com/ghodss/yaml/LICENSE deleted file mode 100644 index 7805d36de7..0000000000 --- a/licenses/github.com/ghodss/yaml/LICENSE +++ /dev/null @@ -1,50 +0,0 @@ -The MIT License (MIT) - -Copyright (c) 2014 Sam Ghods - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. - - -Copyright (c) 2012 The Go Authors. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/github.com/go-openapi/jsonpointer/LICENSE b/licenses/github.com/go-openapi/jsonpointer/LICENSE deleted file mode 100644 index d645695673..0000000000 --- a/licenses/github.com/go-openapi/jsonpointer/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. 
- - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/licenses/github.com/go-openapi/jsonreference/LICENSE b/licenses/github.com/go-openapi/jsonreference/LICENSE deleted file mode 100644 index d645695673..0000000000 --- a/licenses/github.com/go-openapi/jsonreference/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. 
Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/github.com/go-openapi/spec/LICENSE b/licenses/github.com/go-openapi/spec/LICENSE deleted file mode 100644 index d645695673..0000000000 --- a/licenses/github.com/go-openapi/spec/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. 
The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/github.com/go-openapi/swag/LICENSE b/licenses/github.com/go-openapi/swag/LICENSE deleted file mode 100644 index d645695673..0000000000 --- a/licenses/github.com/go-openapi/swag/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. 
For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/github.com/gogo/protobuf/LICENSE b/licenses/github.com/gogo/protobuf/LICENSE deleted file mode 100644 index 7be0cc7b62..0000000000 --- a/licenses/github.com/gogo/protobuf/LICENSE +++ /dev/null @@ -1,36 +0,0 @@ -Protocol Buffers for Go with Gadgets - -Copyright (c) 2013, The GoGo Authors. All rights reserved. -http://github.com/gogo/protobuf - -Go support for Protocol Buffers - Google's data interchange format - -Copyright 2010 The Go Authors. All rights reserved. -https://github.com/golang/protobuf - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. 
- * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - diff --git a/licenses/github.com/golang/glog/LICENSE b/licenses/github.com/golang/glog/LICENSE deleted file mode 100644 index 37ec93a14f..0000000000 --- a/licenses/github.com/golang/glog/LICENSE +++ /dev/null @@ -1,191 +0,0 @@ -Apache License -Version 2.0, January 2004 -http://www.apache.org/licenses/ - -TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - -1. Definitions. - -"License" shall mean the terms and conditions for use, reproduction, and -distribution as defined by Sections 1 through 9 of this document. - -"Licensor" shall mean the copyright owner or entity authorized by the copyright -owner that is granting the License. - -"Legal Entity" shall mean the union of the acting entity and all other entities -that control, are controlled by, or are under common control with that entity. -For the purposes of this definition, "control" means (i) the power, direct or -indirect, to cause the direction or management of such entity, whether by -contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the -outstanding shares, or (iii) beneficial ownership of such entity. - -"You" (or "Your") shall mean an individual or Legal Entity exercising -permissions granted by this License. - -"Source" form shall mean the preferred form for making modifications, including -but not limited to software source code, documentation source, and configuration -files. - -"Object" form shall mean any form resulting from mechanical transformation or -translation of a Source form, including but not limited to compiled object code, -generated documentation, and conversions to other media types. - -"Work" shall mean the work of authorship, whether in Source or Object form, made -available under the License, as indicated by a copyright notice that is included -in or attached to the work (an example is provided in the Appendix below). - -"Derivative Works" shall mean any work, whether in Source or Object form, that -is based on (or derived from) the Work and for which the editorial revisions, -annotations, elaborations, or other modifications represent, as a whole, an -original work of authorship. For the purposes of this License, Derivative Works -shall not include works that remain separable from, or merely link (or bind by -name) to the interfaces of, the Work and Derivative Works thereof. 
- -"Contribution" shall mean any work of authorship, including the original version -of the Work and any modifications or additions to that Work or Derivative Works -thereof, that is intentionally submitted to Licensor for inclusion in the Work -by the copyright owner or by an individual or Legal Entity authorized to submit -on behalf of the copyright owner. For the purposes of this definition, -"submitted" means any form of electronic, verbal, or written communication sent -to the Licensor or its representatives, including but not limited to -communication on electronic mailing lists, source code control systems, and -issue tracking systems that are managed by, or on behalf of, the Licensor for -the purpose of discussing and improving the Work, but excluding communication -that is conspicuously marked or otherwise designated in writing by the copyright -owner as "Not a Contribution." - -"Contributor" shall mean Licensor and any individual or Legal Entity on behalf -of whom a Contribution has been received by Licensor and subsequently -incorporated within the Work. - -2. Grant of Copyright License. - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable copyright license to reproduce, prepare Derivative Works of, -publicly display, publicly perform, sublicense, and distribute the Work and such -Derivative Works in Source or Object form. - -3. Grant of Patent License. - -Subject to the terms and conditions of this License, each Contributor hereby -grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, -irrevocable (except as stated in this section) patent license to make, have -made, use, offer to sell, sell, import, and otherwise transfer the Work, where -such license applies only to those patent claims licensable by such Contributor -that are necessarily infringed by their Contribution(s) alone or by combination -of their Contribution(s) with the Work to which such Contribution(s) was -submitted. If You institute patent litigation against any entity (including a -cross-claim or counterclaim in a lawsuit) alleging that the Work or a -Contribution incorporated within the Work constitutes direct or contributory -patent infringement, then any patent licenses granted to You under this License -for that Work shall terminate as of the date such litigation is filed. - -4. Redistribution. 
- -You may reproduce and distribute copies of the Work or Derivative Works thereof -in any medium, with or without modifications, and in Source or Object form, -provided that You meet the following conditions: - -You must give any other recipients of the Work or Derivative Works a copy of -this License; and -You must cause any modified files to carry prominent notices stating that You -changed the files; and -You must retain, in the Source form of any Derivative Works that You distribute, -all copyright, patent, trademark, and attribution notices from the Source form -of the Work, excluding those notices that do not pertain to any part of the -Derivative Works; and -If the Work includes a "NOTICE" text file as part of its distribution, then any -Derivative Works that You distribute must include a readable copy of the -attribution notices contained within such NOTICE file, excluding those notices -that do not pertain to any part of the Derivative Works, in at least one of the -following places: within a NOTICE text file distributed as part of the -Derivative Works; within the Source form or documentation, if provided along -with the Derivative Works; or, within a display generated by the Derivative -Works, if and wherever such third-party notices normally appear. The contents of -the NOTICE file are for informational purposes only and do not modify the -License. You may add Your own attribution notices within Derivative Works that -You distribute, alongside or as an addendum to the NOTICE text from the Work, -provided that such additional attribution notices cannot be construed as -modifying the License. -You may add Your own copyright statement to Your modifications and may provide -additional or different license terms and conditions for use, reproduction, or -distribution of Your modifications, or for any such Derivative Works as a whole, -provided Your use, reproduction, and distribution of the Work otherwise complies -with the conditions stated in this License. - -5. Submission of Contributions. - -Unless You explicitly state otherwise, any Contribution intentionally submitted -for inclusion in the Work by You to the Licensor shall be under the terms and -conditions of this License, without any additional terms or conditions. -Notwithstanding the above, nothing herein shall supersede or modify the terms of -any separate license agreement you may have executed with Licensor regarding -such Contributions. - -6. Trademarks. - -This License does not grant permission to use the trade names, trademarks, -service marks, or product names of the Licensor, except as required for -reasonable and customary use in describing the origin of the Work and -reproducing the content of the NOTICE file. - -7. Disclaimer of Warranty. - -Unless required by applicable law or agreed to in writing, Licensor provides the -Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, -including, without limitation, any warranties or conditions of TITLE, -NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are -solely responsible for determining the appropriateness of using or -redistributing the Work and assume any risks associated with Your exercise of -permissions under this License. - -8. Limitation of Liability. 
- -In no event and under no legal theory, whether in tort (including negligence), -contract, or otherwise, unless required by applicable law (such as deliberate -and grossly negligent acts) or agreed to in writing, shall any Contributor be -liable to You for damages, including any direct, indirect, special, incidental, -or consequential damages of any character arising as a result of this License or -out of the use or inability to use the Work (including but not limited to -damages for loss of goodwill, work stoppage, computer failure or malfunction, or -any and all other commercial damages or losses), even if such Contributor has -been advised of the possibility of such damages. - -9. Accepting Warranty or Additional Liability. - -While redistributing the Work or Derivative Works thereof, You may choose to -offer, and charge a fee for, acceptance of support, warranty, indemnity, or -other liability obligations and/or rights consistent with this License. However, -in accepting such obligations, You may act only on Your own behalf and on Your -sole responsibility, not on behalf of any other Contributor, and only if You -agree to indemnify, defend, and hold each Contributor harmless for any liability -incurred by, or claims asserted against, such Contributor by reason of your -accepting any such warranty or additional liability. - -END OF TERMS AND CONDITIONS - -APPENDIX: How to apply the Apache License to your work - -To apply the Apache License to your work, attach the following boilerplate -notice, with the fields enclosed by brackets "[]" replaced with your own -identifying information. (Don't include the brackets!) The text should be -enclosed in the appropriate comment syntax for the file format. We also -recommend that a file or class name and description of purpose be included on -the same "printed page" as the copyright notice for easier identification within -third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/github.com/golang/protobuf/LICENSE b/licenses/github.com/golang/protobuf/LICENSE deleted file mode 100644 index 0f646931a4..0000000000 --- a/licenses/github.com/golang/protobuf/LICENSE +++ /dev/null @@ -1,28 +0,0 @@ -Copyright 2010 The Go Authors. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. 
- -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - diff --git a/licenses/github.com/google/btree/LICENSE b/licenses/github.com/google/btree/LICENSE deleted file mode 100644 index d645695673..0000000000 --- a/licenses/github.com/google/btree/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. 
The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/github.com/google/gofuzz/LICENSE b/licenses/github.com/google/gofuzz/LICENSE deleted file mode 100644 index d645695673..0000000000 --- a/licenses/github.com/google/gofuzz/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. 
- - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/github.com/googleapis/gnostic/LICENSE b/licenses/github.com/googleapis/gnostic/LICENSE deleted file mode 100644 index 6b0b1270ff..0000000000 --- a/licenses/github.com/googleapis/gnostic/LICENSE +++ /dev/null @@ -1,203 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. 
For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
- diff --git a/licenses/github.com/gorilla/context/LICENSE b/licenses/github.com/gorilla/context/LICENSE deleted file mode 100644 index 0e5fb87280..0000000000 --- a/licenses/github.com/gorilla/context/LICENSE +++ /dev/null @@ -1,27 +0,0 @@ -Copyright (c) 2012 Rodrigo Moraes. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/github.com/gorilla/mux/LICENSE b/licenses/github.com/gorilla/mux/LICENSE deleted file mode 100644 index 0e5fb87280..0000000000 --- a/licenses/github.com/gorilla/mux/LICENSE +++ /dev/null @@ -1,27 +0,0 @@ -Copyright (c) 2012 Rodrigo Moraes. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/licenses/github.com/gregjones/httpcache/LICENSE.txt b/licenses/github.com/gregjones/httpcache/LICENSE.txt deleted file mode 100644 index 81316beb0c..0000000000 --- a/licenses/github.com/gregjones/httpcache/LICENSE.txt +++ /dev/null @@ -1,7 +0,0 @@ -Copyright © 2012 Greg Jones (greg.jones@gmail.com) - -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \ No newline at end of file diff --git a/licenses/github.com/hashicorp/golang-lru/LICENSE b/licenses/github.com/hashicorp/golang-lru/LICENSE deleted file mode 100644 index be2cc4dfb6..0000000000 --- a/licenses/github.com/hashicorp/golang-lru/LICENSE +++ /dev/null @@ -1,362 +0,0 @@ -Mozilla Public License, version 2.0 - -1. Definitions - -1.1. "Contributor" - - means each individual or legal entity that creates, contributes to the - creation of, or owns Covered Software. - -1.2. "Contributor Version" - - means the combination of the Contributions of others (if any) used by a - Contributor and that particular Contributor's Contribution. - -1.3. "Contribution" - - means Covered Software of a particular Contributor. - -1.4. "Covered Software" - - means Source Code Form to which the initial Contributor has attached the - notice in Exhibit A, the Executable Form of such Source Code Form, and - Modifications of such Source Code Form, in each case including portions - thereof. - -1.5. "Incompatible With Secondary Licenses" - means - - a. that the initial Contributor has attached the notice described in - Exhibit B to the Covered Software; or - - b. that the Covered Software was made available under the terms of - version 1.1 or earlier of the License, but not also under the terms of - a Secondary License. - -1.6. "Executable Form" - - means any form of the work other than Source Code Form. - -1.7. "Larger Work" - - means a work that combines Covered Software with other material, in a - separate file or files, that is not Covered Software. - -1.8. "License" - - means this document. - -1.9. "Licensable" - - means having the right to grant, to the maximum extent possible, whether - at the time of the initial grant or subsequently, any and all of the - rights conveyed by this License. - -1.10. "Modifications" - - means any of the following: - - a. any file in Source Code Form that results from an addition to, - deletion from, or modification of the contents of Covered Software; or - - b. any new file in Source Code Form that contains any Covered Software. - -1.11. 
"Patent Claims" of a Contributor - - means any patent claim(s), including without limitation, method, - process, and apparatus claims, in any patent Licensable by such - Contributor that would be infringed, but for the grant of the License, - by the making, using, selling, offering for sale, having made, import, - or transfer of either its Contributions or its Contributor Version. - -1.12. "Secondary License" - - means either the GNU General Public License, Version 2.0, the GNU Lesser - General Public License, Version 2.1, the GNU Affero General Public - License, Version 3.0, or any later versions of those licenses. - -1.13. "Source Code Form" - - means the form of the work preferred for making modifications. - -1.14. "You" (or "Your") - - means an individual or a legal entity exercising rights under this - License. For legal entities, "You" includes any entity that controls, is - controlled by, or is under common control with You. For purposes of this - definition, "control" means (a) the power, direct or indirect, to cause - the direction or management of such entity, whether by contract or - otherwise, or (b) ownership of more than fifty percent (50%) of the - outstanding shares or beneficial ownership of such entity. - - -2. License Grants and Conditions - -2.1. Grants - - Each Contributor hereby grants You a world-wide, royalty-free, - non-exclusive license: - - a. under intellectual property rights (other than patent or trademark) - Licensable by such Contributor to use, reproduce, make available, - modify, display, perform, distribute, and otherwise exploit its - Contributions, either on an unmodified basis, with Modifications, or - as part of a Larger Work; and - - b. under Patent Claims of such Contributor to make, use, sell, offer for - sale, have made, import, and otherwise transfer either its - Contributions or its Contributor Version. - -2.2. Effective Date - - The licenses granted in Section 2.1 with respect to any Contribution - become effective for each Contribution on the date the Contributor first - distributes such Contribution. - -2.3. Limitations on Grant Scope - - The licenses granted in this Section 2 are the only rights granted under - this License. No additional rights or licenses will be implied from the - distribution or licensing of Covered Software under this License. - Notwithstanding Section 2.1(b) above, no patent license is granted by a - Contributor: - - a. for any code that a Contributor has removed from Covered Software; or - - b. for infringements caused by: (i) Your and any other third party's - modifications of Covered Software, or (ii) the combination of its - Contributions with other software (except as part of its Contributor - Version); or - - c. under Patent Claims infringed by Covered Software in the absence of - its Contributions. - - This License does not grant any rights in the trademarks, service marks, - or logos of any Contributor (except as may be necessary to comply with - the notice requirements in Section 3.4). - -2.4. Subsequent Licenses - - No Contributor makes additional grants as a result of Your choice to - distribute the Covered Software under a subsequent version of this - License (see Section 10.2) or under the terms of a Secondary License (if - permitted under the terms of Section 3.3). - -2.5. Representation - - Each Contributor represents that the Contributor believes its - Contributions are its original creation(s) or it has sufficient rights to - grant the rights to its Contributions conveyed by this License. - -2.6. 
Fair Use - - This License is not intended to limit any rights You have under - applicable copyright doctrines of fair use, fair dealing, or other - equivalents. - -2.7. Conditions - - Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in - Section 2.1. - - -3. Responsibilities - -3.1. Distribution of Source Form - - All distribution of Covered Software in Source Code Form, including any - Modifications that You create or to which You contribute, must be under - the terms of this License. You must inform recipients that the Source - Code Form of the Covered Software is governed by the terms of this - License, and how they can obtain a copy of this License. You may not - attempt to alter or restrict the recipients' rights in the Source Code - Form. - -3.2. Distribution of Executable Form - - If You distribute Covered Software in Executable Form then: - - a. such Covered Software must also be made available in Source Code Form, - as described in Section 3.1, and You must inform recipients of the - Executable Form how they can obtain a copy of such Source Code Form by - reasonable means in a timely manner, at a charge no more than the cost - of distribution to the recipient; and - - b. You may distribute such Executable Form under the terms of this - License, or sublicense it under different terms, provided that the - license for the Executable Form does not attempt to limit or alter the - recipients' rights in the Source Code Form under this License. - -3.3. Distribution of a Larger Work - - You may create and distribute a Larger Work under terms of Your choice, - provided that You also comply with the requirements of this License for - the Covered Software. If the Larger Work is a combination of Covered - Software with a work governed by one or more Secondary Licenses, and the - Covered Software is not Incompatible With Secondary Licenses, this - License permits You to additionally distribute such Covered Software - under the terms of such Secondary License(s), so that the recipient of - the Larger Work may, at their option, further distribute the Covered - Software under the terms of either this License or such Secondary - License(s). - -3.4. Notices - - You may not remove or alter the substance of any license notices - (including copyright notices, patent notices, disclaimers of warranty, or - limitations of liability) contained within the Source Code Form of the - Covered Software, except that You may alter any license notices to the - extent required to remedy known factual inaccuracies. - -3.5. Application of Additional Terms - - You may choose to offer, and to charge a fee for, warranty, support, - indemnity or liability obligations to one or more recipients of Covered - Software. However, You may do so only on Your own behalf, and not on - behalf of any Contributor. You must make it absolutely clear that any - such warranty, support, indemnity, or liability obligation is offered by - You alone, and You hereby agree to indemnify every Contributor for any - liability incurred by such Contributor as a result of warranty, support, - indemnity or liability terms You offer. You may include additional - disclaimers of warranty and limitations of liability specific to any - jurisdiction. - -4. 
Inability to Comply Due to Statute or Regulation - - If it is impossible for You to comply with any of the terms of this License - with respect to some or all of the Covered Software due to statute, - judicial order, or regulation then You must: (a) comply with the terms of - this License to the maximum extent possible; and (b) describe the - limitations and the code they affect. Such description must be placed in a - text file included with all distributions of the Covered Software under - this License. Except to the extent prohibited by statute or regulation, - such description must be sufficiently detailed for a recipient of ordinary - skill to be able to understand it. - -5. Termination - -5.1. The rights granted under this License will terminate automatically if You - fail to comply with any of its terms. However, if You become compliant, - then the rights granted under this License from a particular Contributor - are reinstated (a) provisionally, unless and until such Contributor - explicitly and finally terminates Your grants, and (b) on an ongoing - basis, if such Contributor fails to notify You of the non-compliance by - some reasonable means prior to 60 days after You have come back into - compliance. Moreover, Your grants from a particular Contributor are - reinstated on an ongoing basis if such Contributor notifies You of the - non-compliance by some reasonable means, this is the first time You have - received notice of non-compliance with this License from such - Contributor, and You become compliant prior to 30 days after Your receipt - of the notice. - -5.2. If You initiate litigation against any entity by asserting a patent - infringement claim (excluding declaratory judgment actions, - counter-claims, and cross-claims) alleging that a Contributor Version - directly or indirectly infringes any patent, then the rights granted to - You by any and all Contributors for the Covered Software under Section - 2.1 of this License shall terminate. - -5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user - license agreements (excluding distributors and resellers) which have been - validly granted by You or Your distributors under this License prior to - termination shall survive termination. - -6. Disclaimer of Warranty - - Covered Software is provided under this License on an "as is" basis, - without warranty of any kind, either expressed, implied, or statutory, - including, without limitation, warranties that the Covered Software is free - of defects, merchantable, fit for a particular purpose or non-infringing. - The entire risk as to the quality and performance of the Covered Software - is with You. Should any Covered Software prove defective in any respect, - You (not any Contributor) assume the cost of any necessary servicing, - repair, or correction. This disclaimer of warranty constitutes an essential - part of this License. No use of any Covered Software is authorized under - this License except under this disclaimer. - -7. 
Limitation of Liability - - Under no circumstances and under no legal theory, whether tort (including - negligence), contract, or otherwise, shall any Contributor, or anyone who - distributes Covered Software as permitted above, be liable to You for any - direct, indirect, special, incidental, or consequential damages of any - character including, without limitation, damages for lost profits, loss of - goodwill, work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses, even if such party shall have been - informed of the possibility of such damages. This limitation of liability - shall not apply to liability for death or personal injury resulting from - such party's negligence to the extent applicable law prohibits such - limitation. Some jurisdictions do not allow the exclusion or limitation of - incidental or consequential damages, so this exclusion and limitation may - not apply to You. - -8. Litigation - - Any litigation relating to this License may be brought only in the courts - of a jurisdiction where the defendant maintains its principal place of - business and such litigation shall be governed by laws of that - jurisdiction, without reference to its conflict-of-law provisions. Nothing - in this Section shall prevent a party's ability to bring cross-claims or - counter-claims. - -9. Miscellaneous - - This License represents the complete agreement concerning the subject - matter hereof. If any provision of this License is held to be - unenforceable, such provision shall be reformed only to the extent - necessary to make it enforceable. Any law or regulation which provides that - the language of a contract shall be construed against the drafter shall not - be used to construe this License against a Contributor. - - -10. Versions of the License - -10.1. New Versions - - Mozilla Foundation is the license steward. Except as provided in Section - 10.3, no one other than the license steward has the right to modify or - publish new versions of this License. Each version will be given a - distinguishing version number. - -10.2. Effect of New Versions - - You may distribute the Covered Software under the terms of the version - of the License under which You originally received the Covered Software, - or under the terms of any subsequent version published by the license - steward. - -10.3. Modified Versions - - If you create software not governed by this License, and you want to - create a new license for such software, you may create and use a - modified version of this License if you rename the license and remove - any references to the name of the license steward (except to note that - such modified license differs from this License). - -10.4. Distributing Source Code Form that is Incompatible With Secondary - Licenses If You choose to distribute Source Code Form that is - Incompatible With Secondary Licenses under the terms of this version of - the License, the notice described in Exhibit B of this License must be - attached. - -Exhibit A - Source Code Form License Notice - - This Source Code Form is subject to the - terms of the Mozilla Public License, v. - 2.0. If a copy of the MPL was not - distributed with this file, You can - obtain one at - http://mozilla.org/MPL/2.0/. - -If it is not possible or desirable to put the notice in a particular file, -then You may include the notice in a location (such as a LICENSE file in a -relevant directory) where a recipient would be likely to look for such a -notice. 
- -You may add additional accurate notices of copyright ownership. - -Exhibit B - "Incompatible With Secondary Licenses" Notice - - This Source Code Form is "Incompatible - With Secondary Licenses", as defined by - the Mozilla Public License, v. 2.0. diff --git a/licenses/github.com/howeyc/gopass/LICENSE.txt b/licenses/github.com/howeyc/gopass/LICENSE.txt deleted file mode 100644 index 14f74708a4..0000000000 --- a/licenses/github.com/howeyc/gopass/LICENSE.txt +++ /dev/null @@ -1,15 +0,0 @@ -ISC License - -Copyright (c) 2012 Chris Howey - -Permission to use, copy, modify, and distribute this software for any -purpose with or without fee is hereby granted, provided that the above -copyright notice and this permission notice appear in all copies. - -THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES -WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR -ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF -OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. diff --git a/licenses/github.com/howeyc/gopass/OPENSOLARIS.LICENSE b/licenses/github.com/howeyc/gopass/OPENSOLARIS.LICENSE deleted file mode 100644 index da23621dc8..0000000000 --- a/licenses/github.com/howeyc/gopass/OPENSOLARIS.LICENSE +++ /dev/null @@ -1,384 +0,0 @@ -Unless otherwise noted, all files in this distribution are released -under the Common Development and Distribution License (CDDL). -Exceptions are noted within the associated source files. - --------------------------------------------------------------------- - - -COMMON DEVELOPMENT AND DISTRIBUTION LICENSE Version 1.0 - -1. Definitions. - - 1.1. "Contributor" means each individual or entity that creates - or contributes to the creation of Modifications. - - 1.2. "Contributor Version" means the combination of the Original - Software, prior Modifications used by a Contributor (if any), - and the Modifications made by that particular Contributor. - - 1.3. "Covered Software" means (a) the Original Software, or (b) - Modifications, or (c) the combination of files containing - Original Software with files containing Modifications, in - each case including portions thereof. - - 1.4. "Executable" means the Covered Software in any form other - than Source Code. - - 1.5. "Initial Developer" means the individual or entity that first - makes Original Software available under this License. - - 1.6. "Larger Work" means a work which combines Covered Software or - portions thereof with code not governed by the terms of this - License. - - 1.7. "License" means this document. - - 1.8. "Licensable" means having the right to grant, to the maximum - extent possible, whether at the time of the initial grant or - subsequently acquired, any and all of the rights conveyed - herein. - - 1.9. "Modifications" means the Source Code and Executable form of - any of the following: - - A. Any file that results from an addition to, deletion from or - modification of the contents of a file containing Original - Software or previous Modifications; - - B. Any new file that contains any part of the Original - Software or previous Modifications; or - - C. Any new file that is contributed or otherwise made - available under the terms of this License. - - 1.10. 
"Original Software" means the Source Code and Executable - form of computer software code that is originally released - under this License. - - 1.11. "Patent Claims" means any patent claim(s), now owned or - hereafter acquired, including without limitation, method, - process, and apparatus claims, in any patent Licensable by - grantor. - - 1.12. "Source Code" means (a) the common form of computer software - code in which modifications are made and (b) associated - documentation included in or with such code. - - 1.13. "You" (or "Your") means an individual or a legal entity - exercising rights under, and complying with all of the terms - of, this License. For legal entities, "You" includes any - entity which controls, is controlled by, or is under common - control with You. For purposes of this definition, - "control" means (a) the power, direct or indirect, to cause - the direction or management of such entity, whether by - contract or otherwise, or (b) ownership of more than fifty - percent (50%) of the outstanding shares or beneficial - ownership of such entity. - -2. License Grants. - - 2.1. The Initial Developer Grant. - - Conditioned upon Your compliance with Section 3.1 below and - subject to third party intellectual property claims, the Initial - Developer hereby grants You a world-wide, royalty-free, - non-exclusive license: - - (a) under intellectual property rights (other than patent or - trademark) Licensable by Initial Developer, to use, - reproduce, modify, display, perform, sublicense and - distribute the Original Software (or portions thereof), - with or without Modifications, and/or as part of a Larger - Work; and - - (b) under Patent Claims infringed by the making, using or - selling of Original Software, to make, have made, use, - practice, sell, and offer for sale, and/or otherwise - dispose of the Original Software (or portions thereof). - - (c) The licenses granted in Sections 2.1(a) and (b) are - effective on the date Initial Developer first distributes - or otherwise makes the Original Software available to a - third party under the terms of this License. - - (d) Notwithstanding Section 2.1(b) above, no patent license is - granted: (1) for code that You delete from the Original - Software, or (2) for infringements caused by: (i) the - modification of the Original Software, or (ii) the - combination of the Original Software with other software - or devices. - - 2.2. Contributor Grant. - - Conditioned upon Your compliance with Section 3.1 below and - subject to third party intellectual property claims, each - Contributor hereby grants You a world-wide, royalty-free, - non-exclusive license: - - (a) under intellectual property rights (other than patent or - trademark) Licensable by Contributor to use, reproduce, - modify, display, perform, sublicense and distribute the - Modifications created by such Contributor (or portions - thereof), either on an unmodified basis, with other - Modifications, as Covered Software and/or as part of a - Larger Work; and - - (b) under Patent Claims infringed by the making, using, or - selling of Modifications made by that Contributor either - alone and/or in combination with its Contributor Version - (or portions of such combination), to make, use, sell, - offer for sale, have made, and/or otherwise dispose of: - (1) Modifications made by that Contributor (or portions - thereof); and (2) the combination of Modifications made by - that Contributor with its Contributor Version (or portions - of such combination). 
- - (c) The licenses granted in Sections 2.2(a) and 2.2(b) are - effective on the date Contributor first distributes or - otherwise makes the Modifications available to a third - party. - - (d) Notwithstanding Section 2.2(b) above, no patent license is - granted: (1) for any code that Contributor has deleted - from the Contributor Version; (2) for infringements caused - by: (i) third party modifications of Contributor Version, - or (ii) the combination of Modifications made by that - Contributor with other software (except as part of the - Contributor Version) or other devices; or (3) under Patent - Claims infringed by Covered Software in the absence of - Modifications made by that Contributor. - -3. Distribution Obligations. - - 3.1. Availability of Source Code. - - Any Covered Software that You distribute or otherwise make - available in Executable form must also be made available in Source - Code form and that Source Code form must be distributed only under - the terms of this License. You must include a copy of this - License with every copy of the Source Code form of the Covered - Software You distribute or otherwise make available. You must - inform recipients of any such Covered Software in Executable form - as to how they can obtain such Covered Software in Source Code - form in a reasonable manner on or through a medium customarily - used for software exchange. - - 3.2. Modifications. - - The Modifications that You create or to which You contribute are - governed by the terms of this License. You represent that You - believe Your Modifications are Your original creation(s) and/or - You have sufficient rights to grant the rights conveyed by this - License. - - 3.3. Required Notices. - - You must include a notice in each of Your Modifications that - identifies You as the Contributor of the Modification. You may - not remove or alter any copyright, patent or trademark notices - contained within the Covered Software, or any notices of licensing - or any descriptive text giving attribution to any Contributor or - the Initial Developer. - - 3.4. Application of Additional Terms. - - You may not offer or impose any terms on any Covered Software in - Source Code form that alters or restricts the applicable version - of this License or the recipients' rights hereunder. You may - choose to offer, and to charge a fee for, warranty, support, - indemnity or liability obligations to one or more recipients of - Covered Software. However, you may do so only on Your own behalf, - and not on behalf of the Initial Developer or any Contributor. - You must make it absolutely clear that any such warranty, support, - indemnity or liability obligation is offered by You alone, and You - hereby agree to indemnify the Initial Developer and every - Contributor for any liability incurred by the Initial Developer or - such Contributor as a result of warranty, support, indemnity or - liability terms You offer. - - 3.5. Distribution of Executable Versions. - - You may distribute the Executable form of the Covered Software - under the terms of this License or under the terms of a license of - Your choice, which may contain terms different from this License, - provided that You are in compliance with the terms of this License - and that the license for the Executable form does not attempt to - limit or alter the recipient's rights in the Source Code form from - the rights set forth in this License. 
If You distribute the - Covered Software in Executable form under a different license, You - must make it absolutely clear that any terms which differ from - this License are offered by You alone, not by the Initial - Developer or Contributor. You hereby agree to indemnify the - Initial Developer and every Contributor for any liability incurred - by the Initial Developer or such Contributor as a result of any - such terms You offer. - - 3.6. Larger Works. - - You may create a Larger Work by combining Covered Software with - other code not governed by the terms of this License and - distribute the Larger Work as a single product. In such a case, - You must make sure the requirements of this License are fulfilled - for the Covered Software. - -4. Versions of the License. - - 4.1. New Versions. - - Sun Microsystems, Inc. is the initial license steward and may - publish revised and/or new versions of this License from time to - time. Each version will be given a distinguishing version number. - Except as provided in Section 4.3, no one other than the license - steward has the right to modify this License. - - 4.2. Effect of New Versions. - - You may always continue to use, distribute or otherwise make the - Covered Software available under the terms of the version of the - License under which You originally received the Covered Software. - If the Initial Developer includes a notice in the Original - Software prohibiting it from being distributed or otherwise made - available under any subsequent version of the License, You must - distribute and make the Covered Software available under the terms - of the version of the License under which You originally received - the Covered Software. Otherwise, You may also choose to use, - distribute or otherwise make the Covered Software available under - the terms of any subsequent version of the License published by - the license steward. - - 4.3. Modified Versions. - - When You are an Initial Developer and You want to create a new - license for Your Original Software, You may create and use a - modified version of this License if You: (a) rename the license - and remove any references to the name of the license steward - (except to note that the license differs from this License); and - (b) otherwise make it clear that the license contains terms which - differ from this License. - -5. DISCLAIMER OF WARRANTY. - - COVERED SOFTWARE IS PROVIDED UNDER THIS LICENSE ON AN "AS IS" - BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, - INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COVERED - SOFTWARE IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR - PURPOSE OR NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND - PERFORMANCE OF THE COVERED SOFTWARE IS WITH YOU. SHOULD ANY - COVERED SOFTWARE PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE - INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE COST OF ANY - NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER OF - WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF - ANY COVERED SOFTWARE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS - DISCLAIMER. - -6. TERMINATION. - - 6.1. This License and the rights granted hereunder will terminate - automatically if You fail to comply with terms herein and fail to - cure such breach within 30 days of becoming aware of the breach. - Provisions which, by their nature, must remain in effect beyond - the termination of this License shall survive. - - 6.2. 
If You assert a patent infringement claim (excluding - declaratory judgment actions) against Initial Developer or a - Contributor (the Initial Developer or Contributor against whom You - assert such claim is referred to as "Participant") alleging that - the Participant Software (meaning the Contributor Version where - the Participant is a Contributor or the Original Software where - the Participant is the Initial Developer) directly or indirectly - infringes any patent, then any and all rights granted directly or - indirectly to You by such Participant, the Initial Developer (if - the Initial Developer is not the Participant) and all Contributors - under Sections 2.1 and/or 2.2 of this License shall, upon 60 days - notice from Participant terminate prospectively and automatically - at the expiration of such 60 day notice period, unless if within - such 60 day period You withdraw Your claim with respect to the - Participant Software against such Participant either unilaterally - or pursuant to a written agreement with Participant. - - 6.3. In the event of termination under Sections 6.1 or 6.2 above, - all end user licenses that have been validly granted by You or any - distributor hereunder prior to termination (excluding licenses - granted to You by any distributor) shall survive termination. - -7. LIMITATION OF LIABILITY. - - UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT - (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE - INITIAL DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF - COVERED SOFTWARE, OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE - LIABLE TO ANY PERSON FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR - CONSEQUENTIAL DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT - LIMITATION, DAMAGES FOR LOST PROFITS, LOSS OF GOODWILL, WORK - STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER - COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN - INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF - LIABILITY SHALL NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL - INJURY RESULTING FROM SUCH PARTY'S NEGLIGENCE TO THE EXTENT - APPLICABLE LAW PROHIBITS SUCH LIMITATION. SOME JURISDICTIONS DO - NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR - CONSEQUENTIAL DAMAGES, SO THIS EXCLUSION AND LIMITATION MAY NOT - APPLY TO YOU. - -8. U.S. GOVERNMENT END USERS. - - The Covered Software is a "commercial item," as that term is - defined in 48 C.F.R. 2.101 (Oct. 1995), consisting of "commercial - computer software" (as that term is defined at 48 - C.F.R. 252.227-7014(a)(1)) and "commercial computer software - documentation" as such terms are used in 48 C.F.R. 12.212 - (Sept. 1995). Consistent with 48 C.F.R. 12.212 and 48 - C.F.R. 227.7202-1 through 227.7202-4 (June 1995), all - U.S. Government End Users acquire Covered Software with only those - rights set forth herein. This U.S. Government Rights clause is in - lieu of, and supersedes, any other FAR, DFAR, or other clause or - provision that addresses Government rights in computer software - under this License. - -9. MISCELLANEOUS. - - This License represents the complete agreement concerning subject - matter hereof. If any provision of this License is held to be - unenforceable, such provision shall be reformed only to the extent - necessary to make it enforceable. 
This License shall be governed - by the law of the jurisdiction specified in a notice contained - within the Original Software (except to the extent applicable law, - if any, provides otherwise), excluding such jurisdiction's - conflict-of-law provisions. Any litigation relating to this - License shall be subject to the jurisdiction of the courts located - in the jurisdiction and venue specified in a notice contained - within the Original Software, with the losing party responsible - for costs, including, without limitation, court costs and - reasonable attorneys' fees and expenses. The application of the - United Nations Convention on Contracts for the International Sale - of Goods is expressly excluded. Any law or regulation which - provides that the language of a contract shall be construed - against the drafter shall not apply to this License. You agree - that You alone are responsible for compliance with the United - States export administration regulations (and the export control - laws and regulation of any other countries) when You use, - distribute or otherwise make available any Covered Software. - -10. RESPONSIBILITY FOR CLAIMS. - - As between Initial Developer and the Contributors, each party is - responsible for claims and damages arising, directly or - indirectly, out of its utilization of rights under this License - and You agree to work with Initial Developer and Contributors to - distribute such responsibility on an equitable basis. Nothing - herein is intended or shall be deemed to constitute any admission - of liability. - --------------------------------------------------------------------- - -NOTICE PURSUANT TO SECTION 9 OF THE COMMON DEVELOPMENT AND -DISTRIBUTION LICENSE (CDDL) - -For Covered Software in this distribution, this License shall -be governed by the laws of the State of California (excluding -conflict-of-law provisions). - -Any litigation relating to this License shall be subject to the -jurisdiction of the Federal Courts of the Northern District of -California and the state courts of the State of California, with -venue lying in Santa Clara County, California. diff --git a/licenses/github.com/imdario/mergo/LICENSE b/licenses/github.com/imdario/mergo/LICENSE deleted file mode 100644 index 686680298d..0000000000 --- a/licenses/github.com/imdario/mergo/LICENSE +++ /dev/null @@ -1,28 +0,0 @@ -Copyright (c) 2013 Dario Castañé. All rights reserved. -Copyright (c) 2012 The Go Authors. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/github.com/inconshreveable/mousetrap/LICENSE b/licenses/github.com/inconshreveable/mousetrap/LICENSE deleted file mode 100644 index 5f0d1fb6a7..0000000000 --- a/licenses/github.com/inconshreveable/mousetrap/LICENSE +++ /dev/null @@ -1,13 +0,0 @@ -Copyright 2014 Alan Shreve - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. diff --git a/licenses/github.com/json-iterator/go/LICENSE b/licenses/github.com/json-iterator/go/LICENSE deleted file mode 100644 index 2cf4f5ab28..0000000000 --- a/licenses/github.com/json-iterator/go/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) 2016 json-iterator - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/licenses/github.com/juju/ratelimit/LICENSE b/licenses/github.com/juju/ratelimit/LICENSE deleted file mode 100644 index ade9307b39..0000000000 --- a/licenses/github.com/juju/ratelimit/LICENSE +++ /dev/null @@ -1,191 +0,0 @@ -All files in this repository are licensed as follows. If you contribute -to this repository, it is assumed that you license your contribution -under the same license unless you state otherwise. - -All files Copyright (C) 2015 Canonical Ltd. unless otherwise specified in the file. - -This software is licensed under the LGPLv3, included below. 
- -As a special exception to the GNU Lesser General Public License version 3 -("LGPL3"), the copyright holders of this Library give you permission to -convey to a third party a Combined Work that links statically or dynamically -to this Library without providing any Minimal Corresponding Source or -Minimal Application Code as set out in 4d or providing the installation -information set out in section 4e, provided that you comply with the other -provisions of LGPL3 and provided that you meet, for the Application the -terms and conditions of the license(s) which apply to the Application. - -Except as stated in this special exception, the provisions of LGPL3 will -continue to comply in full to this Library. If you modify this Library, you -may apply this exception to your version of this Library, but you are not -obliged to do so. If you do not wish to do so, delete this exception -statement from your version. This exception does not (and cannot) modify any -license terms which apply to the Application, with which you must still -comply. - - - GNU LESSER GENERAL PUBLIC LICENSE - Version 3, 29 June 2007 - - Copyright (C) 2007 Free Software Foundation, Inc. - Everyone is permitted to copy and distribute verbatim copies - of this license document, but changing it is not allowed. - - - This version of the GNU Lesser General Public License incorporates -the terms and conditions of version 3 of the GNU General Public -License, supplemented by the additional permissions listed below. - - 0. Additional Definitions. - - As used herein, "this License" refers to version 3 of the GNU Lesser -General Public License, and the "GNU GPL" refers to version 3 of the GNU -General Public License. - - "The Library" refers to a covered work governed by this License, -other than an Application or a Combined Work as defined below. - - An "Application" is any work that makes use of an interface provided -by the Library, but which is not otherwise based on the Library. -Defining a subclass of a class defined by the Library is deemed a mode -of using an interface provided by the Library. - - A "Combined Work" is a work produced by combining or linking an -Application with the Library. The particular version of the Library -with which the Combined Work was made is also called the "Linked -Version". - - The "Minimal Corresponding Source" for a Combined Work means the -Corresponding Source for the Combined Work, excluding any source code -for portions of the Combined Work that, considered in isolation, are -based on the Application, and not on the Linked Version. - - The "Corresponding Application Code" for a Combined Work means the -object code and/or source code for the Application, including any data -and utility programs needed for reproducing the Combined Work from the -Application, but excluding the System Libraries of the Combined Work. - - 1. Exception to Section 3 of the GNU GPL. - - You may convey a covered work under sections 3 and 4 of this License -without being bound by section 3 of the GNU GPL. - - 2. Conveying Modified Versions. 
- - If you modify a copy of the Library, and, in your modifications, a -facility refers to a function or data to be supplied by an Application -that uses the facility (other than as an argument passed when the -facility is invoked), then you may convey a copy of the modified -version: - - a) under this License, provided that you make a good faith effort to - ensure that, in the event an Application does not supply the - function or data, the facility still operates, and performs - whatever part of its purpose remains meaningful, or - - b) under the GNU GPL, with none of the additional permissions of - this License applicable to that copy. - - 3. Object Code Incorporating Material from Library Header Files. - - The object code form of an Application may incorporate material from -a header file that is part of the Library. You may convey such object -code under terms of your choice, provided that, if the incorporated -material is not limited to numerical parameters, data structure -layouts and accessors, or small macros, inline functions and templates -(ten or fewer lines in length), you do both of the following: - - a) Give prominent notice with each copy of the object code that the - Library is used in it and that the Library and its use are - covered by this License. - - b) Accompany the object code with a copy of the GNU GPL and this license - document. - - 4. Combined Works. - - You may convey a Combined Work under terms of your choice that, -taken together, effectively do not restrict modification of the -portions of the Library contained in the Combined Work and reverse -engineering for debugging such modifications, if you also do each of -the following: - - a) Give prominent notice with each copy of the Combined Work that - the Library is used in it and that the Library and its use are - covered by this License. - - b) Accompany the Combined Work with a copy of the GNU GPL and this license - document. - - c) For a Combined Work that displays copyright notices during - execution, include the copyright notice for the Library among - these notices, as well as a reference directing the user to the - copies of the GNU GPL and this license document. - - d) Do one of the following: - - 0) Convey the Minimal Corresponding Source under the terms of this - License, and the Corresponding Application Code in a form - suitable for, and under terms that permit, the user to - recombine or relink the Application with a modified version of - the Linked Version to produce a modified Combined Work, in the - manner specified by section 6 of the GNU GPL for conveying - Corresponding Source. - - 1) Use a suitable shared library mechanism for linking with the - Library. A suitable mechanism is one that (a) uses at run time - a copy of the Library already present on the user's computer - system, and (b) will operate properly with a modified version - of the Library that is interface-compatible with the Linked - Version. - - e) Provide Installation Information, but only if you would otherwise - be required to provide such information under section 6 of the - GNU GPL, and only to the extent that such information is - necessary to install and execute a modified version of the - Combined Work produced by recombining or relinking the - Application with a modified version of the Linked Version. (If - you use option 4d0, the Installation Information must accompany - the Minimal Corresponding Source and Corresponding Application - Code. 
If you use option 4d1, you must provide the Installation - Information in the manner specified by section 6 of the GNU GPL - for conveying Corresponding Source.) - - 5. Combined Libraries. - - You may place library facilities that are a work based on the -Library side by side in a single library together with other library -facilities that are not Applications and are not covered by this -License, and convey such a combined library under terms of your -choice, if you do both of the following: - - a) Accompany the combined library with a copy of the same work based - on the Library, uncombined with any other library facilities, - conveyed under the terms of this License. - - b) Give prominent notice with the combined library that part of it - is a work based on the Library, and explaining where to find the - accompanying uncombined form of the same work. - - 6. Revised Versions of the GNU Lesser General Public License. - - The Free Software Foundation may publish revised and/or new versions -of the GNU Lesser General Public License from time to time. Such new -versions will be similar in spirit to the present version, but may -differ in detail to address new problems or concerns. - - Each version is given a distinguishing version number. If the -Library as you received it specifies that a certain numbered version -of the GNU Lesser General Public License "or any later version" -applies to it, you have the option of following the terms and -conditions either of that published version or of any later version -published by the Free Software Foundation. If the Library as you -received it does not specify a version number of the GNU Lesser -General Public License, you may choose any version of the GNU Lesser -General Public License ever published by the Free Software Foundation. - - If the Library as you received it specifies that a proxy can decide -whether future versions of the GNU Lesser General Public License shall -apply, that proxy's public statement of acceptance of any version is -permanent authorization for you to choose that version for the -Library. diff --git a/licenses/github.com/lib/pq/LICENSE.md b/licenses/github.com/lib/pq/LICENSE.md deleted file mode 100644 index 5773904a30..0000000000 --- a/licenses/github.com/lib/pq/LICENSE.md +++ /dev/null @@ -1,8 +0,0 @@ -Copyright (c) 2011-2013, 'pq' Contributors -Portions Copyright (C) 2011 Blake Mizerany - -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
diff --git a/licenses/github.com/mailru/easyjson/LICENSE b/licenses/github.com/mailru/easyjson/LICENSE deleted file mode 100644 index fbff658f70..0000000000 --- a/licenses/github.com/mailru/easyjson/LICENSE +++ /dev/null @@ -1,7 +0,0 @@ -Copyright (c) 2016 Mail.Ru Group - -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/licenses/github.com/mattn/go-colorable/LICENSE b/licenses/github.com/mattn/go-colorable/LICENSE deleted file mode 100644 index 91b5cef30e..0000000000 --- a/licenses/github.com/mattn/go-colorable/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -The MIT License (MIT) - -Copyright (c) 2016 Yasuhiro Matsumoto - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/licenses/github.com/mattn/go-isatty/LICENSE b/licenses/github.com/mattn/go-isatty/LICENSE deleted file mode 100644 index 65dc692b6b..0000000000 --- a/licenses/github.com/mattn/go-isatty/LICENSE +++ /dev/null @@ -1,9 +0,0 @@ -Copyright (c) Yasuhiro MATSUMOTO - -MIT License (Expat) - -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
- -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/licenses/github.com/modern-go/concurrent/LICENSE b/licenses/github.com/modern-go/concurrent/LICENSE deleted file mode 100644 index 261eeb9e9f..0000000000 --- a/licenses/github.com/modern-go/concurrent/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. 
The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/github.com/modern-go/reflect2/LICENSE b/licenses/github.com/modern-go/reflect2/LICENSE deleted file mode 100644 index 261eeb9e9f..0000000000 --- a/licenses/github.com/modern-go/reflect2/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. 
For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/github.com/petar/GoLLRB/LICENSE b/licenses/github.com/petar/GoLLRB/LICENSE deleted file mode 100644 index b75312c787..0000000000 --- a/licenses/github.com/petar/GoLLRB/LICENSE +++ /dev/null @@ -1,27 +0,0 @@ -Copyright (c) 2010, Petar Maymounkov -All rights reserved. - -Redistribution and use in source and binary forms, with or without modification, -are permitted provided that the following conditions are met: - -(*) Redistributions of source code must retain the above copyright notice, this list -of conditions and the following disclaimer. - -(*) Redistributions in binary form must reproduce the above copyright notice, this -list of conditions and the following disclaimer in the documentation and/or -other materials provided with the distribution. - -(*) Neither the name of Petar Maymounkov nor the names of its contributors may be -used to endorse or promote products derived from this software without specific -prior written permission. 
- -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR -ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/github.com/peterbourgon/diskv/LICENSE b/licenses/github.com/peterbourgon/diskv/LICENSE deleted file mode 100644 index 41ce7f16e1..0000000000 --- a/licenses/github.com/peterbourgon/diskv/LICENSE +++ /dev/null @@ -1,19 +0,0 @@ -Copyright (c) 2011-2012 Peter Bourgon - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in -all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN -THE SOFTWARE. diff --git a/licenses/github.com/russross/blackfriday/LICENSE.txt b/licenses/github.com/russross/blackfriday/LICENSE.txt deleted file mode 100644 index 2885af3602..0000000000 --- a/licenses/github.com/russross/blackfriday/LICENSE.txt +++ /dev/null @@ -1,29 +0,0 @@ -Blackfriday is distributed under the Simplified BSD License: - -> Copyright © 2011 Russ Ross -> All rights reserved. -> -> Redistribution and use in source and binary forms, with or without -> modification, are permitted provided that the following conditions -> are met: -> -> 1. Redistributions of source code must retain the above copyright -> notice, this list of conditions and the following disclaimer. -> -> 2. Redistributions in binary form must reproduce the above -> copyright notice, this list of conditions and the following -> disclaimer in the documentation and/or other materials provided with -> the distribution. -> -> THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -> "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -> LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS -> FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE -> COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, -> INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, -> BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -> LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -> CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT -> LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN -> ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -> POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/github.com/sirupsen/logrus/LICENSE b/licenses/github.com/sirupsen/logrus/LICENSE deleted file mode 100644 index f090cb42f3..0000000000 --- a/licenses/github.com/sirupsen/logrus/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -The MIT License (MIT) - -Copyright (c) 2014 Simon Eskildsen - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in -all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN -THE SOFTWARE. diff --git a/licenses/github.com/spf13/cobra/LICENSE.txt b/licenses/github.com/spf13/cobra/LICENSE.txt deleted file mode 100644 index 298f0e2665..0000000000 --- a/licenses/github.com/spf13/cobra/LICENSE.txt +++ /dev/null @@ -1,174 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. 
- - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. diff --git a/licenses/github.com/spf13/pflag/LICENSE b/licenses/github.com/spf13/pflag/LICENSE deleted file mode 100644 index 63ed1cfea1..0000000000 --- a/licenses/github.com/spf13/pflag/LICENSE +++ /dev/null @@ -1,28 +0,0 @@ -Copyright (c) 2012 Alex Ogier. All rights reserved. -Copyright (c) 2012 The Go Authors. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/golang.org/x/crypto/LICENSE b/licenses/golang.org/x/crypto/LICENSE deleted file mode 100644 index 6a66aea5ea..0000000000 --- a/licenses/golang.org/x/crypto/LICENSE +++ /dev/null @@ -1,27 +0,0 @@ -Copyright (c) 2009 The Go Authors. All rights reserved. 
- -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/golang.org/x/net/LICENSE b/licenses/golang.org/x/net/LICENSE deleted file mode 100644 index 6a66aea5ea..0000000000 --- a/licenses/golang.org/x/net/LICENSE +++ /dev/null @@ -1,27 +0,0 @@ -Copyright (c) 2009 The Go Authors. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/golang.org/x/sys/LICENSE b/licenses/golang.org/x/sys/LICENSE deleted file mode 100644 index 6a66aea5ea..0000000000 --- a/licenses/golang.org/x/sys/LICENSE +++ /dev/null @@ -1,27 +0,0 @@ -Copyright (c) 2009 The Go Authors. All rights reserved. 
- -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/golang.org/x/text/LICENSE b/licenses/golang.org/x/text/LICENSE deleted file mode 100644 index 6a66aea5ea..0000000000 --- a/licenses/golang.org/x/text/LICENSE +++ /dev/null @@ -1,27 +0,0 @@ -Copyright (c) 2009 The Go Authors. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/gopkg.in/inf.v0/LICENSE b/licenses/gopkg.in/inf.v0/LICENSE deleted file mode 100644 index 87a5cede33..0000000000 --- a/licenses/gopkg.in/inf.v0/LICENSE +++ /dev/null @@ -1,28 +0,0 @@ -Copyright (c) 2012 Péter Surányi. Portions Copyright (c) 2009 The Go -Authors. All rights reserved. 
- -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are -met: - - * Redistributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above -copyright notice, this list of conditions and the following disclaimer -in the documentation and/or other materials provided with the -distribution. - * Neither the name of Google Inc. nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/licenses/gopkg.in/robfig/cron.v2/LICENSE b/licenses/gopkg.in/robfig/cron.v2/LICENSE deleted file mode 100644 index 3a0f627ffe..0000000000 --- a/licenses/gopkg.in/robfig/cron.v2/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -Copyright (C) 2012 Rob Figueiredo -All Rights Reserved. - -MIT LICENSE - -Permission is hereby granted, free of charge, to any person obtaining a copy of -this software and associated documentation files (the "Software"), to deal in -the Software without restriction, including without limitation the rights to -use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of -the Software, and to permit persons to whom the Software is furnished to do so, -subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS -FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR -COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER -IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/licenses/gopkg.in/yaml.v2/LICENSE b/licenses/gopkg.in/yaml.v2/LICENSE deleted file mode 100644 index 8dada3edaf..0000000000 --- a/licenses/gopkg.in/yaml.v2/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. 
- - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "{}" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright {yyyy} {name of copyright owner} - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/licenses/k8s.io/api/LICENSE b/licenses/k8s.io/api/LICENSE deleted file mode 100644 index d645695673..0000000000 --- a/licenses/k8s.io/api/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. 
Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/k8s.io/apimachinery/LICENSE b/licenses/k8s.io/apimachinery/LICENSE deleted file mode 100644 index d645695673..0000000000 --- a/licenses/k8s.io/apimachinery/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. 
The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/k8s.io/client-go/LICENSE b/licenses/k8s.io/client-go/LICENSE deleted file mode 100644 index d645695673..0000000000 --- a/licenses/k8s.io/client-go/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. 
- - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/licenses/k8s.io/kube-openapi/LICENSE b/licenses/k8s.io/kube-openapi/LICENSE deleted file mode 100644 index d645695673..0000000000 --- a/licenses/k8s.io/kube-openapi/LICENSE +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. 
For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. 
This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/pgo-backrest/pgo-backrest.go b/pgo-backrest/pgo-backrest.go deleted file mode 100644 index ee75dd743d..0000000000 --- a/pgo-backrest/pgo-backrest.go +++ /dev/null @@ -1,154 +0,0 @@ -package main - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "os" - "strconv" - "strings" - - "github.com/crunchydata/postgres-operator/internal/kubeapi" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - log "github.com/sirupsen/logrus" -) - -const backrestCommand = "pgbackrest" - -const backrestBackupCommand = `backup` -const backrestInfoCommand = `info` -const backrestStanzaCreateCommand = `stanza-create` -const containername = "database" -const repoTypeFlagS3 = "--repo1-type=s3" -const noRepoS3VerifyTLS = "--no-repo1-s3-verify-tls" - -func main() { - log.Info("pgo-backrest starts") - - debugFlag := os.Getenv("CRUNCHY_DEBUG") - if debugFlag == "true" { - log.SetLevel(log.DebugLevel) - log.Debug("debug flag set to true") - } else { - log.Info("debug flag set to false") - } - - Namespace := os.Getenv("NAMESPACE") - log.Debugf("setting NAMESPACE to %s", Namespace) - if Namespace == "" { - log.Error("NAMESPACE env var not set") - os.Exit(2) - } - - COMMAND := os.Getenv("COMMAND") - log.Debugf("setting COMMAND to %s", COMMAND) - if COMMAND == "" { - log.Error("COMMAND env var not set") - os.Exit(2) - } - - COMMAND_OPTS := os.Getenv("COMMAND_OPTS") - log.Debugf("setting COMMAND_OPTS to %s", COMMAND_OPTS) - - PODNAME := os.Getenv("PODNAME") - log.Debugf("setting PODNAME to %s", PODNAME) - if PODNAME == "" { - log.Error("PODNAME env var not set") - os.Exit(2) - } - - REPO_TYPE := os.Getenv("PGBACKREST_REPO_TYPE") - log.Debugf("setting REPO_TYPE to %s", REPO_TYPE) - - // determine the setting of PGHA_PGBACKREST_LOCAL_S3_STORAGE - // we will discard the error and treat the value as "false" if it is not - // explicitly set - PGHA_PGBACKREST_LOCAL_S3_STORAGE, _ := strconv.ParseBool(os.Getenv("PGHA_PGBACKREST_LOCAL_S3_STORAGE")) - log.Debugf("setting PGHA_PGBACKREST_LOCAL_S3_STORAGE to %v", PGHA_PGBACKREST_LOCAL_S3_STORAGE) - - // parse the environment variable and store the appropriate boolean value - // we will discard the error and treat the value as "false" if it is not - // explicitly set - PGHA_PGBACKREST_S3_VERIFY_TLS, _ := strconv.ParseBool(os.Getenv("PGHA_PGBACKREST_S3_VERIFY_TLS")) - log.Debugf("setting PGHA_PGBACKREST_S3_VERIFY_TLS to %v", PGHA_PGBACKREST_S3_VERIFY_TLS) - - client, err := kubeapi.NewClient() - if err != nil { - panic(err) - } - - bashcmd := make([]string, 1) - bashcmd[0] = "bash" - cmdStrs := make([]string, 0) - - switch COMMAND { - case crv1.PgtaskBackrestStanzaCreate: - log.Info("backrest stanza-create command requested") - cmdStrs = append(cmdStrs, backrestCommand) - cmdStrs = append(cmdStrs, backrestStanzaCreateCommand) - cmdStrs = append(cmdStrs, COMMAND_OPTS) - case crv1.PgtaskBackrestInfo: - log.Info("backrest info command requested") - cmdStrs = append(cmdStrs, backrestCommand) - 
cmdStrs = append(cmdStrs, backrestInfoCommand) - cmdStrs = append(cmdStrs, COMMAND_OPTS) - case crv1.PgtaskBackrestBackup: - log.Info("backrest backup command requested") - cmdStrs = append(cmdStrs, backrestCommand) - cmdStrs = append(cmdStrs, backrestBackupCommand) - cmdStrs = append(cmdStrs, COMMAND_OPTS) - default: - log.Error("unsupported backup command specified " + COMMAND) - os.Exit(2) - } - - if PGHA_PGBACKREST_LOCAL_S3_STORAGE { - firstCmd := cmdStrs - cmdStrs = append(cmdStrs, "&&") - cmdStrs = append(cmdStrs, strings.Join(firstCmd, " ")) - cmdStrs = append(cmdStrs, repoTypeFlagS3) - // pass in the flag to disable TLS verification, if set - // otherwise, maintain default behavior and verify TLS - if !PGHA_PGBACKREST_S3_VERIFY_TLS { - cmdStrs = append(cmdStrs, noRepoS3VerifyTLS) - } - log.Info("backrest command will be executed for both local and s3 storage") - } else if REPO_TYPE == "s3" { - cmdStrs = append(cmdStrs, repoTypeFlagS3) - // pass in the flag to disable TLS verification, if set - // otherwise, maintain default behavior and verify TLS - if !PGHA_PGBACKREST_S3_VERIFY_TLS { - cmdStrs = append(cmdStrs, noRepoS3VerifyTLS) - } - log.Info("s3 flag enabled for backrest command") - } - - log.Infof("command to execute is [%s]", strings.Join(cmdStrs, " ")) - - log.Infof("command is %s ", strings.Join(cmdStrs, " ")) - reader := strings.NewReader(strings.Join(cmdStrs, " ")) - output, stderr, err := kubeapi.ExecToPodThroughAPI(client.Config, client, bashcmd, containername, PODNAME, Namespace, reader) - if err != nil { - log.Info("output=[" + output + "]") - log.Info("stderr=[" + stderr + "]") - log.Error(err) - os.Exit(2) - } - log.Info("output=[" + output + "]") - log.Info("stderr=[" + stderr + "]") - - log.Info("pgo-backrest ends") - -} diff --git a/pgo-rmdata/README.txt b/pgo-rmdata/README.txt deleted file mode 100644 index 3361973ff1..0000000000 --- a/pgo-rmdata/README.txt +++ /dev/null @@ -1,6 +0,0 @@ - -you can test this program outside of a container like so: - -cd $PGOROOT - -go run ./pgo-rmdata/pgo-rmdata.go -pg-cluster=mycluster -replica-name= -namespace=mynamespace -remove-data=true -remove-backup=true -is-replica=false -is-backup=false diff --git a/pgo-rmdata/pgo-rmdata.go b/pgo-rmdata/pgo-rmdata.go deleted file mode 100644 index 40a3bf32fd..0000000000 --- a/pgo-rmdata/pgo-rmdata.go +++ /dev/null @@ -1,71 +0,0 @@ -package main - -/* -Copyright 2019 - 2020 Crunchy Data -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "flag" - "os" - - "github.com/crunchydata/postgres-operator/internal/kubeapi" - crunchylog "github.com/crunchydata/postgres-operator/internal/logging" - "github.com/crunchydata/postgres-operator/pgo-rmdata/rmdata" - log "github.com/sirupsen/logrus" -) - -var request rmdata.Request - -func main() { - request = rmdata.Request{ - RemoveData: false, - IsReplica: false, - IsBackup: false, - RemoveBackup: false, - ClusterName: "", - ClusterPGHAScope: "", - ReplicaName: "", - Namespace: "", - } - flag.BoolVar(&request.RemoveData, "remove-data", false, "") - flag.BoolVar(&request.IsReplica, "is-replica", false, "") - flag.BoolVar(&request.IsBackup, "is-backup", false, "") - flag.BoolVar(&request.RemoveBackup, "remove-backup", false, "") - flag.StringVar(&request.ClusterName, "pg-cluster", "", "") - flag.StringVar(&request.ClusterPGHAScope, "pgha-scope", "", "") - flag.StringVar(&request.ReplicaName, "replica-name", "", "") - flag.StringVar(&request.Namespace, "namespace", "", "") - flag.Parse() - - crunchylog.CrunchyLogger(crunchylog.SetParameters()) - if os.Getenv("CRUNCHY_DEBUG") == "true" { - log.SetLevel(log.DebugLevel) - log.Debug("debug flag set to true") - } else { - log.Info("debug flag set to false") - } - - client, err := kubeapi.NewClient() - if err != nil { - log.Fatalln(err.Error()) - } - - request.Clientset = client - - log.Infoln("pgo-rmdata starts") - log.Infof("request is %s", request.String()) - - rmdata.Delete(request) - -} diff --git a/pgo-rmdata/rmdata/process.go b/pgo-rmdata/rmdata/process.go deleted file mode 100644 index 90d135a59a..0000000000 --- a/pgo-rmdata/rmdata/process.go +++ /dev/null @@ -1,761 +0,0 @@ -package rmdata - -/* -Copyright 2019 - 2020 Crunchy Data -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "errors" - "fmt" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/util" - - log "github.com/sirupsen/logrus" - kerror "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -const ( - MAX_TRIES = 16 - pgBackRestPathFormat = "/backrestrepo/%s" - pgBackRestRepoPVC = "%s-pgbr-repo" - pgDumpPVCPrefix = "backup-%s-pgdump" - pgDataPathFormat = "/pgdata/%s" - tablespacePathFormat = "/tablespaces/%s/%s" - // the tablespace on a replcia follows the pattern "" - walReplicaPVCPattern = "%s-wal" - - // the following constants define the suffixes for the various configMaps created by Patroni - configConfigMapSuffix = "config" - leaderConfigMapSuffix = "leader" - failoverConfigMapSuffix = "failover" -) - -func Delete(request Request) { - log.Infof("rmdata.Process %v", request) - - // if, check to see if this is a full cluster removal...i.e. 
"IsReplica" - // and "IsBackup" is set to false - // - // if this is a full cluster removal, first disable autofailover - if !(request.IsReplica || request.IsBackup) { - log.Debug("disabling autofailover for cluster removal") - util.ToggleAutoFailover(request.Clientset, false, request.ClusterPGHAScope, request.Namespace) - } - - //the case of 'pgo scaledown' - if request.IsReplica { - log.Info("rmdata.Process scaledown replica use case") - removeReplicaServices(request) - pvcList, err := getReplicaPVC(request) - if err != nil { - log.Error(err) - } - //delete the pgreplica CRD - if err := request.Clientset. - CrunchydataV1().Pgreplicas(request.Namespace). - Delete(request.ReplicaName, &metav1.DeleteOptions{}); err != nil { - // If the name of the replica being deleted matches the scope for the cluster, then - // we assume it was the original primary and the pgreplica deletion will fail with - // a not found error. In this case we allow the rmdata process to continue despite - // the error. This allows for the original primary to be scaled down once it is - // is no longer a primary, and has become a replica. - if !(request.ReplicaName == request.ClusterPGHAScope && kerror.IsNotFound(err)) { - log.Error(err) - return - } - log.Debug("replica name matches PGHA scope, assuming scale down of original primary" + - "and therefore ignoring error attempting to delete nonexistent pgreplica") - } - - err = removeReplica(request) - if err != nil { - log.Error(err) - } - - if request.RemoveData { - removePVCs(pvcList, request) - } - - //scale down is its own use case so we leave when done - return - } - - if request.IsBackup { - log.Info("rmdata.Process backup use case") - //the case of removing a backup using `pgo delete backup`, only applies to - // "backup-type=pgdump" - removeBackupJobs(request) - removeLogicalBackupPVCs(request) - // this is the special case of removing an ad hoc backup removal, so we can - // exit here - return - } - - log.Info("rmdata.Process cluster use case") - - // first, clear out any of the scheduled jobs that may occur, as this would be - // executing asynchronously against any stale data - removeSchedules(request) - - //the user had done something like: - //pgo delete cluster mycluster --delete-data - if request.RemoveData { - removeUserSecrets(request) - } - - //handle the case of 'pgo delete cluster mycluster' - removeCluster(request) - if err := request.Clientset. - CrunchydataV1().Pgclusters(request.Namespace). - Delete(request.ClusterName, &metav1.DeleteOptions{}); err != nil { - log.Error(err) - } - removeServices(request) - removeAddons(request) - removePgreplicas(request) - removePgtasks(request) - removeClusterConfigmaps(request) - //removeClusterJobs(request) - if request.RemoveData { - if pvcList, err := getInstancePVCs(request); err != nil { - log.Error(err) - } else { - log.Debugf("rmdata pvc list: [%v]", pvcList) - - removePVCs(pvcList, request) - } - } - - // backups have to be the last thing we remove. We want to ensure that all - // the clusters (well, really, the primary) have stopped. This means that no - // more WAL archives are being pushed, and at this point it is safe for us to - // remove the pgBackRest repo if we have opted to remove all of the backups. 
- // - // Regardless of the choice the user made, we want to remove all of the - // backup jobs, as those take up space - removeBackupJobs(request) - // Now, even though it appears we are removing the pgBackRest repo here, we - // are **not** removing the physical data unless request.RemoveBackup is true. - // In that case, only the deployment/services for the pgBackRest repo are - // removed - removeBackrestRepo(request) - // now, check to see if the user wants the remainder of the physical data and - // PVCs to be removed - if request.RemoveBackup { - removeBackupSecrets(request) - removeAllBackupPVCs(request) - } -} - -// removeBackRestRepo removes the pgBackRest repo that is associated with the -// PostgreSQL cluster -func removeBackrestRepo(request Request) { - deploymentName := fmt.Sprintf("%s-backrest-shared-repo", request.ClusterName) - - log.Debugf("deleting the pgbackrest repo [%s]", deploymentName) - - // now delete the deployment and services - deletePropagation := metav1.DeletePropagationForeground - err := request.Clientset. - AppsV1().Deployments(request.Namespace). - Delete(deploymentName, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err != nil { - log.Error(err) - } - - //delete the service for the backrest repo - err = request.Clientset. - CoreV1().Services(request.Namespace). - Delete(deploymentName, &metav1.DeleteOptions{}) - if err != nil { - log.Error(err) - } -} - -// removeAllBackupPVCs removes all of the PVCs associated with any kind of -// backup -func removeAllBackupPVCs(request Request) { - // first, ensure that logical backups are removed - removeLogicalBackupPVCs(request) - // finally, we will remove the pgBackRest repo PVC...or PVCs? - removePgBackRestRepoPVCs(request) -} - -// removeBackupSecrets removes any secrets that are associated with backups -// for this cluster, in particular, the secret that is used by the pgBackRest -// repository that is available for this cluster. -func removeBackupSecrets(request Request) { - // first, derive the secrename of the pgBackRest repo, which is the - // "`clusterName`-`LABEL_BACKREST_REPO_SECRET`" - secretName := fmt.Sprintf("%s-%s", - request.ClusterName, config.LABEL_BACKREST_REPO_SECRET) - log.Debugf("removeBackupSecrets: %s", secretName) - - // we can attempt to delete the secret directly without making any further - // API calls. Even if we did a "get", there could still be a race with some - // independent process (e.g. an external user) deleting the secret before we - // get to it. The main goal is to have the secret deleted - // - // we'll also check to see if there was an error, but if there is we'll only - // log the fact there was an error; this function is just a pass through - if err := request.Clientset.CoreV1().Secrets(request.Namespace).Delete(secretName, &metav1.DeleteOptions{}); err != nil { - log.Error(err) - } - - // and done! - return -} - -// removeClusterConfigmaps deletes the configmaps that are created for each -// cluster. 
The first two are created by Patroni when it initializes a new cluster: -// -leader (stores data pertinent to the leader election process) -// -config (stores global/cluster-wide configuration settings) -// Additionally, the Postgres Operator also creates a configMap for each cluster -// containing a default Patroni configuration file: -// -pgha-config (stores a Patroni config file in YAML format) -func removeClusterConfigmaps(request Request) { - // Store the derived names of the three configmaps in an array - clusterConfigmaps := []string{ - // first, derive the name of the PG HA default configmap, which is - // "`clusterName`-`LABEL_PGHA_CONFIGMAP`" - fmt.Sprintf("%s-%s", request.ClusterName, config.LABEL_PGHA_CONFIGMAP), - // next, the name of the leader configmap, which is - // "`clusterName`-leader" - fmt.Sprintf("%s-%s", request.ClusterName, leaderConfigMapSuffix), - // next, the name of the general configuration settings configmap, which is - // "`clusterName`-config" - fmt.Sprintf("%s-%s", request.ClusterName, configConfigMapSuffix), - // next, the name of the failover configmap, which is - // "`clusterName`-failover" - fmt.Sprintf("%s-%s", request.ClusterName, failoverConfigMapSuffix), - // finally, if there is a pgbouncer, remove the pgbouncer configmap - util.GeneratePgBouncerConfigMapName(request.ClusterName), - } - - // As with similar resources, we can attempt to delete the configmaps directly without - // making any further API calls since the goal is simply to delete the configmap. Race - // conditions are more or less unavoidable but should not cause any additional problems. - // We'll also check to see if there was an error, but if there is we'll only - // log the fact there was an error; this function is just a pass through - for _, cm := range clusterConfigmaps { - if err := request.Clientset.CoreV1().ConfigMaps(request.Namespace).Delete(cm, &metav1.DeleteOptions{}); err != nil && !kerror.IsNotFound(err) { - log.Error(err) - } - } -} - -func removeClusterJobs(request Request) { - selector := config.LABEL_PG_CLUSTER + "=" + request.ClusterName - jobs, err := request.Clientset. - BatchV1().Jobs(request.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - return - } - for i := 0; i < len(jobs.Items); i++ { - deletePropagation := metav1.DeletePropagationForeground - err := request.Clientset. - BatchV1().Jobs(request.Namespace). - Delete(jobs.Items[i].Name, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err != nil { - log.Error(err) - } - } -} - -// removeCluster removes the cluster deployments EXCEPT for the pgBackRest repo -func removeCluster(request Request) { - // ensure we are deleting every deployment EXCEPT for the pgBackRest repo, - // which needs to happen in a separate step to ensure we clear out all the - // data - selector := fmt.Sprintf("%s=%s,%s!=true", - config.LABEL_PG_CLUSTER, request.ClusterName, config.LABEL_PGO_BACKREST_REPO) - - deployments, err := request.Clientset. - AppsV1().Deployments(request.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - - // if there is an error here, return as we cannot iterate over the deployment - // list - if err != nil { - log.Error(err) - return - } - - // iterate through each deployment and delete it - for _, d := range deployments.Items { - deletePropagation := metav1.DeletePropagationForeground - err := request.Clientset. - AppsV1().Deployments(request.Namespace). 
- Delete(d.Name, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err != nil { - log.Error(err) - } - } - - // this was here before...this looks like it ensures that deployments are - // deleted. the only thing I'm modifying is the selector - var completed bool - for i := 0; i < MAX_TRIES; i++ { - deployments, err := request.Clientset. - AppsV1().Deployments(request.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - } - if len(deployments.Items) > 0 { - log.Info("sleeping to wait for Deployments to fully terminate") - time.Sleep(time.Second * time.Duration(4)) - } else { - completed = true - } - } - if !completed { - log.Error("could not terminate all cluster deployments") - } -} -func removeReplica(request Request) error { - - deletePropagation := metav1.DeletePropagationForeground - err := request.Clientset. - AppsV1().Deployments(request.Namespace). - Delete(request.ReplicaName, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err != nil { - log.Error(err) - return err - } - - //wait for the deployment to go away fully - var completed bool - for i := 0; i < MAX_TRIES; i++ { - _, err = request.Clientset. - AppsV1().Deployments(request.Namespace). - Get(request.ReplicaName, metav1.GetOptions{}) - if err == nil { - log.Info("sleeping to wait for Deployments to fully terminate") - time.Sleep(time.Second * time.Duration(4)) - } else { - completed = true - break - } - } - if !completed { - return errors.New("could not delete replica deployment within max tries") - } - return nil -} - -func removeUserSecrets(request Request) { - //get all that match pg-cluster=db - selector := config.LABEL_PG_CLUSTER + "=" + request.ClusterName - - secrets, err := request.Clientset. - CoreV1().Secrets(request.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - return - } - - for _, s := range secrets.Items { - if s.ObjectMeta.Labels[config.LABEL_PGO_BACKREST_REPO] == "" { - err := request.Clientset.CoreV1().Secrets(request.Namespace).Delete(s.ObjectMeta.Name, &metav1.DeleteOptions{}) - if err != nil { - log.Error(err) - } - } - } - -} - -func removeAddons(request Request) { - //remove pgbouncer - - pgbouncerDepName := request.ClusterName + "-pgbouncer" - - deletePropagation := metav1.DeletePropagationForeground - _ = request.Clientset. - AppsV1().Deployments(request.Namespace). - Delete(pgbouncerDepName, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - - //delete the service name=-pgbouncer - - _ = request.Clientset. - CoreV1().Services(request.Namespace). - Delete(pgbouncerDepName, &metav1.DeleteOptions{}) -} - -func removeServices(request Request) { - - //remove any service for this cluster - - selector := config.LABEL_PG_CLUSTER + "=" + request.ClusterName - - services, err := request.Clientset. - CoreV1().Services(request.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - return - } - - for i := 0; i < len(services.Items); i++ { - err := request.Clientset. - CoreV1().Services(request.Namespace). 
- Delete(services.Items[i].Name, &metav1.DeleteOptions{}) - if err != nil { - log.Error(err) - } - } - -} - -func removePgreplicas(request Request) { - - //get a list of pgreplicas for this cluster - replicaList, err := request.Clientset.CrunchydataV1().Pgreplicas(request.Namespace).List(metav1.ListOptions{ - LabelSelector: config.LABEL_PG_CLUSTER + "=" + request.ClusterName, - }) - if err != nil { - log.Error(err) - return - } - - log.Debugf("pgreplicas found len is %d\n", len(replicaList.Items)) - - for _, r := range replicaList.Items { - if err := request.Clientset. - CrunchydataV1().Pgreplicas(request.Namespace). - Delete(r.Spec.Name, &metav1.DeleteOptions{}); err != nil { - log.Warn(err) - } - } - -} - -func removePgtasks(request Request) { - - //get a list of pgtasks for this cluster - taskList, err := request.Clientset. - CrunchydataV1().Pgtasks(request.Namespace). - List(metav1.ListOptions{LabelSelector: config.LABEL_PG_CLUSTER + "=" + request.ClusterName}) - if err != nil { - log.Error(err) - return - } - - log.Debugf("pgtasks to remove is %d\n", len(taskList.Items)) - - for _, r := range taskList.Items { - if err := request.Clientset.CrunchydataV1().Pgtasks(request.Namespace).Delete(r.Spec.Name, &metav1.DeleteOptions{}); err != nil { - log.Warn(err) - } - } - -} - -// getInstancePVCs gets all the PVCs that are associated with PostgreSQL -// instances (at least to the best of our knowledge) -func getInstancePVCs(request Request) ([]string, error) { - pvcList := make([]string, 0) - selector := fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, request.ClusterName) - pgDump, pgBackRest := fmt.Sprintf(pgDumpPVCPrefix, request.ClusterName), - fmt.Sprintf(pgBackRestRepoPVC, request.ClusterName) - - log.Debugf("instance pvcs overall selector: [%s]", selector) - - // get all of the PVCs to analyze (see the step below) - pvcs, err := request.Clientset. - CoreV1().PersistentVolumeClaims(request.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - - // if there is an error, return here and log the error in the calling function - if err != nil { - return pvcList, err - } - - // ...this will be a bit janky. - // - // ...we are going to go through all of the PVCs that are associated with this - // cluster. We will then compare them against the names of the backup types - // of PVCs. If they do not match any of those names, then we will add them - // to the list. - // - // ...process of elimination until we tighten up the labeling - for _, pvc := range pvcs.Items { - pvcName := pvc.ObjectMeta.Name - - log.Debugf("found pvc: [%s]", pvcName) - - if strings.HasPrefix(pvcName, pgDump) || pvcName == pgBackRest { - log.Debug("skipping...") - continue - } - - pvcList = append(pvcList, pvcName) - } - - log.Debugf("instance pvcs found: [%v]", pvcList) - - return pvcList, nil -} - -//get the pvc for this replica deployment -func getReplicaPVC(request Request) ([]string, error) { - pvcList := make([]string, 0) - - //at this point, the naming convention is useful - //and ClusterName is the replica deployment name - //when isReplica=true - pvcList = append(pvcList, request.ReplicaName) - - // see if there are any tablespaces or WAL volumes assigned to this replica, - // and add them to the list. 
- // - // ...this is a bit janky, as we have to iterate through ALL the PVCs - // associated with this managed cluster, and pull out anyones that have a name - // with the pattern "" or "" - selector := fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, request.ClusterName) - - // get all of the PVCs that are specific to this replica and remove them - pvcs, err := request.Clientset. - CoreV1().PersistentVolumeClaims(request.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - - // if there is an error, return here and log the error in the calling function - if err != nil { - return pvcList, err - } - - // ...and where the fun begins - tablespaceReplicaPVCPrefix := fmt.Sprintf(tablespaceReplicaPVCPattern, request.ReplicaName) - walReplicaPVCName := fmt.Sprintf(walReplicaPVCPattern, request.ReplicaName) - - // iterate over the PVC list and append the tablespace PVCs - for _, pvc := range pvcs.Items { - pvcName := pvc.ObjectMeta.Name - - // if it does not start with the tablespace replica PVC pattern and does not equal the WAL - // PVC pattern then continue - if !(strings.HasPrefix(pvcName, tablespaceReplicaPVCPrefix) || - pvcName == walReplicaPVCName) { - continue - } - - log.Debugf("found pvc: [%s]", pvcName) - - pvcList = append(pvcList, pvcName) - } - - return pvcList, nil -} - -func removePVCs(pvcList []string, request Request) error { - - for _, p := range pvcList { - log.Infof("deleting pvc %s", p) - deletePropagation := metav1.DeletePropagationForeground - err := request.Clientset. - CoreV1().PersistentVolumeClaims(request.Namespace). - Delete(p, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err != nil { - log.Error(err) - } - } - - return nil - -} - -// removeBackupJobs removes any job associated with a backup. These include: -// -// - pgBackRest -// - pg_dump (logical) -func removeBackupJobs(request Request) { - // Some mild cleanup for this function...going to make a list of selectors - // for the different kinds of backup jobs so they can be deleted, but cannot - // do a full cleanup of this process just yet - selectors := []string{ - // pgBackRest - fmt.Sprintf("%s=%s,%s=true", config.LABEL_PG_CLUSTER, request.ClusterName, config.LABEL_BACKREST_JOB), - // pg_dump - fmt.Sprintf("%s=%s,%s=true", config.LABEL_PG_CLUSTER, request.ClusterName, config.LABEL_BACKUP_TYPE_PGDUMP), - } - - // iterate through each type of selector and attempt to get all of the jobs - // that are associated with it - for _, selector := range selectors { - log.Debugf("backup job selector: [%s]", selector) - - // find all the jobs associated with this selector - jobs, err := request.Clientset. - BatchV1().Jobs(request.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - - if err != nil { - log.Error(err) - continue - } - - // iterate through the list of jobs and attempt to delete them - for i := 0; i < len(jobs.Items); i++ { - deletePropagation := metav1.DeletePropagationForeground - err := request.Clientset. - BatchV1().Jobs(request.Namespace). - Delete(jobs.Items[i].Name, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err != nil { - log.Error(err) - } - } - - // ...ensure all the jobs are deleted - var completed bool - - for i := 0; i < MAX_TRIES; i++ { - jobs, err := request.Clientset. - BatchV1().Jobs(request.Namespace). 
- List(metav1.ListOptions{LabelSelector: selector}) - - if len(jobs.Items) > 0 || err != nil { - log.Debug("sleeping to wait for backup jobs to fully terminate") - time.Sleep(time.Second * time.Duration(4)) - } else { - completed = true - break - } - } - - if !completed { - log.Error("could not remove all backup jobs for [%s]", selector) - } - } -} - -// removeLogicalBackupPVCs removes the logical backups associated with a cluster -// this is an "all-or-nothing" solution: as right now it will only remove the -// PVC, it will remove **all** logical backups -// -// Additionally, as these backups are nota actually mounted anywhere, except -// during one-off jobs, we cannot perform a delete of the filesystem (i.e. -// "rm -rf" like in other commands). Well, we could...we could write a job to do -// this, but that will be saved for future work -func removeLogicalBackupPVCs(request Request) { - - pvcList := make([]string, 0) - selector := fmt.Sprintf("%s=%s", config.LABEL_PG_CLUSTER, request.ClusterName) - dumpPrefix := fmt.Sprintf(pgDumpPVCPrefix, request.ClusterName) - - // get all of the PVCs to analyze (see the step below) - pvcs, err := request.Clientset. - CoreV1().PersistentVolumeClaims(request.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - return - } - - // Now iterate through all the PVCs to identify those that are for a logical backup and add - // them to the PVC list for deletion. This pattern matching will be utilized until better - // labeling is in place to uniquely identify logical backup PVCs. - for _, pvc := range pvcs.Items { - pvcName := pvc.GetName() - - if !strings.HasPrefix(pvcName, dumpPrefix) { - continue - } - - pvcList = append(pvcList, pvcName) - } - - log.Debugf("logical backup pvcs found: [%v]", pvcList) - - removePVCs(pvcList, request) -} - -// removePgBackRestRepoPVCs removes any PVCs that are used by a pgBackRest repo -func removePgBackRestRepoPVCs(request Request) { - // there is only a single PVC for a pgBackRest repo, and it has a well-defined - // name - pvcName := fmt.Sprintf(pgBackRestRepoPVC, request.ClusterName) - - log.Debugf("remove backrest pvc name [%s]", pvcName) - - // make a simple of the PVCs that can be removed by the removePVC command - pvcList := []string{pvcName} - removePVCs(pvcList, request) -} - -// removeReplicaServices removes the replica service if there is currently only a single replica -// in the cluster, i.e. if the last/final replica is being being removed with the current rmdata -// job. If more than one replica still exists, then no action is taken. -func removeReplicaServices(request Request) { - - // selector in the format "pg-cluster=,role=replica" - // which will grab any/all replicas - selector := fmt.Sprintf("%s=%s,%s=%s", config.LABEL_PG_CLUSTER, request.ClusterName, - config.LABEL_PGHA_ROLE, config.LABEL_PGHA_ROLE_REPLICA) - replicaList, err := request.Clientset. - CoreV1().Pods(request.Namespace). - List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - log.Error(err) - return - } - - switch len(replicaList.Items) { - case 0: - log.Error("no replicas found for this cluster") - return - case 1: - log.Debug("removing replica service when scaling down to 0 replicas") - err := request.Clientset. - CoreV1().Services(request.Namespace). 
-			Delete(request.ClusterName+"-replica", &metav1.DeleteOptions{})
-		if err != nil {
-			log.Error(err)
-			return
-		}
-	}
-
-	log.Debug("more than one replica detected, replica service will not be deleted")
-}
-
-// removeSchedules removes any of the ConfigMap objects that were created to
-// execute schedule tasks, such as backups
-// As these are consistently labeled, we can leverage Kuernetes selectors to
-// delete all of them
-func removeSchedules(request Request) {
-	log.Debugf("removing schedules for '%s'", request.ClusterName)
-
-	// a ConfigMap used for the schedule uses the following label selector:
-	// crunchy-scheduler=true,=
-	selector := fmt.Sprintf("crunchy-scheduler=true,%s=%s",
-		config.LABEL_PG_CLUSTER, request.ClusterName)
-
-	// run the query the deletes all of the scheduled configmaps
-	// if there is an error, log it, but continue on without making a big stink
-	err := request.Clientset.
-		CoreV1().ConfigMaps(request.Namespace).
-		DeleteCollection(&metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector})
-	if err != nil {
-		log.Error(err)
-	}
-}
diff --git a/pgo-rmdata/rmdata/types.go b/pgo-rmdata/rmdata/types.go
deleted file mode 100644
index 1044e85ee9..0000000000
--- a/pgo-rmdata/rmdata/types.go
+++ /dev/null
@@ -1,39 +0,0 @@
-package rmdata
-
-/*
-Copyright 2019 - 2020 Crunchy Data
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-import (
-	"fmt"
-
-	"github.com/crunchydata/postgres-operator/internal/kubeapi"
-)
-
-type Request struct {
-	Clientset kubeapi.Interface
-	RemoveData bool
-	RemoveBackup bool
-	IsBackup bool
-	IsReplica bool
-	ClusterName string
-	ClusterPGHAScope string
-	ReplicaName string
-	Namespace string
-}
-
-func (x Request) String() string {
-	msg := fmt.Sprintf("Request: Cluster [%s] ClusterPGHAScope [%s] Namespace [%s] ReplicaName [%] RemoveData [%t] RemoveBackup [%t] IsReplica [%t] IsBackup [%t]", x.ClusterName, x.ClusterPGHAScope, x.Namespace, x.ReplicaName, x.RemoveData, x.RemoveBackup, x.IsReplica, x.IsBackup)
-	return msg
-}
diff --git a/pgo-scheduler/pgo-scheduler.go b/pgo-scheduler/pgo-scheduler.go
deleted file mode 100644
index 68b17c1218..0000000000
--- a/pgo-scheduler/pgo-scheduler.go
+++ /dev/null
@@ -1,257 +0,0 @@
-package main
-
-/*
- Copyright 2019 - 2020 Crunchy Data Solutions, Inc.
- Licensed under the Apache License, Version 2.0 (the "License");
- you may not use this file except in compliance with the License.
- You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/ - -import ( - "fmt" - "os" - "os/signal" - "strconv" - "sync" - "syscall" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/controller" - nscontroller "github.com/crunchydata/postgres-operator/internal/controller/namespace" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - crunchylog "github.com/crunchydata/postgres-operator/internal/logging" - "github.com/crunchydata/postgres-operator/internal/ns" - "github.com/crunchydata/postgres-operator/pgo-scheduler/scheduler" - sched "github.com/crunchydata/postgres-operator/pgo-scheduler/scheduler" - log "github.com/sirupsen/logrus" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - kubeinformers "k8s.io/client-go/informers" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/tools/cache" -) - -const ( - schedulerLabel = "crunchy-scheduler=true" - pgoNamespaceEnv = "PGO_OPERATOR_NAMESPACE" - timeoutEnv = "TIMEOUT" - inCluster = true - namespaceWorkerCount = 1 -) - -var nsRefreshInterval = 10 * time.Minute -var installationName string -var namespace string -var pgoNamespace string -var timeout time.Duration -var seconds int -var clientset kubeapi.Interface - -// this is used to prevent a race condition where an informer is being created -// twice when a new scheduler-enabled ConfigMap is added. -var informerNsMutex sync.Mutex -var informerNamespaces map[string]struct{} - -// NamespaceOperatingMode defines the namespace operating mode for the cluster, -// e.g. "dynamic", "readonly" or "disabled". See type NamespaceOperatingMode -// for detailed explanations of each mode available. -var namespaceOperatingMode ns.NamespaceOperatingMode - -func init() { - var err error - log.SetLevel(log.InfoLevel) - - debugFlag := os.Getenv("CRUNCHY_DEBUG") - //add logging configuration - crunchylog.CrunchyLogger(crunchylog.SetParameters()) - if debugFlag == "true" { - log.SetLevel(log.DebugLevel) - log.Debug("debug flag set to true") - } else { - log.Info("debug flag set to false") - } - - installationName = os.Getenv("PGO_INSTALLATION_NAME") - if installationName == "" { - log.Fatal("PGO_INSTALLATION_NAME env var is not set") - } else { - log.Info("PGO_INSTALLATION_NAME set to " + installationName) - } - - pgoNamespace = os.Getenv(pgoNamespaceEnv) - if pgoNamespace == "" { - log.WithFields(log.Fields{}).Fatalf("Failed to get PGO_OPERATOR_NAMESPACE environment: %s", pgoNamespaceEnv) - } - - secondsEnv := os.Getenv(timeoutEnv) - seconds = 300 - if secondsEnv == "" { - log.WithFields(log.Fields{}).Info("No timeout set, defaulting to 300 seconds") - } else { - seconds, err = strconv.Atoi(secondsEnv) - if err != nil { - log.WithFields(log.Fields{}).Fatalf("Failed to convert timeout env to seconds: %s", err) - } - } - - log.WithFields(log.Fields{}).Infof("Setting timeout to: %d", seconds) - timeout = time.Second * time.Duration(seconds) - - clientset, err = kubeapi.NewClient() - if err != nil { - log.WithFields(log.Fields{}).Fatalf("Failed to connect to kubernetes: %s", err) - } - - var Pgo config.PgoConfig - if err := Pgo.GetConfig(clientset, pgoNamespace); err != nil { - log.WithFields(log.Fields{}).Fatalf("error in Pgo configuration: %s", err) - } - - // Configure namespaces for the Scheduler. This includes determining the namespace - // operating mode and obtaining a valid list of target namespaces for the operator install. 
- if err := setNamespaceOperatingMode(clientset); err != nil { - log.Errorf("Error configuring operator namespaces: %v", err) - os.Exit(2) - } -} - -func main() { - log.Info("Starting Crunchy Scheduler") - //give time for pgo-event to start up - time.Sleep(time.Duration(5) * time.Second) - - scheduler := scheduler.New(schedulerLabel, pgoNamespace, clientset) - scheduler.CronClient.Start() - - sigs := make(chan os.Signal, 1) - done := make(chan bool, 1) - signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM) - - go func() { - sig := <-sigs - log.WithFields(log.Fields{ - "signal": sig, - }).Warning("Received signal") - done <- true - }() - - stop := make(chan struct{}) - - nsList, err := ns.GetInitialNamespaceList(clientset, namespaceOperatingMode, - installationName, pgoNamespace) - if err != nil { - log.WithFields(log.Fields{}).Fatalf("Failed to obtain initial namespace list: %s", err) - os.Exit(2) - } - - log.WithFields(log.Fields{}).Infof("Watching namespaces: %s", nsList) - - controllerManager, err := sched.NewControllerManager(nsList, scheduler, installationName, namespaceOperatingMode) - if err != nil { - log.WithFields(log.Fields{}).Fatalf("Failed to create controller manager: %s", err) - os.Exit(2) - } - controllerManager.RunAll() - - // if the namespace operating mode is not disabled, then create and start a namespace - // controller - if namespaceOperatingMode != ns.NamespaceOperatingModeDisabled { - if err := createAndStartNamespaceController(clientset, controllerManager, - scheduler, stop); err != nil { - log.WithFields(log.Fields{}).Fatalf("Failed to create namespace informer factory: %s", - err) - os.Exit(2) - } - } - - // If not using the "disabled" namespace operating mode, start a real namespace controller - // that is able to resond to namespace events in the Kube cluster. If using the "disabled" - // operating mode, then create a fake client containing all namespaces defined for the install - // (i.e. via the NAMESPACE environment variable) and use that to create the namespace - // controller. This allows for namespace and RBAC reconciliation logic to be run in a - // consistent manner regardless of the namespace operating mode being utilized. - if namespaceOperatingMode != ns.NamespaceOperatingModeDisabled { - if err := createAndStartNamespaceController(clientset, controllerManager, scheduler, - stop); err != nil { - log.Fatal(err) - } - } else { - fakeClient, err := ns.CreateFakeNamespaceClient(installationName) - if err != nil { - log.Fatal(err) - } - if err := createAndStartNamespaceController(fakeClient, controllerManager, scheduler, - stop); err != nil { - log.Fatal(err) - } - } - - for { - select { - case <-done: - log.Warning("Shutting down scheduler") - scheduler.CronClient.Stop() - close(stop) - os.Exit(0) - default: - time.Sleep(time.Second * 1) - } - } -} - -// setNamespaceOperatingMode set the namespace operating mode for the Operator by calling the -// proper utility function to determine which mode is applicable based on the current -// permissions assigned to the Operator Service Account. 
-func setNamespaceOperatingMode(clientset kubernetes.Interface) error { - nsOpMode, err := ns.GetNamespaceOperatingMode(clientset) - if err != nil { - return err - } - namespaceOperatingMode = nsOpMode - - return nil -} - -// createAndStartNamespaceController creates a namespace controller and then starts it -func createAndStartNamespaceController(kubeClientset kubernetes.Interface, - controllerManager controller.Manager, schedular *sched.Scheduler, - stopCh <-chan struct{}) error { - - nsKubeInformerFactory := kubeinformers.NewSharedInformerFactoryWithOptions(kubeClientset, - nsRefreshInterval, - kubeinformers.WithTweakListOptions(func(options *metav1.ListOptions) { - options.LabelSelector = fmt.Sprintf("%s=%s,%s=%s", - config.LABEL_VENDOR, config.LABEL_CRUNCHY, - config.LABEL_PGO_INSTALLATION_NAME, installationName) - })) - - nsController, err := nscontroller.NewNamespaceController(controllerManager, - nsKubeInformerFactory.Core().V1().Namespaces(), namespaceWorkerCount) - if err != nil { - return err - } - - // start the namespace controller - nsKubeInformerFactory.Start(stopCh) - - if ok := cache.WaitForNamedCacheSync("scheduler namespace", stopCh, - nsKubeInformerFactory.Core().V1().Namespaces().Informer().HasSynced); !ok { - return fmt.Errorf("failed waiting for scheduler namespace cache to sync") - } - - for i := 0; i < nsController.WorkerCount(); i++ { - go nsController.RunWorker(stopCh) - } - - log.Debug("scheduler namespace controller is now running") - - return nil -} diff --git a/pgo-scheduler/scheduler/configmapcontroller.go b/pgo-scheduler/scheduler/configmapcontroller.go deleted file mode 100644 index 41372f96b5..0000000000 --- a/pgo-scheduler/scheduler/configmapcontroller.go +++ /dev/null @@ -1,72 +0,0 @@ -package scheduler - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - log "github.com/sirupsen/logrus" - v1 "k8s.io/api/core/v1" - coreinformers "k8s.io/client-go/informers/core/v1" - "k8s.io/client-go/tools/cache" -) - -// Controller holds the client and informer for the controller, along with a pointer to a -// Scheduler. 
-type Controller struct { - Informer coreinformers.ConfigMapInformer - Scheduler *Scheduler -} - -// onAdd is called when a configMap is added -func (c *Controller) onAdd(obj interface{}) { - cm, ok := obj.(*v1.ConfigMap) - if !ok { - log.WithFields(log.Fields{}).Error("Could not convert runtime object to configmap..") - } - - if _, ok := cm.Labels["crunchy-scheduler"]; !ok { - return - } - - if err := c.Scheduler.AddSchedule(cm); err != nil { - log.WithFields(log.Fields{ - "error": err, - }).Error("Failed to add schedules") - } -} - -// onDelete is called when a configMap is deleted -func (c *Controller) onDelete(obj interface{}) { - cm, ok := obj.(*v1.ConfigMap) - if !ok { - log.WithFields(log.Fields{}).Error("Could not convert runtime object to configmap..") - } - - if _, ok := cm.Labels["crunchy-scheduler"]; !ok { - return - } - c.Scheduler.DeleteSchedule(cm) -} - -// AddConfigMapEventHandler adds the pgcluster event handler to the pgcluster informer -func (c *Controller) AddConfigMapEventHandler() { - - c.Informer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{ - AddFunc: c.onAdd, - DeleteFunc: c.onDelete, - }) - - log.Debugf("ConfigMap Controller: added event handler to informer") -} diff --git a/pgo-scheduler/scheduler/controllermanager.go b/pgo-scheduler/scheduler/controllermanager.go deleted file mode 100644 index 843f6ac060..0000000000 --- a/pgo-scheduler/scheduler/controllermanager.go +++ /dev/null @@ -1,370 +0,0 @@ -package scheduler - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "errors" - "fmt" - "sync" - - "github.com/crunchydata/postgres-operator/internal/controller" - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/ns" - log "github.com/sirupsen/logrus" - "golang.org/x/sync/semaphore" - - kubeinformers "k8s.io/client-go/informers" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/tools/cache" -) - -// ControllerManager manages a map of controller groups, each of which is comprised of the various -// controllers needed to handle events within a specific namespace. Only one controllerGroup is -// allowed per namespace. -type ControllerManager struct { - mgrMutex sync.Mutex - controllers map[string]*controllerGroup - installationName string - namespaceOperatingMode ns.NamespaceOperatingMode - Scheduler *Scheduler - sem *semaphore.Weighted -} - -// controllerGroup is a struct for managing the various controllers created to handle events -// in a specific namespace -type controllerGroup struct { - stopCh chan struct{} - doneCh chan struct{} - started bool - kubeInformerFactory kubeinformers.SharedInformerFactory - informerSyncedFuncs []cache.InformerSynced - clientset kubernetes.Interface -} - -// NewControllerManager returns a new ControllerManager comprised of controllerGroups for each -// namespace included in the 'namespaces' parameter. 
-func NewControllerManager(namespaces []string, scheduler *Scheduler, installationName string, - namespaceOperatingMode ns.NamespaceOperatingMode) (*ControllerManager, error) { - - controllerManager := ControllerManager{ - controllers: make(map[string]*controllerGroup), - installationName: installationName, - namespaceOperatingMode: namespaceOperatingMode, - Scheduler: scheduler, - sem: semaphore.NewWeighted(1), - } - - // create controller groups for each namespace provided - for _, ns := range namespaces { - if err := controllerManager.AddGroup(ns); err != nil { - log.Error(err) - return nil, err - } - } - - log.Debugf("Controller Manager: new controller manager created for namespaces %v", - namespaces) - - return &controllerManager, nil -} - -// AddGroup adds a new controller group for the namespace specified. Each controller -// group is comprised of a controller for the following resource: -// - configmaps -// One SharedInformerFactory is utilized, specifically for Kube resources, to create and track the -// informers for this resource. Each controller group also receives its own clients, which can then -// be utilized by the controller within the controller group. -func (c *ControllerManager) AddGroup(namespace string) error { - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - // only return an error if not a group already exists error - if err := c.addControllerGroup(namespace); err != nil && - !errors.Is(err, controller.ErrControllerGroupExists) { - return err - } - - return nil -} - -// AddAndRunGroup is a convenience function that adds a controller group for the -// namespace specified, and then immediately runs the controllers in that group. -func (c *ControllerManager) AddAndRunGroup(namespace string) error { - - if c.controllers[namespace] != nil { - // first try to clean if one is not already in progress - if err := c.clean(namespace); err != nil { - log.Infof("Controller Manager: %s", err.Error()) - } - - // if we just cleaned the current namespace's controller, then return - if _, ok := c.controllers[namespace]; !ok { - log.Infof("Controller Manager: controller group for namespace %s has already "+ - "been cleaned", namespace) - return nil - } - } - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - // only return an error if not a group already exists error - if err := c.addControllerGroup(namespace); err != nil && - !errors.Is(err, controller.ErrControllerGroupExists) { - return err - } - - if err := c.runControllerGroup(namespace); err != nil { - return err - } - - return nil -} - -// RemoveAll removes all controller groups managed by the controller manager, first stopping all -// controllers within each controller group managed by the controller manager. -func (c *ControllerManager) RemoveAll() { - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - for ns := range c.controllers { - c.removeControllerGroup(ns) - } - - log.Debug("Controller Manager: all contollers groups have been removed") -} - -// RemoveGroup removes the controller group for the namespace specified, first stopping all -// controllers within that group -func (c *ControllerManager) RemoveGroup(namespace string) { - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - c.removeControllerGroup(namespace) -} - -// RunAll runs all controllers across all controller groups managed by the controller manager. 
-func (c *ControllerManager) RunAll() error { - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - for ns := range c.controllers { - if err := c.runControllerGroup(ns); err != nil { - return err - } - } - - log.Debug("Controller Manager: all contoller groups are now running") - - return nil -} - -// RunGroup runs the controllers within the controller group for the namespace specified. -func (c *ControllerManager) RunGroup(namespace string) error { - - c.mgrMutex.Lock() - defer c.mgrMutex.Unlock() - - if _, ok := c.controllers[namespace]; !ok { - log.Debugf("Controller Manager: unable to run controller group for namespace %s because "+ - "a controller group for this namespace does not exist", namespace) - return nil - } - - if err := c.runControllerGroup(namespace); err != nil { - return err - } - - log.Debugf("Controller Manager: the controller group for ns %s is now running", namespace) - - return nil -} - -// addControllerGroup adds a new controller group for the namespace specified -func (c *ControllerManager) addControllerGroup(namespace string) error { - - if _, ok := c.controllers[namespace]; ok { - log.Debugf("Controller Manager: a controller for namespace %s already exists", namespace) - return controller.ErrControllerGroupExists - } - - // create a client for kube resources - client, err := kubeapi.NewClient() - if err != nil { - log.Error(err) - return err - } - - kubeInformerFactory := kubeinformers.NewSharedInformerFactoryWithOptions(client, 0, - kubeinformers.WithNamespace(namespace)) - - configmapController := &Controller{ - Informer: kubeInformerFactory.Core().V1().ConfigMaps(), - Scheduler: c.Scheduler, - } - - // add the proper event handler to the informer in each controller - configmapController.AddConfigMapEventHandler() - - group := &controllerGroup{ - clientset: client, - stopCh: make(chan struct{}), - kubeInformerFactory: kubeInformerFactory, - informerSyncedFuncs: []cache.InformerSynced{ - kubeInformerFactory.Core().V1().ConfigMaps().Informer().HasSynced, - }, - } - - c.controllers[namespace] = group - - log.Debugf("Controller Manager: added controller group for namespace %s", namespace) - - return nil -} - -// clean removes and controller groups that no longer correspond to a valid namespace within -// the Kubernetes cluster, e.g. in the event that a namespace has been deleted. -func (c *ControllerManager) clean(namespace string) error { - - if !c.sem.TryAcquire(1) { - return fmt.Errorf("controller group clean already in progress, namespace %s will not "+ - "clean", namespace) - } - defer c.sem.Release(1) - - log.Debugf("Controller Manager: namespace %s acquired clean lock and will clean the "+ - "controller groups", namespace) - - nsList, err := ns.GetCurrentNamespaceList(c.controllers[namespace].clientset, - c.installationName, c.namespaceOperatingMode) - if err != nil { - log.Errorf(err.Error()) - } - - for controlledNamespace := range c.controllers { - cleanNamespace := true - for _, currNamespace := range nsList { - if controlledNamespace == currNamespace { - cleanNamespace = false - break - } - } - if cleanNamespace { - log.Debugf("Controller Manager: removing controller group for namespace %s", - controlledNamespace) - c.removeControllerGroup(controlledNamespace) - } - } - - return nil -} - -// hasListerPrivs verifies the Operator has the privileges required to start the controllers -// for the namespace specified. 
-func (c *ControllerManager) hasListerPrivs(namespace string) bool { - - controllerGroup := c.controllers[namespace] - - var err error - var hasCorePrivs bool - - hasCorePrivs, err = ns.CheckAccessPrivs(controllerGroup.clientset, - map[string][]string{"configmaps": {"list"}}, - "", namespace) - if err != nil { - log.Errorf(err.Error()) - } else if !hasCorePrivs { - log.Errorf("Controller Manager: Controller Group for namespace %s does not have the "+ - "required list privileges for resource %s in the Core API", - namespace, "configmaps") - } - - return hasCorePrivs -} - -// runControllerGroup is responsible running the controllers for the controller group corresponding -// to the namespace provided -func (c *ControllerManager) runControllerGroup(namespace string) error { - - controllerGroup := c.controllers[namespace] - - hasListerPrivs := c.hasListerPrivs(namespace) - switch { - case c.controllers[namespace].started && hasListerPrivs: - log.Debugf("Controller Manager: controller group for namespace %s is already running", - namespace) - return nil - case c.controllers[namespace].started && !hasListerPrivs: - c.removeControllerGroup(namespace) - return fmt.Errorf("Controller Manager: removing the running controller group for "+ - "namespace %s because it no longer has the required privs, will attempt to "+ - "restart on the next ns refresh interval", namespace) - } - - controllerGroup.kubeInformerFactory.Start(controllerGroup.stopCh) - - if ok := cache.WaitForNamedCacheSync(namespace, controllerGroup.stopCh, - controllerGroup.informerSyncedFuncs...); !ok { - return fmt.Errorf("Controller Manager: failed to wait for caches to sync") - } - - controllerGroup.started = true - - log.Debugf("Controller Manager: controller group for namespace %s is now running", namespace) - - return nil -} - -// removeControllerGroup removes the controller group for the namespace specified. Any worker -// queues associated with the controllers inside of the controller group are first shutdown -// prior to removing the controller group. -func (c *ControllerManager) removeControllerGroup(namespace string) { - - if _, ok := c.controllers[namespace]; !ok { - log.Debugf("Controller Manager: no controller group to remove for ns %s", namespace) - return - } - - c.stopControllerGroup(namespace) - delete(c.controllers, namespace) - - log.Debugf("Controller Manager: the controller group for ns %s has been removed", namespace) -} - -// stopControllerGroup stops the controller group associated with the namespace specified. This is -// done by calling the ShutdownWorker function associated with the controller. If the controller -// does not have a ShutdownWorker function then no action is taken. 
-func (c *ControllerManager) stopControllerGroup(namespace string) { - - if _, ok := c.controllers[namespace]; !ok { - log.Debugf("Controller Manager: unable to stop controller group for namespace %s because "+ - "a controller group for this namespace does not exist", namespace) - return - } - - controllerGroup := c.controllers[namespace] - - // close the stop channel to stop all informers and instruct the workers queues to shutdown - close(controllerGroup.stopCh) - - controllerGroup.started = false - - log.Debugf("Controller Manager: the controller group for ns %s has been stopped", namespace) -} diff --git a/pgo-scheduler/scheduler/pgbackrest.go b/pgo-scheduler/scheduler/pgbackrest.go deleted file mode 100644 index 1ce1fb3166..0000000000 --- a/pgo-scheduler/scheduler/pgbackrest.go +++ /dev/null @@ -1,153 +0,0 @@ -package scheduler - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - log "github.com/sirupsen/logrus" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/util/wait" -) - -type BackRestBackupJob struct { - backupType string - stanza string - namespace string - deployment string - label string - container string - cluster string - storageType string - options string -} - -func (s *ScheduleTemplate) NewBackRestSchedule() BackRestBackupJob { - return BackRestBackupJob{ - backupType: s.PGBackRest.Type, - stanza: "db", - namespace: s.Namespace, - deployment: s.PGBackRest.Deployment, - label: s.PGBackRest.Label, - container: s.PGBackRest.Container, - cluster: s.Cluster, - storageType: s.PGBackRest.StorageType, - options: s.Options, - } -} - -func (b BackRestBackupJob) Run() { - contextLogger := log.WithFields(log.Fields{ - "namespace": b.namespace, - "deployment": b.deployment, - "label": b.label, - "container": b.container, - "backupType": b.backupType, - "cluster": b.cluster, - "storageType": b.storageType}) - - contextLogger.Info("Running pgBackRest backup") - - cluster, err := clientset.CrunchydataV1().Pgclusters(b.namespace).Get(b.cluster, metav1.GetOptions{}) - if err != nil { - contextLogger.WithFields(log.Fields{ - "error": err, - }).Error("error retrieving pgCluster") - return - } - - taskName := fmt.Sprintf("%s-%s-sch-backup", b.cluster, b.backupType) - - //if the cluster is found, check for an annotation indicating it has not been upgraded - //if the annotation does not exist, then it is a new cluster and proceed as usual - //if the annotation is set to "true", the cluster has already been upgraded and can proceed but - //if the annotation is set to "false", this cluster will need to be upgraded before proceeding - //log the issue, then return - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - contextLogger.WithFields(log.Fields{ - "task": taskName, - }).Debug("pgcluster requires an upgrade before scheduled pgbackrest task can be run") 
- return - } - - err = clientset.CrunchydataV1().Pgtasks(b.namespace).Delete(taskName, &metav1.DeleteOptions{}) - if err == nil { - deletePropagation := metav1.DeletePropagationForeground - err = clientset. - BatchV1().Jobs(b.namespace). - Delete(taskName, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err == nil { - err = wait.Poll(time.Second/2, time.Minute, func() (bool, error) { - _, err := clientset.BatchV1().Jobs(b.namespace).Get(taskName, metav1.GetOptions{}) - return false, err - }) - } - if !kerrors.IsNotFound(err) { - contextLogger.WithFields(log.Fields{ - "task": taskName, - "error": err, - }).Error("error deleting backup job") - return - } - } else if !kerrors.IsNotFound(err) { - contextLogger.WithFields(log.Fields{ - "task": taskName, - "error": err, - }).Error("error deleting pgTask") - return - } - - selector := fmt.Sprintf("%s=%s,pgo-backrest-repo=true", config.LABEL_PG_CLUSTER, b.cluster) - pods, err := clientset.CoreV1().Pods(b.namespace).List(metav1.ListOptions{LabelSelector: selector}) - if err != nil { - contextLogger.WithFields(log.Fields{ - "selector": selector, - "error": err, - }).Error("error getting pods from selector") - return - } - - if len(pods.Items) != 1 { - contextLogger.WithFields(log.Fields{ - "selector": selector, - "error": err, - "podsFound": len(pods.Items), - }).Error("pods returned does not equal 1, it should") - return - } - - backrest := pgBackRestTask{ - clusterName: cluster.Name, - taskName: taskName, - podName: pods.Items[0].Name, - containerName: "database", - backupOptions: fmt.Sprintf("--type=%s %s", b.backupType, b.options), - stanza: b.stanza, - storageType: b.storageType, - imagePrefix: cluster.Spec.PGOImagePrefix, - } - - _, err = clientset.CrunchydataV1().Pgtasks(b.namespace).Create(backrest.NewBackRestTask()) - if err != nil { - contextLogger.WithFields(log.Fields{ - "error": err, - }).Error("could not create new pgtask") - return - } -} diff --git a/pgo-scheduler/scheduler/policy.go b/pgo-scheduler/scheduler/policy.go deleted file mode 100644 index acf5c6a489..0000000000 --- a/pgo-scheduler/scheduler/policy.go +++ /dev/null @@ -1,188 +0,0 @@ -package scheduler - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/operator" - "github.com/crunchydata/postgres-operator/internal/util" - log "github.com/sirupsen/logrus" - v1batch "k8s.io/api/batch/v1" - v1 "k8s.io/api/core/v1" - kerrors "k8s.io/apimachinery/pkg/api/errors" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/util/wait" -) - -type PolicyJob struct { - ccpImageTag string - ccpImagePrefix string - cluster string - namespace string - secret string - policy string - database string -} - -func (s *ScheduleTemplate) NewPolicySchedule() PolicyJob { - return PolicyJob{ - namespace: s.Namespace, - cluster: s.Cluster, - ccpImageTag: s.Policy.ImageTag, - ccpImagePrefix: s.Policy.ImagePrefix, - secret: s.Policy.Secret, - policy: s.Policy.Name, - database: s.Policy.Database, - } -} - -func (p PolicyJob) Run() { - contextLogger := log.WithFields(log.Fields{ - "namespace": p.namespace, - "policy": p.policy, - "cluster": p.cluster}) - - contextLogger.Info("Running Policy schedule") - - cluster, err := clientset.CrunchydataV1().Pgclusters(p.namespace).Get(p.cluster, metav1.GetOptions{}) - if err != nil { - contextLogger.WithFields(log.Fields{ - "error": err, - }).Error("error retrieving pgCluster") - return - } - - policy, err := clientset.CrunchydataV1().Pgpolicies(p.namespace).Get(p.policy, metav1.GetOptions{}) - if err != nil { - contextLogger.WithFields(log.Fields{ - "error": err, - }).Error("error retrieving pgPolicy") - return - } - - name := fmt.Sprintf("policy-%s-%s-schedule", p.cluster, p.policy) - - // if the cluster is found, check for a annotation indicating it has not been upgraded - // if the annotation does not exist, then it is a new cluster and proceed as usual - // if the annotation is set to "true", the cluster has already been upgraded and can proceed but - // if the annotation is set to "false", this cluster will need to be upgraded before proceeding - // log the issue, then return - if cluster.Annotations[config.ANNOTATION_IS_UPGRADED] == config.ANNOTATIONS_FALSE { - contextLogger.WithFields(log.Fields{ - "task": name, - }).Debug("pgcluster requires an upgrade before scheduled policy task can run") - return - } - - filename := fmt.Sprintf("%s.sql", p.policy) - data := make(map[string]string) - data[filename] = string(policy.Spec.SQL) - - var labels = map[string]string{ - "pg-cluster": p.cluster, - } - labels["pg-cluster"] = p.cluster - labels["pg-policy"] = p.policy - labels["pg-schedule"] = "true" - - configmap := &v1.ConfigMap{ - ObjectMeta: metav1.ObjectMeta{ - Name: name, - Labels: labels, - }, - Data: data, - } - - err = clientset.CoreV1().ConfigMaps(p.namespace).Delete(name, &metav1.DeleteOptions{}) - if err != nil && !kerrors.IsNotFound(err) { - contextLogger.WithFields(log.Fields{ - "error": err, - "configMap": name, - }).Error("could not delete policy configmap") - return - } - - log.Debug("Creating configmap..") - _, err = clientset.CoreV1().ConfigMaps(p.namespace).Create(configmap) - if err != nil { - contextLogger.WithFields(log.Fields{ - "error": err, - }).Error("could not create policy configmap") - return - } - - policyJob := PolicyTemplate{ - JobName: name, - ClusterName: p.cluster, - PGOImagePrefix: util.GetValueOrDefault(cluster.Spec.CCPImagePrefix, p.ccpImagePrefix), - PGOImageTag: p.ccpImageTag, - PGHost: p.cluster, - PGPort: cluster.Spec.Port, - PGDatabase: p.database, - PGSQLConfigMap: name, - PGUserSecret: 
p.secret, - } - - var doc bytes.Buffer - if err := config.PolicyJobTemplate.Execute(&doc, policyJob); err != nil { - contextLogger.WithFields(log.Fields{ - "error": err}).Error("Failed to render job template") - return - } - - deletePropagation := metav1.DeletePropagationForeground - err = clientset. - BatchV1().Jobs(p.namespace). - Delete(name, &metav1.DeleteOptions{PropagationPolicy: &deletePropagation}) - if err == nil { - err = wait.Poll(time.Second/2, time.Minute, func() (bool, error) { - _, err := clientset.BatchV1().Jobs(p.namespace).Get(name, metav1.GetOptions{}) - return false, err - }) - } - if !kerrors.IsNotFound(err) { - contextLogger.WithFields(log.Fields{ - "job": name, - "error": err, - }).Error("error deleting policy job") - return - } - - newJob := &v1batch.Job{} - if err := json.Unmarshal(doc.Bytes(), newJob); err != nil { - contextLogger.WithFields(log.Fields{ - "error": err, - }).Error("Failed unmarshaling job template") - return - } - - // set the container image to an override value, if one exists - operator.SetContainerImageOverride(config.CONTAINER_IMAGE_PGO_SQL_RUNNER, - &newJob.Spec.Template.Spec.Containers[0]) - - _, err = clientset.BatchV1().Jobs(p.namespace).Create(newJob) - if err != nil { - contextLogger.WithFields(log.Fields{ - "error": err, - }).Error("Failed creating policy job") - return - } -} diff --git a/pgo-scheduler/scheduler/scheduler.go b/pgo-scheduler/scheduler/scheduler.go deleted file mode 100644 index d0360b4df4..0000000000 --- a/pgo-scheduler/scheduler/scheduler.go +++ /dev/null @@ -1,124 +0,0 @@ -package scheduler - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "errors" - "fmt" - "io/ioutil" - "time" - - "github.com/crunchydata/postgres-operator/internal/kubeapi" - log "github.com/sirupsen/logrus" - - cv2 "github.com/robfig/cron" - v1 "k8s.io/api/core/v1" -) - -func New(label, namespace string, client kubeapi.Interface) *Scheduler { - clientset = client - cronClient := cv2.New() - cronClient.AddFunc("* * * * *", phony) - cronClient.AddFunc("* * * * *", heartbeat) - - return &Scheduler{ - namespace: namespace, - label: label, - CronClient: cronClient, - entries: make(map[string]cv2.EntryID), - } -} - -func (s *Scheduler) AddSchedule(config *v1.ConfigMap) error { - name := config.Name + config.Namespace - if _, ok := s.entries[name]; ok { - return nil - } - - if len(config.Data) != 1 { - return errors.New("Schedule configmaps should contain only one schedule") - } - - var schedule ScheduleTemplate - for _, data := range config.Data { - if err := json.Unmarshal([]byte(data), &schedule); err != nil { - return fmt.Errorf("Failed unmarhsaling configMap: %s", err) - } - } - - if err := validate(schedule); err != nil { - return fmt.Errorf("Failed to validate schedule: %s", err) - } - - id, err := s.schedule(schedule) - if err != nil { - return fmt.Errorf("Failed to schedule configmap: %s", err) - } - - log.WithFields(log.Fields{ - "configMap": string(config.Name), - "type": schedule.Type, - "schedule": schedule.Schedule, - "namespace": schedule.Namespace, - "deployment": schedule.Deployment, - "label": schedule.Label, - "container": schedule.Container, - }).Info("Added new schedule") - - s.entries[name] = id - return nil -} - -func (s *Scheduler) DeleteSchedule(config *v1.ConfigMap) { - log.WithFields(log.Fields{ - "scheduleName": config.Name, - }).Info("Removed schedule") - - name := config.Name + config.Namespace - s.CronClient.Remove(s.entries[name]) - delete(s.entries, name) -} - -func (s *Scheduler) schedule(st ScheduleTemplate) (cv2.EntryID, error) { - var job cv2.Job - - switch st.Type { - case "pgbackrest": - job = st.NewBackRestSchedule() - case "policy": - job = st.NewPolicySchedule() - default: - var id cv2.EntryID - return id, fmt.Errorf("schedule type not implemented yet") - } - return s.CronClient.AddJob(st.Schedule, job) -} - -// phony implements a no-op schedule job to prevent a bug that runs newly -// scheduled jobs multiple times -func phony() { - _ = time.Now() -} - -// heartbeat modifies a sentinel file used as part of the liveness test -// for the scheduler -func heartbeat() { - err := ioutil.WriteFile("/tmp/scheduler.hb", []byte(time.Now().String()), 0644) - if err != nil { - log.Errorln("error writing heartbeat file: ", err) - } -} diff --git a/pgo-scheduler/scheduler/tasks.go b/pgo-scheduler/scheduler/tasks.go deleted file mode 100644 index a2c715d3be..0000000000 --- a/pgo-scheduler/scheduler/tasks.go +++ /dev/null @@ -1,57 +0,0 @@ -package scheduler - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" - - "github.com/crunchydata/postgres-operator/internal/config" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -type pgBackRestTask struct { - clusterName string - taskName string - podName string - containerName string - backupOptions string - stanza string - storageType string - imagePrefix string -} - -func (p pgBackRestTask) NewBackRestTask() *crv1.Pgtask { - return &crv1.Pgtask{ - ObjectMeta: meta_v1.ObjectMeta{ - Name: p.taskName, - }, - Spec: crv1.PgtaskSpec{ - Name: p.taskName, - TaskType: crv1.PgtaskBackrest, - Parameters: map[string]string{ - config.LABEL_JOB_NAME: p.taskName, - config.LABEL_PG_CLUSTER: p.clusterName, - config.LABEL_POD_NAME: p.podName, - config.LABEL_CONTAINER_NAME: p.containerName, - config.LABEL_BACKREST_COMMAND: crv1.PgtaskBackrestBackup, - config.LABEL_BACKREST_OPTS: fmt.Sprintf("--stanza=%s %s", p.stanza, p.backupOptions), - config.LABEL_BACKREST_STORAGE_TYPE: p.storageType, - config.LABEL_IMAGE_PREFIX: p.imagePrefix, - }, - }, - } -} diff --git a/pgo-scheduler/scheduler/types.go b/pgo-scheduler/scheduler/types.go deleted file mode 100644 index 7b0b07353e..0000000000 --- a/pgo-scheduler/scheduler/types.go +++ /dev/null @@ -1,75 +0,0 @@ -package scheduler - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "time" - - "github.com/crunchydata/postgres-operator/internal/kubeapi" - cv2 "github.com/robfig/cron" -) - -var clientset kubeapi.Interface - -type Scheduler struct { - entries map[string]cv2.EntryID - CronClient *cv2.Cron - label string - namespace string - namespaceList []string - scheduleTypes []string -} - -type ScheduleTemplate struct { - Version string `json:"version"` - Name string `json:"name"` - Created time.Time `json:"created"` - Schedule string `json:"schedule"` - Namespace string `json:"namespace"` - Type string `json:"type"` - Cluster string `json:"cluster"` - PGBackRest `json:"pgbackrest,omitempty"` - Policy `json:"policy,omitempty"` -} - -type PGBackRest struct { - Deployment string `json:"deployment"` - Label string `json:"label"` - Container string `json:"container"` - Type string `json:"type"` - StorageType string `json:"storageType,omitempty"` - Options string `json:"options"` -} - -type Policy struct { - Secret string `json:"secret"` - Name string `json:"name"` - ImagePrefix string `json:"imagePrefix"` - ImageTag string `json:"imageTag"` - Database string `json:"database"` -} - -type PolicyTemplate struct { - JobName string - ClusterName string - PGOImagePrefix string - PGOImageTag string - PGHost string - PGPort string - PGDatabase string - PGUserSecret string - PGSQLConfigMap string -} diff --git a/pgo-scheduler/scheduler/validate.go b/pgo-scheduler/scheduler/validate.go deleted file mode 100644 index 37c24dc7ab..0000000000 --- a/pgo-scheduler/scheduler/validate.go +++ /dev/null @@ -1,123 +0,0 @@ -package scheduler - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. 
- Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "errors" - "fmt" - "strings" - - cv3 "github.com/robfig/cron" -) - -func validate(s ScheduleTemplate) error { - if err := ValidateSchedule(s.Schedule); err != nil { - return err - } - - if err := ValidateScheduleType(s.Type); err != nil { - return err - } - - if err := ValidateBackRestSchedule(s.Type, s.Deployment, s.Label, s.PGBackRest.Type, - s.PGBackRest.StorageType); err != nil { - return err - } - - if err := ValidatePolicySchedule(s.Type, s.Policy.Name, s.Policy.Database); err != nil { - return err - } - - return nil -} - -// ValidateSchedule validates that the cron syntax is valid -// We use the standard format here... -func ValidateSchedule(schedule string) error { - parser := cv3.NewParser(cv3.Minute | cv3.Hour | cv3.Dom | cv3.Month | cv3.Dow) - - if _, err := parser.Parse(schedule); err != nil { - return fmt.Errorf("%s is not a valid schedule: ", schedule) - } - return nil -} - -func ValidateScheduleType(schedule string) error { - scheduleTypes := []string{ - "pgbackrest", - "policy", - } - - schedule = strings.ToLower(schedule) - for _, scheduleType := range scheduleTypes { - if schedule == scheduleType { - return nil - } - } - - return fmt.Errorf("%s is not a valid schedule type", schedule) -} - -func ValidateBackRestSchedule(scheduleType, deployment, label, backupType, storageType string) error { - if scheduleType == "pgbackrest" { - if deployment == "" && label == "" { - return errors.New("Deployment or Label required for pgBackRest schedules") - } - - if backupType == "" { - return errors.New("Backup Type required for pgBackRest schedules") - } - - validBackupTypes := []string{"full", "incr", "diff"} - - var valid bool - for _, bType := range validBackupTypes { - if backupType == bType { - valid = true - break - } - } - - if !valid { - return fmt.Errorf("pgBackRest Backup Type invalid: %s", backupType) - } - - validStorageTypes := []string{"local", "s3"} - for _, sType := range validStorageTypes { - if storageType == sType { - valid = true - break - } - } - - if !valid { - return fmt.Errorf("pgBackRest Backup Type invalid: %s", backupType) - } - } - return nil -} - -func ValidatePolicySchedule(scheduleType, policy, database string) error { - if scheduleType == "policy" { - if database == "" { - return errors.New("Database name required for policy schedules") - } - if policy == "" { - return errors.New("Policy name required for policy schedules") - } - } - return nil -} diff --git a/pgo-scheduler/scheduler/validate_test.go b/pgo-scheduler/scheduler/validate_test.go deleted file mode 100644 index 6abe1a7396..0000000000 --- a/pgo-scheduler/scheduler/validate_test.go +++ /dev/null @@ -1,130 +0,0 @@ -package scheduler - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "testing" -) - -func TestValidSchedule(t *testing.T) { - tests := []struct { - schedule string - valid bool - }{ - {"* * * * *", true}, - {"1 1 1 1 1", true}, - {"1-59/2 * * * *", true}, - {"*/2 * * * *", true}, - {"* * * * * * *", false}, - {"60 * * * *", false}, - {"* 24 * * *", false}, - {"* * 32 * *", false}, - } - - for i, test := range tests { - err := ValidateSchedule(test.schedule) - if test.valid && err != nil { - t.Fatalf("tests[%d] - invalid schedule. expected valid, got invalid: %s", - i, err) - } else if !test.valid && err == nil { - t.Fatalf("tests[%d] - valid schedule. expected invalid, got valid: %s", - i, err) - } - } -} - -func TestValidScheduleType(t *testing.T) { - tests := []struct { - schedule string - valid bool - }{ - {"pgbackrest", true}, - {"policy", true}, - {"PGBACKREST", true}, - {"POLICY", true}, - {"pgBackRest", true}, - {"PoLiCY", true}, - {"FOO", false}, - {"BAR", false}, - {"foo", false}, - {"bar", false}, - {"", false}, - } - - for i, test := range tests { - err := ValidateScheduleType(test.schedule) - if test.valid && err != nil { - t.Fatalf("tests[%d] - invalid schedule type. expected valid, got invalid: %s", - i, err) - } else if !test.valid && err == nil { - t.Fatalf("tests[%d] - valid schedule. expected invalid, got valid: %s", - i, err) - } - } -} - -func TestValidBackRestSchedule(t *testing.T) { - tests := []struct { - schedule, deployment, label, backupType, storageType string - valid bool - }{ - {"pgbackrest", "testdeployment", "", "full", "local", true}, - {"pgbackrest", "", "testlabel=label", "diff", "local", true}, - {"pgbackrest", "testdeployment", "", "full", "s3", true}, - {"pgbackrest", "", "testlabel=label", "diff", "s3", true}, - {"policy", "", "", "", "local", false}, - {"pgbackrest", "", "", "", "local", false}, - {"pgbackrest", "", "", "full", "local", false}, - {"pgbackrest", "testdeployment", "", "", "local", false}, - {"pgbackrest", "", "testlabel=label", "", "local", false}, - {"pgbackrest", "testdeployment", "", "foobar", "local", false}, - {"pgbackrest", "", "testlabel=label", "foobar", "local", false}, - {"pgbackrest", "", "testlabel=label", "foobar", "", false}, - } - - for i, test := range tests { - err := ValidateBackRestSchedule(test.schedule, test.deployment, test.label, test.backupType, test.storageType) - if test.valid && err != nil { - t.Fatalf("tests[%d] - invalid schedule type. expected valid, got invalid: %s", - i, err) - } else if !test.valid && err == nil { - t.Fatalf("tests[%d] - valid schedule. expected invalid, got valid: %s", - i, err) - } - } -} - -func TestValidSQLSchedule(t *testing.T) { - tests := []struct { - schedule, policy, database string - valid bool - }{ - {"policy", "mypolicy", "mydatabase", true}, - {"policy", "", "mydatabase", false}, - {"policy", "mypolicy", "", false}, - } - - for i, test := range tests { - err := ValidatePolicySchedule(test.schedule, test.policy, test.database) - if test.valid && err != nil { - t.Fatalf("tests[%d] - invalid schedule type. expected valid, got invalid: %s", - i, err) - } else if !test.valid && err == nil { - t.Fatalf("tests[%d] - valid schedule. 
expected invalid, got valid: %s", - i, err) - } - } -} diff --git a/pgo/api/backrest.go b/pgo/api/backrest.go deleted file mode 100644 index 13e2ff702b..0000000000 --- a/pgo/api/backrest.go +++ /dev/null @@ -1,102 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "net/http" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -func ShowBackrest(httpclient *http.Client, arg, selector string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ShowBackrestResponse, error) { - - var response msgs.ShowBackrestResponse - url := SessionCredentials.APIServerURL + "/backrest/" + arg + "?version=" + msgs.PGO_VERSION + "&selector=" + selector + "&namespace=" + ns - - log.Debugf("show backrest called [%s]", url) - - action := "GET" - req, err := http.NewRequest(action, url, nil) - if err != nil { - return response, err - } - - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Debugf("%v", resp.Body) - log.Debug(err) - return response, err - } - - return response, err - -} - -func CreateBackrestBackup(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateBackrestBackupRequest) (msgs.CreateBackrestBackupResponse, error) { - - var response msgs.CreateBackrestBackupResponse - - jsonValue, _ := json.Marshal(request) - - url := SessionCredentials.APIServerURL + "/backrestbackup" - - log.Debugf("create backrest backup called [%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/cat.go b/pgo/api/cat.go deleted file mode 100644 index 00d17c7fb6..0000000000 --- a/pgo/api/cat.go +++ /dev/null @@ -1,62 +0,0 @@ -package api - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func Cat(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CatRequest) (msgs.CatResponse, error) { - - var response msgs.CatResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/cat" - - log.Debugf("cat called [%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/clone.go b/pgo/api/clone.go deleted file mode 100644 index 2e7ebfccba..0000000000 --- a/pgo/api/clone.go +++ /dev/null @@ -1,62 +0,0 @@ -package api - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func Clone(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CloneRequest) (msgs.CloneResponse, error) { - - var response msgs.CloneResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/clone" - - log.Debugf("clone called [%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%+v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/cluster.go b/pgo/api/cluster.go deleted file mode 100644 index 74407c0dbc..0000000000 --- a/pgo/api/cluster.go +++ /dev/null @@ -1,193 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "net/http" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -const ( - createClusterURL = "%s/clusters" - deleteClusterURL = "%s/clustersdelete" - updateClusterURL = "%s/clustersupdate" - showClusterURL = "%s/showclusters" -) - -func ShowCluster(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowClusterRequest) (msgs.ShowClusterResponse, error) { - - var response msgs.ShowClusterResponse - - jsonValue, _ := json.Marshal(request) - url := fmt.Sprintf(showClusterURL, SessionCredentials.APIServerURL) - log.Debugf("showCluster called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} - -func DeleteCluster(httpclient *http.Client, request *msgs.DeleteClusterRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.DeleteClusterResponse, error) { - - var response msgs.DeleteClusterResponse - - jsonValue, _ := json.Marshal(request) - url := fmt.Sprintf(deleteClusterURL, SessionCredentials.APIServerURL) - - log.Debugf("delete cluster called %s", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} - -func CreateCluster(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateClusterRequest) (msgs.CreateClusterResponse, error) { - - var response msgs.CreateClusterResponse - - jsonValue, _ := json.Marshal(request) - url := fmt.Sprintf(createClusterURL, SessionCredentials.APIServerURL) - log.Debugf("createCluster called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} - -func 
UpdateCluster(httpclient *http.Client, request *msgs.UpdateClusterRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.UpdateClusterResponse, error) { - //func UpdateCluster(httpclient *http.Client, arg, selector string, SessionCredentials *msgs.BasicAuthCredentials, autofailFlag, ns string) (msgs.UpdateClusterResponse, error) { - - var response msgs.UpdateClusterResponse - jsonValue, _ := json.Marshal(request) - - url := fmt.Sprintf(updateClusterURL, SessionCredentials.APIServerURL) - log.Debugf("update cluster called %s", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/api/common.go b/pgo/api/common.go deleted file mode 100644 index 6fcd876ebf..0000000000 --- a/pgo/api/common.go +++ /dev/null @@ -1,37 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "net/http" - - log "github.com/sirupsen/logrus" -) - -// StatusCheck ... -func StatusCheck(resp *http.Response) error { - log.Debugf("http status code is %d", resp.StatusCode) - if resp.StatusCode == 401 { - return fmt.Errorf("Authentication Failed: %d\n", resp.StatusCode) - } else if resp.StatusCode == 405 { - return fmt.Errorf("Method %s for URL %s is not allowed in current the Operator "+ - "install: %d", resp.Request.Method, resp.Request.URL.Path, resp.StatusCode) - } else if resp.StatusCode != 200 { - return fmt.Errorf("Invalid Status Code: %d\n", resp.StatusCode) - } - return nil -} diff --git a/pgo/api/config.go b/pgo/api/config.go deleted file mode 100644 index 90848edcfd..0000000000 --- a/pgo/api/config.go +++ /dev/null @@ -1,63 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowConfig(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ShowConfigResponse, error) { - - var response msgs.ShowConfigResponse - - url := SessionCredentials.APIServerURL + "/config?version=" + msgs.PGO_VERSION + "&namespace=" + ns - log.Debug(url) - - req, err := http.NewRequest("GET", url, nil) - if err != nil { - return response, err - } - - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Print("Error: ") - fmt.Println(err) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/api/df.go b/pgo/api/df.go deleted file mode 100644 index fa993051aa..0000000000 --- a/pgo/api/df.go +++ /dev/null @@ -1,70 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "net/http" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -func ShowDf(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request msgs.DfRequest) (msgs.DfResponse, error) { - var response msgs.DfResponse - - // explicitly set the client version here - request.ClientVersion = msgs.PGO_VERSION - - log.Debugf("ShowDf called [%+v]", request) - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/df" - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - - if err != nil { - return response, err - } - - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - - if err != nil { - return response, err - } - - defer resp.Body.Close() - - log.Debugf("%+v", resp) - - if err := StatusCheck(resp); err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - fmt.Print("Error: ") - fmt.Println(err) - return response, err - } - - return response, nil -} diff --git a/pgo/api/failover.go b/pgo/api/failover.go deleted file mode 100644 index 4ebbab9471..0000000000 --- a/pgo/api/failover.go +++ /dev/null @@ -1,101 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func CreateFailover(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateFailoverRequest) (msgs.CreateFailoverResponse, error) { - - var response msgs.CreateFailoverResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/failover" - - log.Debugf("create failover called [%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} - -func QueryFailover(httpclient *http.Client, arg string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.QueryFailoverResponse, error) { - - var response msgs.QueryFailoverResponse - - url := SessionCredentials.APIServerURL + "/failover/" + arg + "?version=" + msgs.PGO_VERSION + "&namespace=" + ns - log.Debugf("query failover called [%s]", url) - - action := "GET" - - req, err := http.NewRequest(action, url, nil) - if err != nil { - return response, err - } - - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/api/label.go b/pgo/api/label.go deleted file mode 100644 index e083f998a8..0000000000 --- a/pgo/api/label.go +++ /dev/null @@ -1,98 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func LabelClusters(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.LabelRequest) (msgs.LabelResponse, error) { - - var response msgs.LabelResponse - url := SessionCredentials.APIServerURL + "/label" - log.Debugf("label called...[%s]", url) - - jsonValue, _ := json.Marshal(request) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} - -func DeleteLabel(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.DeleteLabelRequest) (msgs.LabelResponse, error) { - - var response msgs.LabelResponse - url := SessionCredentials.APIServerURL + "/labeldelete" - log.Debugf("delete label called...[%s]", url) - - jsonValue, _ := json.Marshal(request) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/namespace.go b/pgo/api/namespace.go deleted file mode 100644 index 96f10ba8d7..0000000000 --- a/pgo/api/namespace.go +++ /dev/null @@ -1,184 +0,0 @@ -package api - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowNamespace(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowNamespaceRequest) (msgs.ShowNamespaceResponse, error) { - - var resp msgs.ShowNamespaceResponse - resp.Status.Code = msgs.Ok - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/namespace" - log.Debugf("ShowNamespace called...[%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - r, err2 := httpclient.Do(req) - if err2 != nil { - return resp, err2 - } - defer r.Body.Close() - - log.Debugf("%v", r) - err = StatusCheck(r) - if err != nil { - return resp, err - } - - if err := json.NewDecoder(r.Body).Decode(&resp); err != nil { - log.Printf("%v\n", r.Body) - fmt.Print("Error: ") - fmt.Println(err) - log.Println(err) - return resp, err - } - - return resp, err - -} - -func CreateNamespace(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateNamespaceRequest) (msgs.CreateNamespaceResponse, error) { - - var resp msgs.CreateNamespaceResponse - resp.Status.Code = msgs.Ok - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/namespacecreate" - log.Debugf("CreateNamespace called...[%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - r, err := httpclient.Do(req) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - defer r.Body.Close() - - log.Debugf("%v", r) - err = StatusCheck(r) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - - if err := json.NewDecoder(r.Body).Decode(&resp); err != nil { - log.Printf("%v\n", r.Body) - log.Println(err) - resp.Status.Code = msgs.Error - return resp, err - } - - log.Debugf("response back from apiserver was %v", resp) - return resp, err -} - -func DeleteNamespace(httpclient *http.Client, request *msgs.DeleteNamespaceRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.DeleteNamespaceResponse, error) { - - var response msgs.DeleteNamespaceResponse - - url := SessionCredentials.APIServerURL + "/namespacedelete" - - log.Debugf("DeleteNamespace called [%s]", url) - - jsonValue, _ := json.Marshal(request) - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} -func UpdateNamespace(httpclient *http.Client, request 
*msgs.UpdateNamespaceRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.UpdateNamespaceResponse, error) { - - var response msgs.UpdateNamespaceResponse - - url := SessionCredentials.APIServerURL + "/namespaceupdate" - - log.Debugf("UpdateNamespace called [%s]", url) - - jsonValue, _ := json.Marshal(request) - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/api/pgadmin.go b/pgo/api/pgadmin.go deleted file mode 100644 index 0d410355cd..0000000000 --- a/pgo/api/pgadmin.go +++ /dev/null @@ -1,160 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "io/ioutil" - "net/http" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" -) - -func CreatePgAdmin(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePgAdminRequest) (msgs.CreatePgAdminResponse, error) { - var response msgs.CreatePgAdminResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgadmin" - log.Debugf("createPgAdmin called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - // Read resp.Body so we can log it in the event of error - body, _ := ioutil.ReadAll(resp.Body) - - if err := json.Unmarshal(body, &response); err != nil { - log.Printf("Response body:\n%s\n", string(body)) - log.Println(err) - return response, err - } - - return response, err -} - -func DeletePgAdmin(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.DeletePgAdminRequest) (msgs.DeletePgAdminResponse, error) { - var response msgs.DeletePgAdminResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgadmin" - log.Debugf("deletePgAdmin called...[%s]", url) - - action := "DELETE" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - // Read resp.Body so we can log it in the event of error - body, _ := ioutil.ReadAll(resp.Body) - - if err := json.Unmarshal(body, &response); err != nil { - log.Printf("Response body:\n%s\n", string(body)) - log.Println(err) - return response, err - } - - return response, err -} - -// ShowPgAdmin makes an API call to the "show pgadmin" apiserver endpoint -// and provides the results either using the ShowPgAdmin response format which -// may include an error -func ShowPgAdmin(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, - request msgs.ShowPgAdminRequest) (msgs.ShowPgAdminResponse, error) { - var response msgs.ShowPgAdminResponse - - // explicitly set the client version here - request.ClientVersion = msgs.PGO_VERSION - - log.Debugf("ShowPgAdmin called [%+v]", request) - - // put the request into JSON format and format the URL and HTTP verb - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgadmin/show" - action := "POST" - - // prepare the request! 
- httpRequest, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - - // if there is an error preparing the request, return here - if err != nil { - return msgs.ShowPgAdminResponse{}, err - } - - // set the headers around the request, including authentication information - httpRequest.Header.Set("Content-Type", "application/json") - httpRequest.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - // make the request! if there is an error making the request, return - resp, err := httpclient.Do(httpRequest) - if err != nil { - return msgs.ShowPgAdminResponse{}, err - } - defer resp.Body.Close() - - log.Debugf("%+v", resp) - - // check on the HTTP status. If it is not 200, return here - if err := StatusCheck(resp); err != nil { - return msgs.ShowPgAdminResponse{}, err - } - - // Read resp.Body so we can log it in the event of error - body, _ := ioutil.ReadAll(resp.Body) - - if err := json.Unmarshal(body, &response); err != nil { - log.Printf("Response body:\n%s\n", string(body)) - log.Println(err) - return msgs.ShowPgAdminResponse{}, err - } - - return response, nil -} diff --git a/pgo/api/pgbouncer.go b/pgo/api/pgbouncer.go deleted file mode 100644 index efee86ca53..0000000000 --- a/pgo/api/pgbouncer.go +++ /dev/null @@ -1,207 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func CreatePgbouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePgbouncerRequest) (msgs.CreatePgbouncerResponse, error) { - - var response msgs.CreatePgbouncerResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgbouncer" - log.Debugf("createPgbouncer called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} - -func DeletePgbouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.DeletePgbouncerRequest) (msgs.DeletePgbouncerResponse, error) { - - var response msgs.DeletePgbouncerResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgbouncer" - log.Debugf("deletePgbouncer called...[%s]", url) - - action := "DELETE" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} - -// ShowPgBouncer makes an API call to the "show pgbouncer" apiserver endpoint -// and provides the results either using the ShowPgBouncer response format which -// may include an error -func ShowPgBouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, - request msgs.ShowPgBouncerRequest) (msgs.ShowPgBouncerResponse, error) { - // explicitly set the client version here - request.ClientVersion = msgs.PGO_VERSION - - log.Debugf("ShowPgBouncer called [%+v]", request) - - // put the request into JSON format and format the URL and HTTP verb - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgbouncer/show" - action := "POST" - - // prepare the request! - httpRequest, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - - // if there is an error preparing the request, return here - if err != nil { - return msgs.ShowPgBouncerResponse{}, err - } - - // set the headers around the request, including authentication information - httpRequest.Header.Set("Content-Type", "application/json") - httpRequest.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - // make the request! 
if there is an error making the request, return - - httpResponse, err := httpclient.Do(httpRequest) - - if err != nil { - return msgs.ShowPgBouncerResponse{}, err - } - - defer httpResponse.Body.Close() - - log.Debugf("%+v", httpResponse) - - // check on the HTTP status. If it is not 200, return here - if err := StatusCheck(httpResponse); err != nil { - return msgs.ShowPgBouncerResponse{}, err - } - - // attempt to decode the response into the expected JSON format - response := msgs.ShowPgBouncerResponse{} - - if err := json.NewDecoder(httpResponse.Body).Decode(&response); err != nil { - return msgs.ShowPgBouncerResponse{}, err - } - - // we did it! return the response - return response, nil -} - -// UpdatePgBouncer makes an API call to the "update pgbouncer" apiserver -// endpoint and provides the results -func UpdatePgBouncer(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, - request msgs.UpdatePgBouncerRequest) (msgs.UpdatePgBouncerResponse, error) { - // explicitly set the client version here - request.ClientVersion = msgs.PGO_VERSION - - log.Debugf("UpdatePgBouncer called [%+v]", request) - - // put the request into JSON format and format the URL and HTTP verb - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgbouncer" - action := "PUT" - - // prepare the request! - httpRequest, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - - // if there is an error preparing the request, return here - if err != nil { - return msgs.UpdatePgBouncerResponse{}, err - } - - // set the headers around the request, including authentication information - httpRequest.Header.Set("Content-Type", "application/json") - httpRequest.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - // make the request! if there is an error making the request, return - - httpResponse, err := httpclient.Do(httpRequest) - - if err != nil { - return msgs.UpdatePgBouncerResponse{}, err - } - - defer httpResponse.Body.Close() - - log.Debugf("%+v", httpResponse) - - // check on the HTTP status. If it is not 200, return here - if err := StatusCheck(httpResponse); err != nil { - return msgs.UpdatePgBouncerResponse{}, err - } - - // attempt to decode the response into the expected JSON format - response := msgs.UpdatePgBouncerResponse{} - - if err := json.NewDecoder(httpResponse.Body).Decode(&response); err != nil { - return msgs.UpdatePgBouncerResponse{}, err - } - - // we did it! return the response - return response, nil -} diff --git a/pgo/api/pgdump.go b/pgo/api/pgdump.go deleted file mode 100644 index 3bc0804c7b..0000000000 --- a/pgo/api/pgdump.go +++ /dev/null @@ -1,102 +0,0 @@ -package api - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "net/http" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -func ShowpgDump(httpclient *http.Client, arg, selector string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ShowBackupResponse, error) { - - var response msgs.ShowBackupResponse - url := SessionCredentials.APIServerURL + "/pgdump/" + arg + "?version=" + msgs.PGO_VERSION + "&selector=" + selector + "&namespace=" + ns - - log.Debugf("show pgdump called [%s]", url) - - action := "GET" - req, err := http.NewRequest(action, url, nil) - if err != nil { - return response, err - } - - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Debugf("%v", resp.Body) - log.Debug(err) - return response, err - } - - return response, err - -} - -func CreatepgDumpBackup(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatepgDumpBackupRequest) (msgs.CreatepgDumpBackupResponse, error) { - - var response msgs.CreatepgDumpBackupResponse - - jsonValue, _ := json.Marshal(request) - - url := SessionCredentials.APIServerURL + "/pgdumpbackup" - - log.Debugf("create pgdump backup called [%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/pgorole.go b/pgo/api/pgorole.go deleted file mode 100644 index 804f0c1eb2..0000000000 --- a/pgo/api/pgorole.go +++ /dev/null @@ -1,178 +0,0 @@ -package api - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowPgorole(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowPgoroleRequest) (msgs.ShowPgoroleResponse, error) { - - var response msgs.ShowPgoroleResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgoroleshow" - log.Debugf("ShowPgorole called...[%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err - -} -func CreatePgorole(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePgoroleRequest) (msgs.CreatePgoroleResponse, error) { - - var resp msgs.CreatePgoroleResponse - resp.Status.Code = msgs.Ok - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgorolecreate" - log.Debugf("CreatePgorole called...[%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - r, err := httpclient.Do(req) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - defer r.Body.Close() - - log.Debugf("%v", r) - err = StatusCheck(r) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - - if err := json.NewDecoder(r.Body).Decode(&resp); err != nil { - log.Printf("%v\n", r.Body) - log.Println(err) - resp.Status.Code = msgs.Error - return resp, err - } - - log.Debugf("response back from apiserver was %v", resp) - return resp, err -} - -func DeletePgorole(httpclient *http.Client, request *msgs.DeletePgoroleRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.DeletePgoroleResponse, error) { - - var response msgs.DeletePgoroleResponse - - url := SessionCredentials.APIServerURL + "/pgoroledelete" - - log.Debugf("DeletePgorole called [%s]", url) - - jsonValue, _ := json.Marshal(request) - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} - -func UpdatePgorole(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request 
*msgs.UpdatePgoroleRequest) (msgs.UpdatePgoroleResponse, error) { - - var response msgs.UpdatePgoroleResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgoroleupdate" - log.Debugf("UpdatePgorole called...[%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/pgouser.go b/pgo/api/pgouser.go deleted file mode 100644 index e0026d20ca..0000000000 --- a/pgo/api/pgouser.go +++ /dev/null @@ -1,178 +0,0 @@ -package api - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowPgouser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowPgouserRequest) (msgs.ShowPgouserResponse, error) { - - var response msgs.ShowPgouserResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgousershow" - log.Debugf("ShowPgouser called...[%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err - -} -func CreatePgouser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePgouserRequest) (msgs.CreatePgouserResponse, error) { - - var resp msgs.CreatePgouserResponse - resp.Status.Code = msgs.Ok - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgousercreate" - log.Debugf("CreatePgouser called...[%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - 
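	// At this point the request is fully prepared: the JSON-encoded
	// CreatePgouserRequest body, the Content-Type header, and the basic-auth
	// credentials are all set. What follows (Do, StatusCheck, Decode) is the
	// same request/response sequence used by every client helper in this
	// package.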
- r, err := httpclient.Do(req) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - defer r.Body.Close() - - log.Debugf("%v", r) - err = StatusCheck(r) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - - if err := json.NewDecoder(r.Body).Decode(&resp); err != nil { - log.Printf("%v\n", r.Body) - log.Println(err) - resp.Status.Code = msgs.Error - return resp, err - } - - log.Debugf("response back from apiserver was %v", resp) - return resp, err -} - -func DeletePgouser(httpclient *http.Client, request *msgs.DeletePgouserRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.DeletePgouserResponse, error) { - - var response msgs.DeletePgouserResponse - - url := SessionCredentials.APIServerURL + "/pgouserdelete" - - log.Debugf("DeletePgouser called [%s]", url) - - jsonValue, _ := json.Marshal(request) - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} - -func UpdatePgouser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.UpdatePgouserRequest) (msgs.UpdatePgouserResponse, error) { - - var response msgs.UpdatePgouserResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgouserupdate" - log.Debugf("UpdatePgouser called...[%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/policy.go b/pgo/api/policy.go deleted file mode 100644 index b7e9cf5d6f..0000000000 --- a/pgo/api/policy.go +++ /dev/null @@ -1,182 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowPolicy(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowPolicyRequest) (msgs.ShowPolicyResponse, error) { - - var response msgs.ShowPolicyResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/showpolicies" - log.Debugf("showPolicy called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err - -} -func CreatePolicy(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreatePolicyRequest) (msgs.CreatePolicyResponse, error) { - - var resp msgs.CreatePolicyResponse - resp.Status.Code = msgs.Ok - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/policies" - log.Debugf("createPolicy called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - r, err := httpclient.Do(req) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - defer r.Body.Close() - - log.Debugf("%v", r) - err = StatusCheck(r) - if err != nil { - resp.Status.Code = msgs.Error - return resp, err - } - - if err := json.NewDecoder(r.Body).Decode(&resp); err != nil { - log.Printf("%v\n", r.Body) - log.Println(err) - resp.Status.Code = msgs.Error - return resp, err - } - - log.Debugf("response back from apiserver was %v", resp) - return resp, err -} - -func DeletePolicy(httpclient *http.Client, request *msgs.DeletePolicyRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.DeletePolicyResponse, error) { - - var response msgs.DeletePolicyResponse - - url := SessionCredentials.APIServerURL + "/policiesdelete" - - log.Debugf("delete policy called [%s]", url) - - action := "POST" - jsonValue, _ := json.Marshal(request) - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} - -func ApplyPolicy(httpclient *http.Client, SessionCredentials 
*msgs.BasicAuthCredentials, request *msgs.ApplyPolicyRequest) (msgs.ApplyPolicyResponse, error) { - - var response msgs.ApplyPolicyResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/policies/apply" - log.Debugf("applyPolicy called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/pvc.go b/pgo/api/pvc.go deleted file mode 100644 index f4fac4ceb4..0000000000 --- a/pgo/api/pvc.go +++ /dev/null @@ -1,65 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowPVC(httpclient *http.Client, request *msgs.ShowPVCRequest, SessionCredentials *msgs.BasicAuthCredentials) (msgs.ShowPVCResponse, error) { - - var response msgs.ShowPVCResponse - - url := SessionCredentials.APIServerURL + "/showpvc" - log.Debugf("ShowPVC called...[%s]", url) - - jsonValue, _ := json.Marshal(request) - log.Debugf("ShowPVC called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/api/reload.go b/pgo/api/reload.go deleted file mode 100644 index 9235cc1ea9..0000000000 --- a/pgo/api/reload.go +++ /dev/null @@ -1,62 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func Reload(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ReloadRequest) (msgs.ReloadResponse, error) { - - var response msgs.ReloadResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/reload" - - log.Debugf("reload called [%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/restart.go b/pgo/api/restart.go deleted file mode 100644 index 13dc205972..0000000000 --- a/pgo/api/restart.go +++ /dev/null @@ -1,108 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "net/http" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -// Restart POSTs a Restart request to the PostgreSQL Operator "restart" endpoint in order to restart -// a PG cluster or one or more instances within it. 
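// restartExample is a minimal, illustrative sketch of driving the two helpers
// below. It assumes a configured *http.Client and valid apiserver credentials;
// the user name, password, URL, cluster name ("hippo"), and namespace ("pgo")
// are all hypothetical values.
func restartExample(httpclient *http.Client) error {
	creds := &msgs.BasicAuthCredentials{
		Username:     "admin",
		Password:     "examplepassword",
		APIServerURL: "https://postgres-operator:8443",
	}

	// Ask the apiserver which instances of the cluster may be restarted.
	queryResponse, err := QueryRestart(httpclient, "hippo", creds, "pgo")
	if err != nil {
		return err
	}
	log.Debugf("restart candidates: %+v", queryResponse)

	// Request the restart itself. The RestartRequest fields are left at their
	// zero values here because they depend on the apiservermsgs definitions.
	restartResponse, err := Restart(httpclient, creds, &msgs.RestartRequest{})
	if err != nil {
		return err
	}
	log.Debugf("restart result: %+v", restartResponse)

	return nil
}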
-func Restart(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, - request *msgs.RestartRequest) (msgs.RestartResponse, error) { - - var response msgs.RestartResponse - - jsonValue, _ := json.Marshal(request) - url := fmt.Sprintf("%s/%s", SessionCredentials.APIServerURL, "restart") - req, err := http.NewRequest(http.MethodPost, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - - log.Debugf("restart called [%s]", url) - - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("restart response: %v", resp) - - if err := StatusCheck(resp); err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Println(err) - return response, err - } - - return response, err -} - -// QueryRestart sends a GET request to the PostgreSQL Operator "/restart/{clusterName}" endpoint -// in order to obtain information about the various instances available to restart within the -// cluster specified. -func QueryRestart(httpclient *http.Client, clusterName string, SessionCredentials *msgs.BasicAuthCredentials, - namespace string) (msgs.QueryRestartResponse, error) { - - var response msgs.QueryRestartResponse - - url := fmt.Sprintf("%s/%s/%s", SessionCredentials.APIServerURL, "restart", clusterName) - req, err := http.NewRequest(http.MethodGet, url, nil) - if err != nil { - return response, err - } - - q := req.URL.Query() - q.Add("version", msgs.PGO_VERSION) - q.Add("namespace", namespace) - req.URL.RawQuery = q.Encode() - - log.Debugf("query restart called [%s]", req.URL) - - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("query restart response: %v", resp) - - if err := StatusCheck(resp); err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/restore.go b/pgo/api/restore.go deleted file mode 100644 index e22cea904b..0000000000 --- a/pgo/api/restore.go +++ /dev/null @@ -1,62 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func Restore(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.RestoreRequest) (msgs.RestoreResponse, error) { - - var response msgs.RestoreResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/restore" - - log.Debugf("restore called [%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/restoreDump.go b/pgo/api/restoreDump.go deleted file mode 100644 index bd911c1b75..0000000000 --- a/pgo/api/restoreDump.go +++ /dev/null @@ -1,62 +0,0 @@ -package api - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func RestoreDump(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.PgRestoreRequest) (msgs.RestoreResponse, error) { - - var response msgs.RestoreResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/pgdumprestore" - - log.Debugf("restore dump called [%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/scale.go b/pgo/api/scale.go deleted file mode 100644 index 6defb09127..0000000000 --- a/pgo/api/scale.go +++ /dev/null @@ -1,75 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "fmt" - "net/http" - "strconv" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -func ScaleCluster(httpclient *http.Client, arg string, ReplicaCount int, - StorageConfig, NodeLabel, CCPImageTag, ServiceType string, - SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ClusterScaleResponse, error) { - - var response msgs.ClusterScaleResponse - - url := fmt.Sprintf("%s/clusters/scale/%s", SessionCredentials.APIServerURL, arg) - log.Debug(url) - - action := "GET" - req, err := http.NewRequest(action, url, nil) - if err != nil { - return response, err - } - - q := req.URL.Query() - q.Add("replica-count", strconv.Itoa(ReplicaCount)) - q.Add("storage-config", StorageConfig) - q.Add("node-label", NodeLabel) - q.Add("version", msgs.PGO_VERSION) - q.Add("ccp-image-tag", CCPImageTag) - q.Add("service-type", ServiceType) - q.Add("namespace", ns) - req.URL.RawQuery = q.Encode() - - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/api/scaledown.go b/pgo/api/scaledown.go deleted file mode 100644 index 1cc6691b72..0000000000 --- a/pgo/api/scaledown.go +++ /dev/null @@ -1,109 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "fmt" - "net/http" - "strconv" - - "github.com/crunchydata/postgres-operator/internal/config" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -func ScaleDownCluster(httpclient *http.Client, clusterName, ScaleDownTarget string, - DeleteData bool, SessionCredentials *msgs.BasicAuthCredentials, - ns string) (msgs.ScaleDownResponse, error) { - - var response msgs.ScaleDownResponse - url := fmt.Sprintf("%s/scaledown/%s", SessionCredentials.APIServerURL, clusterName) - log.Debug(url) - - action := "GET" - req, err := http.NewRequest(action, url, nil) - if err != nil { - return response, err - } - - q := req.URL.Query() - q.Add("version", msgs.PGO_VERSION) - q.Add(config.LABEL_REPLICA_NAME, ScaleDownTarget) - q.Add(config.LABEL_DELETE_DATA, strconv.FormatBool(DeleteData)) - q.Add("namespace", ns) - req.URL.RawQuery = q.Encode() - - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err - -} - -func ScaleQuery(httpclient *http.Client, arg string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ScaleQueryResponse, error) { - - var response msgs.ScaleQueryResponse - - url := SessionCredentials.APIServerURL + "/scale/" + arg + "?version=" + msgs.PGO_VERSION + "&namespace=" + ns - log.Debug(url) - - action := "GET" - - req, err := http.NewRequest(action, url, nil) - if err != nil { - return response, err - } - - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Println("Error: ", err) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/api/schedule.go b/pgo/api/schedule.go deleted file mode 100644 index 4007e77e1b..0000000000 --- a/pgo/api/schedule.go +++ /dev/null @@ -1,138 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "fmt" - "net/http" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -const ( - createScheduleURL = "%s/schedule" - deleteScheduleURL = "%s/scheduledelete" - showScheduleURL = "%s/scheduleshow" -) - -func CreateSchedule(client *http.Client, SessionCredentials *msgs.BasicAuthCredentials, r *msgs.CreateScheduleRequest) (msgs.CreateScheduleResponse, error) { - var response msgs.CreateScheduleResponse - - jsonValue, _ := json.Marshal(r) - url := fmt.Sprintf(createScheduleURL, SessionCredentials.APIServerURL) - - log.Debugf("create schedule called [%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := client.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - if err := StatusCheck(resp); err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} - -func DeleteSchedule(client *http.Client, SessionCredentials *msgs.BasicAuthCredentials, r *msgs.DeleteScheduleRequest) (msgs.DeleteScheduleResponse, error) { - var response msgs.DeleteScheduleResponse - - jsonValue, _ := json.Marshal(r) - url := fmt.Sprintf(deleteScheduleURL, SessionCredentials.APIServerURL) - - log.Debugf("delete schedule called [%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := client.Do(req) - if err != nil { - return response, err - } - - defer resp.Body.Close() - - log.Debugf("%v", resp) - if err := StatusCheck(resp); err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} - -func ShowSchedule(client *http.Client, SessionCredentials *msgs.BasicAuthCredentials, r *msgs.ShowScheduleRequest) (msgs.ShowScheduleResponse, error) { - var response msgs.ShowScheduleResponse - - jsonValue, _ := json.Marshal(r) - url := fmt.Sprintf(showScheduleURL, SessionCredentials.APIServerURL) - log.Debugf("show schedule called [%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := client.Do(req) - if err != nil { - return response, err - } - - defer resp.Body.Close() - - log.Debugf("%v", resp) - if err := StatusCheck(resp); err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/status.go b/pgo/api/status.go deleted file mode 100644 index ad70bd2f96..0000000000 --- a/pgo/api/status.go +++ /dev/null @@ -1,59 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. 
- Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowStatus(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.StatusResponse, error) { - - var response msgs.StatusResponse - url := SessionCredentials.APIServerURL + "/status?version=" + msgs.PGO_VERSION + "&namespace=" + ns - log.Debug(url) - - action := "GET" - req, err := http.NewRequest(action, url, nil) - if err != nil { - return response, err - } - - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/api/test.go b/pgo/api/test.go deleted file mode 100644 index 887d67b056..0000000000 --- a/pgo/api/test.go +++ /dev/null @@ -1,62 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "bytes" - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowTest(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ClusterTestRequest) (msgs.ClusterTestResponse, error) { - - var response msgs.ClusterTestResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/testclusters" - log.Debug(url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/api/upgrade.go b/pgo/api/upgrade.go deleted file mode 100644 index 6079a29023..0000000000 --- a/pgo/api/upgrade.go +++ /dev/null @@ -1,62 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "net/http" - - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -func CreateUpgrade(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateUpgradeRequest) (msgs.CreateUpgradeResponse, error) { - - var response msgs.CreateUpgradeResponse - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/upgrades" - log.Debugf("CreateUpgrade called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/user.go b/pgo/api/user.go deleted file mode 100644 index 38424ab17b..0000000000 --- a/pgo/api/user.go +++ /dev/null @@ -1,182 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.ShowUserRequest) (msgs.ShowUserResponse, error) { - - var response msgs.ShowUserResponse - response.Status.Code = msgs.Ok - - request.ClientVersion = msgs.PGO_VERSION - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/usershow" - log.Debugf("ShowUser called...[%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - response.Status.Code = msgs.Error - return response, err - } - - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err - -} -func CreateUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.CreateUserRequest) (msgs.CreateUserResponse, error) { - - var response msgs.CreateUserResponse - - request.ClientVersion = msgs.PGO_VERSION - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/usercreate" - log.Debugf("createUsers called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} - -func DeleteUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.DeleteUserRequest) (msgs.DeleteUserResponse, error) { - - var response msgs.DeleteUserResponse - - request.ClientVersion = msgs.PGO_VERSION - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/userdelete" - log.Debugf("deleteUser called...[%s]", url) - - action := "POST" - req, err := http.NewRequest(action, url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if 
err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err - -} - -func UpdateUser(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials, request *msgs.UpdateUserRequest) (msgs.UpdateUserResponse, error) { - - var response msgs.UpdateUserResponse - - request.ClientVersion = msgs.PGO_VERSION - - jsonValue, _ := json.Marshal(request) - url := SessionCredentials.APIServerURL + "/userupdate" - log.Debugf("UpdateUser called...[%s]", url) - - req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonValue)) - if err != nil { - return response, err - } - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err -} diff --git a/pgo/api/version.go b/pgo/api/version.go deleted file mode 100644 index 9ca743add1..0000000000 --- a/pgo/api/version.go +++ /dev/null @@ -1,65 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowVersion(httpclient *http.Client, SessionCredentials *msgs.BasicAuthCredentials) (msgs.VersionResponse, error) { - - var response msgs.VersionResponse - - log.Debug("ShowVersion called ") - - url := SessionCredentials.APIServerURL + "/version" - log.Debug(url) - - req, err := http.NewRequest("GET", url, nil) - if err != nil { - return response, err - } - - req.Header.Set("Content-Type", "application/json") - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - - resp, err := httpclient.Do(req) - if err != nil { - return response, err - } - defer resp.Body.Close() - - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - fmt.Print("Error: ") - fmt.Println(err) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/api/workflow.go b/pgo/api/workflow.go deleted file mode 100644 index 3289329aa1..0000000000 --- a/pgo/api/workflow.go +++ /dev/null @@ -1,60 +0,0 @@ -package api - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "fmt" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "net/http" -) - -func ShowWorkflow(httpclient *http.Client, workflowID string, SessionCredentials *msgs.BasicAuthCredentials, ns string) (msgs.ShowWorkflowResponse, error) { - - var response msgs.ShowWorkflowResponse - - url := SessionCredentials.APIServerURL + "/workflow/" + workflowID + "?version=" + msgs.PGO_VERSION + "&namespace=" + ns - log.Debugf("ShowWorkflow called...[%s]", url) - - action := "GET" - req, err := http.NewRequest(action, url, nil) - if err != nil { - return response, err - } - - req.SetBasicAuth(SessionCredentials.Username, SessionCredentials.Password) - resp, err := httpclient.Do(req) - if err != nil { - fmt.Println("Error: Do: ", err) - return response, err - } - defer resp.Body.Close() - log.Debugf("%v", resp) - err = StatusCheck(resp) - if err != nil { - return response, err - } - - if err := json.NewDecoder(resp.Body).Decode(&response); err != nil { - log.Printf("%v\n", resp.Body) - log.Println(err) - return response, err - } - - return response, err - -} diff --git a/pgo/cmd/auth.go b/pgo/cmd/auth.go deleted file mode 100644 index 322e5f5e9c..0000000000 --- a/pgo/cmd/auth.go +++ /dev/null @@ -1,296 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "crypto/tls" - "crypto/x509" - "fmt" - "io/ioutil" - "net/http" - "os" - "runtime" - "strconv" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/internal/tlsutil" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" -) - -const ( - pgoUserFileEnvVar = "PGOUSER" - pgoUserNameEnvVar = "PGOUSERNAME" - pgoUserPasswordEnvVar = "PGOUSERPASS" -) - -// SessionCredentials stores the PGO user, PGO password and the PGO APIServer URL -var SessionCredentials msgs.BasicAuthCredentials - -// Globally shared Operator API HTTP client -var httpclient *http.Client - -// StatusCheck ... -func StatusCheck(resp *http.Response) { - log.Debugf("HTTP status code is %d", resp.StatusCode) - if resp.StatusCode == 401 { - fmt.Printf("Error: Authentication Failed: %d\n", resp.StatusCode) - os.Exit(2) - } else if resp.StatusCode != 200 { - fmt.Printf("Error: Invalid Status Code: %d\n", resp.StatusCode) - os.Exit(2) - } -} - -// userHomeDir updates the env variable with the appropriate home directory -// depending on the host operating system the PGO client is running on. 
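// pgouserFileExample is a minimal sketch of the default credentials file the
// client looks for: "<home>/.pgouser", containing a single
// "username:password" line that is split on the first ":" (so the password
// itself may contain colons). The path and values shown are hypothetical.
func pgouserFileExample() {
	// e.g. "/home/alice/.pgouser" on Linux, "C:\Users\alice\.pgouser" on Windows.
	defaultPath := userHomeDir() + "/" + ".pgouser"
	log.Debugf("default pgouser path: %s", defaultPath)

	// One line only; everything after the first ":" is treated as the password.
	creds := parseCredentials("pgoadmin:ex:ample")
	log.Debugf("parsed pgouser credentials for %s", creds.Username)
}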
-func userHomeDir() string { - env := "HOME" - if runtime.GOOS == "windows" { - env = "USERPROFILE" - } else if runtime.GOOS == "plan9" { - env = "home" - } - return os.Getenv(env) -} - -func parseCredentials(dat string) msgs.BasicAuthCredentials { - // splits by new line to ensure that user on has one line in pgouser file/creds - // this split does not take into account newline conventions of different systems - // ex. windows new lines ("\r\n") - lines := strings.Split(strings.TrimSpace(dat), "\n") - if len(lines) != 1 { - log.Debugf("expected one and only one line in pgouser file - found %d", len(lines)) - fmt.Println("unable to parse credentials in pgouser file") - os.Exit(2) // TODO: graceful exit - } - - // the delimiting char ":" is a valid password char so SplitN will handle if - // ":" is used by always splitting into two substrings including the username - // and everything after the first ":" - fields := strings.SplitN(lines[0], ":", 2) - if len(fields) != 2 { - log.Debug("invalid credential format: expecting \":\"") - fmt.Println("unable to parse credentials in pgouser file") - os.Exit(2) // TODO: graceful exit - } - log.Debugf("%v", fields) - log.Debugf("username=[%s] password=[%s]", fields[0], fields[1]) - - creds := msgs.BasicAuthCredentials{ - Username: fields[0], - Password: fields[1], - APIServerURL: APIServerURL, - } - return creds -} - -// getCredentialsFromFile reads the pgouser and password from the .pgouser file, -// checking in the various locations that file can be expected, and then returns -// the credentials -func getCredentialsFromFile() msgs.BasicAuthCredentials { - found := false - dir := userHomeDir() - fullPath := dir + "/" + ".pgouser" - var creds msgs.BasicAuthCredentials - - //look in env var for pgouser file - pgoUser := os.Getenv(pgoUserFileEnvVar) - if pgoUser != "" { - fullPath = pgoUser - log.Debugf("%s environment variable is being used at %s", pgoUserFileEnvVar, fullPath) - dat, err := ioutil.ReadFile(fullPath) - if err != nil { - fmt.Printf("Error: %s file not found", fullPath) - os.Exit(2) - } - - log.Debugf("pgouser file found at %s contains %s", fullPath, string(dat)) - creds = parseCredentials(string(dat)) - found = true - } - - //look in home directory for .pgouser file - if !found { - log.Debugf("looking in %s for credentials", fullPath) - dat, err := ioutil.ReadFile(fullPath) - if err != nil { - log.Debugf("%s not found", fullPath) - } else { - log.Debugf("%s found", fullPath) - log.Debugf("pgouser file found at %s contains %s", fullPath, string(dat)) - creds = parseCredentials(string(dat)) - found = true - - } - } - - //look in etc for pgouser file - if !found { - fullPath = "/etc/pgo/pgouser" - dat, err := ioutil.ReadFile(fullPath) - if err != nil { - log.Debugf("%s not found", fullPath) - } else { - log.Debugf("%s found", fullPath) - log.Debugf("pgouser file found at %s contains %s", fullPath, string(dat)) - creds = parseCredentials(string(dat)) - found = true - } - } - - if !found { - fmt.Println("could not find pgouser file") - os.Exit(2) - } - - return creds -} - -// getCredentialsFromEnvironment reads the pgouser and password from relevant environment -// variables and then returns a created BasicAuthCredentials object with both values, -// as well as the APIServer URL. 
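// envCredentialsExample is a minimal sketch of the environment-variable
// alternative to the .pgouser file: PGOUSERNAME and PGOUSERPASS are read
// together. If only one of the two is set the client exits with an error, and
// if neither is set it falls back to the .pgouser file. The values below are
// hypothetical.
func envCredentialsExample() {
	os.Setenv(pgoUserNameEnvVar, "pgoadmin")     // PGOUSERNAME
	os.Setenv(pgoUserPasswordEnvVar, "ex:ample") // PGOUSERPASS

	creds := getCredentialsFromEnvironment()
	log.Debugf("authenticating to %s as %s", creds.APIServerURL, creds.Username)
}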
-func getCredentialsFromEnvironment() msgs.BasicAuthCredentials { - pgoUser := os.Getenv(pgoUserNameEnvVar) - pgoPass := os.Getenv(pgoUserPasswordEnvVar) - - if len(pgoUser) > 0 && len(pgoPass) < 1 { - fmt.Println("Error: PGOUSERPASS needs to be specified if PGOUSERNAME is provided") - os.Exit(2) - } - if len(pgoPass) > 0 && len(pgoUser) < 1 { - fmt.Println("Error: PGOUSERNAME needs to be specified if PGOUSERPASS is provided") - os.Exit(2) - } - - creds := msgs.BasicAuthCredentials{ - Username: os.Getenv(pgoUserNameEnvVar), - Password: os.Getenv(pgoUserPasswordEnvVar), - APIServerURL: APIServerURL, - } - return creds -} - -// SetSessionUserCredentials gathers the pgouser and password information -// and stores them for use by the PGO client -func SetSessionUserCredentials() { - log.Debug("GetSessionCredentials called") - - SessionCredentials = getCredentialsFromEnvironment() - - if !SessionCredentials.HasUsernameAndPassword() { - SessionCredentials = getCredentialsFromFile() - } -} - -// GetTLSTransport returns an http.Transport configured with environmental -// TLS client settings -func GetTLSTransport() (*http.Transport, error) { - log.Debug("GetTLSTransport called") - - // By default, load the OS CA cert truststore unless explicitly disabled - // Reasonable default given the client controls to whom it is connecting - var caCertPool *x509.CertPool - if noTrust, _ := strconv.ParseBool(os.Getenv("EXCLUDE_OS_TRUST")); noTrust || EXCLUDE_OS_TRUST { - caCertPool = x509.NewCertPool() - } else { - if pool, err := x509.SystemCertPool(); err != nil { - return nil, fmt.Errorf("while loading System CA pool - %s", err) - } else { - caCertPool = pool - } - } - - // Priority: Flag -> ENV - caCertPath := PGO_CA_CERT - if caCertPath == "" { - caCertPath = os.Getenv("PGO_CA_CERT") - if caCertPath == "" { - return nil, fmt.Errorf("PGO_CA_CERT not specified") - } - } - - // Open trust file and extend trust pool - if trustFile, err := os.Open(caCertPath); err != nil { - newErr := fmt.Errorf("unable to load TLS trust from %s - [%v]", caCertPath, err) - return nil, newErr - } else { - err = tlsutil.ExtendTrust(caCertPool, trustFile) - if err != nil { - newErr := fmt.Errorf("error reading %s - %v", caCertPath, err) - return nil, newErr - } - trustFile.Close() - } - - // Priority: Flag -> ENV - clientCertPath := PGO_CLIENT_CERT - if clientCertPath == "" { - clientCertPath = os.Getenv("PGO_CLIENT_CERT") - if clientCertPath == "" { - return nil, fmt.Errorf("PGO_CLIENT_CERT not specified") - } - } - - // Priority: Flag -> ENV - clientKeyPath := PGO_CLIENT_KEY - if clientKeyPath == "" { - clientKeyPath = os.Getenv("PGO_CLIENT_KEY") - if clientKeyPath == "" { - return nil, fmt.Errorf("PGO_CLIENT_KEY not specified") - } - } - - certPair, err := tls.LoadX509KeyPair(clientCertPath, clientKeyPath) - if err != nil { - return nil, fmt.Errorf("client certificate/key loading: %s", err) - } - - // create a Transport object for use by the HTTP client - return &http.Transport{ - TLSClientConfig: &tls.Config{ - RootCAs: caCertPool, - InsecureSkipVerify: true, - Certificates: []tls.Certificate{certPair}, - MinVersion: tls.VersionTLS11, - }, - }, nil -} - -// NewAPIClient returns an http client configured with a tls.Config -// based on environmental settings and a default timeout -func NewAPIClient() *http.Client { - defaultTimeout := 60 * time.Second - return &http.Client{ - Timeout: defaultTimeout, - } -} - -// NewAPIClientTLS returns an http client configured with a tls.Config -// based on environmental settings and a default 
timeout -// It returns an error if required environmental settings are missing -func NewAPIClientTLS() (*http.Client, error) { - client := NewAPIClient() - if tp, err := GetTLSTransport(); err != nil { - return nil, err - } else { - client.Transport = tp - } - - return client, nil -} diff --git a/pgo/cmd/backrest.go b/pgo/cmd/backrest.go deleted file mode 100644 index a0a183fa86..0000000000 --- a/pgo/cmd/backrest.go +++ /dev/null @@ -1,135 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - "strings" - "time" - - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -// createBackrestBackup .... -func createBackrestBackup(args []string, ns string) { - log.Debugf("createBackrestBackup called %v %s", args, BackupOpts) - - request := new(msgs.CreateBackrestBackupRequest) - request.Namespace = ns - request.Args = args - request.Selector = Selector - request.BackupOpts = BackupOpts - request.BackrestStorageType = BackrestStorageType - - response, err := api.CreateBackrestBackup(httpclient, &SessionCredentials, request) - if err != nil { - fmt.Println("Error: ", err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - fmt.Println(response.Results[k]) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.Results) == 0 { - fmt.Println("No clusters found.") - return - } - -} - -// showBackrest .... 
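GetTLSTransport above resolves each certificate path with a flag-then-environment precedence before building the TLS transport. A simplified sketch of that pattern, standard library only, omitting the client certificate pair and the OS trust store handling shown above:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

// resolvePath applies the flag-then-environment precedence: an explicit
// flag value wins, otherwise the named environment variable is consulted,
// otherwise an error is returned.
func resolvePath(flagValue, envName string) (string, error) {
	if flagValue != "" {
		return flagValue, nil
	}
	if v := os.Getenv(envName); v != "" {
		return v, nil
	}
	return "", fmt.Errorf("%s not specified", envName)
}

func main() {
	caPath, err := resolvePath("", "PGO_CA_CERT")
	if err != nil {
		fmt.Println(err)
		return
	}
	pem, err := os.ReadFile(caPath)
	if err != nil {
		fmt.Println(err)
		return
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(pem)

	// assemble an HTTP client with the resolved trust pool and a timeout
	client := &http.Client{
		Timeout: 60 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool, MinVersion: tls.VersionTLS12},
		},
	}
	_ = client
}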
-func showBackrest(args []string, ns string) { - log.Debugf("showBackrest called %v", args) - - for _, v := range args { - response, err := api.ShowBackrest(httpclient, v, Selector, &SessionCredentials, ns) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.Items) == 0 { - fmt.Println("No pgBackRest found.") - return - } - - log.Debugf("response = %v", response) - log.Debugf("len of items = %d", len(response.Items)) - - for _, backup := range response.Items { - printBackrest(&backup) - } - } -} - -// printBackrest -func printBackrest(result *msgs.ShowBackrestDetail) { - fmt.Printf("%s%s\n", "", "") - fmt.Printf("cluster: %s\n", result.Name) - fmt.Printf("storage type: %s\n\n", result.StorageType) - - for _, info := range result.Info { - fmt.Printf("stanza: %s\n", info.Name) - fmt.Printf(" status: %s\n", info.Status.Message) - fmt.Printf(" cipher: %s\n\n", info.Cipher) - - for _, archive := range info.Archives { - // this is the quick way of getting the name...alternatively we could look - // it up by ID - fmt.Printf(" %s (current)\n", info.Name) - fmt.Printf(" wal archive min/max (%s)\n\n", archive.ID) - - // iterate trhough the the backups and list out all the information - for _, backup := range info.Backups { - databaseSize, databaseUnit := getSizeAndUnit(backup.Info.Size) - databaseBackupSize, databaseBackupUnit := getSizeAndUnit(backup.Info.Delta) - repositorySize, repositoryUnit := getSizeAndUnit(backup.Info.Repository.Size) - repositoryBackupSize, repositoryBackupUnit := getSizeAndUnit(backup.Info.Repository.Delta) - - // this matches the output format of pgbackrest info - fmt.Printf(" %s backup: %s\n", backup.Type, backup.Label) - fmt.Printf(" timestamp start/stop: %s / %s\n", - time.Unix(backup.Timestamp.Start, 0), - time.Unix(backup.Timestamp.Stop, 0)) - fmt.Printf(" wal start/stop: %s / %s\n", - backup.Archive.Start, backup.Archive.Stop) - fmt.Printf(" database size: %.1f%s, backup size: %.1f%s\n", - databaseSize, getUnitString(databaseUnit), - databaseBackupSize, getUnitString(databaseBackupUnit)) - fmt.Printf(" repository size: %.1f%s, repository backup size: %.1f%s\n", - repositorySize, getUnitString(repositoryUnit), - repositoryBackupSize, getUnitString(repositoryBackupUnit)) - fmt.Printf(" backup reference list: %s\n\n", - strings.Join(backup.Reference, ", ")) - } - } - } -} diff --git a/pgo/cmd/backup.go b/pgo/cmd/backup.go deleted file mode 100644 index 61225cc3ea..0000000000 --- a/pgo/cmd/backup.go +++ /dev/null @@ -1,98 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
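printBackrest above approximates the layout of "pgbackrest info": indented lines per backup, Unix-second timestamps rendered through time.Unix, and a comma-joined reference list. A small sketch of that formatting with made-up values:

package main

import (
	"fmt"
	"strings"
	"time"
)

// printBackupLine mirrors the indentation-based layout used by printBackrest:
// a backup label, start/stop timestamps converted from Unix seconds, and a
// comma-separated reference list.
func printBackupLine(label string, start, stop int64, refs []string) {
	fmt.Printf("        full backup: %s\n", label)
	fmt.Printf("            timestamp start/stop: %s / %s\n",
		time.Unix(start, 0), time.Unix(stop, 0))
	fmt.Printf("            backup reference list: %s\n", strings.Join(refs, ", "))
}

func main() {
	printBackupLine("20200101-000000F", 1577836800, 1577836930,
		[]string{"20191231-000000F"})
}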
-*/ - -import ( - "fmt" - - "github.com/crunchydata/postgres-operator/internal/config" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -var PVCName string - -// PGDumpDB is used to store the name of the pgDump database when -// performing either a backup or restore -var PGDumpDB string - -var backupCmd = &cobra.Command{ - Use: "backup", - Short: "Perform a Backup", - Long: `BACKUP performs a Backup, for example: - - pgo backup mycluster`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - - log.Debug("backup called") - if len(args) == 0 && Selector == "" { - fmt.Println(`Error: You must specify the cluster to backup or a selector flag.`) - } else { - - exitNow := false // used in switch for early exit. - - switch buSelected := backupType; buSelected { - - case config.LABEL_BACKUP_TYPE_BACKREST: - - // storage config flag invalid for backrest - if StorageConfig != "" { - fmt.Println("Error: --storage-config is not allowed when performing a pgbackrest backup.") - exitNow = true - } - - if exitNow { - return - } - - createBackrestBackup(args, Namespace) - - case config.LABEL_BACKUP_TYPE_PGDUMP: - - createpgDumpBackup(args, Namespace) - - default: - fmt.Println("Error: You must specify either pgbackrest or pgdump for the --backup-type.") - - } - - } - - }, -} - -var backupType string - -func init() { - RootCmd.AddCommand(backupCmd) - - backupCmd.Flags().StringVarP(&BackupOpts, "backup-opts", "", "", "The options to pass into pgbackrest.") - backupCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - backupCmd.Flags().StringVarP(&PVCName, "pvc-name", "", "", "The PVC name to use for the backup instead of the default.") - backupCmd.Flags().StringVarP(&PGDumpDB, "database", "d", "postgres", "The name of the database pgdump will backup.") - backupCmd.Flags().StringVar(&backupType, "backup-type", "pgbackrest", "The backup type to perform. Default is pgbackrest. Valid backup types are pgbackrest and pgdump.") - backupCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use when scheduling pgBackRest backups. Either \"local\", \"s3\" or both, comma separated. (default \"local\")") - -} - -// deleteBackup .... -func deleteBackup(args []string, ns string) { - log.Debugf("deleteBackup called %v", args) -} diff --git a/pgo/cmd/cat.go b/pgo/cmd/cat.go deleted file mode 100644 index e97301d40d..0000000000 --- a/pgo/cmd/cat.go +++ /dev/null @@ -1,80 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
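The backupCmd above dispatches on --backup-type, defaulting to pgbackrest and rejecting --storage-config for that path. A compact, standalone sketch of the same dispatch logic (function names here are illustrative):

package main

import (
	"fmt"
	"os"
)

// dispatchBackup mirrors the --backup-type switch: pgbackrest is the default,
// pgdump is the only other accepted value, and --storage-config is rejected
// for pgbackrest backups.
func dispatchBackup(backupType, storageConfig string) error {
	switch backupType {
	case "pgbackrest":
		if storageConfig != "" {
			return fmt.Errorf("--storage-config is not allowed for pgbackrest backups")
		}
		fmt.Println("would run a pgBackRest backup")
	case "pgdump":
		fmt.Println("would run a pg_dump backup")
	default:
		return fmt.Errorf("backup type must be pgbackrest or pgdump, got %q", backupType)
	}
	return nil
}

func main() {
	if err := dispatchBackup("pgdump", ""); err != nil {
		fmt.Println("Error:", err)
		os.Exit(2)
	}
}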
-*/ - -import ( - "fmt" - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" - "os" -) - -var catCmd = &cobra.Command{ - Use: "cat", - Short: "Perform a cat command on a cluster", - Long: `CAT performs a Linux cat command on a cluster file. For example: - - pgo cat mycluster /pgdata/mycluster/postgresql.conf`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("cat called") - if len(args) == 0 { - fmt.Println(`Error: You must specify the cluster`) - } else { - cat(args, Namespace) - } - - }, -} - -func init() { - RootCmd.AddCommand(catCmd) -} - -// pgo cat -func cat(args []string, ns string) { - log.Debugf("cat called %v", args) - - request := new(msgs.CatRequest) - request.Args = args - request.Namespace = ns - response, err := api.Cat(httpclient, &SessionCredentials, request) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - fmt.Println(response.Results[k]) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.Results) == 0 { - fmt.Println("No clusters found.") - return - } - -} diff --git a/pgo/cmd/clone.go b/pgo/cmd/clone.go deleted file mode 100644 index 5f544e52e3..0000000000 --- a/pgo/cmd/clone.go +++ /dev/null @@ -1,119 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -var ( - // the source cluster used for the clone, e.g. "oldcluster" - SourceClusterName string - // the target/destination cluster used for the clone, e.g. "newcluster" - TargetClusterName string - // BackrestStorageSource represents the data source to use (e.g. s3 or local) when both s3 - // and local are enabled in the cluster being cloned - BackrestStorageSource string -) - -var cloneCmd = &cobra.Command{ - Use: "clone", - Deprecated: `Use "pgo create cluster newcluster --restore-from=oldcluster" instead. "pgo clone" will be removed in a future release.`, - Short: "Copies the primary database of an existing cluster to a new cluster", - Long: `Clone makes a copy of an existing PostgreSQL cluster managed by the Operator and creates a new PostgreSQL cluster managed by the Operator, with the data from the old cluster. 
- - pgo create cluster newcluster --restore-from=oldcluster - pgo clone oldcluster newcluster`, - Run: func(cmd *cobra.Command, args []string) { - // if the namespace is not specified, default to the PGONamespace specified - // in the `PGO_NAMESPACE` environmental variable - if Namespace == "" { - Namespace = PGONamespace - } - - log.Debug("clone called") - // ensure all the required arguments are available - if len(args) < 1 { - fmt.Println("Error: You must specify a cluster to clone from and a name for a new cluster") - os.Exit(1) - } - - if len(args) < 2 { - fmt.Println("Error: You must specify the name of the new cluster") - os.Exit(1) - } - - clone(Namespace, args[0], args[1]) - }, -} - -// init is part of the cobra API -func init() { - RootCmd.AddCommand(cloneCmd) - - cloneCmd.Flags().StringVarP(&BackrestPVCSize, "pgbackrest-pvc-size", "", "", - `The size of the PVC capacity for the pgBackRest repository. Overrides the value set in the storage class. This is ignored if the storage type of "local" is not used. Must follow the standard Kubernetes format, e.g. "10.1Gi"`) - cloneCmd.Flags().StringVarP(&BackrestStorageSource, "pgbackrest-storage-source", "", "", - "The data source for the clone when both \"local\" and \"s3\" are enabled in the "+ - "source cluster. Either \"local\", \"s3\" or both, comma separated. (default \"local\")") - cloneCmd.Flags().BoolVar(&MetricsFlag, "enable-metrics", false, `If sets, enables metrics collection on the newly cloned cluster`) - cloneCmd.Flags().StringVarP(&PVCSize, "pvc-size", "", "", - `The size of the PVC capacity for primary and replica PostgreSQL instances. Overrides the value set in the storage class. Must follow the standard Kubernetes format, e.g. "10.1Gi"`) -} - -// clone is a helper function to help set up the clone! -func clone(namespace, sourceClusterName, targetClusterName string) { - log.Debugf("clone called namespace:%s sourceClusterName:%s targetClusterName:%s", - namespace, sourceClusterName, targetClusterName) - - // set up a request to the clone API sendpoint - request := msgs.CloneRequest{ - BackrestStorageSource: BackrestStorageSource, - BackrestPVCSize: BackrestPVCSize, - EnableMetrics: MetricsFlag, - Namespace: Namespace, - PVCSize: PVCSize, - SourceClusterName: sourceClusterName, - TargetClusterName: targetClusterName, - } - - // make a call to the clone API - response, err := api.Clone(httpclient, &SessionCredentials, &request) - - // if there was an error with the API call, print that out here - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(1) - } - - // if the response was unsuccessful due to user error, print out the error - // message here - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - // otherwise, print out some feedback: - fmt.Println("Created clone task for: ", response.TargetClusterName) - fmt.Println("workflow id is ", response.WorkflowID) -} diff --git a/pgo/cmd/cluster.go b/pgo/cmd/cluster.go deleted file mode 100644 index 3ee443fcd8..0000000000 --- a/pgo/cmd/cluster.go +++ /dev/null @@ -1,705 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
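The cloneCmd above is retired through cobra's Deprecated field rather than being removed outright. A minimal sketch of that pattern, assuming only cobra and an illustrative command tree: the command still runs, but cobra prints the deprecation notice first, steering users toward the replacement invocation.

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "pgo"}
	clone := &cobra.Command{
		Use:        "clone",
		Deprecated: `use "pgo create cluster newcluster --restore-from=oldcluster" instead`,
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("clone called with", args)
		},
	}
	root.AddCommand(clone)
	root.SetArgs([]string{"clone", "oldcluster", "newcluster"})
	_ = root.Execute()
}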
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "fmt" - "os" - "strings" - - "github.com/spf13/cobra" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -// below are the tablespace parameters and the expected values of each -const ( - // tablespaceParamName represents the name of the PostgreSQL tablespace - tablespaceParamName = "name" - // tablespaceParamPVCSize represents the size of the PVC - tablespaceParamPVCSize = "pvcsize" - // tablespaceParamStorageConfig represents the storage config to use for the - // tablespace - tablespaceParamStorageConfig = "storageconfig" -) - -// availableTablespaceParams is the list of acceptable parameters in the -// --tablespace flag -var availableTablespaceParams = map[string]struct{}{ - tablespaceParamName: {}, - tablespaceParamPVCSize: {}, - tablespaceParamStorageConfig: {}, -} - -// requiredTablespaceParams are the tablespace parameters that are required -var requiredTablespaceParams = []string{ - tablespaceParamName, - tablespaceParamStorageConfig, -} - -// deleteCluster will delete a PostgreSQL cluster that is managed by the -// PostgreSQL Operator -func deleteCluster(args []string, ns string) { - log.Debugf("deleteCluster called %v", args) - - if AllFlag { - args = make([]string, 1) - args[0] = "all" - } - - r := msgs.DeleteClusterRequest{} - r.Selector = Selector - r.ClientVersion = msgs.PGO_VERSION - r.Namespace = ns - r.DeleteBackups = !KeepBackups - r.DeleteData = !KeepData - - for _, arg := range args { - r.Clustername = arg - response, err := api.DeleteCluster(httpclient, &r, &SessionCredentials) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for _, result := range response.Results { - fmt.Println(result) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - } - - } - -} - -// showCluster ... 
-func showCluster(args []string, ns string) { - - log.Debugf("showCluster called %v", args) - - if OutputFormat != "" { - if OutputFormat != "json" { - fmt.Println("Error: ", "json is the only supported --output format value") - os.Exit(2) - } - } - - log.Debugf("selector is %s", Selector) - if len(args) == 0 && !AllFlag && Selector == "" { - fmt.Println("Error: ", "--all needs to be set or a cluster name be entered or a --selector be specified") - os.Exit(2) - } - if Selector != "" || AllFlag { - args = make([]string, 1) - args[0] = "" - } - - r := new(msgs.ShowClusterRequest) - r.Selector = Selector - r.Namespace = ns - r.AllFlag = AllFlag - r.ClientVersion = msgs.PGO_VERSION - - for _, v := range args { - - r.Clustername = v - response, err := api.ShowCluster(httpclient, &SessionCredentials, r) - if err != nil { - fmt.Println("Error: ", err.Error()) - os.Exit(2) - } - - if OutputFormat == "json" { - b, err := json.MarshalIndent(response, "", " ") - if err != nil { - fmt.Println("Error: ", err) - } - fmt.Println(string(b)) - return - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.Results) == 0 { - fmt.Println("No clusters found.") - return - } - - for _, clusterDetail := range response.Results { - printCluster(&clusterDetail) - } - - } - -} - -// printCluster -func printCluster(detail *msgs.ShowClusterDetail) { - fmt.Println("") - fmt.Println("cluster : " + detail.Cluster.Spec.Name + " (" + detail.Cluster.Spec.CCPImage + ":" + detail.Cluster.Spec.CCPImageTag + ")") - - // indicate if a standby cluster - if detail.Standby { - fmt.Printf("%sstandby : %t\n", TreeBranch, detail.Standby) - } - - for _, pod := range detail.Pods { - podType := "(" + pod.Type + ")" - - podStr := fmt.Sprintf("%spod : %s (%s) on %s (%s) %s", TreeBranch, pod.Name, string(pod.Phase), pod.NodeName, pod.ReadyStatus, podType) - fmt.Println(podStr) - for _, pvc := range pod.PVC { - fmt.Println(fmt.Sprintf("%spvc: %s (%s)", TreeBranch+TreeBranch, pvc.Name, pvc.Capacity)) - } - } - - // print out the resources - resources := detail.Cluster.Spec.Resources - - if len(resources) > 0 { - resourceStr := fmt.Sprintf("%sresources :", TreeBranch) - - if !resources.Cpu().IsZero() { - resourceStr += fmt.Sprintf(" CPU: %s", resources.Cpu().String()) - } - - if !resources.Memory().IsZero() { - resourceStr += fmt.Sprintf(" Memory: %s", resources.Memory().String()) - } - - fmt.Println(resourceStr) - } - - // print out the limits - limits := detail.Cluster.Spec.Limits - - if len(limits) > 0 { - limitsStr := fmt.Sprintf("%slimits :", TreeBranch) - - if !limits.Cpu().IsZero() { - limitsStr += fmt.Sprintf(" CPU: %s", limits.Cpu().String()) - } - - if !limits.Memory().IsZero() { - limitsStr += fmt.Sprintf(" Memory: %s", limits.Memory().String()) - } - - fmt.Println(limitsStr) - } - - for _, d := range detail.Deployments { - fmt.Println(TreeBranch + "deployment : " + d.Name) - } - if len(detail.Deployments) > 0 { - printPolicies(&detail.Deployments[0]) - } - - for _, service := range detail.Services { - if service.ExternalIP == "" { - fmt.Println(TreeBranch + "service : " + service.Name + " - ClusterIP (" + service.ClusterIP + ")") - } else { - fmt.Println(TreeBranch + "service : " + service.Name + " - ClusterIP (" + service.ClusterIP + ") ExternalIP (" + service.ExternalIP + ")") - } - } - - for _, replica := range detail.Replicas { - fmt.Println(TreeBranch + "pgreplica : " + replica.Name) - } - - fmt.Printf("%s%s", TreeBranch, "labels : ") - for k, v := range 
detail.Cluster.ObjectMeta.Labels { - fmt.Printf("%s=%s ", k, v) - } - fmt.Println("") - -} - -func printPolicies(d *msgs.ShowClusterDeployment) { - for _, v := range d.PolicyLabels { - fmt.Printf("%spolicy: %s\n", TreeBranch, v) - } -} - -// createCluster .... -func createCluster(args []string, ns string, createClusterCmd *cobra.Command) { - var err error - - if len(args) != 1 { - fmt.Println("Error: A single Cluster name argument is required.") - return - } - - if !util.IsValidForResourceName(args[0]) { - fmt.Println("Error: Cluster name specified is not valid name - must be lowercase alphanumeric") - return - } - - r := new(msgs.CreateClusterRequest) - r.Name = args[0] - r.Namespace = ns - r.ReplicaCount = ClusterReplicaCount - r.NodeLabel = NodeLabel - r.PasswordLength = PasswordLength - r.PasswordSuperuser = PasswordSuperuser - r.PasswordReplication = PasswordReplication - r.Password = Password - r.SecretFrom = SecretFrom - r.UserLabels = UserLabels - r.Policies = PoliciesFlag - r.CCPImageTag = CCPImageTag - r.CCPImage = CCPImage - r.CCPImagePrefix = CCPImagePrefix - r.PGOImagePrefix = PGOImagePrefix - r.MetricsFlag = MetricsFlag - r.ExporterCPURequest = ExporterCPURequest - r.ExporterCPULimit = ExporterCPULimit - r.ExporterMemoryRequest = ExporterMemoryRequest - r.ExporterMemoryLimit = ExporterMemoryLimit - r.BadgerFlag = BadgerFlag - r.ServiceType = ServiceType - r.AutofailFlag = !DisableAutofailFlag - r.PgbouncerFlag = PgbouncerFlag - r.BackrestStorageConfig = BackrestStorageConfig - r.BackrestStorageType = BackrestStorageType - r.CustomConfig = CustomConfig - r.StorageConfig = StorageConfig - r.ReplicaStorageConfig = ReplicaStorageConfig - r.ClientVersion = msgs.PGO_VERSION - r.PodAntiAffinity = PodAntiAffinity - r.PodAntiAffinityPgBackRest = PodAntiAffinityPgBackRest - r.PodAntiAffinityPgBouncer = PodAntiAffinityPgBouncer - r.BackrestConfig = BackrestConfig - r.BackrestS3CASecretName = BackrestS3CASecretName - r.BackrestS3Key = BackrestS3Key - r.BackrestS3KeySecret = BackrestS3KeySecret - r.BackrestS3Bucket = BackrestS3Bucket - r.BackrestS3Region = BackrestS3Region - r.BackrestS3Endpoint = BackrestS3Endpoint - r.BackrestS3URIStyle = BackrestS3URIStyle - r.PVCSize = PVCSize - r.BackrestPVCSize = BackrestPVCSize - r.Username = Username - r.ShowSystemAccounts = ShowSystemAccounts - r.Database = Database - r.TLSOnly = TLSOnly - r.TLSSecret = TLSSecret - r.ReplicationTLSSecret = ReplicationTLSSecret - r.CASecret = CASecret - r.Standby = Standby - r.BackrestRepoPath = BackrestRepoPath - // set the container resource requests - r.CPURequest = CPURequest - r.CPULimit = CPULimit - r.MemoryRequest = MemoryRequest - r.MemoryLimit = MemoryLimit - r.BackrestCPURequest = BackrestCPURequest - r.BackrestCPULimit = BackrestCPULimit - r.BackrestMemoryRequest = BackrestMemoryRequest - r.BackrestMemoryLimit = BackrestMemoryLimit - r.PgBouncerCPURequest = PgBouncerCPURequest - r.PgBouncerCPULimit = PgBouncerCPULimit - r.PgBouncerMemoryRequest = PgBouncerMemoryRequest - r.PgBouncerMemoryLimit = PgBouncerMemoryLimit - r.PgBouncerReplicas = PgBouncerReplicas - // determine if the user wants to create tablespaces as part of this request, - // and if so, set the values - r.Tablespaces = getTablespaces(Tablespaces) - r.WALStorageConfig = WALStorageConfig - r.WALPVCSize = WALPVCSize - r.PGDataSource.RestoreFrom = RestoreFrom - r.PGDataSource.RestoreOpts = BackupOpts - // set any annotations - r.Annotations = getClusterAnnotations(Annotations, AnnotationsPostgres, AnnotationsBackrest, - AnnotationsPgBouncer) 
- - // only set SyncReplication in the request if actually provided via the CLI - if createClusterCmd.Flag("sync-replication").Changed { - r.SyncReplication = &SyncReplication - } - // only set BackrestS3VerifyTLS in the request if actually provided via the CLI - // if set, store provided value accordingly - r.BackrestS3VerifyTLS = msgs.UpdateBackrestS3VerifyTLSDoNothing - - if createClusterCmd.Flag("pgbackrest-s3-verify-tls").Changed { - if BackrestS3VerifyTLS { - r.BackrestS3VerifyTLS = msgs.UpdateBackrestS3VerifyTLSEnable - } else { - r.BackrestS3VerifyTLS = msgs.UpdateBackrestS3VerifyTLSDisable - } - } - - // if the user provided resources for CPU or Memory, validate them to ensure - // they are valid Kubernetes values - if err := util.ValidateQuantity(r.CPURequest, "cpu"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.CPULimit, "cpu-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.MemoryRequest, "memory"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.MemoryLimit, "memory-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.BackrestCPURequest, "pgbackrest-cpu"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.BackrestCPULimit, "pgbackrest-cpu-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.BackrestMemoryRequest, "pgbackrest-memory"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.BackrestMemoryLimit, "pgbackrest-memory-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.ExporterCPURequest, "exporter-cpu"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.ExporterCPULimit, "exporter-cpu-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.ExporterMemoryRequest, "exporter-memory"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.ExporterMemoryLimit, "exporter-memory-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.PgBouncerCPURequest, "pgbouncer-cpu"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.PgBouncerCPULimit, "pgbouncer-cpu-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.PgBouncerMemoryRequest, "pgbouncer-memory"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.PgBouncerMemoryLimit, "pgbouncer-memory-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - response, err := api.CreateCluster(httpclient, &SessionCredentials, r) - if err != nil { - fmt.Println("Error: ", err) - os.Exit(2) - } - - if response.Status.Code == msgs.Error { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - // print out the legacy cluster information - fmt.Println("created cluster:", response.Result.Name) - fmt.Println("workflow id:", response.Result.WorkflowID) - fmt.Println("database name:", response.Result.Database) - fmt.Println("users:") - - for _, user := range response.Result.Users { - fmt.Println("\tusername:", user.Username, "password:", user.Password) - } -} - -// getClusterAnnotations determines if there are any Annotations that were provided -// via the various `--annotation` flags. 
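The SyncReplication and pgbackrest-s3-verify-tls handling above relies on cobra's Flag(...).Changed to distinguish "explicitly set" from "left at the default", so an omitted flag does not overwrite the server-side setting. A small sketch of that tri-state pattern with an illustrative flag name:

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	var syncReplication bool
	cmd := &cobra.Command{
		Use: "create",
		Run: func(cmd *cobra.Command, args []string) {
			if cmd.Flag("sync-replication").Changed {
				fmt.Println("explicitly set to", syncReplication)
			} else {
				fmt.Println("not provided; the request would leave the field unset")
			}
		},
	}
	cmd.Flags().BoolVar(&syncReplication, "sync-replication", false, "enable synchronous replication")
	cmd.SetArgs([]string{"--sync-replication=false"})
	_ = cmd.Execute()
}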
-func getClusterAnnotations(annotationsGlobal, annotationsPostgres, annotationsBackrest, annotationsPgBouncer []string) crv1.ClusterAnnotations { - annotations := crv1.ClusterAnnotations{ - Backrest: map[string]string{}, - Global: map[string]string{}, - PgBouncer: map[string]string{}, - Postgres: map[string]string{}, - } - - // go through each annotation type and attempt to populate it in the - // structure. If the syntax is off anywhere, abort - setClusterAnnotationGroup(annotations.Global, annotationsGlobal) - setClusterAnnotationGroup(annotations.Postgres, annotationsPostgres) - setClusterAnnotationGroup(annotations.Backrest, annotationsBackrest) - setClusterAnnotationGroup(annotations.PgBouncer, annotationsPgBouncer) - - // return the annotations - return annotations -} - -// getTablespaces determines if there are any Tablespaces that were provided -// via the `--tablespace` CLI flag, and if so, process their values. If -// everything checks out, one or more tablespaces are added to the cluster -// request -func getTablespaces(tablespaceParams []string) []msgs.ClusterTablespaceDetail { - tablespaces := []msgs.ClusterTablespaceDetail{} - - // if there are no tablespaces set in the Tablespaces slice, abort - if len(Tablespaces) == 0 { - return tablespaces - } - - for _, tablespace := range tablespaceParams { - tablespaceDetails := map[string]string{} - - // tablespaces are in the format "name=tsname:storageconfig=nfsstorage", - // so we need to split this out in order to put that information into the - // tablespace detail struct - // we will do the initial split of the string, and then iterate to get the - // key value map of the parameters, ignoring any ones that do not exist - for _, tablespaceParamValue := range strings.Split(tablespace, ":") { - tablespaceDetailParts := strings.Split(tablespaceParamValue, "=") - - // if the split is not 2 items, then abort, as that means this is not - // a valid key/value pair - if len(tablespaceDetailParts) != 2 { - fmt.Println(`Error: Tablespace was not specified in proper format (e.g. "name=tablespacename"), aborting.`) - os.Exit(1) - } - - // store the param as lower case - param := strings.ToLower(tablespaceDetailParts[0]) - - // if this is not a tablespace parameter, ignore it - if !isTablespaceParam(param) { - continue - } - - // alright, store this param/value in the map - tablespaceDetails[param] = tablespaceDetailParts[1] - } - - // determine if the required parameters are in the map. if they are not, - // abort - for _, requiredParam := range requiredTablespaceParams { - _, found := tablespaceDetails[requiredParam] - - if !found { - fmt.Printf("Error: Required tablespace parameter \"%s\" is not found, aborting\n", requiredParam) - os.Exit(1) - } - } - - // create the cluster tablespace detail and append it to the slice - clusterTablespaceDetail := msgs.ClusterTablespaceDetail{ - Name: tablespaceDetails[tablespaceParamName], - PVCSize: tablespaceDetails[tablespaceParamPVCSize], - StorageConfig: tablespaceDetails[tablespaceParamStorageConfig], - } - - // append to the tablespaces slice, and continue - tablespaces = append(tablespaces, clusterTablespaceDetail) - } - - // return the tablespace list - return tablespaces -} - -// isTablespaceParam returns true if the parameter in question is acceptable for -// using with a tablespace. 
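getTablespaces above accepts colon-separated key=value pairs such as "name=ts1:storageconfig=nfsstorage:pvcsize=10Gi" and requires name and storageconfig. A standalone sketch of that parsing, standard library only:

package main

import (
	"fmt"
	"strings"
)

// parseTablespaceFlag mirrors the --tablespace format handled by
// getTablespaces: colon-separated key=value pairs, with "name" and
// "storageconfig" required.
func parseTablespaceFlag(value string) (map[string]string, error) {
	params := map[string]string{}
	for _, pair := range strings.Split(value, ":") {
		parts := strings.Split(pair, "=")
		if len(parts) != 2 {
			return nil, fmt.Errorf("invalid tablespace parameter %q", pair)
		}
		params[strings.ToLower(parts[0])] = parts[1]
	}
	for _, required := range []string{"name", "storageconfig"} {
		if _, ok := params[required]; !ok {
			return nil, fmt.Errorf("required tablespace parameter %q is missing", required)
		}
	}
	return params, nil
}

func main() {
	params, err := parseTablespaceFlag("name=ts1:storageconfig=nfsstorage:pvcsize=10Gi")
	fmt.Println(params, err)
}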
-func isTablespaceParam(param string) bool { - _, found := availableTablespaceParams[param] - - return found -} - -// setClusterAnnotationGroup sets up the annotations for a particular group -func setClusterAnnotationGroup(annotationGroup map[string]string, annotations []string) { - for _, annotation := range annotations { - // there are two types of annotations syntaxes: - // - // 1: key=value (adding, editing) - // 2: key- (removing) - if strings.HasSuffix(annotation, "-") { - annotationGroup[strings.TrimSuffix(annotation, "-")] = "" - continue - } - - parts := strings.Split(annotation, "=") - - if len(parts) != 2 { - fmt.Println(`Error: Annotation was not specified in propert format (i.e. key=value), aborting.`) - os.Exit(1) - } - - annotationGroup[parts[0]] = parts[1] - } -} - -// updateCluster ... -func updateCluster(args []string, ns string) { - log.Debugf("updateCluster called %v", args) - - r := msgs.UpdateClusterRequest{} - r.Selector = Selector - r.ClientVersion = msgs.PGO_VERSION - r.Namespace = ns - r.AllFlag = AllFlag - r.BackrestCPURequest = BackrestCPURequest - r.BackrestCPULimit = BackrestCPULimit - r.BackrestMemoryRequest = BackrestMemoryRequest - r.BackrestMemoryLimit = BackrestMemoryLimit - // set the Crunchy Postgres Exporter resource requests - r.ExporterCPURequest = ExporterCPURequest - r.ExporterCPULimit = ExporterCPULimit - r.ExporterMemoryRequest = ExporterMemoryRequest - r.ExporterMemoryLimit = ExporterMemoryLimit - r.Clustername = args - r.Startup = Startup - r.Shutdown = Shutdown - // set the container resource requests - r.CPURequest = CPURequest - r.CPULimit = CPULimit - r.MemoryRequest = MemoryRequest - r.MemoryLimit = MemoryLimit - // determine if the user wants to create tablespaces as part of this request, - // and if so, set the values - r.Tablespaces = getTablespaces(Tablespaces) - // set any annotations - r.Annotations = getClusterAnnotations(Annotations, AnnotationsPostgres, AnnotationsBackrest, - AnnotationsPgBouncer) - - // check to see if EnableStandby or DisableStandby is set. If so, - // set a value for Standby - if EnableStandby { - r.Standby = msgs.UpdateClusterStandbyEnable - } else if DisableStandby { - r.Standby = msgs.UpdateClusterStandbyDisable - } - - // check to see if EnableAutofailFlag or DisableAutofailFlag is set. 
If so, - // set a value for Autofail - if EnableAutofailFlag { - r.Autofail = msgs.UpdateClusterAutofailEnable - } else if DisableAutofailFlag { - r.Autofail = msgs.UpdateClusterAutofailDisable - } - - // if the user provided resources for CPU or Memory, validate them to ensure - // they are valid Kubernetes values - if err := util.ValidateQuantity(r.CPURequest, "cpu"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.CPULimit, "cpu-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.MemoryRequest, "memory"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.MemoryLimit, "memory-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.BackrestCPURequest, "pgbackrest-cpu"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.BackrestCPULimit, "pgbackrest-cpu-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.BackrestMemoryRequest, "pgbackrest-memory"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.BackrestMemoryLimit, "pgbackrest-memory-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.ExporterCPURequest, "exporter-cpu"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.ExporterCPULimit, "exporter-cpu-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.ExporterMemoryRequest, "exporter-memory"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(r.ExporterMemoryLimit, "exporter-memory-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - response, err := api.UpdateCluster(httpclient, &r, &SessionCredentials) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for _, result := range response.Results { - fmt.Println(result) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - } - -} diff --git a/pgo/cmd/common.go b/pgo/cmd/common.go deleted file mode 100644 index f1e8f84e70..0000000000 --- a/pgo/cmd/common.go +++ /dev/null @@ -1,134 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
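setClusterAnnotationGroup above accepts two annotation syntaxes: "key=value" to add or update, and a trailing "-" (as in "key-") to mark an annotation for removal by storing an empty value. A minimal sketch of that handling, standard library only:

package main

import (
	"fmt"
	"strings"
)

// applyAnnotation mirrors the two syntaxes handled by setClusterAnnotationGroup:
// "key=value" adds or updates an annotation, while "key-" records a removal
// by mapping the key to the empty string.
func applyAnnotation(group map[string]string, annotation string) error {
	if strings.HasSuffix(annotation, "-") {
		group[strings.TrimSuffix(annotation, "-")] = ""
		return nil
	}
	parts := strings.Split(annotation, "=")
	if len(parts) != 2 {
		return fmt.Errorf("annotation %q is not in key=value format", annotation)
	}
	group[parts[0]] = parts[1]
	return nil
}

func main() {
	group := map[string]string{}
	_ = applyAnnotation(group, "hippo=awesome")
	_ = applyAnnotation(group, "elephant-")
	fmt.Println(group) // map[elephant: hippo:awesome]
}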
-*/ - -import ( - "encoding/json" - "fmt" - "reflect" -) - -// unitType is used to group together the unit types -type unitType int - -// values for the headings -const ( - headingCapacity = "CAPACITY" - headingCluster = "CLUSTER" - headingClusterIP = "CLUSTER IP" - headingErrorMessage = "ERROR" - headingExpires = "EXPIRES" - headingExternalIP = "EXTERNAL IP" - headingInstance = "INSTANCE" - headingPassword = "PASSWORD" - headingPercentUsed = "% USED" - headingPod = "POD" - headingPVC = "PVC" - headingService = "SERVICE" - headingStatus = "STATUS" - headingPVCType = "TYPE" - headingUsed = "USED" - headingUsername = "USERNAME" -) - -// unitSize recommends the unit we will use to size things -const unitSize = 1024 - -// the collection of unittypes, from byte to yottabyte -const ( - unitB unitType = iota - unitKB - unitMB - unitGB - unitTB - unitPB - unitEB - unitZB - unitYB -) - -// unitTypeToString converts the unit types to strings -var unitTypeToString = map[unitType]string{ - unitB: "B", - unitKB: "KiB", - unitMB: "MiB", - unitGB: "GiB", - unitTB: "TiB", - unitPB: "PiB", - unitEB: "EiB", - unitZB: "ZiB", - unitYB: "YiB", -} - -// getHeaderLength returns the length of any value in a list, so that -// the maximum length of the header can be determined -func getHeaderLength(value interface{}, fieldName string) int { - // get the field from the reflection - r := reflect.ValueOf(value) - field := reflect.Indirect(r).FieldByName(fieldName) - return len(field.String()) -} - -// getMaxLength returns the maxLength of the strings of a particular value in -// the struct. Increases the max length by 1 to include a buffer -func getMaxLength(results []interface{}, title, fieldName string) int { - maxLength := len(title) - - for _, result := range results { - length := getHeaderLength(result, fieldName) - - if length > maxLength { - maxLength = length - } - } - - return maxLength + 1 -} - -// getSizeAndUnit determines the best size to return based on the best unit -// where unit is KB, MB, GB, etc... -func getSizeAndUnit(size int64) (float64, unitType) { - // set the unit - var unit unitType - // iterate through each tier, which we will initialize as "bytes" - normalizedSize := float64(size) - - // We keep dividing by "unitSize" which is 1024. Once it is less than the unit - // size, or really, once it's less than "1000" of that unit size, that is - // normalized unit we will use. - // - // of course, eventually this will get too big...so bail after yotta bytes - for unit = unitB; normalizedSize > 1000 && unit < unitYB; unit++ { - normalizedSize /= unitSize - } - - return normalizedSize, unit -} - -// getUnitString maps the raw value of the unit to its corresponding -// abbreviation -func getUnitString(unit unitType) string { - return unitTypeToString[unit] -} - -// printJSON renders a JSON response -func printJSON(response interface{}) { - if content, err := json.MarshalIndent(response, "", " "); err != nil { - fmt.Printf(`{"error": "%s"}`, err.Error()) - } else { - fmt.Println(string(content)) - } -} diff --git a/pgo/cmd/config.go b/pgo/cmd/config.go deleted file mode 100644 index 16a29d0603..0000000000 --- a/pgo/cmd/config.go +++ /dev/null @@ -1,64 +0,0 @@ -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
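getSizeAndUnit above keeps dividing by 1024 until the value drops under 1000 of the current unit, and getUnitString supplies the suffix. A worked, self-contained version of that loop with example values:

package main

import "fmt"

// humanSize repeats the divide-by-1024 loop from getSizeAndUnit: divide until
// the value is under 1000 of the current unit, then print it with the matching
// suffix from the unit table.
func humanSize(size int64) string {
	units := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"}
	value := float64(size)
	i := 0
	for value > 1000 && i < len(units)-1 {
		value /= 1024
		i++
	}
	return fmt.Sprintf("%.1f%s", value, units[i])
}

func main() {
	fmt.Println(humanSize(3221225472))       // 3.0GiB
	fmt.Println(humanSize(48 * 1024 * 1024)) // 48.0MiB
}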
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" - "sigs.k8s.io/yaml" -) - -func showConfig(args []string, ns string) { - - log.Debugf("showConfig called %v", args) - - response, err := api.ShowConfig(httpclient, &SessionCredentials, ns) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if OutputFormat == "json" { - b, err := json.MarshalIndent(response, "", " ") - if err != nil { - fmt.Println("Error: ", err) - } - fmt.Println(string(b)) - return - } - - var y []byte - y, err = yaml.Marshal(response.Result) - if err != nil { - fmt.Println(err.Error()) - os.Exit(2) - } - - fmt.Println(string(y)) - -} diff --git a/pgo/cmd/create.go b/pgo/cmd/create.go deleted file mode 100644 index 57e4e77eb6..0000000000 --- a/pgo/cmd/create.go +++ /dev/null @@ -1,609 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" - "os" - - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -var ClusterReplicaCount int -var ManagedUser bool -var AllNamespaces bool -var BackrestStorageConfig, ReplicaStorageConfig, StorageConfig string -var CustomConfig string -var ArchiveFlag, DisableAutofailFlag, EnableAutofailFlag, PgbouncerFlag, MetricsFlag, BadgerFlag bool -var BackrestRestoreFrom string -var CCPImage string -var CCPImageTag string -var CCPImagePrefix string -var PGOImagePrefix string -var Database string -var Password string -var SecretFrom string -var PoliciesFlag, PolicyFile, PolicyURL string -var UserLabels string -var Tablespaces []string -var ServiceType string -var Schedule string -var ScheduleOptions string -var ScheduleType string -var SchedulePolicy string -var ScheduleDatabase string -var ScheduleSecret string -var PGBackRestType string -var Secret string -var PgouserPassword, PgouserRoles, PgouserNamespaces string -var Permissions string -var PodAntiAffinity string -var PodAntiAffinityPgBackRest string -var PodAntiAffinityPgBouncer string -var SyncReplication bool -var BackrestConfig string -var BackrestS3Key string -var BackrestS3KeySecret string -var BackrestS3Bucket string -var BackrestS3Endpoint string -var BackrestS3Region string -var BackrestS3URIStyle string -var BackrestS3VerifyTLS bool -var PVCSize string -var BackrestPVCSize string -var WALStorageConfig string -var WALPVCSize string -var RestoreFrom string - -// group the annotation requests -var ( - // Annotations contains the global annotations for a cluster - Annotations []string - - // AnnotationsBackrest contains annotations specifc to pgBackRest - AnnotationsBackrest []string - - // AnnotationsPgBouncer contains annotations specifc to pgBouncer - AnnotationsPgBouncer []string - - // AnnotationsPostgres contains annotations specifc to PostgreSQL instances - AnnotationsPostgres []string -) - -// group the various container resource requests together, i.e. 
for CPU/Memory -var ( - // the resource requests / limits for PostgreSQL instances - CPURequest, MemoryRequest string - CPULimit, MemoryLimit string - // the resource requests / limits for the pgBackRest repository - BackrestCPURequest, BackrestMemoryRequest string - BackrestCPULimit, BackrestMemoryLimit string - // the resource requests / limits for pgBouncer instances - PgBouncerCPURequest, PgBouncerMemoryRequest string - PgBouncerCPULimit, PgBouncerMemoryLimit string - // the resource requests / limits for Crunchy Postgres Exporter the sidecar container - ExporterCPURequest, ExporterMemoryRequest string - ExporterCPULimit, ExporterMemoryLimit string -) - -// BackrestS3CASecretName, if provided, is the name of a secret to use that -// contains a CA certificate to use for the pgBackRest repo -var BackrestS3CASecretName string - -// BackrestRepoPath allows the pgBackRest repo path to be defined instead of using the default -var BackrestRepoPath string - -// Standby determines whether or not the cluster should be created as a standby cluster -var Standby bool - -// PasswordType allows one to specify if the password should be MD5 or SCRAM -// we presently ensure it defaults to MD5 -var PasswordType string - -// PasswordSuperuser specifies the password for the cluster superuser -var PasswordSuperuser string - -// PasswordReplication specifies the password for the cluster replication user -var PasswordReplication string - -// variables used for setting up TLS-enabled PostgreSQL clusters -var ( - // TLSOnly indicates that only TLS connections will be accepted for a - // PostgreSQL cluster - TLSOnly bool - // TLSSecret is the name of the secret that contains the TLS information for - // enabling TLS in a PostgreSQL cluster - TLSSecret string - // ReplicationTLSSecret is the name of the secret that contains the TLS - // information for enabling certificate-based authentication between instances - // in a PostgreSQL cluster, particularly for replication - ReplicationTLSSecret string - // CASecret is the name of the secret that contains the CA information for - // enabling TLS in a PostgreSQL cluster - CASecret string -) - -var CreateCmd = &cobra.Command{ - Use: "create", - Short: "Create a Postgres Operator resource", - Long: `CREATE allows you to create a new Operator resource. For example: - pgo create cluster - pgo create pgadmin - pgo create pgbouncer - pgo create pgouser - pgo create pgorole - pgo create policy - pgo create namespace - pgo create user`, - Run: func(cmd *cobra.Command, args []string) { - log.Debug("create called") - if len(args) == 0 { - fmt.Println(`Error: You must specify the type of resource to create. Valid resource types include: - * cluster - * pgadmin - * pgbouncer - * pgouser - * pgorole - * policy - * namespace - * user`) - } else { - switch args[0] { - case "cluster", "pgbouncer", "pgouser", "pgorole", "policy", "user", "namespace": - break - default: - fmt.Println(`Error: You must specify the type of resource to create. Valid resource types include: - * cluster - * pgadmin - * pgbouncer - * pgouser - * pgorole - * policy - * namespace - * user`) - } - } - }, -} - -// createClusterCmd ... -var createClusterCmd = &cobra.Command{ - Use: "cluster", - Short: "Create a PostgreSQL cluster", - Long: `Create a PostgreSQL cluster consisting of a primary and a number of replica backends. 
For example: - - pgo create cluster mycluster`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("create cluster called") - - if len(args) != 1 { - fmt.Println(`Error: A single cluster name is required for this command.`) - os.Exit(1) - } - - if PgbouncerFlag && PgBouncerReplicas < 0 { - fmt.Println("Error: You must specify one or more replicas for pgBouncer.") - os.Exit(1) - } - - createCluster(args, Namespace, cmd) - }, -} - -// createPolicyCmd ... -var createPolicyCmd = &cobra.Command{ - Use: "policy", - Short: "Create a SQL policy", - Long: `Create a policy. For example: - - pgo create policy mypolicy --in-file=/tmp/mypolicy.sql`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("create policy called ") - if PolicyFile == "" && PolicyURL == "" { - fmt.Println(`Error: The --in-file or --url flags are required to create a policy.`) - return - } - - if len(args) == 0 { - fmt.Println(`Error: A policy name is required for this command.`) - } else { - createPolicy(args, Namespace) - } - }, -} - -// createPgAdminCmd ... -var createPgAdminCmd = &cobra.Command{ - Use: "pgadmin", - Short: "Create a pgAdmin instance ", - Long: `Create a pgAdmin instance for mycluster. For example: - - pgo create pgadmin mycluster`, - Run: func(cmd *cobra.Command, args []string) { - - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("create pgadmin called ") - - if len(args) == 0 && Selector == "" { - fmt.Println(`Error: A cluster name or selector is required for this command.`) - } else { - createPgAdmin(args, Namespace) - } - }, -} - -// createPgbouncerCmd ... -var createPgbouncerCmd = &cobra.Command{ - Use: "pgbouncer", - Short: "Create a pgbouncer ", - Long: `Create a pgbouncer. For example: - - pgo create pgbouncer mycluster`, - Run: func(cmd *cobra.Command, args []string) { - - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("create pgbouncer called ") - - if len(args) == 0 && Selector == "" { - fmt.Println(`Error: A cluster name or selector is required for this command.`) - os.Exit(1) - } - - if PgBouncerReplicas < 0 { - fmt.Println("Error: You must specify one or more replicas.") - os.Exit(1) - } - - createPgbouncer(args, Namespace) - }, -} - -// createScheduleCmd ... -var createScheduleCmd = &cobra.Command{ - Use: "schedule", - Short: "Create a cron-like scheduled task", - Long: `Schedule creates a cron-like scheduled task. For example: - - pgo create schedule --schedule="* * * * *" --schedule-type=pgbackrest --pgbackrest-backup-type=full mycluster`, - Run: func(cmd *cobra.Command, args []string) { - - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("create schedule called ") - if len(args) == 0 && Selector == "" { - fmt.Println("Error: The --selector flag or a cluster name is required to create a schedule.") - return - } - createSchedule(args, Namespace) - }, -} - -// createUserCmd ... -var createUserCmd = &cobra.Command{ - Use: "user", - Short: "Create a PostgreSQL user", - Long: `Create a postgres user. 
For example: - - pgo create user --username=someuser --all --managed - pgo create user --username=someuser mycluster --managed - pgo create user --username=someuser -selector=name=mycluster --managed - pgo create user --username=user1 --selector=name=mycluster`, - Run: func(cmd *cobra.Command, args []string) { - - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("create user called") - if Selector == "" && !AllFlag && len(args) == 0 { - fmt.Println(`Error: a cluster name(s), --selector flag, or --all flag is required to create a user.`) - return - } - - createUser(args, Namespace) - }, -} - -func init() { - RootCmd.AddCommand(CreateCmd) - CreateCmd.AddCommand(createClusterCmd) - CreateCmd.AddCommand(createPolicyCmd) - CreateCmd.AddCommand(createPgAdminCmd) - CreateCmd.AddCommand(createPgbouncerCmd) - CreateCmd.AddCommand(createPgouserCmd) - CreateCmd.AddCommand(createPgoroleCmd) - CreateCmd.AddCommand(createScheduleCmd) - CreateCmd.AddCommand(createUserCmd) - CreateCmd.AddCommand(createNamespaceCmd) - - // flags for "pgo create cluster" - createClusterCmd.Flags().StringSliceVar(&Annotations, "annotation", []string{}, - "Add an Annotation to all of the managed deployments (PostgreSQL, pgBackRest, pgBouncer)\n"+ - "The format to add an annotation is \"name=value\"\n"+ - "The format to remove an annotation is \"name-\"\n\n"+ - "For example, to add two annotations: \"--annotation=hippo=awesome,elephant=cool\"") - createClusterCmd.Flags().StringSliceVar(&AnnotationsBackrest, "annotation-pgbackrest", []string{}, - "Add an Annotation specifically to pgBackRest deployments\n"+ - "The format to add an annotation is \"name=value\"\n"+ - "The format to remove an annotation is \"name-\"") - createClusterCmd.Flags().StringSliceVar(&AnnotationsPgBouncer, "annotation-pgbouncer", []string{}, - "Add an Annotation specifically to pgBouncer deployments\n"+ - "The format to add an annotation is \"name=value\"\n"+ - "The format to remove an annotation is \"name-\"") - createClusterCmd.Flags().StringSliceVar(&AnnotationsPostgres, "annotation-postgres", []string{}, - "Add an Annotation specifically to PostgreSQL deployments\n"+ - "The format to add an annotation is \"name=value\"\n"+ - "The format to remove an annotation is \"name-\"") - createClusterCmd.Flags().StringVarP(&CCPImage, "ccp-image", "", "", "The CCPImage name to use for cluster creation. If specified, overrides the value crunchy-postgres.") - createClusterCmd.Flags().StringVarP(&CCPImageTag, "ccp-image-tag", "c", "", "The CCPImageTag to use for cluster creation. If specified, overrides the pgo.yaml setting.") - createClusterCmd.Flags().StringVarP(&CCPImagePrefix, "ccp-image-prefix", "", "", "The CCPImagePrefix to use for cluster creation. If specified, overrides the global configuration.") - createClusterCmd.Flags().StringVarP(&PGOImagePrefix, "pgo-image-prefix", "", "", "The PGOImagePrefix to use for cluster creation. If specified, overrides the global configuration.") - createClusterCmd.Flags().StringVar(&CPURequest, "cpu", "", "Set the number of millicores to request for the CPU, e.g. "+ - "\"100m\" or \"0.1\".") - createClusterCmd.Flags().StringVar(&CPULimit, "cpu-limit", "", "Set the number of millicores to limit for the CPU, e.g. 
"+ - "\"100m\" or \"0.1\".") - createClusterCmd.Flags().StringVarP(&CustomConfig, "custom-config", "", "", "The name of a configMap that holds custom PostgreSQL configuration files used to override defaults.") - createClusterCmd.Flags().StringVarP(&Database, "database", "d", "", "If specified, sets the name of the initial database that is created for the user. Defaults to the value set in the PostgreSQL Operator configuration, or if that is not present, the name of the cluster") - createClusterCmd.Flags().BoolVarP(&DisableAutofailFlag, "disable-autofail", "", false, "Disables autofail capabitilies in the cluster following cluster initialization.") - createClusterCmd.Flags().StringVarP(&UserLabels, "labels", "l", "", "The labels to apply to this cluster.") - createClusterCmd.Flags().StringVar(&MemoryRequest, "memory", "", "Set the amount of RAM to request, e.g. "+ - "1GiB. Overrides the default server value.") - createClusterCmd.Flags().StringVar(&MemoryLimit, "memory-limit", "", "Set the amount of RAM to limit, e.g. "+ - "1GiB.") - createClusterCmd.Flags().BoolVarP(&MetricsFlag, "metrics", "", false, "Adds the crunchy-postgres-exporter container to the database pod.") - createClusterCmd.Flags().StringVar(&ExporterCPURequest, "exporter-cpu", "", "Set the number of millicores to request for CPU "+ - "for the Crunchy Postgres Exporter sidecar container, e.g. \"100m\" or \"0.1\". Defaults to being unset.") - createClusterCmd.Flags().StringVar(&ExporterCPULimit, "exporter-cpu-limit", "", "Set the number of millicores to limit for CPU "+ - "for the Crunchy Postgres Exporter sidecar container, e.g. \"100m\" or \"0.1\". Defaults to being unset.") - createClusterCmd.Flags().StringVar(&ExporterMemoryRequest, "exporter-memory", "", "Set the amount of memory to request for "+ - "the Crunchy Postgres Exporter sidecar container. Defaults to server value (24Mi).") - createClusterCmd.Flags().StringVar(&ExporterMemoryLimit, "exporter-memory-limit", "", "Set the amount of memory to limit for "+ - "the Crunchy Postgres Exporter sidecar container.") - createClusterCmd.Flags().StringVarP(&NodeLabel, "node-label", "", "", "The node label (key=value) to use in placing the primary database. If not set, any node is used.") - createClusterCmd.Flags().StringVarP(&Password, "password", "", "", "The password to use for standard user account created during cluster initialization.") - createClusterCmd.Flags().IntVarP(&PasswordLength, "password-length", "", 0, "If no password is supplied, sets the length of the automatically generated password. Defaults to the value set on the server.") - createClusterCmd.Flags().StringVarP(&PasswordSuperuser, "password-superuser", "", "", "The password to use for the PostgreSQL superuser.") - createClusterCmd.Flags().StringVarP(&PasswordReplication, "password-replication", "", "", "The password to use for the PostgreSQL replication user.") - createClusterCmd.Flags().StringVar(&BackrestCPURequest, "pgbackrest-cpu", "", "Set the number of millicores to request for CPU "+ - "for the pgBackRest repository.") - createClusterCmd.Flags().StringVar(&BackrestCPULimit, "pgbackrest-cpu-limit", "", "Set the number of millicores to limit for CPU "+ - "for the pgBackRest repository.") - createClusterCmd.Flags().StringVar(&BackrestConfig, "pgbackrest-custom-config", "", "The name of a ConfigMap containing pgBackRest configuration files.") - createClusterCmd.Flags().StringVar(&BackrestMemoryRequest, "pgbackrest-memory", "", "Set the amount of memory to request for "+ - "the pgBackRest repository. 
Defaults to server value (48Mi).") - createClusterCmd.Flags().StringVar(&BackrestMemoryLimit, "pgbackrest-memory-limit", "", "Set the amount of memory to limit for "+ - "the pgBackRest repository.") - createClusterCmd.Flags().StringVarP(&BackrestPVCSize, "pgbackrest-pvc-size", "", "", - `The size of the PVC capacity for the pgBackRest repository. Overrides the value set in the storage class. This is ignored if the storage type of "local" is not used. Must follow the standard Kubernetes format, e.g. "10.1Gi"`) - createClusterCmd.Flags().StringVarP(&BackrestRepoPath, "pgbackrest-repo-path", "", "", - "The pgBackRest repository path that should be utilized instead of the default. Required "+ - "for standby\nclusters to define the location of an existing pgBackRest repository.") - createClusterCmd.Flags().StringVarP(&BackrestS3Key, "pgbackrest-s3-key", "", "", - "The AWS S3 key that should be utilized for the cluster when the \"s3\" "+ - "storage type is enabled for pgBackRest.") - createClusterCmd.Flags().StringVarP(&BackrestS3Bucket, "pgbackrest-s3-bucket", "", "", - "The AWS S3 bucket that should be utilized for the cluster when the \"s3\" "+ - "storage type is enabled for pgBackRest.") - createClusterCmd.Flags().StringVar(&BackrestS3CASecretName, "pgbackrest-s3-ca-secret", "", - "If used, specifies a Kubernetes secret that uses a different CA certificate for "+ - "S3 or a S3-like storage interface. Must contain a key with the value \"aws-s3-ca.crt\"") - createClusterCmd.Flags().StringVarP(&BackrestS3Endpoint, "pgbackrest-s3-endpoint", "", "", - "The AWS S3 endpoint that should be utilized for the cluster when the \"s3\" "+ - "storage type is enabled for pgBackRest.") - createClusterCmd.Flags().StringVarP(&BackrestS3KeySecret, "pgbackrest-s3-key-secret", "", "", - "The AWS S3 key secret that should be utilized for the cluster when the \"s3\" "+ - "storage type is enabled for pgBackRest.") - createClusterCmd.Flags().StringVarP(&BackrestS3Region, "pgbackrest-s3-region", "", "", - "The AWS S3 region that should be utilized for the cluster when the \"s3\" "+ - "storage type is enabled for pgBackRest.") - createClusterCmd.Flags().StringVarP(&BackrestS3URIStyle, "pgbackrest-s3-uri-style", "", "", "Specifies whether \"host\" or \"path\" style URIs will be used when connecting to S3.") - createClusterCmd.Flags().BoolVarP(&BackrestS3VerifyTLS, "pgbackrest-s3-verify-tls", "", true, "This sets if pgBackRest should verify the TLS certificate when connecting to S3. To disable, use \"--pgbackrest-s3-verify-tls=false\".") - createClusterCmd.Flags().StringVar(&BackrestStorageConfig, "pgbackrest-storage-config", "", "The name of the storage config in pgo.yaml to use for the pgBackRest local repository.") - createClusterCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use with pgBackRest. Either \"local\", \"s3\" or both, comma separated. (default \"local\")") - createClusterCmd.Flags().BoolVarP(&BadgerFlag, "pgbadger", "", false, "Adds the crunchy-pgbadger container to the database pod.") - createClusterCmd.Flags().BoolVarP(&PgbouncerFlag, "pgbouncer", "", false, "Adds a crunchy-pgbouncer deployment to the cluster.") - createClusterCmd.Flags().StringVar(&PgBouncerCPURequest, "pgbouncer-cpu", "", "Set the number of millicores to request for CPU "+ - "for pgBouncer. Defaults to being unset.") - createClusterCmd.Flags().StringVar(&PgBouncerCPULimit, "pgbouncer-cpu-limit", "", "Set the number of millicores to limit for CPU "+ - "for pgBouncer. 
Defaults to being unset.") - createClusterCmd.Flags().StringVar(&PgBouncerMemoryRequest, "pgbouncer-memory", "", "Set the amount of memory to request for "+ - "pgBouncer. Defaults to server value (24Mi).") - createClusterCmd.Flags().StringVar(&PgBouncerMemoryLimit, "pgbouncer-memory-limit", "", "Set the amount of memory to limit for "+ - "pgBouncer.") - createClusterCmd.Flags().Int32Var(&PgBouncerReplicas, "pgbouncer-replicas", 0, "Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.") - createClusterCmd.Flags().StringVarP(&ReplicaStorageConfig, "replica-storage-config", "", "", "The name of a Storage config in pgo.yaml to use for the cluster replica storage.") - createClusterCmd.Flags().StringVarP(&PodAntiAffinity, "pod-anti-affinity", "", "", - "Specifies the type of anti-affinity that should be utilized when applying "+ - "default pod anti-affinity rules to PG clusters (default \"preferred\")") - createClusterCmd.Flags().StringVarP(&PodAntiAffinityPgBackRest, "pod-anti-affinity-pgbackrest", "", "", - "Set the Pod anti-affinity rules specifically for the pgBackRest "+ - "repository. Defaults to the default cluster pod anti-affinity (i.e. \"preferred\"), "+ - "or the value set by --pod-anti-affinity") - createClusterCmd.Flags().StringVarP(&PodAntiAffinityPgBouncer, "pod-anti-affinity-pgbouncer", "", "", - "Set the Pod anti-affinity rules specifically for the pgBouncer "+ - "Pods. Defaults to the default cluster pod anti-affinity (i.e. \"preferred\"), "+ - "or the value set by --pod-anti-affinity") - createClusterCmd.Flags().StringVarP(&PoliciesFlag, "policies", "z", "", "The policies to apply when creating a cluster, comma separated.") - createClusterCmd.Flags().StringVarP(&PVCSize, "pvc-size", "", "", - `The size of the PVC capacity for primary and replica PostgreSQL instances. Overrides the value set in the storage class. Must follow the standard Kubernetes format, e.g. "10.1Gi"`) - createClusterCmd.Flags().IntVarP(&ClusterReplicaCount, "replica-count", "", 0, "The number of replicas to create as part of the cluster.") - createClusterCmd.Flags().StringVar(&ReplicationTLSSecret, "replication-tls-secret", "", "The name of the secret that contains "+ - "the TLS keypair to use for enabling certificate-based authentication between PostgreSQL instances, "+ - "particularly for the purpose of replication. Must be used with \"server-tls-secret\" and \"server-ca-secret\".") - createClusterCmd.Flags().StringVarP(&RestoreFrom, "restore-from", "", "", "The name of cluster to restore from when bootstrapping a new cluster") - createClusterCmd.Flags().StringVarP(&BackupOpts, "restore-opts", "", "", - "The options to pass into pgbackrest where performing a restore to bootrap the cluster. "+ - "Only applicable when a \"restore-from\" value is specified") - createClusterCmd.Flags().StringVarP(&SecretFrom, "secret-from", "s", "", "The cluster name to use when restoring secrets.") - createClusterCmd.Flags().StringVar(&CASecret, "server-ca-secret", "", "The name of the secret that contains "+ - "the certficate authority (CA) to use for enabling the PostgreSQL cluster to accept TLS connections. "+ - "Must be used with \"server-tls-secret\".") - createClusterCmd.Flags().StringVar(&TLSSecret, "server-tls-secret", "", "The name of the secret that contains "+ - "the TLS keypair to use for enabling the PostgreSQL cluster to accept TLS connections. 
"+ - "Must be used with \"server-ca-secret\"") - createClusterCmd.Flags().StringVarP(&ServiceType, "service-type", "", "", "The Service type to use for the PostgreSQL cluster. If not set, the pgo.yaml default will be used.") - createClusterCmd.Flags().BoolVar(&ShowSystemAccounts, "show-system-accounts", false, "Include the system accounts in the results.") - createClusterCmd.Flags().StringVarP(&StorageConfig, "storage-config", "", "", "The name of a Storage config in pgo.yaml to use for the cluster storage.") - createClusterCmd.Flags().BoolVarP(&SyncReplication, "sync-replication", "", false, - "Enables synchronous replication for the cluster.") - createClusterCmd.Flags().BoolVar(&TLSOnly, "tls-only", false, "If true, forces all PostgreSQL connections to be over TLS. "+ - "Must also set \"server-tls-secret\" and \"server-ca-secret\"") - createClusterCmd.Flags().BoolVarP(&Standby, "standby", "", false, "Creates a standby cluster "+ - "that replicates from a pgBackRest repository in AWS S3.") - createClusterCmd.Flags().StringSliceVar(&Tablespaces, "tablespace", []string{}, - "Create a PostgreSQL tablespace on the cluster, e.g. \"name=ts1:storageconfig=nfsstorage\". The format is "+ - "a key/value map that is delimited by \"=\" and separated by \":\". The following parameters are available:\n\n"+ - "- name (required): the name of the PostgreSQL tablespace\n"+ - "- storageconfig (required): the storage configuration to use, as specified in the list available in the "+ - "\"pgo-config\" ConfigMap (aka \"pgo.yaml\")\n"+ - "- pvcsize: the size of the PVC capacity, which overrides the value set in the specified storageconfig. "+ - "Follows the Kubernetes quantity format.\n\n"+ - "For example, to create a tablespace with the NFS storage configuration with a PVC of size 10GiB:\n\n"+ - "--tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi") - createClusterCmd.Flags().StringVarP(&Username, "username", "u", "", "The username to use for creating the PostgreSQL user with standard permissions. Defaults to the value in the PostgreSQL Operator configuration.") - createClusterCmd.Flags().StringVar(&WALStorageConfig, "wal-storage-config", "", - `The name of a storage configuration in pgo.yaml to use for PostgreSQL's write-ahead log (WAL).`) - createClusterCmd.Flags().StringVar(&WALPVCSize, "wal-storage-size", "", - `The size of the capacity for WAL storage, which overrides any value in the storage configuration. Follows the Kubernetes quantity format.`) - - // pgo create pgadmin - createPgAdminCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - - // pgo create pgbouncer - createPgbouncerCmd.Flags().StringVar(&PgBouncerCPURequest, "cpu", "", "Set the number of millicores to request for CPU "+ - "for pgBouncer. Defaults to being unset.") - createPgbouncerCmd.Flags().StringVar(&PgBouncerCPULimit, "cpu-limit", "", "Set the number of millicores to request for CPU "+ - "for pgBouncer.") - createPgbouncerCmd.Flags().StringVar(&PgBouncerMemoryRequest, "memory", "", "Set the amount of memory to request for "+ - "pgBouncer. Defaults to server value (24Mi).") - createPgbouncerCmd.Flags().StringVar(&PgBouncerMemoryLimit, "memory-limit", "", "Set the amount of memory to limit for "+ - "pgBouncer.") - createPgbouncerCmd.Flags().Int32Var(&PgBouncerReplicas, "replicas", 0, "Set the total number of pgBouncer instances to deploy. 
If not set, defaults to 1.") - createPgbouncerCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - - // "pgo create pgouser" flags - createPgouserCmd.Flags().BoolVarP(&AllNamespaces, "all-namespaces", "", false, "specifies this user will have access to all namespaces.") - createPgoroleCmd.Flags().StringVarP(&Permissions, "permissions", "", "", "specify a comma separated list of permissions for a pgorole") - createPgouserCmd.Flags().StringVarP(&PgouserPassword, "pgouser-password", "", "", "specify a password for a pgouser") - createPgouserCmd.Flags().StringVarP(&PgouserRoles, "pgouser-roles", "", "", "specify a comma separated list of Roles for a pgouser") - createPgouserCmd.Flags().StringVarP(&PgouserNamespaces, "pgouser-namespaces", "", "", "specify a comma separated list of Namespaces for a pgouser") - - // "pgo create policy" flags - createPolicyCmd.Flags().StringVarP(&PolicyFile, "in-file", "i", "", "The policy file path to use for adding a policy.") - createPolicyCmd.Flags().StringVarP(&PolicyURL, "url", "u", "", "The url to use for adding a policy.") - - // "pgo create schedule" flags - createScheduleCmd.Flags().StringVarP(&ScheduleDatabase, "database", "", "", "The database to run the SQL policy against.") - createScheduleCmd.Flags().StringVarP(&PGBackRestType, "pgbackrest-backup-type", "", "", "The type of pgBackRest backup to schedule (full, diff or incr).") - createScheduleCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use when scheduling pgBackRest backups. Either \"local\", \"s3\" or both, comma separated. (default \"local\")") - createScheduleCmd.Flags().StringVarP(&CCPImageTag, "ccp-image-tag", "c", "", "The CCPImageTag to use for cluster creation. If specified, overrides the pgo.yaml setting.") - createScheduleCmd.Flags().StringVarP(&SchedulePolicy, "policy", "", "", "The policy to use for SQL schedules.") - createScheduleCmd.Flags().StringVarP(&Schedule, "schedule", "", "", "The schedule assigned to the cron task.") - createScheduleCmd.Flags().StringVarP(&ScheduleOptions, "schedule-opts", "", "", "The custom options passed to the create schedule API.") - createScheduleCmd.Flags().StringVarP(&ScheduleType, "schedule-type", "", "", "The type of schedule to be created (pgbackrest or policy).") - createScheduleCmd.Flags().StringVarP(&ScheduleSecret, "secret", "", "", "The secret name for the username and password of the PostgreSQL role for SQL schedules.") - createScheduleCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - - // "pgo create user" flags - createUserCmd.Flags().BoolVar(&AllFlag, "all", false, "Create a user on every cluster.") - createUserCmd.Flags().BoolVarP(&ManagedUser, "managed", "", false, "Creates a user with secrets that can be managed by the Operator.") - createUserCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", `The output format. Supported types are: "json"`) - createUserCmd.Flags().StringVarP(&Password, "password", "", "", "The password to use for creating a new user which overrides a generated password.") - createUserCmd.Flags().IntVarP(&PasswordLength, "password-length", "", 0, "If no password is supplied, sets the length of the automatically generated password. 
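// Illustrative sketch only (hypothetical function name, not from this diff): the schedule
// flags above accept a schedule type of "pgbackrest" or "policy", and pgBackRest schedules
// take a backup type of "full", "diff" or "incr". The implied value checks:

package main

import "fmt"

func validateScheduleFlags(scheduleType, backupType string) error {
	switch scheduleType {
	case "policy":
		return nil
	case "pgbackrest":
		switch backupType {
		case "full", "diff", "incr":
			return nil
		}
		return fmt.Errorf("invalid pgBackRest backup type %q", backupType)
	}
	return fmt.Errorf("invalid schedule type %q", scheduleType)
}

func main() {
	// e.g. a nightly full backup:
	//   pgo create schedule --schedule="0 1 * * *" --schedule-type=pgbackrest --pgbackrest-backup-type=full mycluster
	fmt.Println(validateScheduleFlags("pgbackrest", "full")) // <nil>
}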
Defaults to the value set on the server.") - createUserCmd.Flags().StringVar(&PasswordType, "password-type", "md5", "The type of password hashing to use."+ - "Choices are: (md5, scram-sha-256).") - createUserCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - createUserCmd.Flags().StringVarP(&Username, "username", "", "", "The username to use for creating a new user") - createUserCmd.Flags().IntVarP(&PasswordAgeDays, "valid-days", "", 0, "Sets the number of days that a password is valid. Defaults to the server value.") -} - -// createPgouserCmd ... -var createPgouserCmd = &cobra.Command{ - Use: "pgouser", - Short: "Create a pgouser", - Long: `Create a pgouser. For example: - - pgo create pgouser someuser`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("create pgouser called ") - - if len(args) == 0 { - fmt.Println(`Error: A pgouser username is required for this command.`) - } else { - createPgouser(args, Namespace) - } - }, -} - -// createPgoroleCmd ... -var createPgoroleCmd = &cobra.Command{ - Use: "pgorole", - Short: "Create a pgorole", - Long: `Create a pgorole. For example: - - pgo create pgorole somerole --permissions="Cat,Ls"`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("create pgorole called ") - - if len(args) == 0 { - fmt.Println(`Error: A pgouser role name is required for this command.`) - } else { - createPgorole(args, Namespace) - } - }, -} - -// createNamespaceCmd ... -var createNamespaceCmd = &cobra.Command{ - Use: "namespace", - Short: "Create a namespace", - Long: `Create a namespace. For example: - - pgo create namespace somenamespace - - Note: For Kubernetes versions prior to 1.12, this command will not function properly - - use $PGOROOT/deploy/add_targted_namespace.sh scriptor or give the user cluster-admin privileges. - For more details, see the Namespace Creation section under Installing Operator Using Bash in the documentation.`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("create namespace called ") - - if len(args) == 0 { - fmt.Println(`Error: A namespace name is required for this command.`) - } else { - createNamespace(args, Namespace) - } - }, -} diff --git a/pgo/cmd/delete.go b/pgo/cmd/delete.go deleted file mode 100644 index d29341355e..0000000000 --- a/pgo/cmd/delete.go +++ /dev/null @@ -1,560 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - - "github.com/crunchydata/postgres-operator/pgo/util" - "github.com/spf13/cobra" -) - -// Several prompts to help a user decide if they wish to actually delete their -// data from a cluster -const ( - deleteClusterAllPromptMessage = "This will delete ALL OF YOUR DATA, including backups. Proceed?" 
- deleteClusterKeepBackupsPromptMessage = "This will delete your active data, but your backups will still be available. Proceed?" - deleteClusterKeepDataPromptMessage = "This will delete your cluster as well as your backups, but your data is still accessible if you recreate the cluster. Proceed?" -) - -// deleteCmd represents the delete command -var deleteCmd = &cobra.Command{ - Use: "delete", - Short: "Delete an Operator resource", - Long: `The delete command allows you to delete an Operator resource. For example: - - pgo delete backup mycluster - pgo delete cluster mycluster - pgo delete cluster mycluster --delete-data - pgo delete cluster mycluster --delete-data --delete-backups - pgo delete label mycluster --label=env=research - pgo delete pgadmin mycluster - pgo delete pgbouncer mycluster - pgo delete pgbouncer mycluster --uninstall - pgo delete pgouser someuser - pgo delete pgorole somerole - pgo delete policy mypolicy - pgo delete namespace mynamespace - pgo delete schedule --schedule-name=mycluster-pgbackrest-full - pgo delete schedule --selector=name=mycluster - pgo delete schedule mycluster - pgo delete user --username=testuser --selector=name=mycluster`, - Run: func(cmd *cobra.Command, args []string) { - - if len(args) == 0 { - fmt.Println(`Error: You must specify the type of resource to delete. Valid resource types include: - * backup - * cluster - * label - * pgadmin - * pgbouncer - * pgouser - * pgorole - * namespace - * policy - * user`) - } else { - switch args[0] { - case "backup", - "cluster", - "label", - "pgadmin", - "pgbouncer", - "pgouser", - "pgorole", - "policy", - "namespace", - "schedule", - "user": - break - default: - fmt.Println(`Error: You must specify the type of resource to delete. Valid resource types include: - * backup - * cluster - * label - * pgadmin - * pgbouncer - * pgouser - * pgorole - * policy - * namespace - * user`) - } - } - - }, -} - -// DEPRECATED deleteBackups, if set to "true", indicates that backups can be deleted. -var deleteBackups bool - -// KeepBackups, If set to "true", indicates that backups should be stored even -// after a cluster is deleted -var KeepBackups bool - -// NoPrompt, If set to "true", indicates that the user should not be prompted -// before executing a delete command -var NoPrompt bool - -// initialize variables specific for the "pgo delete" command and subcommands -func init() { - // set the various commands that are provided by this file - // First, add the root command, i.e. "pgo delete" - RootCmd.AddCommand(deleteCmd) - - // "pgo delete backup" - // used to delete backups - deleteCmd.AddCommand(deleteBackupCmd) - - // "pgo delete cluster" - // used to delete clusters - deleteCmd.AddCommand(deleteClusterCmd) - // "pgo delete cluster --all" - // allows for the deletion of every cluster. - deleteClusterCmd.Flags().BoolVar(&AllFlag, "all", false, - "Delete all clusters. Backups and data subject to --delete-backups and --delete-data flags, respectively.") - // "pgo delete cluster --delete-backups" - // "pgo delete cluster -d" - // instructs that any backups associated with a cluster should be deleted - deleteClusterCmd.Flags().BoolVarP(&deleteBackups, "delete-backups", "b", false, - "Causes the backups for specified cluster to be removed permanently.") - deleteClusterCmd.Flags().MarkDeprecated("delete-backups", - "Backups are deleted by default. 
If you would like to keep your backups, use the --keep-backups flag") - // "pgo delete cluster --delete-data" - // "pgo delete cluster -d" - // instructs that the PostgreSQL cluster data should be deleted - deleteClusterCmd.Flags().BoolVarP(&DeleteData, "delete-data", "d", false, - "Causes the data for specified cluster to be removed permanently.") - deleteClusterCmd.Flags().MarkDeprecated("delete-data", - "Data is deleted by default. You can preserve your data by keeping your backups with the --keep-backups flag") - // "pgo delete cluster --keep-backups" - // instructs that any backups associated with a cluster should be kept and not deleted - deleteClusterCmd.Flags().BoolVar(&KeepBackups, "keep-backups", false, - "Keeps the backups available for use at a later time (e.g. recreating the cluster).") - // "pgo delete cluster --keep-data" - // instructs that any data associated with the cluster should be kept and not deleted - deleteClusterCmd.Flags().BoolVar(&KeepData, "keep-data", false, - "Keeps the data for the specified cluster. Can be reassigned to exact same cluster in the future.") - // "pgo delete cluster --no-prompt" - // does not display the warning prompt to ensure the user wishes to delete - // a cluster - deleteClusterCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, - "No command line confirmation before delete.") - // "pgo delete cluster --selector" - // "pgo delete cluster -s" - // the selector flag that filters which clusters to delete - deleteClusterCmd.Flags().StringVarP(&Selector, "selector", "s", "", - "The selector to use for cluster filtering.") - - // "pgo delete label" - // delete a cluster label - deleteCmd.AddCommand(deleteLabelCmd) - // pgo delete label --label - // the label to be deleted - deleteLabelCmd.Flags().StringVar(&LabelCmdLabel, "label", "", - "The label to delete for any selected or specified clusters.") - // "pgo delete label --selector" - // "pgo delete label -s" - // the selector flag that filters which clusters to delete the cluster - // labels from - deleteLabelCmd.Flags().StringVarP(&Selector, "selector", "s", "", - "The selector to use for cluster filtering.") - - // "pgo delete namespace" - // deletes a namespace and all of the objects within it (clusters, etc.) 
- deleteCmd.AddCommand(deleteNamespaceCmd) - - // "pgo delete pgadmin" - // delete a pgAdmin instance associated with a PostgreSQL cluster - deleteCmd.AddCommand(deletePgAdminCmd) - // "pgo delete pgadmin --no-prompt" - // does not display the warning prompt to confirming pgAdmin deletion - deletePgAdminCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation before delete.") - // "pgo delete pgadmin --selector" - // "pgo delete pgadmin -s" - // the selector flag filtering clusters from which to delete the pgAdmin instances - deletePgAdminCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - - // "pgo delete pgbouncer" - // delete a pgBouncer instance that is associated with a PostgreSQL cluster - deleteCmd.AddCommand(deletePgbouncerCmd) - // "pgo delete pgbouncer --no-prompt" - // does not display the warning prompt to ensure the user wishes to delete - // a pgBouncer instance - deletePgbouncerCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation before delete.") - // "pgo delete pgbouncer --selector" - // "pgo delete pgbouncer -s" - // the selector flag that filters which clusters to delete the pgBouncer - // instances from - deletePgbouncerCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - // "pgo delete pgbouncer --uninstall" - // this flag removes all of the pgbouncer machinery that is installed in the - // PostgreSQL cluster - deletePgbouncerCmd.Flags().BoolVar(&PgBouncerUninstall, "uninstall", false, `Used to remove any "pgbouncer" owned object and user from the PostgreSQL cluster`) - - // "pgo delete pgorole" - // delete a role that is able to issue commands interface with the - // PostgreSQL Operator - deleteCmd.AddCommand(deletePgoroleCmd) - // "pgo delete pgorole --all" - // allows for the deletion of every PostgreSQL Operator role. - deletePgoroleCmd.Flags().BoolVar(&AllFlag, "all", false, "Delete all PostgreSQL Operator roles.") - // "pgo delete pgorole --no-prompt" - // does not display the warning prompt to ensure the user wishes to delete - // a PostgreSQL Operator role - deletePgoroleCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, - "No command line confirmation before delete.") - - // "pgo delete pgouser" - // delete a user that is able to issue commands to the PostgreSQL Operator - deleteCmd.AddCommand(deletePgouserCmd) - // "pgo delete cluster --all" - // allows for the deletion of every PostgreSQL Operator user. - deletePgouserCmd.Flags().BoolVar(&AllFlag, "all", false, - "Delete all PostgreSQL Operator users.") - // "pgo delete pgouser --no-prompt" - // does not display the warning prompt to ensure the user wishes to delete - // a PostgreSQL Operator user - deletePgouserCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, - "No command line confirmation before delete.") - - // "pgo delete policy" - // delete a SQL policy associated with a PostgreSQL cluster - deleteCmd.AddCommand(deletePolicyCmd) - // "pgo delete policy --all" - // delete all SQL policies for all clusters - deletePolicyCmd.Flags().BoolVar(&AllFlag, "all", false, "Delete all SQL policies.") - // "pgo delete policy --no-prompt" - // does not display the warning prompt to ensure the user wishes to delete - // a SQL policy - deletePolicyCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation before delete.") - - // "pgo delete schedule" - // delete a scheduled job for a cluster (e.g. 
a daily backup) - deleteCmd.AddCommand(deleteScheduleCmd) - // "pgo delete schedule --no-prompt" - // does not display the warning prompt to ensure the user wishes to delete - // a scheduled job for a cluster - deleteScheduleCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, - "No command line confirmation before delete.") - // "pgo delete schedule --schedule-name" - // the name of the scheduled job to delete - deleteScheduleCmd.Flags().StringVar(&ScheduleName, "schedule-name", "", - "The name of the schedule to delete.") - // "pgo delete schedule --selector" - // "pgo delete schedule -s" - // the selector flag that filters which scheduled jobs should be deleted - // from which clusters - deleteScheduleCmd.Flags().StringVarP(&Selector, "selector", "s", "", - "The selector to use for cluster filtering.") - - // "pgo delete user" - // Delete a user from a PostgreSQL cluster - deleteCmd.AddCommand(deleteUserCmd) - // "pgo delete user --all" - // delete all users from all PostgreSQL clusteres - deleteUserCmd.Flags().BoolVar(&AllFlag, "all", false, - "Delete all PostgreSQL users from all clusters.") - // "pgo delete user --no-prompt" - // does not display the warning prompt to ensure the user wishes to delete - // the PostgreSQL user from the cluster - deleteUserCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, - "No command line confirmation before delete.") - // "pgo delete user -o" - // selects the type of output to use, choices are "json" and any other input - // will render the text based one - deleteUserCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", - `The output format. Supported types are: "json"`) - // "pgo delete schedule --selector" - // "pgo delete schedule -s" - // the selector flag that filters which PostgreSQL users should be deleted - // from which clusters - deleteUserCmd.Flags().StringVarP(&Selector, "selector", "s", "", - "The selector to use for cluster filtering.") - // "pgo delete schedule --username" - // the username of the PostgreSQL user to delete - deleteUserCmd.Flags().StringVar(&Username, "username", "", - "The username to delete.") -} - -// deleteBackupCmd ... -var deleteBackupCmd = &cobra.Command{ - Use: "backup", - Short: "Delete a backup", - Long: `Delete a backup. For example: - - pgo delete backup mydatabase`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 { - fmt.Println("Error: A database or cluster name is required for this command.") - } else { - if util.AskForConfirmation(NoPrompt, "") { - deleteBackup(args, Namespace) - } else { - fmt.Println("Aborting...") - } - } - }, -} - -// deleteClusterCmd ... -var deleteClusterCmd = &cobra.Command{ - Use: "cluster", - Short: "Delete a PostgreSQL cluster", - Long: `Delete a PostgreSQL cluster. 
For example: - - pgo delete cluster --all - pgo delete cluster mycluster`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 && Selector == "" && !AllFlag { - fmt.Println("Error: A cluster name, selector, or --all is required for this command.") - } else { - // Set the prompt message based on whether or not "--keep-backups" is set - promptMessage := deleteClusterAllPromptMessage - - if KeepBackups && KeepData { - promptMessage = "" - } else if KeepBackups { - promptMessage = deleteClusterKeepBackupsPromptMessage - } else if KeepData { - promptMessage = deleteClusterKeepDataPromptMessage - } - - if util.AskForConfirmation(NoPrompt, promptMessage) { - deleteCluster(args, Namespace) - } else { - fmt.Println("Aborting...") - } - } - }, -} - -// deleteLabelCmd ... -var deleteLabelCmd = &cobra.Command{ - Use: "label", - Short: "Delete a label from clusters", - Long: `Delete a label from clusters. For example: - - pgo delete label mycluster --label=env=research - pgo delete label all --label=env=research - pgo delete label --selector=group=southwest --label=env=research`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 && Selector == "" { - fmt.Println("Error: A cluster name or selector is required for this command.") - } else { - deleteLabel(args, Namespace) - } - }, -} - -// deleteNamespaceCmd ... -var deleteNamespaceCmd = &cobra.Command{ - Use: "namespace", - Short: "Delete namespaces", - Long: `Delete namespaces. For example: - - pgo delete namespace mynamespace - pgo delete namespace --selector=env=test`, - Run: func(cmd *cobra.Command, args []string) { - if len(args) == 0 && Selector == "" { - fmt.Println("Error: namespace name or selector is required to delete a namespace.") - return - } - - if util.AskForConfirmation(NoPrompt, "") { - deleteNamespace(args, Namespace) - } else { - fmt.Println("Aborting...") - } - }, -} - -// deletePgoroleCmd ... -var deletePgoroleCmd = &cobra.Command{ - Use: "pgorole", - Short: "Delete a pgorole", - Long: `Delete a pgorole. For example: - - pgo delete pgorole somerole`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 { - fmt.Println("Error: A pgorole role name is required for this command.") - } else { - if util.AskForConfirmation(NoPrompt, "") { - deletePgorole(args, Namespace) - } else { - fmt.Println("Aborting...") - } - } - }, -} - -// deletePgouserCmd ... -var deletePgouserCmd = &cobra.Command{ - Use: "pgouser", - Short: "Delete a pgouser", - Long: `Delete a pgouser. For example: - - pgo delete pgouser someuser`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 { - fmt.Println("Error: A pgouser username is required for this command.") - } else { - if util.AskForConfirmation(NoPrompt, "") { - deletePgouser(args, Namespace) - } else { - fmt.Println("Aborting...") - } - } - }, -} - -// deletePgAdminCmd ... -var deletePgAdminCmd = &cobra.Command{ - Use: "pgadmin", - Short: "Delete a pgAdmin instance from a cluster", - Long: `Delete a pgAdmin instance from a cluster. 
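// Illustrative sketch only (hypothetical helper name, not from this diff): the prompt
// selection in the "pgo delete cluster" handler above can be summarized as a single
// function over the --keep-backups and --keep-data flags, using the prompt constants
// defined earlier in this file:

package main

import "fmt"

func promptForDelete(keepBackups, keepData bool) string {
	switch {
	case keepBackups && keepData:
		return ""
	case keepBackups:
		return "This will delete your active data, but your backups will still be available. Proceed?"
	case keepData:
		return "This will delete your cluster as well as your backups, but your data is still accessible if you recreate the cluster. Proceed?"
	default:
		return "This will delete ALL OF YOUR DATA, including backups. Proceed?"
	}
}

func main() {
	for _, c := range []struct{ keepBackups, keepData bool }{
		{false, false}, {true, false}, {false, true}, {true, true},
	} {
		fmt.Printf("keep-backups=%v keep-data=%v -> %q\n",
			c.keepBackups, c.keepData, promptForDelete(c.keepBackups, c.keepData))
	}
}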
For example: - - pgo delete pgadmin mycluster`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 && Selector == "" { - fmt.Println("Error: A cluster name or selector is required for this command.") - } else { - if util.AskForConfirmation(NoPrompt, "") { - deletePgAdmin(args, Namespace) - - } else { - fmt.Println("Aborting...") - } - } - }, -} - -// deletePgbouncerCmd ... -var deletePgbouncerCmd = &cobra.Command{ - Use: "pgbouncer", - Short: "Delete a pgbouncer from a cluster", - Long: `Delete a pgbouncer from a cluster. For example: - - pgo delete pgbouncer mycluster`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 && Selector == "" { - fmt.Println("Error: A cluster name or selector is required for this command.") - } else { - if util.AskForConfirmation(NoPrompt, "") { - deletePgbouncer(args, Namespace) - - } else { - fmt.Println("Aborting...") - } - } - }, -} - -// deletePolicyCmd ... -var deletePolicyCmd = &cobra.Command{ - Use: "policy", - Short: "Delete a SQL policy", - Long: `Delete a policy. For example: - - pgo delete policy mypolicy`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 && !AllFlag { - fmt.Println("Error: A policy name or --all is required for this command.") - } else { - if util.AskForConfirmation(NoPrompt, "") { - deletePolicy(args, Namespace) - } else { - fmt.Println("Aborting...") - } - } - }, -} - -// deleteScheduleCmd ... -var deleteScheduleCmd = &cobra.Command{ - Use: "schedule", - Short: "Delete a schedule", - Long: `Delete a cron-like schedule. For example: - - pgo delete schedule mycluster - pgo delete schedule --selector=env=test - pgo delete schedule --schedule-name=mycluster-pgbackrest-full`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 && Selector == "" && ScheduleName == "" { - fmt.Println("Error: cluster name, schedule name or selector is required to delete a schedule.") - return - } - - if util.AskForConfirmation(NoPrompt, "") { - deleteSchedule(args, Namespace) - } else { - fmt.Println("Aborting...") - } - }, -} - -// deleteUserCmd ... -var deleteUserCmd = &cobra.Command{ - Use: "user", - Short: "Delete a user", - Long: `Delete a user. For example: - - pgo delete user --username=someuser --selector=name=mycluster`, - Run: func(cmd *cobra.Command, args []string) { - - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 && AllFlag == false && Selector == "" { - fmt.Println("Error: --all, --selector, or a list of clusters is required for this command") - } else { - if util.AskForConfirmation(NoPrompt, "") { - deleteUser(args, Namespace) - - } else { - fmt.Println("Aborting...") - } - } - }, -} diff --git a/pgo/cmd/df.go b/pgo/cmd/df.go deleted file mode 100644 index 576376d1bc..0000000000 --- a/pgo/cmd/df.go +++ /dev/null @@ -1,268 +0,0 @@ -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "math" - "os" - "sort" - "strings" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -// dfTextPadding contains the values for what the text padding should be -type dfTextPadding struct { - Capacity int - Instance int - Pod int - PercentUsed int - PVC int - PVCType int - Used int -} - -// the different capacity levels and when to warn. Perhaps at some point one -// can configure this -const ( - capacityWarning = 85 - capacityCaution = 70 -) - -// values for the text paddings that remain constant -const ( - dfTextPaddingCapacity = 10 - dfTextPaddingPVCType = 12 - dfTextPaddingUsed = 10 - dfTextPaddingPercentUsed = 7 -) - -// pvcTypeToString contains the human readable strings of the PVC types -var pvcTypeToString = map[msgs.DfPVCType]string{ - msgs.PVCTypePostgreSQL: "data", - msgs.PVCTypepgBackRest: "pgbackrest", - msgs.PVCTypeTablespace: "tablespace", - msgs.PVCTypeWriteAheadLog: "wal", -} - -var dfCmd = &cobra.Command{ - Use: "df", - Short: "Display disk space for clusters", - Long: `Displays the disk status for PostgreSQL clusters. For example: - - pgo df mycluster - pgo df --selector=env=research - pgo df --all`, - Run: func(cmd *cobra.Command, args []string) { - log.Debug("df called") - - // if the namespace is not set, use the namespcae loaded from the - // environmental variable - if Namespace == "" { - Namespace = PGONamespace - } - - // if the AllFlag is set, set the Selector to "*" - if AllFlag { - Selector = msgs.DfShowAllSelector - } - - if Selector == "" && len(args) == 0 { - fmt.Println(`Error: You must specify at least one cluster, selector, or use the "--all" flag.`) - os.Exit(1) - } - - // if there are multiple cluster names set, iterate through them - // otherwise, just make the single API call - if len(args) > 0 { - for _, clusterName := range args { - // set the selector - selector := fmt.Sprintf("name=%s", clusterName) - - showDf(Namespace, selector) - } - return - } - - showDf(Namespace, Selector) - }, -} - -func init() { - RootCmd.AddCommand(dfCmd) - - dfCmd.Flags().BoolVar(&AllFlag, "all", false, "Get disk utilization for all managed clusters") - dfCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", `The output format. 
Supported types are: "json"`) - dfCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - -} - -// getPVCType returns a "human readable" form of the PVC -func getPVCType(pvcType msgs.DfPVCType) string { - return pvcTypeToString[pvcType] -} - -// getUtilizationColor returns the appropriate color to use on the terminal -// based on the overall utilization -func getUtilizationColor(utilization float64) func(...interface{}) string { - // go through the levels and return the appropriate color - switch { - case utilization >= capacityWarning: - return RED - case utilization >= capacityCaution: - return YELLOW - default: - return GREEN - } -} - -// makeDfInterface returns an interface slice of the available values in the df -func makeDfInterface(values []msgs.DfDetail) []interface{} { - // iterate through the list of values to make the interface - dfInterface := make([]interface{}, len(values)) - - for i, value := range values { - dfInterface[i] = value - } - - return dfInterface -} - -// printDfText renders a text response -func printDfText(response msgs.DfResponse) { - // if the request errored, return the message here and exit with an error - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - // if no results returned, return an error - if len(response.Results) == 0 { - fmt.Println("Nothing found.") - return - } - - // go get the max length of a few of the values we need to make an interface - // slice - dfInterface := makeDfInterface(response.Results) - - padding := dfTextPadding{ - Capacity: dfTextPaddingCapacity, - Instance: getMaxLength(dfInterface, headingInstance, "InstanceName"), - PercentUsed: dfTextPaddingPercentUsed, - Pod: getMaxLength(dfInterface, headingPod, "PodName"), - PVC: getMaxLength(dfInterface, headingPVC, "PVCName"), - PVCType: dfTextPaddingPVCType, - Used: dfTextPaddingUsed, - } - - printDfTextHeader(padding) - - // sort the results! - results := response.Results - sort.SliceStable(results, func(i int, j int) bool { - return results[i].InstanceName < results[j].InstanceName - }) - - // iterate through the reuslts and print them out - for _, result := range results { - printDfTextRow(result, padding) - } -} - -// printDfTextHeader prints out the header -func printDfTextHeader(padding dfTextPadding) { - // print the header - fmt.Println("") - fmt.Printf("%s", util.Rpad(headingPVC, " ", padding.PVC)) - fmt.Printf("%s", util.Rpad(headingInstance, " ", padding.Instance)) - fmt.Printf("%s", util.Rpad(headingPod, " ", padding.Pod)) - fmt.Printf("%s", util.Rpad(headingPVCType, " ", padding.PVCType)) - fmt.Printf("%s", util.Rpad(headingUsed, " ", padding.Used)) - fmt.Printf("%s", util.Rpad(headingCapacity, " ", padding.Capacity)) - fmt.Println(headingPercentUsed) - - // print the layer below the header...which prints out a bunch of "-" that's - // 1 less than the padding value - fmt.Println( - strings.Repeat("-", padding.PVC-1), - strings.Repeat("-", padding.Instance-1), - strings.Repeat("-", padding.Pod-1), - strings.Repeat("-", padding.PVCType-1), - strings.Repeat("-", padding.Used-1), - strings.Repeat("-", padding.Capacity-1), - strings.Repeat("-", padding.PercentUsed-1), - ) -} - -// printDfTextRow prints a row of the text data. 
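// Illustrative worked example only (hypothetical function name, not from this diff):
// the capacity thresholds defined above flag utilization at or above 85% as a warning
// and 70-84% as a caution; lower values are considered fine. Utilization is the rounded
// percentage of used space over capacity:

package main

import (
	"fmt"
	"math"
)

func utilizationLevel(usedBytes, capacityBytes int64) string {
	utilization := math.Round(float64(usedBytes) / float64(capacityBytes) * 100)
	switch {
	case utilization >= 85:
		return fmt.Sprintf("%.f%% (warning)", utilization)
	case utilization >= 70:
		return fmt.Sprintf("%.f%% (caution)", utilization)
	default:
		return fmt.Sprintf("%.f%% (ok)", utilization)
	}
}

func main() {
	const gib = 1 << 30
	fmt.Println(utilizationLevel(600*(1<<20), gib)) // 59% (ok)
	fmt.Println(utilizationLevel(750*(1<<20), gib)) // 73% (caution)
	fmt.Println(utilizationLevel(900*(1<<20), gib)) // 88% (warning)
}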
It also performs calculations -// that that are used to pretty up the rendering -func printDfTextRow(result msgs.DfDetail, padding dfTextPadding) { - // get how the utilization and capacity should be render with their units - pvcUsedSize, pvcUsedUnit := getSizeAndUnit(result.PVCUsed) - pvcCapacitySize, pvcCapacityUnit := getSizeAndUnit(result.PVCCapacity) - - // perform some upfront calculations - // get the % disk utilization - utilization := math.Round(float64(result.PVCUsed) / float64(result.PVCCapacity) * 100) - - // get the color to give guidance on how much disk is being utilized - utilizationColor := getUtilizationColor(utilization) - - fmt.Printf("%s", util.Rpad(result.PVCName, " ", padding.PVC)) - fmt.Printf("%s", util.Rpad(result.InstanceName, " ", padding.Instance)) - fmt.Printf("%s", util.Rpad(result.PodName, " ", padding.Pod)) - fmt.Printf("%s", util.Rpad(getPVCType(result.PVCType), " ", padding.PVCType)) - - fmt.Printf("%s", - util.Rpad(fmt.Sprintf("%.f%s", pvcUsedSize, getUnitString(pvcUsedUnit)), " ", padding.Used)) - - fmt.Printf("%s", - util.Rpad(fmt.Sprintf("%.f%s", pvcCapacitySize, getUnitString(pvcCapacityUnit)), " ", padding.Capacity)) - - fmt.Printf("%s\n", utilizationColor(fmt.Sprintf("%.f%%", utilization))) -} - -// showDf is the legacy function that handles processing the "pgo df" command -func showDf(namespace, selector string) { - request := msgs.DfRequest{ - Namespace: namespace, - Selector: selector, - } - - // make the request - response, err := api.ShowDf(httpclient, &SessionCredentials, request) - - // if there is an error, or the response code is not ok, print the error and - // exit - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(1) - } - - // render the next bit based on the output type - switch OutputFormat { - case "json": - printJSON(response) - default: - printDfText(response) - } -} diff --git a/pgo/cmd/failover.go b/pgo/cmd/failover.go deleted file mode 100644 index 4a667429a7..0000000000 --- a/pgo/cmd/failover.go +++ /dev/null @@ -1,145 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -var failoverCmd = &cobra.Command{ - Use: "failover", - Short: "Performs a manual failover", - Long: `Performs a manual failover. 
For example: - - pgo failover mycluster`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("failover called") - if len(args) == 0 { - fmt.Println(`Error: You must specify the cluster to failover.`) - } else { - if Query { - queryFailover(args, Namespace) - } else if util.AskForConfirmation(NoPrompt, "") { - if Target == "" { - fmt.Println(`Error: The --target flag is required for failover.`) - return - } - createFailover(args, Namespace) - } else { - fmt.Println("Aborting...") - } - } - - }, -} - -func init() { - RootCmd.AddCommand(failoverCmd) - - failoverCmd.Flags().BoolVarP(&Query, "query", "", false, "Prints the list of failover candidates.") - failoverCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") - failoverCmd.Flags().StringVarP(&Target, "target", "", "", "The replica target which the failover will occur on.") - -} - -// createFailover .... -func createFailover(args []string, ns string) { - log.Debugf("createFailover called %v", args) - - request := new(msgs.CreateFailoverRequest) - request.Namespace = ns - request.ClusterName = args[0] - request.Target = Target - request.ClientVersion = msgs.PGO_VERSION - - response, err := api.CreateFailover(httpclient, &SessionCredentials, request) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - fmt.Println(response.Results[k]) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} - -// queryFailover is a helper function to return the user information about the -// replicas that can be failed over to for this cluster. This is called when the -// "--query" flag is specified -func queryFailover(args []string, ns string) { - log.Debugf("queryFailover called %v", args) - - // call the API - response, err := api.QueryFailover(httpclient, args[0], &SessionCredentials, ns) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - // indicate which cluster this is. Put a newline before to put some - // separation between each line - if !response.Standby { - fmt.Printf("\nCluster: %s\n", args[0]) - } else { - fmt.Printf("\nCluster (standby): %s\n", args[0]) - } - - // If there is a controlled error, output the message here and continue - // to iterate through the list - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - // If there are no replicas found for this cluster, indicate so, and - // continue to iterate through the list - if len(response.Results) == 0 { - fmt.Println("No replicas found.") - return - } - - // output the information about each instance - fmt.Printf("%-20s\t%-10s\t%-10s\t%-20s\t%s\n", "REPLICA", "STATUS", "NODE", "REPLICATION LAG", - "PENDING RESTART") - - for i := 0; i < len(response.Results); i++ { - instance := response.Results[i] - - log.Debugf("postgresql instance: %v", instance) - - fmt.Printf("%-20s\t%-10s\t%-10s\t%12d %-7s\t%15t\n", - instance.Name, instance.Status, instance.Node, instance.ReplicationLag, "MB", - instance.PendingRestart) - } -} diff --git a/pgo/cmd/flags.go b/pgo/cmd/flags.go deleted file mode 100644 index bb831e4006..0000000000 --- a/pgo/cmd/flags.go +++ /dev/null @@ -1,52 +0,0 @@ -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -//flags used by more than 1 command -var DeleteData bool - -// KeepData, If set to "true", indicates that cluster data should be stored -// even after a cluster is deleted. This is DEPRECATED -var KeepData bool - -var Query bool - -var Target string -var Targets []string - -var OutputFormat string -var Labelselector string -var DebugFlag bool -var Selector string -var DryRun bool -var ScheduleName string -var NodeLabel string - -var BackupType string -var RestoreType string -var BackupOpts string -var BackrestStorageType string - -var RED func(a ...interface{}) string -var YELLOW func(a ...interface{}) string -var GREEN func(a ...interface{}) string - -var Namespace string -var PGONamespace string -var APIServerURL string -var PGO_CA_CERT, PGO_CLIENT_CERT, PGO_CLIENT_KEY string -var PGO_DISABLE_TLS bool -var EXCLUDE_OS_TRUST bool diff --git a/pgo/cmd/label.go b/pgo/cmd/label.go deleted file mode 100644 index 3a888663b6..0000000000 --- a/pgo/cmd/label.go +++ /dev/null @@ -1,130 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" - "os" -) - -var LabelCmdLabel string -var LabelMap map[string]string -var DeleteLabel bool - -var labelCmd = &cobra.Command{ - Use: "label", - Short: "Label a set of clusters", - Long: `LABEL allows you to add or remove a label on a set of clusters. 
For example: - - pgo label mycluster yourcluster --label=environment=prod - pgo label all --label=environment=prod - pgo label --label=environment=prod --selector=name=mycluster - pgo label --label=environment=prod --selector=status=final --dry-run`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("label called") - if len(args) == 0 && Selector == "" { - fmt.Println("Error: A selector or list of clusters is required to label a policy.") - return - } - if LabelCmdLabel == "" { - fmt.Println("Error: You must specify the label to apply.") - } else { - labelClusters(args, Namespace) - } - }, -} - -func init() { - RootCmd.AddCommand(labelCmd) - - labelCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - labelCmd.Flags().StringVarP(&LabelCmdLabel, "label", "", "", "The new label to apply for any selected or specified clusters.") - labelCmd.Flags().BoolVarP(&DryRun, "dry-run", "", false, "Shows the clusters that the label would be applied to, without labelling them.") - -} - -func labelClusters(clusters []string, ns string) { - var err error - - if len(clusters) == 0 && Selector == "" { - fmt.Println("No clusters specified.") - return - } - - r := new(msgs.LabelRequest) - r.Args = clusters - r.Namespace = ns - r.Selector = Selector - r.DryRun = DryRun - r.LabelCmdLabel = LabelCmdLabel - r.DeleteLabel = DeleteLabel - r.ClientVersion = msgs.PGO_VERSION - - log.Debugf("%s is the selector", r.Selector) - response, err := api.LabelClusters(httpclient, &SessionCredentials, r) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if DryRun { - fmt.Println("The label would have been applied on the following:") - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - fmt.Println("Label applied on " + response.Results[k]) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} - -// deleteLabel ... -func deleteLabel(args []string, ns string) { - log.Debugf("deleteLabel called %v", args) - - req := msgs.DeleteLabelRequest{} - req.Selector = Selector - req.Namespace = ns - req.Args = args - req.LabelCmdLabel = LabelCmdLabel - req.ClientVersion = msgs.PGO_VERSION - - response, err := api.DeleteLabel(httpclient, &SessionCredentials, &req) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for _, result := range response.Results { - fmt.Println(result) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - } - -} diff --git a/pgo/cmd/namespace.go b/pgo/cmd/namespace.go deleted file mode 100644 index 1065193ad8..0000000000 --- a/pgo/cmd/namespace.go +++ /dev/null @@ -1,198 +0,0 @@ -package cmd - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -func showNamespace(args []string) { - // copy arg list to keep track of original cli args - nsList := make([]string, len(args)) - copy(nsList, args) - - defNS := os.Getenv("PGO_NAMESPACE") - if defNS != "" { - fmt.Printf("current local default namespace: %s\n", defNS) - found := false - // check if default is already in nsList - for _, ns := range nsList { - if ns == defNS { - found = true - break - } - } - - if !found { - log.Debugf("adding default namespace [%s] to args", defNS) - nsList = append(nsList, defNS) - } - } - - r := msgs.ShowNamespaceRequest{} - r.ClientVersion = msgs.PGO_VERSION - r.Args = nsList - r.AllFlag = AllFlag - - if len(nsList) == 0 && AllFlag == false { - fmt.Println("Error: namespace args or --all is required") - os.Exit(2) - } - - log.Debugf("showNamespace called %v", nsList) - - response, err := api.ShowNamespace(httpclient, &SessionCredentials, &r) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if OutputFormat == "json" { - b, err := json.MarshalIndent(response, "", " ") - if err != nil { - fmt.Println("Error: ", err) - } - fmt.Println(string(b)) - return - } - - if len(response.Results) == 0 { - fmt.Println("Nothing found.") - return - } - - fmt.Printf("pgo username: %s\n", response.Username) - - fmt.Printf("%s", util.Rpad("namespace", " ", 25)) - fmt.Printf("%s", util.Rpad("useraccess", " ", 20)) - fmt.Printf("%s\n", util.Rpad("installaccess", " ", 20)) - - var accessible, iAccessible string - for _, result := range response.Results { - accessible = GREEN(util.Rpad("accessible", " ", 20)) - if !result.UserAccess { - accessible = RED(util.Rpad("no access", " ", 20)) - } - iAccessible = GREEN(util.Rpad("accessible", " ", 20)) - if !result.InstallationAccess { - iAccessible = RED(util.Rpad("no access", " ", 20)) - } - fmt.Printf("%s", util.Rpad(result.Namespace, " ", 25)) - fmt.Printf("%s", accessible) - fmt.Printf("%s\n", iAccessible) - } - -} - -func createNamespace(args []string, ns string) { - log.Debugf("createNamespace called %v [%s]", args, Selector) - - r := msgs.CreateNamespaceRequest{} - r.ClientVersion = msgs.PGO_VERSION - r.Namespace = ns - r.Args = args - - if len(args) == 0 { - fmt.Println("Error: namespace names are required") - os.Exit(2) - } - - response, err := api.CreateNamespace(httpclient, &SessionCredentials, &r) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - log.Debugf("createNamespace response %v", response) - if response.Status.Code == msgs.Ok { - for _, v := range response.Results { - fmt.Println(v) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - } -} - -func deleteNamespace(args []string, ns string) { - log.Debugf("deleteNamespace called %v [%s]", args, Selector) - - r := msgs.DeleteNamespaceRequest{} - r.Selector = Selector - r.AllFlag = AllFlag - r.ClientVersion = msgs.PGO_VERSION - r.Namespace = ns - r.Args = args - - if Selector != "" && len(args) > 0 { - fmt.Println("Error: can not specify both arguments and --selector") - os.Exit(2) - } - - response, err := api.DeleteNamespace(httpclient, &r, &SessionCredentials) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } 
- - if response.Status.Code == msgs.Ok { - for _, v := range response.Results { - fmt.Println(v) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - } - -} -func updateNamespace(args []string) { - var err error - - if len(args) == 0 { - fmt.Println("Error: A Namespace name argument is required.") - return - } - - r := new(msgs.UpdateNamespaceRequest) - r.Args = args - r.ClientVersion = msgs.PGO_VERSION - - response, err := api.UpdateNamespace(httpclient, r, &SessionCredentials) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - fmt.Println("namespace updated ") - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} diff --git a/pgo/cmd/pgadmin.go b/pgo/cmd/pgadmin.go deleted file mode 100644 index 52412324d4..0000000000 --- a/pgo/cmd/pgadmin.go +++ /dev/null @@ -1,219 +0,0 @@ -package cmd - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - "strings" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" -) - -// showPgAdminTextPadding contains the values for what the text padding should be -type showPgAdminTextPadding struct { - ClusterName int - ClusterIP int - ExternalIP int - ServiceName int -} - -// updatePgAdminTextPadding contains the values for what the text padding should be -type updatePgAdminTextPadding struct { - ClusterName int - ErrorMessage int - Status int -} - -func createPgAdmin(args []string, ns string) { - if Selector == "" && len(args) == 0 { - fmt.Println("Error: The --selector flag is required when cluster is unspecified.") - os.Exit(1) - } - - request := msgs.CreatePgAdminRequest{ - Args: args, - ClientVersion: msgs.PGO_VERSION, - Namespace: ns, - Selector: Selector, - } - - response, err := api.CreatePgAdmin(httpclient, &SessionCredentials, &request) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(1) - } - - // this is slightly rewritten from the legacy method - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - for _, v := range response.Results { - fmt.Println(v) - } - os.Exit(1) - } - - for _, v := range response.Results { - fmt.Println(v) - } -} - -func deletePgAdmin(args []string, ns string) { - if Selector == "" && len(args) == 0 { - fmt.Println("Error: The --selector flag or a cluster name is required.") - os.Exit(1) - } - - // set up the API request - request := msgs.DeletePgAdminRequest{ - Args: args, - ClientVersion: msgs.PGO_VERSION, - Selector: Selector, - Namespace: ns, - } - - response, err := api.DeletePgAdmin(httpclient, &SessionCredentials, &request) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(1) - } - - if response.Status.Code == msgs.Ok { - for _, v := range response.Results { - fmt.Println(v) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - -} - -// 
makeShowPgAdminInterface returns an interface slice of the available values -// in show pgadmin -func makeShowPgAdminInterface(values []msgs.ShowPgAdminDetail) []interface{} { - // iterate through the list of values to make the interface - showPgAdminInterface := make([]interface{}, len(values)) - - for i, value := range values { - showPgAdminInterface[i] = value - } - - return showPgAdminInterface -} - -// printShowPgAdminText prints out the information around each PostgreSQL -// cluster's pgAdmin -// printShowPgAdminText renders a text response -func printShowPgAdminText(response msgs.ShowPgAdminResponse) { - // if the request errored, return the message here and exit with an error - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - // if no results returned, return an error - if len(response.Results) == 0 { - fmt.Println("Nothing found.") - return - } - - // make the interface for the pgadmin clusters - showPgAdminInterface := makeShowPgAdminInterface(response.Results) - - // format the header - // start by setting up the different text paddings - padding := showPgAdminTextPadding{ - ClusterName: getMaxLength(showPgAdminInterface, headingCluster, "ClusterName"), - ClusterIP: getMaxLength(showPgAdminInterface, headingClusterIP, "ServiceClusterIP"), - ExternalIP: getMaxLength(showPgAdminInterface, headingExternalIP, "ServiceExternalIP"), - ServiceName: getMaxLength(showPgAdminInterface, headingService, "ServiceName"), - } - - printShowPgAdminTextHeader(padding) - - // iterate through the reuslts and print them out - for _, result := range response.Results { - printShowPgAdminTextRow(result, padding) - } -} - -// printShowPgAdminTextHeader prints out the header -func printShowPgAdminTextHeader(padding showPgAdminTextPadding) { - // print the header - fmt.Println("") - fmt.Printf("%s", util.Rpad(headingCluster, " ", padding.ClusterName)) - fmt.Printf("%s", util.Rpad(headingService, " ", padding.ServiceName)) - fmt.Printf("%s", util.Rpad(headingClusterIP, " ", padding.ClusterIP)) - fmt.Printf("%s", util.Rpad(headingExternalIP, " ", padding.ExternalIP)) - fmt.Println("") - - // print the layer below the header...which prints out a bunch of "-" that's - // 1 less than the padding value - fmt.Println( - strings.Repeat("-", padding.ClusterName-1), - strings.Repeat("-", padding.ServiceName-1), - strings.Repeat("-", padding.ClusterIP-1), - strings.Repeat("-", padding.ExternalIP-1), - ) -} - -// printShowPgAdminTextRow prints a row of the text data -func printShowPgAdminTextRow(result msgs.ShowPgAdminDetail, padding showPgAdminTextPadding) { - fmt.Printf("%s", util.Rpad(result.ClusterName, " ", padding.ClusterName)) - fmt.Printf("%s", util.Rpad(result.ServiceName, " ", padding.ServiceName)) - fmt.Printf("%s", util.Rpad(result.ServiceClusterIP, " ", padding.ClusterIP)) - fmt.Printf("%s", util.Rpad(result.ServiceExternalIP, " ", padding.ExternalIP)) - fmt.Println("") -} - -// showPgAdmin prepares to make an API requests to display information about -// one or more pgAdmin deployments. 
"clusterNames" is an array of cluster -// names to iterate over -func showPgAdmin(namespace string, clusterNames []string) { - // first, determine if any arguments have been pass in - if len(clusterNames) == 0 && Selector == "" { - fmt.Println("Error: You must provide at least one cluster name, or use a selector with the `--selector` flag") - os.Exit(1) - } - - request := msgs.ShowPgAdminRequest{ - ClusterNames: clusterNames, - Namespace: namespace, - Selector: Selector, - } - - response, err := api.ShowPgAdmin(httpclient, &SessionCredentials, request) - if err != nil { - fmt.Println("Error:", err.Error()) - os.Exit(1) - } - - // great! now we can work on interpreting the results and outputting them - // per the user's desired output format - // render the next bit based on the output type - switch OutputFormat { - case "json": - fmt.Println("outputting in json") - printJSON(response) - default: - fmt.Println("outputting text") - printShowPgAdminText(response) - } -} diff --git a/pgo/cmd/pgbouncer.go b/pgo/cmd/pgbouncer.go deleted file mode 100644 index 9d302a8cdb..0000000000 --- a/pgo/cmd/pgbouncer.go +++ /dev/null @@ -1,414 +0,0 @@ -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - "strings" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" -) - -// showPgBouncerTextPadding contains the values for what the text padding should be -type showPgBouncerTextPadding struct { - ClusterName int - ClusterIP int - ExternalIP int - Password int - ServiceName int - Username int -} - -// updatePgBouncerTextPadding contains the values for what the text padding should be -type updatePgBouncerTextPadding struct { - ClusterName int - ErrorMessage int - Status int -} - -// PgBouncerReplicas is the total number of replica pods to deploy with a -// pgBouncer Deployment -var PgBouncerReplicas int32 - -// PgBouncerUninstall is used to ensure the objects intalled in PostgreSQL on -// behalf of pgbouncer are either not applied (in the case of a cluster create) -// or are removed (in the case of a pgo delete pgbouncer) -var PgBouncerUninstall bool - -func createPgbouncer(args []string, ns string) { - - if Selector == "" && len(args) == 0 { - fmt.Println("Error: The --selector flag is required.") - return - } - - request := msgs.CreatePgbouncerRequest{ - Args: args, - ClientVersion: msgs.PGO_VERSION, - CPURequest: PgBouncerCPURequest, - CPULimit: PgBouncerCPULimit, - MemoryRequest: PgBouncerMemoryRequest, - MemoryLimit: PgBouncerMemoryLimit, - Namespace: ns, - Replicas: PgBouncerReplicas, - Selector: Selector, - } - - if err := util.ValidateQuantity(request.CPURequest, "cpu"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(request.CPULimit, "cpu-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(request.MemoryRequest, 
"memory"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(request.MemoryLimit, "memory-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - response, err := api.CreatePgbouncer(httpclient, &SessionCredentials, &request) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(1) - } - - // this is slightly rewritten from the legacy method - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - - for _, v := range response.Results { - fmt.Println(v) - } - - os.Exit(1) - } - - for _, v := range response.Results { - fmt.Println(v) - } -} - -func deletePgbouncer(args []string, ns string) { - - if Selector == "" && len(args) == 0 { - fmt.Println("Error: The --selector flag or a cluster name is required.") - return - } - - // set up the API request - request := msgs.DeletePgbouncerRequest{ - Args: args, - ClientVersion: msgs.PGO_VERSION, - Selector: Selector, - Namespace: ns, - Uninstall: PgBouncerUninstall, - } - - response, err := api.DeletePgbouncer(httpclient, &SessionCredentials, &request) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for _, v := range response.Results { - fmt.Println(v) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} - -// makeShowPgBouncerInterface returns an interface slice of the available values -// in show pgbouncer -func makeShowPgBouncerInterface(values []msgs.ShowPgBouncerDetail) []interface{} { - // iterate through the list of values to make the interface - showPgBouncerInterface := make([]interface{}, len(values)) - - for i, value := range values { - showPgBouncerInterface[i] = value - } - - return showPgBouncerInterface -} - -// makeUpdatePgBouncerInterface returns an interface slice of the available values -// in show pgbouncer -func makeUpdatePgBouncerInterface(values []msgs.UpdatePgBouncerDetail) []interface{} { - // iterate through the list of values to make the interface - updatePgBouncerInterface := make([]interface{}, len(values)) - - for i, value := range values { - updatePgBouncerInterface[i] = value - } - - return updatePgBouncerInterface -} - -// printShowPgBouncerText prints out the information around each PostgreSQL -// cluster's pgBouncer -// printShowPgBouncerText renders a text response -func printShowPgBouncerText(response msgs.ShowPgBouncerResponse) { - // if the request errored, return the message here and exit with an error - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - // if no results returned, return an error - if len(response.Results) == 0 { - fmt.Println("Nothing found.") - return - } - - // make the interface for the pgbouncer clusters - showPgBouncerInterface := makeShowPgBouncerInterface(response.Results) - - // format the header - // start by setting up the different text paddings - padding := showPgBouncerTextPadding{ - ClusterName: getMaxLength(showPgBouncerInterface, headingCluster, "ClusterName"), - ClusterIP: getMaxLength(showPgBouncerInterface, headingClusterIP, "ServiceClusterIP"), - ExternalIP: getMaxLength(showPgBouncerInterface, headingExternalIP, "ServiceExternalIP"), - ServiceName: getMaxLength(showPgBouncerInterface, headingService, "ServiceName"), - Password: getMaxLength(showPgBouncerInterface, headingPassword, "Password"), - Username: getMaxLength(showPgBouncerInterface, headingUsername, "Username"), - } - - printShowPgBouncerTextHeader(padding) - - // iterate 
through the reuslts and print them out - for _, result := range response.Results { - printShowPgBouncerTextRow(result, padding) - } -} - -// printShowPgBouncerTextHeader prints out the header -func printShowPgBouncerTextHeader(padding showPgBouncerTextPadding) { - // print the header - fmt.Println("") - fmt.Printf("%s", util.Rpad(headingCluster, " ", padding.ClusterName)) - fmt.Printf("%s", util.Rpad(headingService, " ", padding.ServiceName)) - fmt.Printf("%s", util.Rpad(headingUsername, " ", padding.Username)) - fmt.Printf("%s", util.Rpad(headingPassword, " ", padding.Password)) - fmt.Printf("%s", util.Rpad(headingClusterIP, " ", padding.ClusterIP)) - fmt.Printf("%s", util.Rpad(headingExternalIP, " ", padding.ExternalIP)) - fmt.Println("") - - // print the layer below the header...which prints out a bunch of "-" that's - // 1 less than the padding value - fmt.Println( - strings.Repeat("-", padding.ClusterName-1), - strings.Repeat("-", padding.ServiceName-1), - strings.Repeat("-", padding.Username-1), - strings.Repeat("-", padding.Password-1), - strings.Repeat("-", padding.ClusterIP-1), - strings.Repeat("-", padding.ExternalIP-1), - ) -} - -// printShowPgBouncerTextRow prints a row of the text data -func printShowPgBouncerTextRow(result msgs.ShowPgBouncerDetail, padding showPgBouncerTextPadding) { - fmt.Printf("%s", util.Rpad(result.ClusterName, " ", padding.ClusterName)) - fmt.Printf("%s", util.Rpad(result.ServiceName, " ", padding.ServiceName)) - fmt.Printf("%s", util.Rpad(result.Username, " ", padding.Username)) - fmt.Printf("%s", util.Rpad(result.Password, " ", padding.Password)) - fmt.Printf("%s", util.Rpad(result.ServiceClusterIP, " ", padding.ClusterIP)) - fmt.Printf("%s", util.Rpad(result.ServiceExternalIP, " ", padding.ExternalIP)) - fmt.Println("") -} - -// printUpdatePgBouncerText prints out the information about how each pgBouncer -// updat efared after a request -// printShowPgBouncerText renders a text response -func printUpdatePgBouncerText(response msgs.UpdatePgBouncerResponse) { - // if the request errored, return the message here and exit with an error - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - // if no results returned, return an error - if len(response.Results) == 0 { - fmt.Println("Nothing found.") - return - } - - // make the interface for the pgbouncer clusters - updatePgBouncerInterface := makeUpdatePgBouncerInterface(response.Results) - - // format the header - // start by setting up the different text paddings - padding := updatePgBouncerTextPadding{ - ClusterName: getMaxLength(updatePgBouncerInterface, headingCluster, "ClusterName"), - ErrorMessage: getMaxLength(updatePgBouncerInterface, headingErrorMessage, "ErrorMessage"), - Status: len(headingStatus) + 1, - } - - printUpdatePgBouncerTextHeader(padding) - - // iterate through the reuslts and print them out - for _, result := range response.Results { - printUpdatePgBouncerTextRow(result, padding) - } -} - -// printUpdatePgBouncerTextHeader prints out the header -func printUpdatePgBouncerTextHeader(padding updatePgBouncerTextPadding) { - // print the header - fmt.Println("") - fmt.Printf("%s", util.Rpad(headingCluster, " ", padding.ClusterName)) - fmt.Printf("%s", util.Rpad(headingStatus, " ", padding.Status)) - fmt.Printf("%s", util.Rpad(headingErrorMessage, " ", padding.ErrorMessage)) - fmt.Println("") - - // print the layer below the header...which prints out a bunch of "-" that's - // 1 less than the padding value - fmt.Println( - 
strings.Repeat("-", padding.ClusterName-1), - strings.Repeat("-", padding.Status-1), - strings.Repeat("-", padding.ErrorMessage-1), - ) -} - -// printUpdatePgBouncerTextRow prints a row of the text data -func printUpdatePgBouncerTextRow(result msgs.UpdatePgBouncerDetail, padding updatePgBouncerTextPadding) { - // set the text-based status - status := "ok" - if result.Error { - status = "error" - } - - fmt.Printf("%s", util.Rpad(result.ClusterName, " ", padding.ClusterName)) - fmt.Printf("%s", util.Rpad(status, " ", padding.Status)) - fmt.Printf("%s", util.Rpad(result.ErrorMessage, " ", padding.ErrorMessage)) - fmt.Println("") -} - -// showPgBouncer prepares to make an API requests to display information about -// one or more pgBouncer deployments. "clusterNames" is an array of cluster -// names to iterate over -func showPgBouncer(namespace string, clusterNames []string) { - // first, determine if any arguments have been pass in - if len(clusterNames) == 0 && Selector == "" { - fmt.Println("Error: You must provide at least one cluster name, or use a selector with the `--selector` flag") - os.Exit(1) - } - - // next prepare the request! - request := msgs.ShowPgBouncerRequest{ - ClusterNames: clusterNames, - Namespace: namespace, - Selector: Selector, - } - - // and make the API request! - response, err := api.ShowPgBouncer(httpclient, &SessionCredentials, request) - - // if there is a bona-fide error, log and exit - if err != nil { - fmt.Println("Error:", err.Error()) - os.Exit(1) - } - - // great! now we can work on interpreting the results and outputting them - // per the user's desired output format - // render the next bit based on the output type - switch OutputFormat { - case "json": - printJSON(response) - default: - printShowPgBouncerText(response) - } -} - -// updatePgBouncer prepares to make an API requests to update information about -// a pgBouncer deployment in a cluster -// one or more pgBouncer deployments. "clusterNames" is an array of cluster -// names to iterate over -func updatePgBouncer(namespace string, clusterNames []string) { - // first, determine if any arguments have been pass in - if len(clusterNames) == 0 && Selector == "" { - fmt.Println("Error: You must provide at least one cluster name, or use a selector with the `--selector` flag") - os.Exit(1) - } - - // next prepare the request! - request := msgs.UpdatePgBouncerRequest{ - ClusterNames: clusterNames, - CPURequest: PgBouncerCPURequest, - CPULimit: PgBouncerCPULimit, - MemoryRequest: PgBouncerMemoryRequest, - MemoryLimit: PgBouncerMemoryLimit, - Namespace: namespace, - Replicas: PgBouncerReplicas, - RotatePassword: RotatePassword, - Selector: Selector, - } - - if err := util.ValidateQuantity(request.CPURequest, "cpu"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(request.CPULimit, "cpu-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(request.MemoryRequest, "memory"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - if err := util.ValidateQuantity(request.MemoryLimit, "memory-limit"); err != nil { - fmt.Println(err) - os.Exit(1) - } - - // and make the API request! - response, err := api.UpdatePgBouncer(httpclient, &SessionCredentials, request) - - // if there is a bona-fide error, log and exit - if err != nil { - fmt.Println("Error:", err.Error()) - os.Exit(1) - } - - // great! 
now we can work on interpreting the results and outputting them - // per the user's desired output format - // render the next bit based on the output type - switch OutputFormat { - case "json": - printJSON(response) - default: - printUpdatePgBouncerText(response) - } -} diff --git a/pgo/cmd/pgdump.go b/pgo/cmd/pgdump.go deleted file mode 100644 index 744ebe7274..0000000000 --- a/pgo/cmd/pgdump.go +++ /dev/null @@ -1,117 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" -) - -// createpgDumpBackup -func createpgDumpBackup(args []string, ns string) { - log.Debugf("createpgDumpBackup called %v %s", args, BackupOpts) - - request := new(msgs.CreatepgDumpBackupRequest) - request.Args = args - request.Namespace = ns - request.Selector = Selector - request.PVCName = PVCName - request.PGDumpDB = PGDumpDB - request.StorageConfig = StorageConfig - request.BackupOpts = BackupOpts - - response, err := api.CreatepgDumpBackup(httpclient, &SessionCredentials, request) - if err != nil { - fmt.Println("Error: ", err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - fmt.Println(response.Results[k]) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.Results) == 0 { - fmt.Println("No clusters found.") - return - } - -} - -// pgDump .... -func showpgDump(args []string, ns string) { - log.Debugf("showpgDump called %v", args) - - for _, v := range args { - response, err := api.ShowpgDump(httpclient, v, Selector, &SessionCredentials, ns) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.BackupList.Items) == 0 { - fmt.Println("No pgDumps found for " + v + ".") - return - } - - log.Debugf("response = %v", response) - log.Debugf("len of items = %d", len(response.BackupList.Items)) - - for _, backup := range response.BackupList.Items { - printDumpCRD(&backup) - } - } -} - -// printBackrest -func printpgDump(result *msgs.ShowpgDumpDetail) { - fmt.Printf("%s%s\n", "", "") - fmt.Printf("%s%s\n", "", "pgDump : "+result.Name) - fmt.Printf("%s%s\n", "", result.Info) - -} - -// printBackupCRD ... 
-func printDumpCRD(result *msgs.Pgbackup) { - fmt.Printf("%s%s\n", "", "") - fmt.Printf("%s%s\n", "", "pgdump : "+result.Name) - - fmt.Printf("%s%s\n", TreeBranch, "PVC Name:\t"+result.BackupPVC) - fmt.Printf("%s%s\n", TreeBranch, "Access Mode:\t"+result.StorageSpec.AccessMode) - fmt.Printf("%s%s\n", TreeBranch, "PVC Size:\t"+result.StorageSpec.Size) - fmt.Printf("%s%s\n", TreeBranch, "Creation:\t"+result.CreationTimestamp) - fmt.Printf("%s%s\n", TreeBranch, "CCPImageTag:\t"+result.CCPImageTag) - fmt.Printf("%s%s\n", TreeBranch, "Backup Status:\t"+result.BackupStatus) - fmt.Printf("%s%s\n", TreeBranch, "Backup Host:\t"+result.BackupHost) - fmt.Printf("%s%s\n", TreeBranch, "Backup User Secret:\t"+result.BackupUserSecret) - fmt.Printf("%s%s\n", TreeTrunk, "Backup Port:\t"+result.BackupPort) - fmt.Printf("%s%s\n", TreeTrunk, "Backup Opts:\t"+result.BackupOpts) - -} diff --git a/pgo/cmd/pgorole.go b/pgo/cmd/pgorole.go deleted file mode 100644 index 171b03e401..0000000000 --- a/pgo/cmd/pgorole.go +++ /dev/null @@ -1,168 +0,0 @@ -package cmd - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "os" -) - -func updatePgorole(args []string, ns string) { - var err error - - if Permissions == "" { - fmt.Println("Error: --permissions flag is required.") - return - } - - if len(args) == 0 { - fmt.Println("Error: A pgorole name argument is required.") - return - } - - r := new(msgs.UpdatePgoroleRequest) - r.PgoroleName = args[0] - r.Namespace = ns - r.ChangePermissions = PgoroleChangePermissions - r.PgorolePermissions = Permissions - r.ClientVersion = msgs.PGO_VERSION - - response, err := api.UpdatePgorole(httpclient, &SessionCredentials, r) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - fmt.Println("pgorole updated ") - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} - -func showPgorole(args []string, ns string) { - - r := new(msgs.ShowPgoroleRequest) - r.PgoroleName = args - r.Namespace = ns - r.AllFlag = AllFlag - r.ClientVersion = msgs.PGO_VERSION - - if len(args) == 0 && !AllFlag { - fmt.Println("Error: either a pgorole name or --all flag is required") - os.Exit(2) - } - - response, err := api.ShowPgorole(httpclient, &SessionCredentials, r) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.RoleInfo) == 0 { - fmt.Println("No pgoroles found.") - return - } - - log.Debugf("response = %v", response) - - for _, pgorole := range response.RoleInfo { - fmt.Println("") - fmt.Println("pgorole : " + pgorole.Name) - fmt.Println("permissions : " + pgorole.Permissions) - } - -} - -func createPgorole(args []string, ns string) { - - if Permissions == 
"" { - fmt.Println("Error: permissions flag is required.") - return - } - - if len(args) == 0 { - fmt.Println("Error: A pgorole name argument is required.") - return - } - var err error - //create the request - r := new(msgs.CreatePgoroleRequest) - r.PgoroleName = args[0] - r.PgorolePermissions = Permissions - r.Namespace = ns - r.ClientVersion = msgs.PGO_VERSION - - response, err := api.CreatePgorole(httpclient, &SessionCredentials, r) - - log.Debugf("response is %v", response) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - fmt.Println("Created pgorole.") - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} - -func deletePgorole(args []string, ns string) { - - log.Debugf("deletePgorole called %v", args) - - r := msgs.DeletePgoroleRequest{} - r.PgoroleName = args - r.AllFlag = AllFlag - r.ClientVersion = msgs.PGO_VERSION - r.Namespace = ns - - if AllFlag { - args = make([]string, 1) - args[0] = "all" - } - - log.Debugf("deleting pgorole %v", args) - - response, err := api.DeletePgorole(httpclient, &r, &SessionCredentials) - if err != nil { - fmt.Println("Error: " + err.Error()) - } - - if response.Status.Code == msgs.Ok { - for _, v := range response.Results { - fmt.Println(v) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - } - -} diff --git a/pgo/cmd/pgouser.go b/pgo/cmd/pgouser.go deleted file mode 100644 index 2ccc72d970..0000000000 --- a/pgo/cmd/pgouser.go +++ /dev/null @@ -1,187 +0,0 @@ -package cmd - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "os" -) - -func updatePgouser(args []string, ns string) { - var err error - - if len(args) == 0 { - fmt.Println("Error: A pgouser name argument is required.") - return - } - - if PgouserNamespaces != "" && AllNamespaces { - fmt.Println("Error: pgouser-namespaces flag and --all-namespaces flag are mutually exclusive, choose one or the other.") - return - } - - r := new(msgs.UpdatePgouserRequest) - r.PgouserName = args[0] - r.Namespace = ns - r.PgouserNamespaces = PgouserNamespaces - r.AllNamespaces = AllNamespaces - r.PgouserPassword = PgouserPassword - r.PgouserRoles = PgouserRoles - r.ClientVersion = msgs.PGO_VERSION - - response, err := api.UpdatePgouser(httpclient, &SessionCredentials, r) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - fmt.Println("pgouser updated ") - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} - -func showPgouser(args []string, ns string) { - - r := new(msgs.ShowPgouserRequest) - r.PgouserName = args - r.Namespace = ns - r.AllFlag = AllFlag - r.ClientVersion = msgs.PGO_VERSION - - if len(args) == 0 && !AllFlag { - fmt.Println("Error: either a pgouser name or --all flag is required") - os.Exit(2) - } - - response, err := api.ShowPgouser(httpclient, &SessionCredentials, r) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.UserInfo) == 0 { - fmt.Println("No pgousers found.") - return - } - - log.Debugf("response = %v", response) - - for _, pgouser := range response.UserInfo { - fmt.Println("") - fmt.Println("pgouser : " + pgouser.Username) - fmt.Printf("roles : %v\n", pgouser.Role) - fmt.Printf("namespaces : %v\n", pgouser.Namespace) - } - -} - -func createPgouser(args []string, ns string) { - - if PgouserPassword == "" { - fmt.Println("Error: pgouser-password flag is required.") - return - } - if PgouserRoles == "" { - fmt.Println("Error: pgouser-roles flag is required.") - return - } - if PgouserNamespaces == "" && !AllNamespaces { - fmt.Println("Error: pgouser-namespaces flag or --all-namespaces flag is required.") - return - } - - if PgouserNamespaces != "" && AllNamespaces { - fmt.Println("Error: pgouser-namespaces flag and --all-namespaces flag are mutually exclusive, choose one or the other.") - return - } - - if len(args) == 0 { - fmt.Println("Error: A pgouser username argument is required.") - return - } - var err error - //create the request - r := new(msgs.CreatePgouserRequest) - r.PgouserName = args[0] - r.PgouserPassword = PgouserPassword - r.AllNamespaces = AllNamespaces - r.PgouserRoles = PgouserRoles - r.PgouserNamespaces = PgouserNamespaces - r.Namespace = ns - r.ClientVersion = msgs.PGO_VERSION - - response, err := api.CreatePgouser(httpclient, &SessionCredentials, r) - - log.Debugf("response is %v", response) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - fmt.Println("Created pgouser.") - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} - -func deletePgouser(args []string, ns string) { - - log.Debugf("deletePgouser called %v", args) - - r := msgs.DeletePgouserRequest{} - r.PgouserName = args - r.AllFlag = AllFlag - 
r.ClientVersion = msgs.PGO_VERSION - r.Namespace = ns - - if AllFlag { - args = make([]string, 1) - args[0] = "all" - } - - log.Debugf("deleting pgouser %v", args) - - response, err := api.DeletePgouser(httpclient, &r, &SessionCredentials) - if err != nil { - fmt.Println("Error: " + err.Error()) - } - - if response.Status.Code == msgs.Ok { - for _, v := range response.Results { - fmt.Println(v) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - } - -} diff --git a/pgo/cmd/policy.go b/pgo/cmd/policy.go deleted file mode 100644 index 3afa306923..0000000000 --- a/pgo/cmd/policy.go +++ /dev/null @@ -1,238 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" - "io/ioutil" - "os" -) - -var applyCmd = &cobra.Command{ - Use: "apply", - Short: "Apply a policy", - Long: `APPLY allows you to apply a Policy to a set of clusters. For example: - - pgo apply mypolicy1 --selector=name=mycluster - pgo apply mypolicy1 --selector=someotherpolicy - pgo apply mypolicy1 --selector=someotherpolicy --dry-run`, - Run: func(cmd *cobra.Command, args []string) { - log.Debug("apply called") - - if Namespace == "" { - Namespace = PGONamespace - } - - if Selector == "" { - fmt.Println("Error: Selector is required to apply a policy.") - return - } - if len(args) == 0 { - fmt.Println("Error: You must specify the name of a policy to apply.") - } else { - applyPolicy(args, Namespace) - } - }, -} - -func init() { - RootCmd.AddCommand(applyCmd) - - applyCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - applyCmd.Flags().BoolVarP(&DryRun, "dry-run", "", false, "Shows the clusters that the label would be applied to, without labelling them.") - -} - -func applyPolicy(args []string, ns string) { - var err error - - if len(args) == 0 { - fmt.Println("Error: A policy name argument is required.") - return - } - - if Selector == "" { - fmt.Println("Error: The --selector flag is required.") - return - } - - r := new(msgs.ApplyPolicyRequest) - r.Name = args[0] - r.Selector = Selector - r.Namespace = ns - r.DryRun = DryRun - r.ClientVersion = msgs.PGO_VERSION - - response, err := api.ApplyPolicy(httpclient, &SessionCredentials, r) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if DryRun { - fmt.Println("The label would have been applied on the following:") - } - - if response.Status.Code == msgs.Ok { - if len(response.Name) == 0 { - fmt.Println("No clusters found.") - } else { - for _, v := range response.Name { - fmt.Println("Applied policy on " + v) - } - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} -func showPolicy(args []string, ns string) { - - r := new(msgs.ShowPolicyRequest) - r.Selector = Selector - r.Namespace = ns - r.AllFlag = AllFlag 
- r.ClientVersion = msgs.PGO_VERSION - - if len(args) == 0 && AllFlag { - args = []string{""} - } - - for _, v := range args { - r.Policyname = v - - response, err := api.ShowPolicy(httpclient, &SessionCredentials, r) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.PolicyList.Items) == 0 { - fmt.Println("No policies found.") - return - } - - log.Debugf("response = %v", response) - - for _, policy := range response.PolicyList.Items { - fmt.Println("") - fmt.Println("policy : " + policy.Spec.Name) - fmt.Println(TreeBranch + "url : " + policy.Spec.URL) - fmt.Println(TreeBranch + "status : " + policy.Spec.Status) - fmt.Println(TreeTrunk + "sql : " + policy.Spec.SQL) - } - } - -} - -func createPolicy(args []string, ns string) { - - if len(args) == 0 { - fmt.Println("Error: A poliicy name argument is required.") - return - } - var err error - //create the request - r := new(msgs.CreatePolicyRequest) - r.Name = args[0] - r.Namespace = ns - r.ClientVersion = msgs.PGO_VERSION - - if PolicyURL != "" { - r.URL = PolicyURL - } - if PolicyFile != "" { - r.SQL, err = getPolicyString(PolicyFile) - - if err != nil { - fmt.Println("Error: ", err) - return - } - } - - response, err := api.CreatePolicy(httpclient, &SessionCredentials, r) - - log.Debugf("response is %v", response) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - fmt.Println("Created policy.") - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} - -func getPolicyString(filename string) (string, error) { - var err error - var buf []byte - - buf, err = ioutil.ReadFile(filename) - if err != nil { - return "", err - } - return string(buf), err -} - -func deletePolicy(args []string, ns string) { - - log.Debugf("deletePolicy called %v", args) - - r := msgs.DeletePolicyRequest{} - r.Selector = Selector - r.AllFlag = AllFlag - r.ClientVersion = msgs.PGO_VERSION - r.Namespace = ns - if AllFlag { - args = make([]string, 1) - args[0] = "all" - } - - for _, arg := range args { - r.PolicyName = arg - log.Debugf("deleting policy %s", arg) - - response, err := api.DeletePolicy(httpclient, &r, &SessionCredentials) - if err != nil { - fmt.Println("Error: " + err.Error()) - } - - if response.Status.Code == msgs.Ok { - for _, v := range response.Results { - fmt.Println(v) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - } - - } -} diff --git a/pgo/cmd/pvc.go b/pgo/cmd/pvc.go deleted file mode 100644 index 47bc481cb0..0000000000 --- a/pgo/cmd/pvc.go +++ /dev/null @@ -1,77 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "os" -) - -func showPVC(args []string, ns string) { - log.Debugf("showPVC called %v", args) - - // ShowPVCRequest ... - r := msgs.ShowPVCRequest{} - r.Namespace = ns - r.AllFlag = AllFlag - r.ClientVersion = msgs.PGO_VERSION - - if AllFlag { - //special case to just list all the PVCs - r.ClusterName = "" - printPVC(&r) - } else { - //args are a list of pvc names...for this case show details - for _, arg := range args { - r.ClusterName = arg - log.Debugf("show pvc called for %s", arg) - printPVC(&r) - } - } - -} - -func printPVC(r *msgs.ShowPVCRequest) { - - response, err := api.ShowPVC(httpclient, r, &SessionCredentials) - - log.Debugf("response = %v", response) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Error { - fmt.Println("Error: " + response.Status.Msg) - return - } - - if len(response.Results) == 0 { - fmt.Println("No PVC Results") - return - } - - fmt.Printf("%-20s\t%-30s\n", "Cluster Name", "PVC Name") - - for _, v := range response.Results { - fmt.Printf("%-20s\t%-30s\n", v.ClusterName, v.PVCName) - } - -} diff --git a/pgo/cmd/reload.go b/pgo/cmd/reload.go deleted file mode 100644 index cf6c5b647c..0000000000 --- a/pgo/cmd/reload.go +++ /dev/null @@ -1,99 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -//unused but coming soon to a theatre near you -var ConfigMapName string - -var reloadCmd = &cobra.Command{ - Use: "reload", - Short: "Perform a cluster reload", - Long: `RELOAD performs a PostgreSQL reload on a cluster or set of clusters. For example: - - pgo reload mycluster`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("reload called") - if len(args) == 0 && Selector == "" { - fmt.Println(`Error: You must specify the cluster to reload or specify a selector flag.`) - } else { - if util.AskForConfirmation(NoPrompt, "") { - reload(args, Namespace) - } else { - fmt.Println("Aborting...") - } - } - - }, -} - -func init() { - RootCmd.AddCommand(reloadCmd) - - reloadCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - reloadCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") - -} - -// reload .... 
-func reload(args []string, ns string) { - log.Debugf("reload called %v", args) - - request := new(msgs.ReloadRequest) - request.Args = args - request.Selector = Selector - request.Namespace = ns - response, err := api.Reload(httpclient, &SessionCredentials, request) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - fmt.Println(response.Results[k]) - } - } else { - // print the error message as well as the results, since the reload might have succeeded - // for certain clusters specified, but not for others - fmt.Println(response.Status.Msg) - for k := range response.Results { - fmt.Println(response.Results[k]) - } - os.Exit(2) - } - - if len(response.Results) == 0 { - fmt.Println("No clusters found.") - return - } - -} diff --git a/pgo/cmd/restart.go b/pgo/cmd/restart.go deleted file mode 100644 index 5303466f9b..0000000000 --- a/pgo/cmd/restart.go +++ /dev/null @@ -1,183 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -var restartCmd = &cobra.Command{ - Use: "restart", - Short: "Restarts the PostgrSQL database within a PostgreSQL cluster", - Long: `Restarts one or more PostgreSQL databases within a PostgreSQL cluster. - - For example, to restart the primary and all replicas: - pgo restart mycluster - - Or target a specific instance within the cluster: - pgo restart mycluster --target=mycluster-abcd - - And use the 'query' flag obtain a list of all instances within the cluster: - pgo restart mycluster --query`, - Run: func(cmd *cobra.Command, args []string) { - - if OutputFormat != "" { - if OutputFormat != "json" { - fmt.Println("Error: ", "json is the only supported --output format value") - os.Exit(2) - } - } - - if Namespace == "" { - Namespace = PGONamespace - } - - if len(args) == 0 { - fmt.Println(`Error: You must specify the cluster to restart.`) - } else { - switch { - case Query: - queryRestart(args, Namespace) - case len(args) > 1: - fmt.Println("Error: a single cluster must be specified when performing a restart") - case util.AskForConfirmation(NoPrompt, ""): - restart(args[0], Namespace) - default: - fmt.Println("Aborting...") - } - } - }, -} - -func init() { - - RootCmd.AddCommand(restartCmd) - - restartCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") - restartCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", `The output format. 
Supported types are: "json"`) - restartCmd.Flags().BoolVarP(&Query, "query", "", false, "Prints the list of instances that can be restarted.") - restartCmd.Flags().StringArrayVarP(&Targets, "target", "", []string{}, "The instance that will be restarted.") -} - -// restart sends a request to restart a PG cluster or one or more instances within it. -func restart(clusterName, namespace string) { - - log.Debugf("restart called %v", clusterName) - - request := new(msgs.RestartRequest) - request.Namespace = namespace - request.ClusterName = clusterName - request.Targets = Targets - request.ClientVersion = msgs.PGO_VERSION - - response, err := api.Restart(httpclient, &SessionCredentials, request) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(1) - } - - if OutputFormat == "json" { - b, err := json.MarshalIndent(response, "", " ") - if err != nil { - fmt.Println("Error: ", err) - } - fmt.Println(string(b)) - - if response.Status.Code != msgs.Ok { - os.Exit(1) - } - return - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - for _, instance := range response.Result.Instances { - if instance.Error { - fmt.Printf("Error restarting instance %s: %s\n", instance.InstanceName, instance.ErrorMessage) - continue - } - fmt.Printf("Successfully restarted instance %s\n", instance.InstanceName) - } -} - -// queryRestart is called when the "--query" flag is specified, and displays a list of all -// instances (the primary and all replicas) within a cluster. This is useful when the user -// would like to specify one or more instances for a restart using the "--target" flag. -func queryRestart(args []string, namespace string) { - - log.Debugf("queryRestart called %v", args) - - for _, clusterName := range args { - response, err := api.QueryRestart(httpclient, clusterName, &SessionCredentials, namespace) - if err != nil { - fmt.Println("\nError: " + err.Error()) - continue - } - - if response.Status.Code != msgs.Ok { - fmt.Println("\nError: " + response.Status.Msg) - continue - } - - if OutputFormat == "json" { - b, err := json.MarshalIndent(response, "", " ") - if err != nil { - fmt.Println("Error: ", err) - } - fmt.Println(string(b)) - return - } - - // indicate whether or not a standby cluster - if !response.Standby { - fmt.Printf("\nCluster: %s\n", clusterName) - } else { - fmt.Printf("\nCluster (standby): %s\n", clusterName) - } - - // output the information about each instance - fmt.Printf("%-20s\t%-10s\t%-10s\t%-10s\t%-20s\t%s\n", "INSTANCE", "ROLE", "STATUS", "NODE", - "REPLICATION LAG", "PENDING RESTART") - - for i := 0; i < len(response.Results); i++ { - instance := response.Results[i] - - log.Debugf("postgresql instance: %v", instance) - - if instance.ReplicationLag != -1 { - fmt.Printf("%-20s\t%-10s\t%-10s\t%-10s\t%12d %-7s\t%15t\n", - instance.Name, instance.Role, instance.Status, instance.Node, instance.ReplicationLag, "MB", - instance.PendingRestart) - } else { - fmt.Printf("%-20s\t%-10s\t%-10s\t%-10s\t%15s\t%23t\n", - instance.Name, instance.Role, instance.Status, instance.Node, "unknown", - instance.PendingRestart) - } - } - } -} diff --git a/pgo/cmd/restore.go b/pgo/cmd/restore.go deleted file mode 100644 index bafe7418e3..0000000000 --- a/pgo/cmd/restore.go +++ /dev/null @@ -1,127 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. 
- Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/pgo/api" - pgoutil "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -var PITRTarget string -var BackupPath, BackupPVC string - -var restoreCmd = &cobra.Command{ - Use: "restore", - Short: "Perform a restore from previous backup", - Long: `RESTORE performs a restore to a new PostgreSQL cluster. This includes stopping the database and recreating a new primary with the restored data. Valid backup types to restore from are pgbackrest and pgdump. For example: - - pgo restore mycluster `, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("restore called") - if len(args) == 0 { - fmt.Println(`Error: You must specify the cluster name to restore from.`) - } else { - if BackupType == "" || BackupType == config.LABEL_BACKUP_TYPE_BACKREST { - fmt.Println("If currently running, the primary database in this cluster will be stopped and recreated as part of this workflow!") - } - if pgoutil.AskForConfirmation(NoPrompt, "") { - restore(args, Namespace) - } else { - fmt.Println("Aborting...") - } - } - - }, -} - -func init() { - RootCmd.AddCommand(restoreCmd) - - restoreCmd.Flags().StringVarP(&BackupOpts, "backup-opts", "", "", "The restore options for pgbackrest or pgdump.") - restoreCmd.Flags().StringVarP(&PITRTarget, "pitr-target", "", "", "The PITR target, being a PostgreSQL timestamp such as '2018-08-13 11:25:42.582117-04'.") - restoreCmd.Flags().StringVarP(&NodeLabel, "node-label", "", "", "The node label (key=value) to use when scheduling "+ - "the restore job, and in the case of a pgBackRest restore, also the new (i.e. restored) primary deployment. If not set, any node is used.") - restoreCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") - restoreCmd.Flags().StringVarP(&BackupPVC, "backup-pvc", "", "", "The PVC containing the pgdump to restore from.") - restoreCmd.Flags().StringVarP(&PGDumpDB, "pgdump-database", "d", "postgres", "The name of the database pgdump will restore.") - restoreCmd.Flags().StringVarP(&BackupType, "backup-type", "", "", "The type of backup to restore from, default is pgbackrest. Valid types are pgbackrest or pgdump.") - restoreCmd.Flags().StringVarP(&BackrestStorageType, "pgbackrest-storage-type", "", "", "The type of storage to use for a pgBackRest restore. Either \"local\", \"s3\". (default \"local\")") -} - -// restore .... -func restore(args []string, ns string) { - log.Debugf("restore called %v", args) - - var response msgs.RestoreResponse - var err error - - // use different request message, depending on type. 
- if BackupType == "pgdump" { - - request := new(msgs.PgRestoreRequest) - request.Namespace = ns - request.FromCluster = args[0] - request.RestoreOpts = BackupOpts - request.PITRTarget = PITRTarget - request.FromPVC = BackupPVC // use PVC specified on command line for pgrestore - request.PGDumpDB = PGDumpDB - request.NodeLabel = NodeLabel - - response, err = api.RestoreDump(httpclient, &SessionCredentials, request) - } else { - - request := new(msgs.RestoreRequest) - request.Namespace = ns - request.FromCluster = args[0] - request.RestoreOpts = BackupOpts - request.PITRTarget = PITRTarget - request.NodeLabel = NodeLabel - request.BackrestStorageType = BackrestStorageType - - response, err = api.Restore(httpclient, &SessionCredentials, request) - } - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - fmt.Println(response.Results[k]) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.Results) == 0 { - fmt.Println("No clusters found.") - return - } - -} diff --git a/pgo/cmd/root.go b/pgo/cmd/root.go deleted file mode 100644 index 075a9f13e8..0000000000 --- a/pgo/cmd/root.go +++ /dev/null @@ -1,124 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - "runtime" - "strconv" - - "github.com/fatih/color" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -// RootCmd represents the base command when called without any subcommands -var RootCmd = &cobra.Command{ - Use: "pgo", - Short: "The pgo command line interface.", - Long: `The pgo command line interface lets you create and manage PostgreSQL clusters.`, - // Uncomment the following line if your bare application - // has an action associated with it: - // Run: func(cmd *cobra.Command, args []string) { }, -} - -// Execute adds all child commands to the root command sets flags appropriately. -// This is called by main.main(). It only needs to happen once to the rootCmd. 
-func Execute() { - fmt.Println("Execute called") - - if err := RootCmd.Execute(); err != nil { - log.Debug(err.Error()) - os.Exit(-1) - } - -} - -func init() { - - cobra.OnInitialize(initConfig) - log.Debug("init called") - GREEN = color.New(color.FgGreen).SprintFunc() - YELLOW = color.New(color.FgYellow).SprintFunc() - RED = color.New(color.FgRed).SprintFunc() - - // Go currently guarantees an error when attempting to load OS_TRUST for - // windows-based systems (see https://golang.org/issue/16736) - defExclOSTrust := (runtime.GOOS == "windows") - - RootCmd.PersistentFlags().StringVarP(&Namespace, "namespace", "n", "", "The namespace to use for pgo requests.") - RootCmd.PersistentFlags().StringVar(&APIServerURL, "apiserver-url", "", "The URL for the PostgreSQL Operator apiserver that will process the request from the pgo client.") - RootCmd.PersistentFlags().StringVar(&PGO_CA_CERT, "pgo-ca-cert", "", "The CA Certificate file path for authenticating to the PostgreSQL Operator apiserver.") - RootCmd.PersistentFlags().StringVar(&PGO_CLIENT_KEY, "pgo-client-key", "", "The Client Key file path for authenticating to the PostgreSQL Operator apiserver.") - RootCmd.PersistentFlags().StringVar(&PGO_CLIENT_CERT, "pgo-client-cert", "", "The Client Certificate file path for authenticating to the PostgreSQL Operator apiserver.") - RootCmd.PersistentFlags().BoolVar(&PGO_DISABLE_TLS, "disable-tls", false, "Disable TLS authentication to the Postgres Operator.") - RootCmd.PersistentFlags().BoolVar(&EXCLUDE_OS_TRUST, "exclude-os-trust", defExclOSTrust, "Exclude CA certs from OS default trust store") - RootCmd.PersistentFlags().BoolVar(&DebugFlag, "debug", false, "Enable additional output for debugging.") - -} - -func initConfig() { - if DebugFlag { - log.SetLevel(log.DebugLevel) - log.Debug("debug flag is set to true") - } - - if APIServerURL == "" { - APIServerURL = os.Getenv("PGO_APISERVER_URL") - if APIServerURL == "" { - fmt.Println("Error: The PGO_APISERVER_URL environment variable or the --apiserver-url flag needs to be supplied.") - os.Exit(-1) - } - } - log.Debugf("in initConfig with url=%s", APIServerURL) - - tmp := os.Getenv("PGO_NAMESPACE") - if tmp != "" { - PGONamespace = tmp - log.Debugf("using PGO_NAMESPACE env var %s", tmp) - } - - // Get the pgouser and password information - SetSessionUserCredentials() - - // Setup the API HTTP client based on TLS enablement - if noTLS, _ := strconv.ParseBool(os.Getenv("DISABLE_TLS")); noTLS || PGO_DISABLE_TLS { - log.Debug("setting up httpclient without TLS") - httpclient = NewAPIClient() - } else { - log.Debug("setting up httpclient with TLS") - if hc, err := NewAPIClientTLS(); err != nil { - log.Fatalf("failed to set up TLS client: %s", err) - } else { - httpclient = hc - } - } - - if os.Getenv("GENERATE_BASH_COMPLETION") != "" { - generateBashCompletion() - } -} - -func generateBashCompletion() { - log.Debugf("generating bash completion script") - file, err2 := os.Create("/tmp/pgo-bash-completion.out") - if err2 != nil { - fmt.Println("Error: ", err2.Error()) - } - defer file.Close() - RootCmd.GenBashCompletion(file) -} diff --git a/pgo/cmd/scale.go b/pgo/cmd/scale.go deleted file mode 100644 index 831ec2ddf9..0000000000 --- a/pgo/cmd/scale.go +++ /dev/null @@ -1,88 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -var ReplicaCount int - -var scaleCmd = &cobra.Command{ - Use: "scale", - Short: "Scale a PostgreSQL cluster", - Long: `The scale command allows you to adjust a Cluster's replica configuration. For example: - - pgo scale mycluster --replica-count=1`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("scale called") - - if len(args) == 0 { - fmt.Println(`Error: You must specify the clusters to scale.`) - } else { - if util.AskForConfirmation(NoPrompt, "") { - } else { - fmt.Println("Aborting...") - os.Exit(2) - } - scaleCluster(args, Namespace) - } - }, -} - -func init() { - RootCmd.AddCommand(scaleCmd) - - scaleCmd.Flags().StringVarP(&ServiceType, "service-type", "", "", "The service type to use in the replica Service. If not set, the default in pgo.yaml will be used.") - scaleCmd.Flags().StringVarP(&CCPImageTag, "ccp-image-tag", "", "", "The CCPImageTag to use for cluster creation. If specified, overrides the .pgo.yaml setting.") - scaleCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") - scaleCmd.Flags().IntVarP(&ReplicaCount, "replica-count", "", 1, "The replica count to apply to the clusters.") - scaleCmd.Flags().StringVarP(&StorageConfig, "storage-config", "", "", "The name of a Storage config in pgo.yaml to use for the replica storage.") - scaleCmd.Flags().StringVarP(&NodeLabel, "node-label", "", "", "The node label (key) to use in placing the replica database. If not set, any node is used.") -} - -func scaleCluster(args []string, ns string) { - - for _, arg := range args { - log.Debugf(" %s ReplicaCount is %d", arg, ReplicaCount) - response, err := api.ScaleCluster(httpclient, arg, ReplicaCount, - StorageConfig, NodeLabel, CCPImageTag, ServiceType, &SessionCredentials, ns) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for _, v := range response.Results { - fmt.Println(v) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - } - - } -} diff --git a/pgo/cmd/scaledown.go b/pgo/cmd/scaledown.go deleted file mode 100644 index 20241c3d46..0000000000 --- a/pgo/cmd/scaledown.go +++ /dev/null @@ -1,160 +0,0 @@ -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -var scaledownCmd = &cobra.Command{ - Use: "scaledown", - Short: "Scale down a PostgreSQL cluster", - Long: `The scale command allows you to scale down a Cluster's replica configuration. For example: - - To list targetable replicas: - pgo scaledown mycluster --query - - To scale down a specific replica: - pgo scaledown mycluster --target=mycluster-replica-xxxx`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("scaledown called") - - if len(args) == 0 { - fmt.Println(`Error: You must specify the clusters to scale down.`) - } else { - if Query { - queryCluster(args, Namespace) - } else { - if util.AskForConfirmation(NoPrompt, "") { - } else { - fmt.Println("Aborting...") - os.Exit(2) - } - scaleDownCluster(args[0], Namespace) - } - } - }, -} - -func init() { - RootCmd.AddCommand(scaledownCmd) - - scaledownCmd.Flags().BoolVarP(&Query, "query", "", false, "Prints the list of targetable replica candidates.") - scaledownCmd.Flags().StringVarP(&Target, "target", "", "", "The replica to target for scaling down") - scaledownCmd.Flags().BoolVarP(&DeleteData, "delete-data", "d", true, - "Causes the data for the scaled down replica to be removed permanently.") - scaledownCmd.Flags().MarkDeprecated("delete-data", "Data is deleted by default.") - scaledownCmd.Flags().BoolVar(&KeepData, "keep-data", false, - "Causes data for the scale down replica to *not* be deleted") - scaledownCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") -} - -// queryCluster is a helper function that returns information about the -// available replicas that can be scaled down. This is called when the "--query" -// flag is specified -func queryCluster(args []string, ns string) { - - // iterate through the clusters and output information about each one - for _, arg := range args { - - // call the API - response, err := api.ScaleQuery(httpclient, arg, &SessionCredentials, ns) - - // indicate which cluster this is. 
Put a newline before to put some - // separation between each line - if !response.Standby { - fmt.Printf("\nCluster: %s\n", arg) - } else { - fmt.Printf("\nCluster (standby): %s\n", arg) - } - - // If the API returns in error, just bail out here - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - // If there is a controlled error, output the message here and continue - // to iterate through the list - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - continue - } - - // If there are no replicas found for this cluster, indicate so, and - // continue to iterate through the list - if len(response.Results) == 0 { - fmt.Println("No replicas found.") - continue - } - - // output the information about each instance - fmt.Printf("%-20s\t%-10s\t%-10s\t%-20s\t%s\n", "REPLICA", "STATUS", "NODE", "REPLICATION LAG", - "PENDING RESTART") - - for i := 0; i < len(response.Results); i++ { - instance := response.Results[i] - - log.Debugf("postgresql instance: %v", instance) - - if instance.ReplicationLag != -1 { - fmt.Printf("%-20s\t%-10s\t%-10s\t%12d %-7s\t%15t\n", - instance.Name, instance.Status, instance.Node, instance.ReplicationLag, "MB", - instance.PendingRestart) - } else { - fmt.Printf("%-20s\t%-10s\t%-10s\t%15s\t%23t\n", - instance.Name, instance.Status, instance.Node, "unknown", - instance.PendingRestart) - } - } - } -} - -func scaleDownCluster(clusterName, ns string) { - - // determine if the data should be deleted. The modern flag for handling this - // is "KeepData" which defaults to "false". We will honor the "DeleteData" - // flag (which defaults to "true"), but this will be removed in a future - // release - deleteData := !KeepData && DeleteData - - response, err := api.ScaleDownCluster(httpclient, clusterName, Target, deleteData, - &SessionCredentials, ns) - - if err != nil { - fmt.Println("Error: ", err.Error()) - return - } - - if response.Status.Code == msgs.Ok { - for _, v := range response.Results { - fmt.Println(v) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - } - -} diff --git a/pgo/cmd/schedule.go b/pgo/cmd/schedule.go deleted file mode 100644 index 5413001efb..0000000000 --- a/pgo/cmd/schedule.go +++ /dev/null @@ -1,213 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" - "os" - "strings" - - "github.com/crunchydata/postgres-operator/pgo-scheduler/scheduler" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - - "github.com/crunchydata/postgres-operator/pgo/api" -) - -type schedule struct { - schedule string - scheduleType string - pvcName string - backrestType string - backrestStorageType string - clusterName string - selector string - policy string - database string -} - -func createSchedule(args []string, ns string) { - log.Debugf("createSchedule called %v", args) - - var clusterName string - if len(args) > 0 { - clusterName = args[0] - } - - s := schedule{ - clusterName: clusterName, - backrestType: PGBackRestType, - backrestStorageType: BackrestStorageType, - pvcName: PVCName, - schedule: Schedule, - selector: Selector, - scheduleType: ScheduleType, - policy: SchedulePolicy, - database: ScheduleDatabase, - } - - err := s.validateSchedule() - if err != nil { - fmt.Println(err) - return - } - - r := &msgs.CreateScheduleRequest{ - ClusterName: clusterName, - PGBackRestType: PGBackRestType, - BackrestStorageType: BackrestStorageType, - PVCName: PVCName, - ScheduleOptions: ScheduleOptions, - Schedule: Schedule, - Selector: Selector, - ScheduleType: strings.ToLower(ScheduleType), - PolicyName: SchedulePolicy, - Database: ScheduleDatabase, - Secret: ScheduleSecret, - Namespace: ns, - } - - response, err := api.CreateSchedule(httpclient, &SessionCredentials, r) - - if err != nil { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - fmt.Println(response.Results[k]) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.Results) == 0 { - fmt.Println("No clusters found.") - return - } - -} - -func deleteSchedule(args []string, ns string) { - log.Debugf("deleteSchedule called %v", args) - - if len(args) == 0 && Selector == "" && ScheduleName == "" { - fmt.Println("Error: cluster name, schedule name or selector is required to delete a schedule.") - return - } - - var clusterName string - if len(args) > 0 { - clusterName = args[0] - } - - r := &msgs.DeleteScheduleRequest{ - ClusterName: clusterName, - ScheduleName: ScheduleName, - Selector: Selector, - Namespace: ns, - } - - response, err := api.DeleteSchedule(httpclient, &SessionCredentials, r) - - if err != nil { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - fmt.Println(response.Results[k]) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.Results) == 0 { - fmt.Println("No schedules found.") - return - } - -} - -func showSchedule(args []string, ns string) { - log.Debugf("showSchedule called %v", args) - - if len(args) == 0 && Selector == "" && ScheduleName == "" { - fmt.Println("Error: cluster name, schedule name or selector is required to show a schedule.") - return - } - - var clusterName string - if Selector != "" { - clusterName = "" - } else if len(args) > 0 { - clusterName = args[0] - } - - r := &msgs.ShowScheduleRequest{ - ClusterName: clusterName, - ScheduleName: ScheduleName, - Selector: Selector, - Namespace: ns, - } - - response, err := api.ShowSchedule(httpclient, &SessionCredentials, r) - - if err != nil { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - 
fmt.Println(response.Results[k]) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if len(response.Results) == 0 { - fmt.Println("No schedules found.") - return - } -} - -func (s *schedule) validateSchedule() error { - if err := scheduler.ValidateSchedule(s.schedule); err != nil { - return err - } - - if err := scheduler.ValidateScheduleType(s.scheduleType); err != nil { - return err - } - - if err := scheduler.ValidateBackRestSchedule(s.scheduleType, s.clusterName, s.selector, s.backrestType, - s.backrestStorageType); err != nil { - return err - } - - if err := scheduler.ValidatePolicySchedule(s.scheduleType, s.policy, s.database); err != nil { - return err - } - - return nil -} diff --git a/pgo/cmd/show.go b/pgo/cmd/show.go deleted file mode 100644 index b42ba2a7d9..0000000000 --- a/pgo/cmd/show.go +++ /dev/null @@ -1,357 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/spf13/cobra" -) - -const TreeBranch = "\t" -const TreeTrunk = "\t" - -var AllFlag bool - -var ShowCmd = &cobra.Command{ - Use: "show", - Short: "Show the description of a cluster", - Long: `Show allows you to show the details of a policy, backup, pvc, or cluster. For example: - - pgo show backup mycluster - pgo show backup mycluster --backup-type=pgbackrest - pgo show cluster mycluster - pgo show config - pgo show pgouser someuser - pgo show policy policy1 - pgo show pvc mycluster - pgo show namespace - pgo show workflow 25927091-b343-4017-be4b-71575f0b3eb5 - pgo show user --selector=name=mycluster`, - Run: func(cmd *cobra.Command, args []string) { - if len(args) == 0 { - fmt.Println(`Error: You must specify the type of resource to show. -Valid resource types include: - * backup - * cluster - * config - * pgadmin - * pgbouncer - * pgouser - * policy - * pvc - * namespace - * workflow - * user - `) - } else { - switch args[0] { - case "backup", "cluster", "config", "pgadmin", "pgbouncer", - "pgouser", "policy", "pvc", "schedule", "namespace", - "workflow", "user": - break - default: - fmt.Println(`Error: You must specify the type of resource to show. 
-Valid resource types include: - * backup - * cluster - * config - * pgadmin - * pgbouncer - * pgouser - * policy - * pvc - * namespace - * workflow - * user`) - } - } - - }, -} - -var showBackupType string - -func init() { - RootCmd.AddCommand(ShowCmd) - ShowCmd.AddCommand(ShowBackupCmd) - ShowCmd.AddCommand(ShowClusterCmd) - ShowCmd.AddCommand(ShowConfigCmd) - ShowCmd.AddCommand(ShowNamespaceCmd) - ShowCmd.AddCommand(ShowPgAdminCmd) - ShowCmd.AddCommand(ShowPgBouncerCmd) - ShowCmd.AddCommand(ShowPgouserCmd) - ShowCmd.AddCommand(ShowPgoroleCmd) - ShowCmd.AddCommand(ShowPolicyCmd) - ShowCmd.AddCommand(ShowPVCCmd) - ShowCmd.AddCommand(ShowWorkflowCmd) - ShowCmd.AddCommand(ShowScheduleCmd) - ShowCmd.AddCommand(ShowUserCmd) - - ShowBackupCmd.Flags().StringVarP(&showBackupType, "backup-type", "", "pgbackrest", "The backup type output to list. Valid choices are pgbackrest or pgdump.") - ShowClusterCmd.Flags().StringVarP(&CCPImageTag, "ccp-image-tag", "", "", "Filter the results based on the image tag of the cluster.") - ShowClusterCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", "The output format. Currently, json is the only supported value.") - ShowClusterCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - ShowNamespaceCmd.Flags().BoolVar(&AllFlag, "all", false, "show all resources.") - ShowClusterCmd.Flags().BoolVar(&AllFlag, "all", false, "show all resources.") - ShowPolicyCmd.Flags().BoolVar(&AllFlag, "all", false, "show all resources.") - ShowPgAdminCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - ShowPgAdminCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", `The output format. Supported types are: "json"`) - ShowPgBouncerCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - ShowPgBouncerCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", `The output format. Supported types are: "json"`) - ShowPVCCmd.Flags().BoolVar(&AllFlag, "all", false, "show all resources.") - ShowScheduleCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - ShowScheduleCmd.Flags().StringVarP(&ScheduleName, "schedule-name", "", "", "The name of the schedule to show.") - ShowScheduleCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") - ShowUserCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - ShowUserCmd.Flags().BoolVar(&AllFlag, "all", false, "show all clusters.") - ShowUserCmd.Flags().IntVarP(&Expired, "expired", "", 0, "Shows passwords that will expire in X days.") - ShowUserCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", `The output format. Supported types are: "json"`) - ShowUserCmd.Flags().BoolVar(&ShowSystemAccounts, "show-system-accounts", false, "Include the system accounts in the results.") - ShowPgouserCmd.Flags().BoolVar(&AllFlag, "all", false, "show all resources.") - ShowPgoroleCmd.Flags().BoolVar(&AllFlag, "all", false, "show all resources.") -} - -var ShowConfigCmd = &cobra.Command{ - Use: "config", - Short: "Show configuration information", - Long: `Show configuration information for the Operator. 
For example: - - pgo show config`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - showConfig(args, Namespace) - }, -} - -var ShowPgAdminCmd = &cobra.Command{ - Use: "pgadmin", - Short: "Show pgadmin deployment information", - Long: `Show service information about a pgadmin deployment. For example: - - pgo show pgadmin thecluster - pgo show pgadmin --selector=app=theapp - `, - - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - showPgAdmin(Namespace, args) - }, -} - -var ShowPgBouncerCmd = &cobra.Command{ - Use: "pgbouncer", - Short: "Show pgbouncer deployment information", - Long: `Show user, password, and service information about a pgbouncer deployment. For example: - - pgo show pgbouncer hacluster - pgo show pgbouncer --selector=app=payment - `, - - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - showPgBouncer(Namespace, args) - }, -} - -var ShowPgouserCmd = &cobra.Command{ - Use: "pgouser", - Short: "Show pgouser information", - Long: `Show pgouser information for an Operator user. For example: - - pgo show pgouser someuser`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - showPgouser(args, Namespace) - }, -} - -var ShowPgoroleCmd = &cobra.Command{ - Use: "pgorole", - Short: "Show pgorole information", - Long: `Show pgorole information . For example: - - pgo show pgorole somerole`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - showPgorole(args, Namespace) - }, -} - -var ShowNamespaceCmd = &cobra.Command{ - Use: "namespace", - Short: "Show namespace information", - Long: `Show namespace information for the Operator. For example: - - pgo show namespace`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - showNamespace(args) - }, -} - -var ShowWorkflowCmd = &cobra.Command{ - Use: "workflow", - Short: "Show workflow information", - Long: `Show workflow information for a given workflow. For example: - - pgo show workflow 25927091-b343-4017-be4b-71575f0b3eb5`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - showWorkflow(args, Namespace) - }, -} - -var ShowPolicyCmd = &cobra.Command{ - Use: "policy", - Short: "Show policy information", - Long: `Show policy information. For example: - - pgo show policy --all - pgo show policy policy1`, - Run: func(cmd *cobra.Command, args []string) { - if len(args) == 0 && !AllFlag { - fmt.Println("Error: Policy name(s) or --all required for this command.") - } else { - if Namespace == "" { - Namespace = PGONamespace - } - showPolicy(args, Namespace) - } - }, -} - -var ShowPVCCmd = &cobra.Command{ - Use: "pvc", - Short: "Show PVC information for a cluster", - Long: `Show PVC information. For example: - - pgo show pvc mycluster - pgo show pvc --all`, - Run: func(cmd *cobra.Command, args []string) { - if len(args) == 0 && !AllFlag { - fmt.Println("Error: Cluster name(s) or --all required for this command.") - } else { - if Namespace == "" { - Namespace = PGONamespace - } - showPVC(args, Namespace) - } - }, -} - -// showBackupCmd represents the show backup command -var ShowBackupCmd = &cobra.Command{ - Use: "backup", - Short: "Show backup information", - Long: `Show backup information. 
For example: - - pgo show backup mycluser`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 { - fmt.Println("Error: cluster name(s) required for this command.") - } else { - // default is pgbackrest - if showBackupType == "" || showBackupType == config.LABEL_BACKUP_TYPE_BACKREST { - showBackrest(args, Namespace) - } else if showBackupType == config.LABEL_BACKUP_TYPE_PGDUMP { - showpgDump(args, Namespace) - } else { - fmt.Println("Error: Valid backup-type values are pgbackrest and pgdump. The default if not supplied is pgbackrest.") - } - } - }, -} - -// ShowClusterCmd represents the show cluster command -var ShowClusterCmd = &cobra.Command{ - Use: "cluster", - Short: "Show cluster information", - Long: `Show a PostgreSQL cluster. For example: - - pgo show cluster --all - pgo show cluster mycluster`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if Selector == "" && len(args) == 0 && !AllFlag { - fmt.Println("Error: Cluster name(s), --selector, or --all required for this command.") - } else { - showCluster(args, Namespace) - } - }, -} - -// ShowUserCmd represents the show user command -var ShowUserCmd = &cobra.Command{ - Use: "user", - Short: "Show user information", - Long: `Show users on a cluster. For example: - - pgo show user --all - pgo show user mycluster - pgo show user --selector=name=nycluster`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if Selector == "" && AllFlag == false && len(args) == 0 { - fmt.Println("Error: --selector, --all, or cluster name()s required for this command") - } else { - showUser(args, Namespace) - } - }, -} - -// ShowScheduleCmd represents the show schedule command -var ShowScheduleCmd = &cobra.Command{ - Use: "schedule", - Short: "Show schedule information", - Long: `Show cron-like schedules. For example: - - pgo show schedule mycluster - pgo show schedule --selector=pg-cluster=mycluster - pgo show schedule --schedule-name=mycluster-pgbackrest-full`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 && Selector == "" && ScheduleName == "" { - fmt.Println("Error: cluster name, schedule name or selector is required to show a schedule.") - return - } - showSchedule(args, Namespace) - }, -} diff --git a/pgo/cmd/status.go b/pgo/cmd/status.go deleted file mode 100644 index 681ad08d6c..0000000000 --- a/pgo/cmd/status.go +++ /dev/null @@ -1,111 +0,0 @@ -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -var Summary bool - -func init() { - RootCmd.AddCommand(statusCmd) - - statusCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", "The output format. Currently, json is the only supported value.") - -} - -func showStatus(args []string, ns string) { - - log.Debugf("showStatus called %v", args) - - if OutputFormat != "" && OutputFormat != "json" { - fmt.Println("Error: json is the only supported --output-format value ") - os.Exit(2) - } - - response, err := api.ShowStatus(httpclient, &SessionCredentials, ns) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if OutputFormat == "json" { - b, err := json.MarshalIndent(response, "", " ") - if err != nil { - fmt.Println("Error: ", err) - } - fmt.Println(string(b)) - return - } - - printSummary(&response.Result) - -} - -func printSummary(status *msgs.StatusDetail) { - - WID := 25 - fmt.Printf("%s%d\n", util.Rpad("Databases:", " ", WID), status.NumDatabases) - fmt.Printf("%s%d\n", util.Rpad("Claims:", " ", WID), status.NumClaims) - fmt.Printf("%s%s\n", util.Rpad("Total Volume Size:", " ", WID), util.Rpad(status.VolumeCap, " ", 10)) - - fmt.Printf("\n%s\n", "Database Images:") - for k, v := range status.DbTags { - fmt.Printf("%s%d\t%s\n", util.Rpad(" ", " ", WID), v, k) - } - - fmt.Printf("\n%s\n", "Databases Not Ready:") - for i := 0; i < len(status.NotReady); i++ { - fmt.Printf("\t%s\n", util.Rpad(status.NotReady[i], " ", 30)) - } - - fmt.Printf("\n%s\n", "Labels (count > 1): [count] [label]") - for i := 0; i < len(status.Labels); i++ { - if status.Labels[i].Value > 1 { - fmt.Printf("\t[%d]\t[%s]\n", status.Labels[i].Value, status.Labels[i].Key) - } - } -} - -var statusCmd = &cobra.Command{ - Use: "status", - Short: "Display PostgreSQL cluster status", - Long: `Display namespace wide information for PostgreSQL clusters. For example: - - pgo status`, - Run: func(cmd *cobra.Command, args []string) { - log.Debug("status called") - if Namespace == "" { - Namespace = PGONamespace - } - showStatus(args, Namespace) - }, -} diff --git a/pgo/cmd/test.go b/pgo/cmd/test.go deleted file mode 100644 index 111475fbed..0000000000 --- a/pgo/cmd/test.go +++ /dev/null @@ -1,138 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -var testCmd = &cobra.Command{ - Use: "test", - Short: "Test cluster connectivity", - Long: `TEST allows you to test the availability of a PostgreSQL cluster. For example: - - pgo test mycluster - pgo test --selector=env=research - pgo test --all`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - log.Debug("test called") - if Selector == "" && len(args) == 0 && !AllFlag { - fmt.Println(`Error: You must specify the name of the clusters to test or --all or a --selector.`) - } else { - if OutputFormat != "" && OutputFormat != "json" { - fmt.Println("Error: Only 'json' is currently supported for the --output flag value.") - os.Exit(2) - } - showTest(args, Namespace) - } - }, -} - -func init() { - RootCmd.AddCommand(testCmd) - testCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - testCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", "The output format. Currently, json is the only supported value.") - testCmd.Flags().BoolVar(&AllFlag, "all", false, "test all resources.") - -} - -func showTest(args []string, ns string) { - - log.Debugf("showCluster called %v", args) - - log.Debugf("selector is %s", Selector) - - if len(args) == 0 && !AllFlag && Selector == "" { - fmt.Println("Error: ", "--all needs to be set or a cluster name be entered or a --selector be specified") - os.Exit(2) - } - if Selector != "" || AllFlag { - args = make([]string, 1) - args[0] = "all" - } - - r := new(msgs.ClusterTestRequest) - r.Selector = Selector - r.Namespace = ns - r.AllFlag = AllFlag - r.ClientVersion = msgs.PGO_VERSION - - for _, arg := range args { - r.Clustername = arg - response, err := api.ShowTest(httpclient, &SessionCredentials, r) - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - - if OutputFormat == "json" { - b, err := json.MarshalIndent(response, "", " ") - if err != nil { - fmt.Println("Error: ", err) - } - fmt.Println(string(b)) - return - } - - if len(response.Results) == 0 { - fmt.Println("Nothing found.") - return - } - - for _, result := range response.Results { - fmt.Println("") - fmt.Println(fmt.Sprintf("cluster : %s", result.ClusterName)) - - // first, print the test results for the endpoints, which make up - // the services - printTestResults("Services", result.Endpoints) - // first, print the test results for the instances - printTestResults("Instances", result.Instances) - } - } -} - -// prints out a set of test results -func printTestResults(testName string, results []msgs.ClusterTestDetail) { - // print out the header for this group of tests - fmt.Println(fmt.Sprintf("%s%s", TreeBranch, testName)) - // iterate though the results and print them! 
- for _, v := range results { - fmt.Printf("%s%s%s (%s): ", - TreeBranch, TreeBranch, v.InstanceType, v.Message) - if v.Available { - fmt.Println(fmt.Sprintf("%s", GREEN("UP"))) - } else { - fmt.Println(fmt.Sprintf("%s", RED("DOWN"))) - } - } -} diff --git a/pgo/cmd/update.go b/pgo/cmd/update.go deleted file mode 100644 index 806b44388e..0000000000 --- a/pgo/cmd/update.go +++ /dev/null @@ -1,411 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/util" - "github.com/spf13/cobra" -) - -const pgBouncerPrompt = "This may cause an interruption in your pgBouncer service. Are you sure you wish to proceed?" - -var ( - // DisableLogin allows a user to disable the ability for a PostgreSQL uesr to - // log in - DisableLogin bool - // EnableLogin allows a user to enable the ability for a PostgreSQL uesr to - // log in - EnableLogin bool - // ExpireUser sets a user to having their password expired - ExpireUser bool - // PgoroleChangePermissions does something with the pgouser access controls, - // I'm not sure but I wanted this at least to be documented - PgoroleChangePermissions bool - // RotatePassword is a flag that allows one to specify that a password be - // automatically rotated, such as a service account type password - RotatePassword bool - // DisableStandby can be used to disable standby mode when enabled in an existing cluster - DisableStandby bool - // EnableStandby can be used to enable standby mode in an existing cluster - EnableStandby bool - // Shutdown is used to indicate that the cluster should be shutdown - Shutdown bool - // Startup is used to indicate that the cluster should be started (assuming it is shutdown) - Startup bool -) - -func init() { - RootCmd.AddCommand(UpdateCmd) - UpdateCmd.AddCommand(UpdatePgBouncerCmd) - UpdateCmd.AddCommand(UpdatePgouserCmd) - UpdateCmd.AddCommand(UpdatePgoroleCmd) - UpdateCmd.AddCommand(UpdateClusterCmd) - UpdateCmd.AddCommand(UpdateUserCmd) - UpdateCmd.AddCommand(UpdateNamespaceCmd) - - UpdateClusterCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") - UpdateClusterCmd.Flags().BoolVar(&AllFlag, "all", false, "all resources.") - UpdateClusterCmd.Flags().StringSliceVar(&Annotations, "annotation", []string{}, - "Add an Annotation to all of the managed deployments (PostgreSQL, pgBackRest, pgBouncer)\n"+ - "The format to add an annotation is \"name=value\"\n"+ - "The format to remove an annotation is \"name-\"\n\n"+ - "For example, to add two annotations: \"--annotation=hippo=awesome,elephant=cool\"") - UpdateClusterCmd.Flags().StringSliceVar(&AnnotationsBackrest, "annotation-pgbackrest", []string{}, - "Add an Annotation specifically to pgBackRest deployments\n"+ - "The format to add an annotation is \"name=value\"\n"+ - "The format to remove an annotation is \"name-\"") - UpdateClusterCmd.Flags().StringSliceVar(&AnnotationsPgBouncer, "annotation-pgbouncer", []string{}, - "Add an 
Annotation specifically to pgBouncer deployments\n"+ - "The format to add an annotation is \"name=value\"\n"+ - "The format to remove an annotation is \"name-\"") - UpdateClusterCmd.Flags().StringSliceVar(&AnnotationsPostgres, "annotation-postgres", []string{}, - "Add an Annotation specifically to PostgreSQL deployments"+ - "The format to add an annotation is \"name=value\"\n"+ - "The format to remove an annotation is \"name-\"") - UpdateClusterCmd.Flags().StringVar(&CPURequest, "cpu", "", "Set the number of millicores to request for the CPU, e.g. "+ - "\"100m\" or \"0.1\".") - UpdateClusterCmd.Flags().StringVar(&CPULimit, "cpu-limit", "", "Set the number of millicores to limit for the CPU, e.g. "+ - "\"100m\" or \"0.1\".") - UpdateClusterCmd.Flags().BoolVar(&DisableAutofailFlag, "disable-autofail", false, "Disables autofail capabitilies in the cluster.") - UpdateClusterCmd.Flags().BoolVar(&EnableAutofailFlag, "enable-autofail", false, "Enables autofail capabitilies in the cluster.") - UpdateClusterCmd.Flags().StringVar(&MemoryRequest, "memory", "", "Set the amount of RAM to request, e.g. "+ - "1GiB.") - UpdateClusterCmd.Flags().StringVar(&MemoryLimit, "memory-limit", "", "Set the amount of RAM to limit, e.g. "+ - "1GiB.") - UpdateClusterCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - UpdateClusterCmd.Flags().BoolVarP(&DisableStandby, "promote-standby", "", false, - "Disables standby mode (if enabled) and promotes the cluster(s) specified.") - UpdateClusterCmd.Flags().StringVar(&BackrestCPURequest, "pgbackrest-cpu", "", "Set the number of millicores to request for CPU "+ - "for the pgBackRest repository.") - UpdateClusterCmd.Flags().StringVar(&BackrestCPULimit, "pgbackrest-cpu-limit", "", "Set the number of millicores to limit for CPU "+ - "for the pgBackRest repository.") - UpdateClusterCmd.Flags().StringVar(&BackrestMemoryRequest, "pgbackrest-memory", "", "Set the amount of memory to request for "+ - "the pgBackRest repository.") - UpdateClusterCmd.Flags().StringVar(&BackrestMemoryLimit, "pgbackrest-memory-limit", "", "Set the amount of memory to limit for "+ - "the pgBackRest repository.") - UpdateClusterCmd.Flags().StringVar(&ExporterCPURequest, "exporter-cpu", "", "Set the number of millicores to request for CPU "+ - "for the Crunchy Postgres Exporter sidecar container, e.g. \"100m\" or \"0.1\".") - UpdateClusterCmd.Flags().StringVar(&ExporterCPULimit, "exporter-cpu-limit", "", "Set the number of millicores to limit for CPU "+ - "for the Crunchy Postgres Exporter sidecar container, e.g. \"100m\" or \"0.1\".") - UpdateClusterCmd.Flags().StringVar(&ExporterMemoryRequest, "exporter-memory", "", "Set the amount of memory to request for "+ - "the Crunchy Postgres Exporter sidecar container.") - UpdateClusterCmd.Flags().StringVar(&ExporterMemoryLimit, "exporter-memory-limit", "", "Set the amount of memory to limit for "+ - "the Crunchy Postgres Exporter sidecar container.") - - UpdateClusterCmd.Flags().BoolVarP(&EnableStandby, "enable-standby", "", false, - "Enables standby mode in the cluster(s) specified.") - UpdateClusterCmd.Flags().BoolVar(&Startup, "startup", false, "Restart the database cluster if it "+ - "is currently shutdown.") - UpdateClusterCmd.Flags().BoolVar(&Shutdown, "shutdown", false, "Shutdown the database "+ - "cluster if it is currently running.") - UpdateClusterCmd.Flags().StringSliceVar(&Tablespaces, "tablespace", []string{}, - "Add a PostgreSQL tablespace on the cluster, e.g. \"name=ts1:storageconfig=nfsstorage\". 
The format is "+ - "a key/value map that is delimited by \"=\" and separated by \":\". The following parameters are available:\n\n"+ - "- name (required): the name of the PostgreSQL tablespace\n"+ - "- storageconfig (required): the storage configuration to use, as specified in the list available in the "+ - "\"pgo-config\" ConfigMap (aka \"pgo.yaml\")\n"+ - "- pvcsize: the size of the PVC capacity, which overrides the value set in the specified storageconfig. "+ - "Follows the Kubernetes quantity format.\n\n"+ - "For example, to create a tablespace with the NFS storage configuration with a PVC of size 10GiB:\n\n"+ - "--tablespace=name=ts1:storageconfig=nfsstorage:pvcsize=10Gi") - UpdatePgBouncerCmd.Flags().StringVar(&PgBouncerCPURequest, "cpu", "", "Set the number of millicores to request for CPU "+ - "for pgBouncer.") - UpdatePgBouncerCmd.Flags().StringVar(&PgBouncerCPULimit, "cpu-limit", "", "Set the number of millicores to limit for CPU "+ - "for pgBouncer.") - UpdatePgBouncerCmd.Flags().StringVar(&PgBouncerMemoryRequest, "memory", "", "Set the amount of memory to request for "+ - "pgBouncer.") - UpdatePgBouncerCmd.Flags().StringVar(&PgBouncerMemoryLimit, "memory-limit", "", "Set the amount of memory to limit for "+ - "pgBouncer.") - UpdatePgBouncerCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") - UpdatePgBouncerCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", `The output format. Supported types are: "json"`) - UpdatePgBouncerCmd.Flags().Int32Var(&PgBouncerReplicas, "replicas", 0, "Set the total number of pgBouncer instances to deploy. If not set, defaults to 1.") - UpdatePgBouncerCmd.Flags().BoolVar(&RotatePassword, "rotate-password", false, "Used to rotate the pgBouncer service account password. Can cause interruption of service.") - UpdatePgBouncerCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - UpdatePgouserCmd.Flags().StringVarP(&PgouserNamespaces, "pgouser-namespaces", "", "", "The namespaces to use for updating the pgouser roles.") - UpdatePgouserCmd.Flags().BoolVar(&AllNamespaces, "all-namespaces", false, "all namespaces.") - UpdatePgouserCmd.Flags().StringVarP(&PgouserRoles, "pgouser-roles", "", "", "The roles to use for updating the pgouser roles.") - UpdatePgouserCmd.Flags().StringVarP(&PgouserPassword, "pgouser-password", "", "", "The password to use for updating the pgouser password.") - UpdatePgouserCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") - UpdatePgoroleCmd.Flags().StringVarP(&Permissions, "permissions", "", "", "The permissions to use for updating the pgorole permissions.") - UpdatePgoroleCmd.Flags().BoolVar(&NoPrompt, "no-prompt", false, "No command line confirmation.") - // pgo update user -- flags - UpdateUserCmd.Flags().BoolVar(&AllFlag, "all", false, "all clusters.") - UpdateUserCmd.Flags().BoolVar(&DisableLogin, "disable-login", false, "Disables a PostgreSQL user from being able to log into the PostgreSQL cluster.") - UpdateUserCmd.Flags().BoolVar(&EnableLogin, "enable-login", false, "Enables a PostgreSQL user to be able to log into the PostgreSQL cluster.") - UpdateUserCmd.Flags().IntVarP(&Expired, "expired", "", 0, "Updates passwords that will expire in X days using an autogenerated password.") - UpdateUserCmd.Flags().BoolVarP(&ExpireUser, "expire-user", "", false, "Performs expiring a user if set to true.") - UpdateUserCmd.Flags().IntVarP(&PasswordAgeDays, "valid-days", "", 0, "Sets the number of days that a password is 
valid. Defaults to the server value.") - UpdateUserCmd.Flags().StringVarP(&Username, "username", "", "", "Updates the postgres user on selective clusters.") - UpdateUserCmd.Flags().StringVarP(&OutputFormat, "output", "o", "", `The output format. Supported types are: "json"`) - UpdateUserCmd.Flags().StringVarP(&Password, "password", "", "", "Specifies the user password when updating a user password or creating a new user. If --rotate-password is set as well, --password takes precedence.") - UpdateUserCmd.Flags().IntVarP(&PasswordLength, "password-length", "", 0, "If no password is supplied, sets the length of the automatically generated password. Defaults to the value set on the server.") - UpdateUserCmd.Flags().StringVar(&PasswordType, "password-type", "md5", "The type of password hashing to use."+ - "Choices are: (md5, scram-sha-256). This only takes effect if the password is being changed.") - UpdateUserCmd.Flags().BoolVar(&PasswordValidAlways, "valid-always", false, "Sets a password to never expire based on expiration time. Takes precedence over --valid-days") - UpdateUserCmd.Flags().BoolVar(&RotatePassword, "rotate-password", false, "Rotates the user's password with an automatically generated password. The length of the password is determine by either --password-length or the value set on the server, in that order.") - UpdateUserCmd.Flags().StringVarP(&Selector, "selector", "s", "", "The selector to use for cluster filtering.") - -} - -// UpdateCmd represents the update command -var UpdateCmd = &cobra.Command{ - Use: "update", - Short: "Update a pgouser, pgorole, or cluster", - Long: `The update command allows you to update a pgouser, pgorole, or cluster. For example: - - pgo update cluster --selector=name=mycluster --autofail=false - pgo update cluster --all --autofail=true - pgo update namespace mynamespace - pgo update pgbouncer mycluster --rotate-password - pgo update pgorole somerole --pgorole-permission="Cat" - pgo update pgouser someuser --pgouser-password=somenewpassword - pgo update pgouser someuser --pgouser-roles="role1,role2" - pgo update pgouser someuser --pgouser-namespaces="pgouser2" - pgo update pgorole somerole --pgorole-permission="Cat" - pgo update user mycluster --username=testuser --selector=name=mycluster --password=somepassword`, - Run: func(cmd *cobra.Command, args []string) { - - if len(args) == 0 { - fmt.Println(`Error: You must specify the type of resource to update. Valid resource types include: - * cluster - * namespace - * pgbouncer - * pgorole - * pgouser - * user`) - } else { - switch args[0] { - case "user", "cluster", "pgbouncer", "pgouser", "pgorole", "namespace": - break - default: - fmt.Println(`Error: You must specify the type of resource to update. Valid resource types include: - * cluster - * namespace - * pgbouncer - * pgorole - * pgouser - * user`) - } - } - - }, -} - -var PgouserChangePassword bool - -// UpdateClusterCmd ... -var UpdateClusterCmd = &cobra.Command{ - Use: "cluster", - Short: "Update a PostgreSQL cluster", - Long: `Update a PostgreSQL cluster. 
For example: - - pgo update cluster mycluster --autofail=false - pgo update cluster mycluster myothercluster --disable-autofail - pgo update cluster --selector=name=mycluster --disable-autofail - pgo update cluster --all --enable-autofail`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - - if len(args) == 0 && Selector == "" && !AllFlag { - fmt.Println("Error: A cluster name(s) or selector or --all is required for this command.") - os.Exit(1) - } - - // if both --enable-autofail and --disable-autofail are true, then abort - if EnableAutofailFlag && DisableAutofailFlag { - fmt.Println("Error: Cannot set --enable-autofail and --disable-autofail simultaneously") - os.Exit(1) - } - - if EnableStandby { - fmt.Println("Enabling standby mode will result in the deltion of all PVCs " + - "for this cluster!\nData will only be retained if the proper retention policy " + - "is configured for any associated storage classes and/or persistent volumes.\n" + - "Please proceed with caution.") - } - - if DisableStandby { - fmt.Println("Disabling standby mode will enable database writes for this " + - "cluster.\nPlease ensure the cluster this standby cluster is replicating " + - "from has been properly shutdown before proceeding!") - } - - if len(Tablespaces) > 0 { - fmt.Println("Adding tablespaces can cause downtime.") - } - - if CPURequest != "" || CPULimit != "" { - fmt.Println("Updating CPU resources can cause downtime.") - } - - if MemoryRequest != "" || MemoryLimit != "" { - fmt.Println("Updating memory resources can cause downtime.") - } - - if BackrestCPURequest != "" || BackrestMemoryRequest != "" || - BackrestCPULimit != "" || BackrestMemoryLimit != "" { - fmt.Println("Updating pgBackRest resources can cause temporary unavailability of backups and WAL archives.") - } - - if ExporterCPURequest != "" || ExporterMemoryRequest != "" || - ExporterCPULimit != "" || ExporterMemoryLimit != "" { - fmt.Println("Updating Crunchy Postgres Exporter resources can cause downtime.") - } - - if !util.AskForConfirmation(NoPrompt, "") { - fmt.Println("Aborting...") - return - } - - updateCluster(args, Namespace) - }, -} - -var UpdateUserCmd = &cobra.Command{ - Use: "user", - Short: "Update a PostgreSQL user", - Long: `Allows the ability to perform various user management functions for PostgreSQL users. 
- -For example: - -//change a password, set valid days for 40 days from now -pgo update user mycluster --username=someuser --password=foo -//expire password for a user -pgo update user mycluster --username=someuser --expire-user -//Update all passwords older than the number of days specified -pgo update user mycluster --expired=45 --password-length=8 - -# Disable the ability for a user to log into the PostgreSQL cluster -pgo update user mycluster --username=foobar --disable-login - -# Enable the ability for a user to log into the PostgreSQL cluster -pgo update user mycluster --username=foobar --enable-login - `, - Run: func(cmd *cobra.Command, args []string) { - - if Namespace == "" { - Namespace = PGONamespace - } - - // Check to see that there is an appropriate selector, be it clusters names, - // a Kubernetes selector, or the --all flag - if !AllFlag && Selector == "" && len(args) == 0 { - fmt.Println("Error: You must specify a --selector, --all or a list of clusters.") - os.Exit(1) - } - - // require either the "username" flag or the "expired" flag - if Username == "" && Expired == 0 { - fmt.Println("Error: You must specify either --username or --expired") - os.Exit(1) - } - - // if both --enable-login and --disable-login are true, then abort - if EnableLogin && DisableLogin { - fmt.Println("Error: Cannot set --enable-login and --disable-login simultaneously") - os.Exit(1) - } - - updateUser(args, Namespace) - }, -} - -var UpdatePgBouncerCmd = &cobra.Command{ - Use: "pgbouncer", - Short: "Update a pgBouncer deployment for a PostgreSQL cluster", - Long: `Used to update the pgBouncer deployment for a PostgreSQL cluster, such - as by rotating a password. For example: - - pgo update pgbouncer hacluster --rotate-password - `, - - Run: func(cmd *cobra.Command, args []string) { - if !util.AskForConfirmation(NoPrompt, pgBouncerPrompt) { - fmt.Println("Aborting...") - return - } - - if Namespace == "" { - Namespace = PGONamespace - } - - if PgBouncerReplicas < 0 { - fmt.Println("Error: You must specify one or more replicas.") - os.Exit(1) - } - - updatePgBouncer(Namespace, args) - }, -} - -var UpdatePgouserCmd = &cobra.Command{ - Use: "pgouser", - Short: "Update a pgouser", - Long: `UPDATE allows you to update a pgo user. For example: - pgo update pgouser myuser --pgouser-roles=somerole - pgo update pgouser myuser --pgouser-password=somepassword --pgouser-roles=somerole - pgo update pgouser myuser --pgouser-password=somepassword --no-prompt`, - Run: func(cmd *cobra.Command, args []string) { - - if Namespace == "" { - Namespace = PGONamespace - } - - if len(args) == 0 { - fmt.Println("Error: You must specify the name of a pgouser.") - } else { - updatePgouser(args, Namespace) - } - }, -} -var UpdatePgoroleCmd = &cobra.Command{ - Use: "pgorole", - Short: "Update a pgorole", - Long: `UPDATE allows you to update a pgo role. For example: - pgo update pgorole somerole --permissions="Cat,Ls`, - Run: func(cmd *cobra.Command, args []string) { - - if Namespace == "" { - Namespace = PGONamespace - } - - if len(args) == 0 { - fmt.Println("Error: You must specify the name of a pgorole.") - } else { - updatePgorole(args, Namespace) - } - }, -} - -var UpdateNamespaceCmd = &cobra.Command{ - Use: "namespace", - Short: "Update a namespace, applying Operator RBAC", - Long: `UPDATE allows you to update a Namespace. 
For example: - pgo update namespace mynamespace`, - Run: func(cmd *cobra.Command, args []string) { - - if len(args) == 0 { - fmt.Println("Error: You must specify the name of a Namespace.") - } else { - updateNamespace(args) - } - }, -} diff --git a/pgo/cmd/upgrade.go b/pgo/cmd/upgrade.go deleted file mode 100644 index e6672f1aa8..0000000000 --- a/pgo/cmd/upgrade.go +++ /dev/null @@ -1,108 +0,0 @@ -// Package cmd provides the command line functions of the crunchy CLI -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" -) - -// IgnoreValidation stores the flag input value that determines whether -// image tag version checking should be done before allowing an upgrade -// to continue -var IgnoreValidation bool - -// UpgradeCCPImageTag stores the image tag for the cluster being upgraded. -// This is specifically required when upgrading PostGIS clusters because -// that tag will necessarily differ from the other images tags due to the -// inclusion of the PostGIS version. -var UpgradeCCPImageTag string - -var UpgradeCmd = &cobra.Command{ - Use: "upgrade", - Short: "Perform a cluster upgrade.", - Long: `UPGRADE allows you to perform a comprehensive PGCluster upgrade - (for use after performing a Postgres Operator upgrade). - For example: - - pgo upgrade mycluster - Upgrades the cluster for use with the upgraded Postgres Operator version.`, - Run: func(cmd *cobra.Command, args []string) { - log.Debug("cluster upgrade called") - if Namespace == "" { - Namespace = PGONamespace - } - if len(args) == 0 && Selector == "" { - fmt.Println(`Error: You must specify the cluster to upgrade.`) - } else { - fmt.Println("All active replicas will be scaled down and the primary database in this cluster will be stopped and recreated as part of this workflow!") - if util.AskForConfirmation(NoPrompt, "") { - createUpgrade(args, Namespace) - } else { - fmt.Println("Aborting...") - } - } - }, -} - -func init() { - RootCmd.AddCommand(UpgradeCmd) - - // flags for "pgo upgrade" - UpgradeCmd.Flags().BoolVarP(&IgnoreValidation, "ignore-validation", "", false, "Disables version checking against the image tags when performing an cluster upgrade.") - UpgradeCmd.Flags().StringVarP(&UpgradeCCPImageTag, "ccp-image-tag", "", "", "The image tag to use for cluster creation. 
If specified, it overrides the default configuration setting and disables tag validation checking.") -} - -func createUpgrade(args []string, ns string) { - log.Debugf("createUpgrade called %v", args) - - if len(args) == 0 && Selector == "" { - fmt.Println("Error: Cluster name(s) or a selector flag is required.") - os.Exit(2) - } - - request := msgs.CreateUpgradeRequest{} - request.Args = args - request.Namespace = ns - request.Selector = Selector - request.ClientVersion = msgs.PGO_VERSION - request.IgnoreValidation = IgnoreValidation - request.UpgradeCCPImageTag = UpgradeCCPImageTag - - response, err := api.CreateUpgrade(httpclient, &SessionCredentials, &request) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Ok { - for k := range response.Results { - fmt.Println(response.Results[k]) - } - } else { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(2) - } - -} diff --git a/pgo/cmd/user.go b/pgo/cmd/user.go deleted file mode 100644 index 7216ce9e1d..0000000000 --- a/pgo/cmd/user.go +++ /dev/null @@ -1,425 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - "strings" - - utiloperator "github.com/crunchydata/postgres-operator/internal/util" - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - - log "github.com/sirupsen/logrus" -) - -// userTextPadding contains the values for what the text padding should be -type userTextPadding struct { - ClusterName int - ErrorMessage int - Expires int - Password int - Username int - Status int -} - -// PasswordAgeDays password age flag -var PasswordAgeDays int - -// Username is a postgres username -var Username string - -// Expired expired flag -var Expired int - -// PasswordLength password length flag -var PasswordLength int - -// PasswordValidAlways allows a user to explicitly set that their passowrd -// is always valid (i.e. no expiration time) -var PasswordValidAlways bool - -// ShowSystemAccounts enables the display of the PostgreSQL user accounts that -// perform system functions, such as the "postgres" user -var ShowSystemAccounts bool - -func createUser(args []string, ns string) { - username := strings.TrimSpace(Username) - - // ensure the username is nonempty - if username == "" { - fmt.Println("Error: --username is required") - os.Exit(1) - } - - // check to see if this is a system account. 
if it is, do not let the request - // go through - if utiloperator.IsPostgreSQLUserSystemAccount(username) { - fmt.Println("Error:", username, "is a system account and cannot be used") - os.Exit(1) - } - - request := msgs.CreateUserRequest{ - AllFlag: AllFlag, - Clusters: args, - ManagedUser: ManagedUser, - Namespace: ns, - Password: Password, - PasswordAgeDays: PasswordAgeDays, - PasswordLength: PasswordLength, - PasswordType: PasswordType, - Username: username, - Selector: Selector, - } - - // determine if the user provies a valid password type - if _, err := msgs.GetPasswordType(PasswordType); err != nil { - fmt.Println("Error:", err.Error()) - os.Exit(1) - } - - response, err := api.CreateUser(httpclient, &SessionCredentials, &request) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(1) - } - - // great! now we can work on interpreting the results and outputting them - // per the user's desired output format - // render the next bit based on the output type - switch OutputFormat { - case "json": - printJSON(response) - default: - printCreateUserText(response) - } -} - -// deleteUser ... -func deleteUser(args []string, ns string) { - - log.Debugf("deleting user %s selector=%s args=%v", Username, Selector, args) - - if Username == "" { - fmt.Println("Error: --username is required") - return - } - - request := msgs.DeleteUserRequest{ - AllFlag: AllFlag, - Clusters: args, - Namespace: ns, - Selector: Selector, - Username: Username, - } - - response, err := api.DeleteUser(httpclient, &SessionCredentials, &request) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(1) - } - - // great! now we can work on interpreting the results and outputting them - // per the user's desired output format - // render the next bit based on the output type - switch OutputFormat { - case "json": - printJSON(response) - default: - printDeleteUserText(response) - } -} - -// generateUserPadding returns the paddings based on the values of the response -func generateUserPadding(results []msgs.UserResponseDetail) userTextPadding { - // make the interface for the users - userInterface := makeUserInterface(results) - - // set up the text padding - return userTextPadding{ - ClusterName: getMaxLength(userInterface, headingCluster, "ClusterName"), - ErrorMessage: getMaxLength(userInterface, headingErrorMessage, "ErrorMessage"), - Expires: getMaxLength(userInterface, headingExpires, "ValidUntil"), - Password: getMaxLength(userInterface, headingPassword, "Password"), - Status: len(headingStatus) + 1, - Username: getMaxLength(userInterface, headingUsername, "Username"), - } -} - -// makeUserInterface returns an interface slice of the available values -// in pgo create user -func makeUserInterface(values []msgs.UserResponseDetail) []interface{} { - // iterate through the list of values to make the interface - userInterface := make([]interface{}, len(values)) - - for i, value := range values { - userInterface[i] = value - } - - return userInterface -} - -// printCreateUserText prints out the information that is created after -// pgo create user is called -func printCreateUserText(response msgs.CreateUserResponse) { - // if the request errored, return the message here and exit with an error - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - // if no results returned, return an error - if len(response.Results) == 0 { - fmt.Println("No users created.") - return - } - - padding := generateUserPadding(response.Results) - - // print the 
header - printUserTextHeader(padding) - - // iterate through the reuslts and print them out - for _, result := range response.Results { - printUserTextRow(result, padding) - } -} - -// printDeleteUserText prints out the information that is created after -// pgo delete user is called -func printDeleteUserText(response msgs.DeleteUserResponse) { - // if the request errored, return the message here and exit with an error - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - // if no results returned, return an error - if len(response.Results) == 0 { - fmt.Println("No users deleted.") - return - } - - padding := generateUserPadding(response.Results) - - // print the header - printUserTextHeader(padding) - - // iterate through the reuslts and print them out - for _, result := range response.Results { - printUserTextRow(result, padding) - } -} - -// printShowUserText prints out the information from calling pgo show user -func printShowUserText(response msgs.ShowUserResponse) { - // if the request errored, return the message here and exit with an error - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - // if no results returned, return an error - if len(response.Results) == 0 { - fmt.Println("No users found.") - return - } - - padding := generateUserPadding(response.Results) - - // print the header - printUserTextHeader(padding) - - // iterate through the reuslts and print them out - for _, result := range response.Results { - printUserTextRow(result, padding) - } -} - -// printUpdateUserText prints out the information from calling pgo update user -func printUpdateUserText(response msgs.UpdateUserResponse) { - // if the request errored, return the message here and exit with an error - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - // if no results returned, return an error - if len(response.Results) == 0 { - fmt.Println("No users updated.") - return - } - - padding := generateUserPadding(response.Results) - - // print the header - printUserTextHeader(padding) - - // iterate through the reuslts and print them out - for _, result := range response.Results { - printUserTextRow(result, padding) - } -} - -// printUserTextHeader prints out the header -func printUserTextHeader(padding userTextPadding) { - // print the header - fmt.Println("") - fmt.Printf("%s", util.Rpad(headingCluster, " ", padding.ClusterName)) - fmt.Printf("%s", util.Rpad(headingUsername, " ", padding.Username)) - fmt.Printf("%s", util.Rpad(headingPassword, " ", padding.Password)) - fmt.Printf("%s", util.Rpad(headingExpires, " ", padding.Expires)) - fmt.Printf("%s", util.Rpad(headingStatus, " ", padding.Status)) - fmt.Printf("%s", util.Rpad(headingErrorMessage, " ", padding.ErrorMessage)) - fmt.Println("") - - // print the layer below the header...which prints out a bunch of "-" that's - // 1 less than the padding value - fmt.Println( - strings.Repeat("-", padding.ClusterName-1), - strings.Repeat("-", padding.Username-1), - strings.Repeat("-", padding.Password-1), - strings.Repeat("-", padding.Expires-1), - strings.Repeat("-", padding.Status-1), - strings.Repeat("-", padding.ErrorMessage-1), - ) -} - -// printUserTextRow prints a row of the text data -func printUserTextRow(result msgs.UserResponseDetail, padding userTextPadding) { - expires := result.ValidUntil - - // check for special values of expires, e.g. 
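The print helpers above lay out tables by right-padding every cell to a per-column width and then printing a dash separator one character shorter than each width. A small stand-alone sketch of that layout, with a local `rpad` standing in for the deleted `pgo/util.Rpad` helper:

```go
package main

import (
	"fmt"
	"strings"
)

// rpad is a local stand-in for the deleted pgo/util.Rpad helper: it pads value
// on the right with spaces until it reaches width characters.
func rpad(value string, width int) string {
	if len(value) >= width {
		return value
	}
	return value + strings.Repeat(" ", width-len(value))
}

func main() {
	// column width is roughly the longest entry plus one space, which is what
	// generateUserPadding computes for each column above
	clusterWidth, userWidth := len("mycluster")+1, len("USERNAME")+1

	fmt.Printf("%s%s\n", rpad("CLUSTER", clusterWidth), rpad("USERNAME", userWidth))
	fmt.Printf("%s %s\n", strings.Repeat("-", clusterWidth-1), strings.Repeat("-", userWidth-1))
	fmt.Printf("%s%s\n", rpad("mycluster", clusterWidth), rpad("someuser", userWidth))
}
```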
if the password matches special - // values to indicate if it has expired or not - switch { - case expires == "" || expires == utiloperator.SQLValidUntilAlways: - expires = "never" - case expires == utiloperator.SQLValidUntilNever: - expires = "expired" - } - - password := result.Password - - // set the text-based status, and use it to drive some of the display - status := "ok" - - if result.Error { - expires = "" - password = "" - status = "error" - } - - fmt.Printf("%s", util.Rpad(result.ClusterName, " ", padding.ClusterName)) - fmt.Printf("%s", util.Rpad(result.Username, " ", padding.Username)) - fmt.Printf("%s", util.Rpad(password, " ", padding.Password)) - fmt.Printf("%s", util.Rpad(expires, " ", padding.Expires)) - fmt.Printf("%s", util.Rpad(status, " ", padding.Status)) - fmt.Printf("%s", util.Rpad(result.ErrorMessage, " ", padding.ErrorMessage)) - fmt.Println("") -} - -// showUser prepares the API attributes for getting information about PostgreSQL -// users in clusters -func showUser(args []string, ns string) { - request := msgs.ShowUserRequest{ - AllFlag: AllFlag, - Clusters: args, - Expired: Expired, - Namespace: ns, - Selector: Selector, - ShowSystemAccounts: ShowSystemAccounts, - } - - response, err := api.ShowUser(httpclient, &SessionCredentials, &request) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(1) - } - - // great! now we can work on interpreting the results and outputting them - // per the user's desired output format - // render the next bit based on the output type - switch OutputFormat { - case "json": - printJSON(response) - default: - printShowUserText(response) - } -} - -// updateUser prepares the API call for updating attributes of a PostgreSQL -// user -func updateUser(clusterNames []string, namespace string) { - // set up the reuqest - request := msgs.UpdateUserRequest{ - AllFlag: AllFlag, - Clusters: clusterNames, - Expired: Expired, - ExpireUser: ExpireUser, - ManagedUser: ManagedUser, - Namespace: namespace, - Password: Password, - PasswordAgeDays: PasswordAgeDays, - PasswordLength: PasswordLength, - PasswordValidAlways: PasswordValidAlways, - PasswordType: PasswordType, - RotatePassword: RotatePassword, - Selector: Selector, - Username: strings.TrimSpace(Username), - } - - // check to see if EnableLogin or DisableLogin is set. If so, set a value - // for the LoginState parameter - if EnableLogin { - request.LoginState = msgs.UpdateUserLoginEnable - } else if DisableLogin { - request.LoginState = msgs.UpdateUserLoginDisable - } - - // check to see if this is a system account if a user name is passed in - if request.Username != "" && utiloperator.IsPostgreSQLUserSystemAccount(request.Username) { - fmt.Println("Error:", request.Username, "is a system account and cannot be used") - os.Exit(1) - } - - // determine if the user provies a valid password type - if _, err := msgs.GetPasswordType(PasswordType); err != nil { - fmt.Println("Error:", err.Error()) - os.Exit(1) - } - - response, err := api.UpdateUser(httpclient, &SessionCredentials, &request) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(1) - } - - // great! 
now we can work on interpreting the results and outputting them - // per the user's desired output format - // render the next bit based on the output type - switch OutputFormat { - case "json": - printJSON(response) - default: - printUpdateUserText(response) - } -} diff --git a/pgo/cmd/version.go b/pgo/cmd/version.go deleted file mode 100644 index 25e6253767..0000000000 --- a/pgo/cmd/version.go +++ /dev/null @@ -1,72 +0,0 @@ -package cmd - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "github.com/crunchydata/postgres-operator/pgo/api" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" - "os" -) - -// ClientVersionOnly indicates that only the client version should be returned, not make -// a call to the server -var ClientVersionOnly bool - -var versionCmd = &cobra.Command{ - Use: "version", - Short: "Print version information for the PostgreSQL Operator", - Long: `VERSION allows you to print version information for the postgres-operator. For example: - - pgo version`, - Run: func(cmd *cobra.Command, args []string) { - log.Debug("version called") - showVersion() - }, -} - -func init() { - RootCmd.AddCommand(versionCmd) - versionCmd.Flags().BoolVar(&ClientVersionOnly, "client", false, "Only return the version of the pgo client. This does not make a call to the API server.") -} - -func showVersion() { - - // print the client version - fmt.Println("pgo client version " + msgs.PGO_VERSION) - - // if the user selects only to display the client version, return here - if ClientVersionOnly { - return - } - - // otherwise, get the server version - response, err := api.ShowVersion(httpclient, &SessionCredentials) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code != msgs.Ok { - fmt.Println("Error: " + response.Status.Msg) - os.Exit(1) - } - - fmt.Println("pgo-apiserver version " + response.Version) -} diff --git a/pgo/cmd/watch.go b/pgo/cmd/watch.go deleted file mode 100644 index 69d288a1d8..0000000000 --- a/pgo/cmd/watch.go +++ /dev/null @@ -1,137 +0,0 @@ -package cmd - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" - "github.com/nsqio/go-nsq" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra" - "math/rand" - "os" - "os/signal" - "syscall" - "time" -) - -type TailHandler struct { - topicName string - totalMessages int - messagesShown int -} - -var watchCmd = &cobra.Command{ - Use: "watch", - Short: "Print watch information for the PostgreSQL Operator", - Long: `WATCH allows you to watch event information for the postgres-operator. For example: - pgo watch --pgo-event-address=localhost:14150 alltopic - pgo watch alltopic`, - Run: func(cmd *cobra.Command, args []string) { - if Namespace == "" { - Namespace = PGONamespace - } - - log.Debug("watch called") - watch(args, Namespace) - }, -} - -var PGOEventAddress string - -func init() { - RootCmd.AddCommand(watchCmd) - - watchCmd.Flags().StringVarP(&PGOEventAddress, "pgo-event-address", "a", "localhost:14150", "The address (host:port) where the event stream is.") -} - -func watch(args []string, ns string) { - log.Debugf("watch called %v", args) - - if len(args) == 0 { - log.Fatal("topic is required") - } - - topic := args[0] - - var totalMessages = 0 - - var channel string - rand.Seed(time.Now().UnixNano()) - channel = fmt.Sprintf("tail%06d#ephemeral", rand.Int()%999999) - - sigChan := make(chan os.Signal, 1) - signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM) - - cfg := nsq.NewConfig() - cfg.MaxInFlight = 200 - - consumers := []*nsq.Consumer{} - log.Printf("Adding consumer for topic: %s", topic) - - consumer, err := nsq.NewConsumer(topic, channel, cfg) - if err != nil { - log.Fatal(err) - } - - consumer.AddHandler(&TailHandler{topicName: topic, totalMessages: totalMessages}) - - addrs := make([]string, 1) - if PGOEventAddress != "" { - addrs[0] = PGOEventAddress - err = consumer.ConnectToNSQDs(addrs) - if err != nil { - log.Fatal(err) - } - } - - consumers = append(consumers, consumer) - - <-sigChan - - for _, consumer := range consumers { - consumer.Stop() - } - for _, consumer := range consumers { - <-consumer.StopChan - } - -} - -func (th *TailHandler) HandleMessage(m *nsq.Message) error { - th.messagesShown++ - - _, err := os.Stdout.WriteString(th.topicName) - if err != nil { - log.Fatalf("ERROR: failed to write to os.Stdout - %s", err) - } - _, err = os.Stdout.WriteString(" | ") - if err != nil { - log.Fatalf("ERROR: failed to write to os.Stdout - %s", err) - } - - _, err = os.Stdout.Write(m.Body) - if err != nil { - log.Fatalf("ERROR: failed to write to os.Stdout - %s", err) - } - _, err = os.Stdout.WriteString("\n") - if err != nil { - log.Fatalf("ERROR: failed to write to os.Stdout - %s", err) - } - if th.totalMessages > 0 && th.messagesShown >= th.totalMessages { - os.Exit(0) - } - return nil -} diff --git a/pgo/cmd/workflow.go b/pgo/cmd/workflow.go deleted file mode 100644 index 239fcd6ef2..0000000000 --- a/pgo/cmd/workflow.go +++ /dev/null @@ -1,61 +0,0 @@ -package cmd - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
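The watch command above tails operator events by attaching an ephemeral NSQ channel to a topic and printing every message body. A condensed sketch of that loop, assuming only the public go-nsq API; the topic `alltopic` and the address `localhost:14150` mirror the defaults shown in the deleted command, and error handling is trimmed:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/nsqio/go-nsq"
)

func main() {
	// an ephemeral channel is discarded by nsqd once this consumer disconnects,
	// which is how the deleted watch command tailed topics without leaving state behind
	consumer, err := nsq.NewConsumer("alltopic", "tail-demo#ephemeral", nsq.NewConfig())
	if err != nil {
		log.Fatal(err)
	}

	consumer.AddHandler(nsq.HandlerFunc(func(m *nsq.Message) error {
		fmt.Printf("alltopic | %s\n", m.Body)
		return nil
	}))

	// the address matches the --pgo-event-address default used above
	if err := consumer.ConnectToNSQD("localhost:14150"); err != nil {
		log.Fatal(err)
	}

	// stop cleanly on Ctrl-C, as the deleted watch loop does
	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
	<-sigChan

	consumer.Stop()
	<-consumer.StopChan
}
```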
-*/ - -import ( - "fmt" - "github.com/crunchydata/postgres-operator/pgo/api" - "github.com/crunchydata/postgres-operator/pgo/util" - msgs "github.com/crunchydata/postgres-operator/pkg/apiservermsgs" - log "github.com/sirupsen/logrus" - "os" -) - -func showWorkflow(args []string, ns string) { - log.Debugf("showWorkflow called %v", args) - - if len(args) < 1 { - fmt.Println("Error: workflow ID is a required parameter") - os.Exit(2) - } - - printWorkflow(args[0], ns) - -} - -func printWorkflow(id, ns string) { - - response, err := api.ShowWorkflow(httpclient, id, &SessionCredentials, ns) - - if err != nil { - fmt.Println("Error: " + err.Error()) - os.Exit(2) - } - - if response.Status.Code == msgs.Error { - fmt.Println("Error: " + response.Status.Msg) - return - } - - log.Debugf("response = %v", response) - - fmt.Printf("%s%s\n", util.Rpad("parameter", " ", 20), "value") - fmt.Printf("%s%s\n", util.Rpad("---------", " ", 20), "-----") - for k, v := range response.Results.Parameters { - fmt.Printf("%s%s\n", util.Rpad(k, " ", 20), v) - } - -} diff --git a/pgo/generatedocs.go b/pgo/generatedocs.go deleted file mode 100644 index daafa3224c..0000000000 --- a/pgo/generatedocs.go +++ /dev/null @@ -1,55 +0,0 @@ -package main - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "github.com/crunchydata/postgres-operator/pgo/cmd" - log "github.com/sirupsen/logrus" - "github.com/spf13/cobra/doc" - "path" - "path/filepath" - "strings" -) - -const fmTemplate = `--- -title: "%s" ---- -` - -func main() { - - fmt.Println("generate CLI markdown") - - filePrepender := func(filename string) string { - // now := time.Now().Format(time.RFC3339) - name := filepath.Base(filename) - base := strings.TrimSuffix(name, path.Ext(name)) - fmt.Println(base) - // url := "/commands/" + strings.ToLower(base) + "/" - return fmt.Sprintf(fmTemplate, strings.ReplaceAll(base, "_", " ")) - } - - linkHandler := func(name string) string { - base := strings.TrimSuffix(name, path.Ext(name)) - return "/pgo-client/reference/" + strings.ToLower(base) + "/" - } - - err := doc.GenMarkdownTreeCustom(cmd.RootCmd, "./", filePrepender, linkHandler) - if err != nil { - log.Fatal(err) - } -} diff --git a/pgo/pgo.go b/pgo/pgo.go deleted file mode 100644 index 6895b7863a..0000000000 --- a/pgo/pgo.go +++ /dev/null @@ -1,31 +0,0 @@ -package main - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" - "github.com/crunchydata/postgres-operator/pgo/cmd" - "os" -) - -func main() { - err := cmd.RootCmd.Execute() - if err != nil { - fmt.Println(err) - os.Exit(1) - } - -} diff --git a/pgo/util/confirmation.go b/pgo/util/confirmation.go deleted file mode 100644 index c227055cb1..0000000000 --- a/pgo/util/confirmation.go +++ /dev/null @@ -1,70 +0,0 @@ -package util - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" -) - -// AskForConfirmation uses Scanln to parse user input. A user must type in "yes" or "no" and -// then press enter. It has fuzzy matching, so "y", "Y", "yes", "YES", and "Yes" all count as -// confirmations. If the input is not recognized, it will ask again. The function does not return -// until it gets a valid response from the user. Typically, you should use fmt to print out a question -// before calling AskForConfirmation. E.g. fmt.Println("WARNING: Are you sure? (yes/no)") -func AskForConfirmation(NoPrompt bool, msg string) bool { - var response string - - if NoPrompt { - return true - } - if msg == "" { - fmt.Print("WARNING: Are you sure? (yes/no): ") - } else { - fmt.Print("WARNING - " + msg + " (yes/no): ") - } - - _, err := fmt.Scanln(&response) - if err != nil { - fmt.Println("Please type yes or no and then press enter:") - return AskForConfirmation(NoPrompt, msg) - } - okayResponses := []string{"y", "Y", "yes", "Yes", "YES"} - nokayResponses := []string{"n", "N", "no", "No", "NO", ""} - if containsString(okayResponses, response) { - return true - } else if containsString(nokayResponses, response) { - return false - } else { - fmt.Println("Please type yes or no and then press enter:") - return AskForConfirmation(NoPrompt, msg) - } -} - -// posString returns the first index of element in slice. -// If slice does not contain element, returns -1. -func posString(slice []string, element string) int { - for index, elem := range slice { - if elem == element { - return index - } - } - return -1 -} - -// containsString returns true iff slice contains element -func containsString(slice []string, element string) bool { - return !(posString(slice, element) == -1) -} diff --git a/pgo/util/pad.go b/pgo/util/pad.go deleted file mode 100644 index 276469471a..0000000000 --- a/pgo/util/pad.go +++ /dev/null @@ -1,31 +0,0 @@ -package util - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
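AskForConfirmation above is the guard used by destructive commands such as `pgo upgrade` and `pgo update pgbouncer`. The sketch below is a simplified stand-in showing the caller-side pattern; unlike the real helper, which re-prompts on unrecognized input, this version treats anything other than yes as a no:

```go
package main

import (
	"fmt"
	"strings"
)

// confirm is a simplified stand-in for the deleted pgo/util.AskForConfirmation:
// a --no-prompt style flag short-circuits to "yes", otherwise the typed answer
// is matched loosely.
func confirm(noPrompt bool, msg string) bool {
	if noPrompt {
		return true
	}
	fmt.Print("WARNING - " + msg + " (yes/no): ")

	var response string
	if _, err := fmt.Scanln(&response); err != nil {
		return false
	}

	switch strings.ToLower(strings.TrimSpace(response)) {
	case "y", "yes":
		return true
	default:
		return false
	}
}

func main() {
	if !confirm(false, "Are you sure you want to rotate the pgBouncer password?") {
		fmt.Println("Aborting...")
		return
	}
	fmt.Println("proceeding")
}
```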
-*/ - -import ( - "fmt" -) - -func Lpad(value, wid string) string { - return fmt.Sprintf("%"+wid+"s", value) -} - -func Rpad(value, pad string, plen int) string { - for i := len(value); i < plen; i++ { - value = value + pad - } - return value -} diff --git a/pgo/util/validation.go b/pgo/util/validation.go deleted file mode 100644 index 7d90f6a6ac..0000000000 --- a/pgo/util/validation.go +++ /dev/null @@ -1,56 +0,0 @@ -package util - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "regexp" - - log "github.com/sirupsen/logrus" - "k8s.io/apimachinery/pkg/api/resource" -) - -var validResourceName = regexp.MustCompile(`^[a-z0-9.\-]+$`).MatchString - -// validates whether a string meets requirements for a valid resource name for kubernetes. -// It can consist of lowercase alphanumeric characters, '-' and '.', per -// -// https://kubernetes.io/docs/concepts/overview/working-with-objects/names/ -// -func IsValidForResourceName(target string) bool { - - log.Debugf("IsValidForResourceName: %s", target) - - return validResourceName(target) -} - -// ValidateQuantity runs the Kubernetes "ParseQuantity" function on a string -// and determine whether or not it is a valid quantity object. Returns an error -// if it is invalid, along with the error message pertaining to the specific -// flag. -// -// Does nothing if no value is passed in -// -// See: https://github.com/kubernetes/apimachinery/blob/master/pkg/api/resource/quantity.go -func ValidateQuantity(quantity, flag string) error { - if quantity != "" { - if _, err := resource.ParseQuantity(quantity); err != nil { - return fmt.Errorf("Error: \"%s\" - %s", flag, err.Error()) - } - } - - return nil -} diff --git a/pkg/apis/crunchydata.com/v1/cluster.go b/pkg/apis/crunchydata.com/v1/cluster.go deleted file mode 100644 index e67d0f107d..0000000000 --- a/pkg/apis/crunchydata.com/v1/cluster.go +++ /dev/null @@ -1,339 +0,0 @@ -package v1 - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - - v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// PgclusterResourcePlural .. 
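The deleted validation.go above leans on Kubernetes' own parsers: a small regular expression for resource names and `resource.ParseQuantity` for quantity-valued flags. A short sketch of both checks, using the same character class and the apimachinery parser:

```go
package main

import (
	"fmt"
	"regexp"

	"k8s.io/apimachinery/pkg/api/resource"
)

// same character class the deleted IsValidForResourceName helper uses for
// Kubernetes resource names
var validResourceName = regexp.MustCompile(`^[a-z0-9.\-]+$`).MatchString

func main() {
	fmt.Println(validResourceName("mycluster"))  // true
	fmt.Println(validResourceName("My_Cluster")) // false: uppercase and underscore

	// ParseQuantity is what ValidateQuantity above relies on for quantity-valued flags
	for _, q := range []string{"2Gi", "500m", "lots"} {
		if _, err := resource.ParseQuantity(q); err != nil {
			fmt.Printf("%q is not a valid quantity: %v\n", q, err)
			continue
		}
		fmt.Printf("%q parses cleanly\n", q)
	}
}
```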
-const PgclusterResourcePlural = "pgclusters" - -// Pgcluster is the CRD that defines a Crunchy PG Cluster -// -// swagger:ignore Pgcluster -// +genclient -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object -type Pgcluster struct { - metav1.TypeMeta `json:",inline"` - metav1.ObjectMeta `json:"metadata"` - Spec PgclusterSpec `json:"spec"` - Status PgclusterStatus `json:"status,omitempty"` -} - -// PgclusterSpec is the CRD that defines a Crunchy PG Cluster Spec -// swagger:ignore -type PgclusterSpec struct { - Namespace string `json:"namespace"` - Name string `json:"name"` - ClusterName string `json:"clustername"` - Policies string `json:"policies"` - CCPImage string `json:"ccpimage"` - CCPImageTag string `json:"ccpimagetag"` - CCPImagePrefix string `json:"ccpimageprefix"` - PGOImagePrefix string `json:"pgoimageprefix"` - Port string `json:"port"` - PGBadgerPort string `json:"pgbadgerport"` - ExporterPort string `json:"exporterport"` - PrimaryStorage PgStorageSpec `json:primarystorage` - WALStorage PgStorageSpec `json:walstorage` - ArchiveStorage PgStorageSpec `json:archivestorage` - ReplicaStorage PgStorageSpec `json:replicastorage` - BackrestStorage PgStorageSpec `json:backreststorage` - // Resources behaves just like the "Requests" section of a Kubernetes - // container definition. You can set individual items such as "cpu" and - // "memory", e.g. "{ cpu: "0.5", memory: "2Gi" }" - Resources v1.ResourceList `json:"resources"` - // Limits stores the CPU/memory limits to use with PostgreSQL instances - // - // A long note on memory limits. - // - // We want to avoid the OOM killer coming for the PostgreSQL process or any - // of their backends per lots of guidance from the PostgreSQL documentation. - // Based on Kubernetes' behavior with limits, the best thing is to not set - // them. However, if they ever do set, we suggest that you have - // Request == Limit to get the Guaranteed QoS - // - // Guaranteed QoS prevents a backend from being first in line to be killed if - // the *Node* has memory pressure, but if there is, say - // a runaway client backend that causes the *Pod* to exceed its memory - // limit, a backend can still be killed by the OOM killer, which is not - // great. - // - // As such, given the choice, the preference is for the Pod to be evicted - // and have a failover event, vs. having an individual client backend killed - // and causing potential "bad things." 
- // - // For more info on PostgreSQL and Kubernetes memory management, see: - // - // https://www.postgresql.org/docs/current/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT - // https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run - Limits v1.ResourceList `json:"limits"` - // BackrestResources, if specified, contains the container request resources - // for the pgBackRest Deployment for this PostgreSQL cluster - BackrestResources v1.ResourceList `json:"backrestResources"` - // BackrestLimits, if specified, contains the container resource limits - // for the pgBackRest Deployment for this PostgreSQL cluster - BackrestLimits v1.ResourceList `json:"backrestLimits"` - // ExporterResources, if specified, contains the container request resources - // for the Crunchy Postgres Exporter Deployment for this PostgreSQL cluster - ExporterResources v1.ResourceList `json:"exporterResources"` - // ExporterLimits, if specified, contains the container resource limits - // for the Crunchy Postgres Exporter Deployment for this PostgreSQL cluster - ExporterLimits v1.ResourceList `json:"exporterLimits"` - - // PgBouncer contains all of the settings to properly maintain a pgBouncer - // implementation - PgBouncer PgBouncerSpec `json:"pgBouncer"` - User string `json:"user"` - Database string `json:"database"` - Replicas string `json:"replicas"` - UserSecretName string `json:"usersecretname"` - RootSecretName string `json:"rootsecretname"` - PrimarySecretName string `json:"primarysecretname"` - CollectSecretName string `json:"collectSecretName"` - Status string `json:"status"` - CustomConfig string `json:"customconfig"` - UserLabels map[string]string `json:"userlabels"` - PodAntiAffinity PodAntiAffinitySpec `json:"podAntiAffinity"` - SyncReplication *bool `json:"syncReplication"` - BackrestConfig []v1.VolumeProjection `json:"backrestConfig"` - BackrestS3Bucket string `json:"backrestS3Bucket"` - BackrestS3Region string `json:"backrestS3Region"` - BackrestS3Endpoint string `json:"backrestS3Endpoint"` - BackrestS3URIStyle string `json:"backrestS3URIStyle"` - BackrestS3VerifyTLS string `json:"backrestS3VerifyTLS"` - BackrestRepoPath string `json:"backrestRepoPath"` - TablespaceMounts map[string]PgStorageSpec `json:"tablespaceMounts"` - TLS TLSSpec `json:"tls"` - TLSOnly bool `json:"tlsOnly"` - Standby bool `json:"standby"` - Shutdown bool `json:"shutdown"` - PGDataSource PGDataSourceSpec `json:"pgDataSource"` - - // Annotations contains a set of Deployment (and by association, Pod) - // annotations that are propagated to all managed Deployments - Annotations ClusterAnnotations `json:"annotations"` -} - -// ClusterAnnotations provides a set of annotations that can be propagated to -// the managed deployments. 
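The Resources/Limits comment above recommends setting requests equal to limits so the Pod lands in the Guaranteed QoS class whenever memory limits are used at all. A minimal sketch of such an equal pair, using only core/v1 and apimachinery types:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// identical requests and limits give the Pod the Guaranteed QoS class,
	// which is the trade-off the spec comment above recommends when limits
	// are set at all
	requests := v1.ResourceList{
		v1.ResourceCPU:    resource.MustParse("500m"),
		v1.ResourceMemory: resource.MustParse("2Gi"),
	}
	limits := v1.ResourceList{
		v1.ResourceCPU:    resource.MustParse("500m"),
		v1.ResourceMemory: resource.MustParse("2Gi"),
	}

	fmt.Println("requests:", requests.Cpu(), requests.Memory())
	fmt.Println("limits:  ", limits.Cpu(), limits.Memory())
}
```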
These are subdivided into four categories, which -// are explained further below: -// -// - Global -// - Postgres -// - Backrest -// - PgBouncer -type ClusterAnnotations struct { - // Backrest annotations will be propagated **only** to the pgBackRest managed - // Deployments - Backrest map[string]string `json:"backrest"` - - // Global annotations will be propagated to **all** managed Deployments - Global map[string]string `json:"global"` - - // PgBouncer annotations will be propagated **only** to the PgBouncer managed - // Deployments - PgBouncer map[string]string `json:"pgBouncer"` - - // Postgres annotations will be propagated **only** to the PostgreSQL managed - // deployments - Postgres map[string]string `json:"postgres"` -} - -// ClusterAnnotationType just helps with the various cluster annotation types -// available -type ClusterAnnotationType int - -// the following constants help with selecting which annotations we may want to -// apply to a particular Deployment -const ( - // ClusterAnnotationGlobal indicates to apply the annotation regardless of - // deployment type - ClusterAnnotationGlobal ClusterAnnotationType = iota - // ClusterAnnotationPostgres indicates to apply the annotation to the - // PostgreSQL deployments - ClusterAnnotationPostgres - // ClusterAnnotationBackrest indicates to apply the annotation to the - // pgBackRest deployments - ClusterAnnotationBackrest - // ClusterAnnotationPgBouncer indicates to apply the annotation to the - // pgBouncer deployments - ClusterAnnotationPgBouncer -) - -// PGDataSourceSpec defines the data source that should be used to populate the initial PGDATA -// directory when bootstrapping a new PostgreSQL cluster -// swagger:ignore -type PGDataSourceSpec struct { - RestoreFrom string `json:"restoreFrom"` - RestoreOpts string `json:"restoreOpts"` -} - -// PgclusterList is the CRD that defines a Crunchy PG Cluster List -// swagger:ignore -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object -type PgclusterList struct { - metav1.TypeMeta `json:",inline"` - metav1.ListMeta `json:"metadata"` - - Items []Pgcluster `json:"items"` -} - -// PgclusterStatus is the CRD that defines PG Cluster Status -// swagger:ignore -type PgclusterStatus struct { - State PgclusterState `json:"state,omitempty"` - Message string `json:"message,omitempty"` -} - -// PgclusterState is the crd that defines PG Cluster Stage -// swagger:ignore -type PgclusterState string - -// PodAntiAffinityDeployment distinguishes between the different types of -// Deployments that can leverage PodAntiAffinity -type PodAntiAffinityDeployment int - -// PodAntiAffinityType defines the different types of type of anti-affinity rules applied to pg -// clusters when utilizing the default pod anti-affinity rules provided by the PostgreSQL Operator, -// which are enabled for a new pg cluster by default. Valid Values include "required" for -// requiredDuringSchedulingIgnoredDuringExecution anti-affinity, "preferred" for -// preferredDuringSchedulingIgnoredDuringExecution anti-affinity, and "disabled" to disable the -// default pod anti-affinity rules for the pg cluster all together. -type PodAntiAffinityType string - -// PodAntiAffinitySpec provides multiple configurations for how pod -// anti-affinity can be set. 
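ClusterAnnotations above separates a Global map, which reaches every managed Deployment, from per-deployment maps for Postgres, pgBackRest, and pgBouncer. The sketch below illustrates that propagation rule with a plain map merge; the precedence on conflicting keys is an assumption here, not the operator's documented behavior:

```go
package main

import "fmt"

// mergeAnnotations illustrates the propagation rule described above: global
// annotations apply to every managed Deployment, while the per-deployment map
// (postgres, backrest, or pgBouncer) is layered on top. This is an
// illustrative stand-in, not the operator's actual merge code.
func mergeAnnotations(global, specific map[string]string) map[string]string {
	merged := map[string]string{}
	for k, v := range global {
		merged[k] = v
	}
	for k, v := range specific {
		merged[k] = v
	}
	return merged
}

func main() {
	global := map[string]string{"team": "dba", "tier": "database"}
	postgres := map[string]string{"tier": "postgres-primary"}

	fmt.Println(mergeAnnotations(global, postgres))
	// map[team:dba tier:postgres-primary]
}
```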
-// - "Default" is the default rule that applies to all Pods that are a part of -// the PostgreSQL cluster -// - "PgBackrest" applies just to the pgBackRest repository Pods in said -// Deployment -// - "PgBouncer" applies to just pgBouncer Pods in said Deployment -// swaggier:ignore -type PodAntiAffinitySpec struct { - Default PodAntiAffinityType `json:"default"` - PgBackRest PodAntiAffinityType `json:"pgBackRest"` - PgBouncer PodAntiAffinityType `json:"pgBouncer"` -} - -// PgBouncerSpec is a struct that is used within the Cluster specification that -// provides the attributes for managing a PgBouncer implementation, including: -// - is it enabled? -// - what resources it should consume -// - the total number of replicas -type PgBouncerSpec struct { - // Replicas represents the total number of Pods to deploy with pgBouncer, - // which effectively enables/disables the pgBouncer. - // - // if it is set to 0 or less, it is disabled. - // - // if it is set to 1 or more, it is enabled - Replicas int32 `json:"replicas"` - // Resources, if specified, contains the container request resources - // for any pgBouncer Deployments that are part of a PostgreSQL cluster - Resources v1.ResourceList `json:"resources"` - // Limits, if specified, contains the container resource limits - // for any pgBouncer Deployments that are part of a PostgreSQL cluster - Limits v1.ResourceList `json:"limits"` -} - -// Enabled returns true if the pgBouncer is enabled for the cluster, i.e. there -// is at least one replica set -func (s *PgBouncerSpec) Enabled() bool { - return s.Replicas > 0 -} - -// TLSSpec contains the information to set up a TLS-enabled PostgreSQL cluster -type TLSSpec struct { - // CASecret contains the name of the secret to use as the trusted CA for the - // TLSSecret - // This is our own format and should contain at least one key: "ca.crt" - // It can also contain a key "ca.crl" which is the certificate revocation list - CASecret string `json:"caSecret"` - // ReplicationTLSSecret contains the name of the secret that specifies a TLS - // keypair that can be used by the replication user (e.g. "primaryuser") to - // perform certificate based authentication between replicas. - // The keypair must be considered valid by the CA specified in the CASecret - ReplicationTLSSecret string `json:"replicationTLSSecret"` - // TLSSecret contains the name of the secret to use that contains the TLS - // keypair for the PostgreSQL server - // This follows the Kubernetes secret format ("kubernetes.io/tls") which has - // two keys: tls.crt and tls.key - TLSSecret string `json:"tlsSecret"` -} - -// IsTLSEnabled returns true if the cluster is TLS enabled, i.e. both the TLS -// secret name and the CA secret name are available -func (t TLSSpec) IsTLSEnabled() bool { - return (t.TLSSecret != "" && t.CASecret != "") -} - -const ( - // PgclusterStateCreated ... - PgclusterStateCreated PgclusterState = "pgcluster Created" - // PgclusterStateProcessed ... - PgclusterStateProcessed PgclusterState = "pgcluster Processed" - // PgclusterStateInitialized ... 
- PgclusterStateInitialized PgclusterState = "pgcluster Initialized" - // PgclusterStateBootstrapping defines the state of a cluster when it is being bootstrapped - // from an existing data source - PgclusterStateBootstrapping PgclusterState = "pgcluster Bootstrapping" - // PgclusterStateBootstrapped defines the state of a cluster when it has been bootstrapped - // successfully from an existing data source - PgclusterStateBootstrapped PgclusterState = "pgcluster Bootstrapped" - // PgclusterStateRestore ... - PgclusterStateRestore PgclusterState = "pgcluster Restoring" - // PgclusterStateShutdown indicates that the cluster has been shut down (i.e. the primary) - // deployment has been scaled to 0 - PgclusterStateShutdown PgclusterState = "pgcluster Shutdown" - - // PodAntiAffinityRequired results in requiredDuringSchedulingIgnoredDuringExecution for any - // default pod anti-affinity rules applied to pg custers - PodAntiAffinityRequired PodAntiAffinityType = "required" - - // PodAntiAffinityPreffered results in preferredDuringSchedulingIgnoredDuringExecution for any - // default pod anti-affinity rules applied to pg custers - PodAntiAffinityPreffered PodAntiAffinityType = "preferred" - - // PodAntiAffinityDisabled disables any default pod anti-affinity rules applied to pg custers - PodAntiAffinityDisabled PodAntiAffinityType = "disabled" -) - -// The list of different types of PodAntiAffinityDeployments -const ( - PodAntiAffinityDeploymentDefault PodAntiAffinityDeployment = iota - PodAntiAffinityDeploymentPgBackRest - PodAntiAffinityDeploymentPgBouncer -) - -// ValidatePodAntiAffinityType is responsible for validating whether or not the type of pod -// anti-affinity specified is valid -func (p PodAntiAffinityType) Validate() error { - switch p { - case - PodAntiAffinityRequired, - PodAntiAffinityPreffered, - PodAntiAffinityDisabled, - "": - return nil - } - return fmt.Errorf("Invalid pod anti-affinity type. Valid values are '%s', '%s' or '%s'", - PodAntiAffinityRequired, PodAntiAffinityPreffered, PodAntiAffinityDisabled) -} diff --git a/pkg/apis/crunchydata.com/v1/common.go b/pkg/apis/crunchydata.com/v1/common.go deleted file mode 100644 index 723c2a0a60..0000000000 --- a/pkg/apis/crunchydata.com/v1/common.go +++ /dev/null @@ -1,148 +0,0 @@ -package v1 - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "strconv" - "strings" - - log "github.com/sirupsen/logrus" -) - -// RootSecretSuffix ... -const RootSecretSuffix = "-postgres-secret" - -// UserSecretSuffix ... -const UserSecretSuffix = "-secret" - -// PrimarySecretSuffix ... -const PrimarySecretSuffix = "-primaryuser-secret" - -// ExporterSecretSuffix ... -const ExporterSecretSuffix = "-exporter-secret" - -// StorageExisting ... -const StorageExisting = "existing" - -// StorageCreate ... -const StorageCreate = "create" - -// StorageEmptydir ... -const StorageEmptydir = "emptydir" - -// StorageDynamic ... 
-const StorageDynamic = "dynamic" - -// the following are standard PostgreSQL user service accounts that are created -// as part of managed the PostgreSQL cluster environment via the Operator -const ( - // PGUserAdmin is a special user that can perform administrative actions - // without being a superuser itself - PGUserAdmin = "crunchyadm" - // PGUserMonitor is the monitoring user that can access metric data - PGUserMonitor = "ccp_monitoring" - // PGUserPgBouncer is the user that's used for managing pgBouncer, which a - // user can use to access pgBouncer stats, etc. - PGUserPgBouncer = "pgbouncer" - // PGUserReplication is the user that's used for replication, which has - // elevated privileges - PGUserReplication = "primaryuser" - // PGUserSuperuser is the superuser account that can do anything - PGUserSuperuser = "postgres" -) - -// PGFSGroup stores the UID of the PostgreSQL user that runs the PostgreSQL -// process, which is 26. This also sets up for future work, as the -// PodSecurityContext structure takes a *int64 for its FSGroup -// -// This has to be a "var" as Kubernetes requires for this to be a pointer -var PGFSGroup int64 = 26 - -// PGUserSystemAccounts maintains an easy-to-access list of what the systems -// accounts are, which may affect how information is returned, etc. -var PGUserSystemAccounts = map[string]struct{}{ - PGUserAdmin: {}, - PGUserMonitor: {}, - PGUserPgBouncer: {}, - PGUserReplication: {}, - PGUserSuperuser: {}, -} - -// PgStorageSpec ... -// swagger:ignore -type PgStorageSpec struct { - Name string `json:"name"` - StorageClass string `json:"storageclass"` - AccessMode string `json:"accessmode"` - Size string `json:"size"` - StorageType string `json:"storagetype"` - SupplementalGroups string `json:"supplementalgroups"` - MatchLabels string `json:"matchLabels"` -} - -// GetSupplementalGroups converts the comma-separated list of SupplementalGroups -// into a slice of int64 IDs. If it errors, it returns an empty slice and logs -// a warning -func (s PgStorageSpec) GetSupplementalGroups() []int64 { - supplementalGroups := []int64{} - - // split the supplemental group list - results := strings.Split(s.SupplementalGroups, ",") - - // iterate through the results and try to append to the supplementalGroups - // array - for _, result := range results { - result = strings.TrimSpace(result) - - // if the result is the empty string (likely because there are no - // supplemental groups), continue on - if result == "" { - continue - } - - supplementalGroup, err := strconv.Atoi(result) - - // if there is an error, only warn about it and continue through the loop - if err != nil { - log.Warnf("malformed storage supplemental group: %v", err) - continue - } - - // convert the int to an int64 to match the Kubernetes spec, and append to - // the supplementalGroups slice - supplementalGroups = append(supplementalGroups, int64(supplementalGroup)) - } - - return supplementalGroups -} - -// CompletedStatus - -const CompletedStatus = "completed" - -// InProgressStatus - -const InProgressStatus = "in progress" - -// SubmittedStatus - -const SubmittedStatus = "submitted" - -// JobCompletedStatus .... -const JobCompletedStatus = "job completed" - -// JobSubmittedStatus .... -const JobSubmittedStatus = "job submitted" - -// JobErrorStatus .... 
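PGUserSystemAccounts above is the lookup the CLI relies on when it refuses to create or update reserved users such as `postgres`. A self-contained sketch of that membership check, with the same five account names copied from the map above:

```go
package main

import "fmt"

// reservedAccounts mirrors the PGUserSystemAccounts map above: these are the
// PostgreSQL service accounts managed by the operator itself, so the deleted
// createUser/updateUser commands refuse to act on them.
var reservedAccounts = map[string]struct{}{
	"crunchyadm":     {},
	"ccp_monitoring": {},
	"pgbouncer":      {},
	"primaryuser":    {},
	"postgres":       {},
}

func isSystemAccount(username string) bool {
	_, ok := reservedAccounts[username]
	return ok
}

func main() {
	for _, u := range []string{"postgres", "someuser"} {
		fmt.Printf("%-10s system account: %v\n", u, isSystemAccount(u))
	}
}
```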
-const JobErrorStatus = "job error" diff --git a/pkg/apis/crunchydata.com/v1/common_test.go b/pkg/apis/crunchydata.com/v1/common_test.go deleted file mode 100644 index 8ad909e64f..0000000000 --- a/pkg/apis/crunchydata.com/v1/common_test.go +++ /dev/null @@ -1,54 +0,0 @@ -package v1 - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "reflect" - "testing" -) - -func TestPgStorageSpecGetSupplementalGroups(t *testing.T) { - { - groups := PgStorageSpec{}.GetSupplementalGroups() - if len(groups) != 0 { - t.Errorf("expected none, got %v", groups) - } - } - { - groups := PgStorageSpec{SupplementalGroups: "99"}.GetSupplementalGroups() - if expected := []int64{99}; !reflect.DeepEqual(expected, groups) { - t.Errorf("expected %v, got %v", expected, groups) - } - } - { - groups := PgStorageSpec{SupplementalGroups: "7,8,9"}.GetSupplementalGroups() - if expected := []int64{7, 8, 9}; !reflect.DeepEqual(expected, groups) { - t.Errorf("expected %v, got %v", expected, groups) - } - } - { - groups := PgStorageSpec{SupplementalGroups: " "}.GetSupplementalGroups() - if len(groups) != 0 { - t.Errorf("expected none, got %v", groups) - } - } - { - groups := PgStorageSpec{SupplementalGroups: ", 5 "}.GetSupplementalGroups() - if expected := []int64{5}; !reflect.DeepEqual(expected, groups) { - t.Errorf("expected %v, got %v", expected, groups) - } - } -} diff --git a/pkg/apis/crunchydata.com/v1/doc.go b/pkg/apis/crunchydata.com/v1/doc.go deleted file mode 100644 index 3d5c49cd25..0000000000 --- a/pkg/apis/crunchydata.com/v1/doc.go +++ /dev/null @@ -1,123 +0,0 @@ -/* -Crunchy PostgreSQL Operator API - -The Crunchy PostgreSQL Operator API defines HTTP(S) interactions with the Crunchy PostgreSQL Operator. - - -## Direct API Calls - -The API can also be accessed by interacting directly with the API server. This -can be done by making HTTP requests with curl to get information from the -server. In order to make these calls you will need to provide certificates along -with your request using the `--cacert`, `--key`, and `--cert` flags. Next you -will need to provide the username and password for the RBAC along with a header -that includes the content type and the `--insecure` flag. These flags will be -the same for all of your interactions with the API server and can be seen in the -following examples. - - -###### Get API Server Version - -The most basic example of this interaction is getting the version of the API -server. You can send a GET request to `$PGO_APISERVER_URL/version` and this will -send back a json response including the API server version. You must specify the -client version that matches the API server version as part of the request. - -The API server is setup to work with the pgo command line interface so the -parameters that are passed to the server can be found by looking at the related -flags. 
-``` -curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \ -admin:examplepassword -H "Content-Type:application/json" --insecure -X \ -GET $PGO_APISERVER_URL/version -``` - -#### Body examples -In the following examples data is being passed to the apiserver using a json -structure. These json structures are defined in the following documentation. - -``` -curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \ -admin:examplepassword -H "Content-Type:application/json" --insecure -X GET \ -"$PGO_APISERVER_URL/workflow/?version=&namespace=" -``` - -###### Create Cluster -You can create a cluster by sending a POST request to -`$PGO_APISERVER_URL/clusters`. In this example `--data` is being sent to the -API URL that includes the client version that was returned from the version -call, the namespace where the cluster should be created, and the name of the new -cluster. - -``` -curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \ -admin:examplepassword -H "Content-Type:application/json" --insecure -X \ -POST --data \ - '{"ClientVersion":"4.5.0", - "Namespace":"pgouser1", - "Name":"mycluster", -$PGO_APISERVER_URL/clusters -``` - - - -###### Show and Delete Cluster -The last two examples show you how to `show` and `delete` a cluster. Notice -how instead of passing `"Name":"mycluster"` you pass `"Clustername":"mycluster" -to reference a cluster that has already been created. For the show cluster -example you can replace `"Clustername":"mycluster"` with `"AllFlag":true` to -show all of the clusters that are in the given namespace. - -``` -curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \ -admin:examplepassword -H "Content-Type:application/json" --insecure -X \ -POST --data \ - '{"ClientVersion":"4.5.0", - "Namespace":"pgouser1", - "Clustername":"mycluster"}' \ -$PGO_APISERVER_URL/showclusters -``` - -``` -curl --cacert $PGO_CA_CERT --key $PGO_CLIENT_KEY --cert $PGO_CA_CERT -u \ -admin:examplepassword -H "Content-Type:application/json" --insecure -X \ -POST --data \ - '{"ClientVersion":"4.5.0", - "Namespace":"pgouser1", - "Clustername":"mycluster"}' \ -$PGO_APISERVER_URL/clustersdelete -``` - - Schemes: http, https - BasePath: / - Version: 4.5.0 - License: Apache 2.0 http://www.apache.org/licenses/LICENSE-2.0 - Contact: Crunchy Data https://www.crunchydata.com/ - - - Consumes: - - application/json - - Produces: - - application/json - -swagger:meta -*/ -package v1 - -// +k8s:deepcopy-gen=package,register - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ diff --git a/pkg/apis/crunchydata.com/v1/policy.go b/pkg/apis/crunchydata.com/v1/policy.go deleted file mode 100644 index 28347f9950..0000000000 --- a/pkg/apis/crunchydata.com/v1/policy.go +++ /dev/null @@ -1,73 +0,0 @@ -package v1 - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
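The API documentation above shows the direct curl call against `$PGO_APISERVER_URL/version` with client certificates, basic auth, and a JSON content type. Below is a rough Go equivalent using only the standard library; it reuses the environment variable names and the `admin:examplepassword` credentials from the curl examples, skips server certificate verification just as `--insecure` does, and trims most error handling:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// matches the --cert/--key flags used in the curl examples above
	cert, err := tls.LoadX509KeyPair(os.Getenv("PGO_CA_CERT"), os.Getenv("PGO_CLIENT_KEY"))
	if err != nil {
		log.Fatal(err)
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates:       []tls.Certificate{cert},
				InsecureSkipVerify: true, // mirrors curl's --insecure flag above
			},
		},
	}

	req, err := http.NewRequest(http.MethodGet, os.Getenv("PGO_APISERVER_URL")+"/version", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.SetBasicAuth("admin", "examplepassword")
	req.Header.Set("Content-Type", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```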
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// PgpolicyResourcePlural ... -const PgpolicyResourcePlural = "pgpolicies" - -// PgpolicySpec ... -// swagger:ignore -type PgpolicySpec struct { - Namespace string `json:"namespace"` - Name string `json:"name"` - URL string `json:"url"` - SQL string `json:"sql"` - Status string `json:"status"` -} - -// Pgpolicy ... -// swagger:ignore -// +genclient -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object -type Pgpolicy struct { - metav1.TypeMeta `json:",inline"` - metav1.ObjectMeta `json:"metadata"` - - Spec PgpolicySpec `json:"spec"` - Status PgpolicyStatus `json:"status,omitempty"` -} - -// PgpolicyList ... -// swagger:ignore -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object -type PgpolicyList struct { - metav1.TypeMeta `json:",inline"` - metav1.ListMeta `json:"metadata"` - - Items []Pgpolicy `json:"items"` -} - -// PgpolicyStatus ... -// swagger:ignore -type PgpolicyStatus struct { - State PgpolicyState `json:"state,omitempty"` - Message string `json:"message,omitempty"` -} - -// PgpolicyState ... -// swagger:ignore -type PgpolicyState string - -const ( - // PgpolicyStateCreated ... - PgpolicyStateCreated PgpolicyState = "pgpolicy Created" - // PgpolicyStateProcessed ... - PgpolicyStateProcessed PgpolicyState = "pgpolicy Processed" -) diff --git a/pkg/apis/crunchydata.com/v1/register.go b/pkg/apis/crunchydata.com/v1/register.go deleted file mode 100644 index 00db119bda..0000000000 --- a/pkg/apis/crunchydata.com/v1/register.go +++ /dev/null @@ -1,63 +0,0 @@ -package v1 - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/runtime/schema" -) - -var ( - // SchemeBuilder ... - SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes) - // AddToScheme ... - AddToScheme = SchemeBuilder.AddToScheme -) - -// GroupName is the group name used in this package. -//const GroupName = "cr.client-go.k8s.io" -const GroupName = "crunchydata.com" - -// SchemeGroupVersion is the group version used to register these objects. -var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1"} - -// Kind takes an unqualified kind and returns back a Group qualified GroupKind -func Kind(kind string) schema.GroupKind { - return SchemeGroupVersion.WithKind(kind).GroupKind() -} - -// Resource takes an unqualified resource and returns a Group-qualified GroupResource. 
-func Resource(resource string) schema.GroupResource { - return SchemeGroupVersion.WithResource(resource).GroupResource() -} - -// addKnownTypes adds the set of types defined in this package to the supplied scheme. -func addKnownTypes(scheme *runtime.Scheme) error { - scheme.AddKnownTypes(SchemeGroupVersion, - &Pgcluster{}, - &PgclusterList{}, - &Pgreplica{}, - &PgreplicaList{}, - &Pgpolicy{}, - &PgpolicyList{}, - &Pgtask{}, - &PgtaskList{}, - ) - metav1.AddToGroupVersion(scheme, SchemeGroupVersion) - return nil -} diff --git a/pkg/apis/crunchydata.com/v1/replica.go b/pkg/apis/crunchydata.com/v1/replica.go deleted file mode 100644 index 386fa033d0..0000000000 --- a/pkg/apis/crunchydata.com/v1/replica.go +++ /dev/null @@ -1,77 +0,0 @@ -package v1 - -/* - Copyright 2018 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// PgreplicaResourcePlural .. -const PgreplicaResourcePlural = "pgreplicas" - -// Pgreplica .. -// swagger:ignore -// +genclient -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object -type Pgreplica struct { - metav1.TypeMeta `json:",inline"` - metav1.ObjectMeta `json:"metadata"` - Spec PgreplicaSpec `json:"spec"` - Status PgreplicaStatus `json:"status,omitempty"` -} - -// PgreplicaSpec ... -// swagger:ignore -type PgreplicaSpec struct { - Namespace string `json:"namespace"` - Name string `json:"name"` - ClusterName string `json:"clustername"` - ReplicaStorage PgStorageSpec `json:"replicastorage"` - Status string `json:"status"` - UserLabels map[string]string `json:"userlabels"` -} - -// PgreplicaList ... -// swagger:ignore -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object -type PgreplicaList struct { - metav1.TypeMeta `json:",inline"` - metav1.ListMeta `json:"metadata"` - - Items []Pgreplica `json:"items"` -} - -// PgreplicaStatus ... -// swagger:ignore -type PgreplicaStatus struct { - State PgreplicaState `json:"state,omitempty"` - Message string `json:"message,omitempty"` -} - -// PgreplicaState ... -// swagger:ignore -type PgreplicaState string - -const ( - // PgreplicaStateCreated ... - PgreplicaStateCreated PgreplicaState = "pgreplica Created" - // PgreplicaStatePending ... - PgreplicaStatePendingInit PgreplicaState = "pgreplica Pending init" - // PgreplicaStatePendingRestore ... - PgreplicaStatePendingRestore PgreplicaState = "pgreplica Pending restore" - // PgreplicaStateProcessed ... - PgreplicaStateProcessed PgreplicaState = "pgreplica Processed" -) diff --git a/pkg/apis/crunchydata.com/v1/task.go b/pkg/apis/crunchydata.com/v1/task.go deleted file mode 100644 index 5e54d97ab3..0000000000 --- a/pkg/apis/crunchydata.com/v1/task.go +++ /dev/null @@ -1,136 +0,0 @@ -package v1 - -/* - Copyright 2017 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// PgtaskResourcePlural ... -const PgtaskResourcePlural = "pgtasks" - -const PgtaskDeleteBackups = "delete-backups" -const PgtaskDeleteData = "delete-data" -const PgtaskFailover = "failover" -const PgtaskAutoFailover = "autofailover" -const PgtaskAddPolicies = "addpolicies" - -const PgtaskUpgrade = "clusterupgrade" -const PgtaskUpgradeCreated = "cluster upgrade - task created" -const PgtaskUpgradeInProgress = "cluster upgrade - in progress" - -const PgtaskPgAdminAdd = "add-pgadmin" -const PgtaskPgAdminDelete = "delete-pgadmin" - -const PgtaskWorkflow = "workflow" -const PgtaskWorkflowCloneType = "cloneworkflow" -const PgtaskWorkflowCreateClusterType = "createcluster" -const PgtaskWorkflowBackrestRestoreType = "pgbackrestrestore" -const PgtaskWorkflowBackupType = "backupworkflow" -const PgtaskWorkflowSubmittedStatus = "task submitted" -const PgtaskWorkflowCompletedStatus = "task completed" -const PgtaskWorkflowID = "workflowid" - -const PgtaskWorkflowBackrestRestorePVCCreatedStatus = "restored PVC created" -const PgtaskWorkflowBackrestRestorePrimaryCreatedStatus = "restored Primary created" -const PgtaskWorkflowBackrestRestoreJobCreatedStatus = "restore job created" - -const PgtaskWorkflowCloneCreatePVC = "clone 1.1: create pvc" -const PgtaskWorkflowCloneSyncRepo = "clone 1.2: sync pgbackrest repo" -const PgtaskWorkflowCloneRestoreBackup = "clone 2: restoring backup" -const PgtaskWorkflowCloneClusterCreate = "clone 3: cluster creating" - -const PgtaskBackrest = "backrest" -const PgtaskBackrestBackup = "backup" -const PgtaskBackrestInfo = "info" -const PgtaskBackrestRestore = "restore" -const PgtaskBackrestStanzaCreate = "stanza-create" - -const PgtaskpgDump = "pgdump" -const PgtaskpgDumpBackup = "pgdumpbackup" -const PgtaskpgDumpInfo = "pgdumpinfo" -const PgtaskpgRestore = "pgrestore" - -const PgtaskCloneStep1 = "clone-step1" // performs a pgBackRest repo sync -const PgtaskCloneStep2 = "clone-step2" // performs a pgBackRest restore -const PgtaskCloneStep3 = "clone-step3" // creates the Pgcluster - -// this is ported over from legacy backup code -const PgBackupJobSubmitted = "Backup Job Submitted" - -// Defines the types of pgBackRest backups that are taken throughout a clusters -// lifecycle -const ( - // this type of backup is taken following a failover event - BackupTypeFailover string = "failover" - // this type of backup is taken when a new cluster is being bootstrapped - BackupTypeBootstrap string = "bootstrap" -) - -// BackrestStorageTypes defines the valid types of storage that can be utilized -// with pgBackRest -var BackrestStorageTypes = []string{"local", "s3"} - -// PgtaskSpec ... -// swagger:ignore -type PgtaskSpec struct { - Namespace string `json:"namespace"` - Name string `json:"name"` - StorageSpec PgStorageSpec `json:"storagespec"` - TaskType string `json:"tasktype"` - Status string `json:"status"` - Parameters map[string]string `json:"parameters"` -} - -// Pgtask ... 
-// swagger:ignore -// +genclient -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object -type Pgtask struct { - metav1.TypeMeta `json:",inline"` - metav1.ObjectMeta `json:"metadata"` - - Spec PgtaskSpec `json:"spec"` - Status PgtaskStatus `json:"status,omitempty"` -} - -// PgtaskList ... -// swagger:ignore -// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object -type PgtaskList struct { - metav1.TypeMeta `json:",inline"` - metav1.ListMeta `json:"metadata"` - - Items []Pgtask `json:"items"` -} - -// PgtaskStatus ... -// swagger:ignore -type PgtaskStatus struct { - State PgtaskState `json:"state,omitempty"` - Message string `json:"message,omitempty"` -} - -// PgtaskState ... -// swagger:ignore -type PgtaskState string - -const ( - // PgtaskStateCreated ... - PgtaskStateCreated PgtaskState = "pgtask Created" - // PgtaskStateProcessed ... - PgtaskStateProcessed PgtaskState = "pgtask Processed" -) diff --git a/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go b/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go deleted file mode 100644 index 80fd389e4f..0000000000 --- a/pkg/apis/crunchydata.com/v1/zz_generated.deepcopy.go +++ /dev/null @@ -1,629 +0,0 @@ -// +build !ignore_autogenerated - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by deepcopy-gen. DO NOT EDIT. - -package v1 - -import ( - corev1 "k8s.io/api/core/v1" - runtime "k8s.io/apimachinery/pkg/runtime" -) - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *ClusterAnnotations) DeepCopyInto(out *ClusterAnnotations) { - *out = *in - if in.Backrest != nil { - in, out := &in.Backrest, &out.Backrest - *out = make(map[string]string, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - } - if in.Global != nil { - in, out := &in.Global, &out.Global - *out = make(map[string]string, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - } - if in.PgBouncer != nil { - in, out := &in.PgBouncer, &out.PgBouncer - *out = make(map[string]string, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - } - if in.Postgres != nil { - in, out := &in.Postgres, &out.Postgres - *out = make(map[string]string, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterAnnotations. -func (in *ClusterAnnotations) DeepCopy() *ClusterAnnotations { - if in == nil { - return nil - } - out := new(ClusterAnnotations) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PGDataSourceSpec) DeepCopyInto(out *PGDataSourceSpec) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGDataSourceSpec. 
-func (in *PGDataSourceSpec) DeepCopy() *PGDataSourceSpec { - if in == nil { - return nil - } - out := new(PGDataSourceSpec) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgBouncerSpec) DeepCopyInto(out *PgBouncerSpec) { - *out = *in - if in.Resources != nil { - in, out := &in.Resources, &out.Resources - *out = make(corev1.ResourceList, len(*in)) - for key, val := range *in { - (*out)[key] = val.DeepCopy() - } - } - if in.Limits != nil { - in, out := &in.Limits, &out.Limits - *out = make(corev1.ResourceList, len(*in)) - for key, val := range *in { - (*out)[key] = val.DeepCopy() - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgBouncerSpec. -func (in *PgBouncerSpec) DeepCopy() *PgBouncerSpec { - if in == nil { - return nil - } - out := new(PgBouncerSpec) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgStorageSpec) DeepCopyInto(out *PgStorageSpec) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgStorageSpec. -func (in *PgStorageSpec) DeepCopy() *PgStorageSpec { - if in == nil { - return nil - } - out := new(PgStorageSpec) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *Pgcluster) DeepCopyInto(out *Pgcluster) { - *out = *in - out.TypeMeta = in.TypeMeta - in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) - in.Spec.DeepCopyInto(&out.Spec) - out.Status = in.Status - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Pgcluster. -func (in *Pgcluster) DeepCopy() *Pgcluster { - if in == nil { - return nil - } - out := new(Pgcluster) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *Pgcluster) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgclusterList) DeepCopyInto(out *PgclusterList) { - *out = *in - out.TypeMeta = in.TypeMeta - in.ListMeta.DeepCopyInto(&out.ListMeta) - if in.Items != nil { - in, out := &in.Items, &out.Items - *out = make([]Pgcluster, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgclusterList. -func (in *PgclusterList) DeepCopy() *PgclusterList { - if in == nil { - return nil - } - out := new(PgclusterList) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *PgclusterList) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *PgclusterSpec) DeepCopyInto(out *PgclusterSpec) { - *out = *in - out.PrimaryStorage = in.PrimaryStorage - out.WALStorage = in.WALStorage - out.ArchiveStorage = in.ArchiveStorage - out.ReplicaStorage = in.ReplicaStorage - out.BackrestStorage = in.BackrestStorage - if in.Resources != nil { - in, out := &in.Resources, &out.Resources - *out = make(corev1.ResourceList, len(*in)) - for key, val := range *in { - (*out)[key] = val.DeepCopy() - } - } - if in.Limits != nil { - in, out := &in.Limits, &out.Limits - *out = make(corev1.ResourceList, len(*in)) - for key, val := range *in { - (*out)[key] = val.DeepCopy() - } - } - if in.BackrestResources != nil { - in, out := &in.BackrestResources, &out.BackrestResources - *out = make(corev1.ResourceList, len(*in)) - for key, val := range *in { - (*out)[key] = val.DeepCopy() - } - } - if in.BackrestLimits != nil { - in, out := &in.BackrestLimits, &out.BackrestLimits - *out = make(corev1.ResourceList, len(*in)) - for key, val := range *in { - (*out)[key] = val.DeepCopy() - } - } - if in.ExporterResources != nil { - in, out := &in.ExporterResources, &out.ExporterResources - *out = make(corev1.ResourceList, len(*in)) - for key, val := range *in { - (*out)[key] = val.DeepCopy() - } - } - if in.ExporterLimits != nil { - in, out := &in.ExporterLimits, &out.ExporterLimits - *out = make(corev1.ResourceList, len(*in)) - for key, val := range *in { - (*out)[key] = val.DeepCopy() - } - } - in.PgBouncer.DeepCopyInto(&out.PgBouncer) - if in.UserLabels != nil { - in, out := &in.UserLabels, &out.UserLabels - *out = make(map[string]string, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - } - out.PodAntiAffinity = in.PodAntiAffinity - if in.SyncReplication != nil { - in, out := &in.SyncReplication, &out.SyncReplication - *out = new(bool) - **out = **in - } - if in.BackrestConfig != nil { - in, out := &in.BackrestConfig, &out.BackrestConfig - *out = make([]corev1.VolumeProjection, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - if in.TablespaceMounts != nil { - in, out := &in.TablespaceMounts, &out.TablespaceMounts - *out = make(map[string]PgStorageSpec, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - } - out.TLS = in.TLS - out.PGDataSource = in.PGDataSource - in.Annotations.DeepCopyInto(&out.Annotations) - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgclusterSpec. -func (in *PgclusterSpec) DeepCopy() *PgclusterSpec { - if in == nil { - return nil - } - out := new(PgclusterSpec) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgclusterStatus) DeepCopyInto(out *PgclusterStatus) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgclusterStatus. -func (in *PgclusterStatus) DeepCopy() *PgclusterStatus { - if in == nil { - return nil - } - out := new(PgclusterStatus) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *Pgpolicy) DeepCopyInto(out *Pgpolicy) { - *out = *in - out.TypeMeta = in.TypeMeta - in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) - out.Spec = in.Spec - out.Status = in.Status - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Pgpolicy. 
-func (in *Pgpolicy) DeepCopy() *Pgpolicy { - if in == nil { - return nil - } - out := new(Pgpolicy) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *Pgpolicy) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgpolicyList) DeepCopyInto(out *PgpolicyList) { - *out = *in - out.TypeMeta = in.TypeMeta - in.ListMeta.DeepCopyInto(&out.ListMeta) - if in.Items != nil { - in, out := &in.Items, &out.Items - *out = make([]Pgpolicy, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgpolicyList. -func (in *PgpolicyList) DeepCopy() *PgpolicyList { - if in == nil { - return nil - } - out := new(PgpolicyList) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *PgpolicyList) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgpolicySpec) DeepCopyInto(out *PgpolicySpec) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgpolicySpec. -func (in *PgpolicySpec) DeepCopy() *PgpolicySpec { - if in == nil { - return nil - } - out := new(PgpolicySpec) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgpolicyStatus) DeepCopyInto(out *PgpolicyStatus) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgpolicyStatus. -func (in *PgpolicyStatus) DeepCopy() *PgpolicyStatus { - if in == nil { - return nil - } - out := new(PgpolicyStatus) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *Pgreplica) DeepCopyInto(out *Pgreplica) { - *out = *in - out.TypeMeta = in.TypeMeta - in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) - in.Spec.DeepCopyInto(&out.Spec) - out.Status = in.Status - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Pgreplica. -func (in *Pgreplica) DeepCopy() *Pgreplica { - if in == nil { - return nil - } - out := new(Pgreplica) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *Pgreplica) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
-func (in *PgreplicaList) DeepCopyInto(out *PgreplicaList) { - *out = *in - out.TypeMeta = in.TypeMeta - in.ListMeta.DeepCopyInto(&out.ListMeta) - if in.Items != nil { - in, out := &in.Items, &out.Items - *out = make([]Pgreplica, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgreplicaList. -func (in *PgreplicaList) DeepCopy() *PgreplicaList { - if in == nil { - return nil - } - out := new(PgreplicaList) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *PgreplicaList) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgreplicaSpec) DeepCopyInto(out *PgreplicaSpec) { - *out = *in - out.ReplicaStorage = in.ReplicaStorage - if in.UserLabels != nil { - in, out := &in.UserLabels, &out.UserLabels - *out = make(map[string]string, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgreplicaSpec. -func (in *PgreplicaSpec) DeepCopy() *PgreplicaSpec { - if in == nil { - return nil - } - out := new(PgreplicaSpec) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgreplicaStatus) DeepCopyInto(out *PgreplicaStatus) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgreplicaStatus. -func (in *PgreplicaStatus) DeepCopy() *PgreplicaStatus { - if in == nil { - return nil - } - out := new(PgreplicaStatus) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *Pgtask) DeepCopyInto(out *Pgtask) { - *out = *in - out.TypeMeta = in.TypeMeta - in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) - in.Spec.DeepCopyInto(&out.Spec) - out.Status = in.Status - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Pgtask. -func (in *Pgtask) DeepCopy() *Pgtask { - if in == nil { - return nil - } - out := new(Pgtask) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *Pgtask) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgtaskList) DeepCopyInto(out *PgtaskList) { - *out = *in - out.TypeMeta = in.TypeMeta - in.ListMeta.DeepCopyInto(&out.ListMeta) - if in.Items != nil { - in, out := &in.Items, &out.Items - *out = make([]Pgtask, len(*in)) - for i := range *in { - (*in)[i].DeepCopyInto(&(*out)[i]) - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgtaskList. 
-func (in *PgtaskList) DeepCopy() *PgtaskList { - if in == nil { - return nil - } - out := new(PgtaskList) - in.DeepCopyInto(out) - return out -} - -// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. -func (in *PgtaskList) DeepCopyObject() runtime.Object { - if c := in.DeepCopy(); c != nil { - return c - } - return nil -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgtaskSpec) DeepCopyInto(out *PgtaskSpec) { - *out = *in - out.StorageSpec = in.StorageSpec - if in.Parameters != nil { - in, out := &in.Parameters, &out.Parameters - *out = make(map[string]string, len(*in)) - for key, val := range *in { - (*out)[key] = val - } - } - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgtaskSpec. -func (in *PgtaskSpec) DeepCopy() *PgtaskSpec { - if in == nil { - return nil - } - out := new(PgtaskSpec) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PgtaskStatus) DeepCopyInto(out *PgtaskStatus) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PgtaskStatus. -func (in *PgtaskStatus) DeepCopy() *PgtaskStatus { - if in == nil { - return nil - } - out := new(PgtaskStatus) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *PodAntiAffinitySpec) DeepCopyInto(out *PodAntiAffinitySpec) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodAntiAffinitySpec. -func (in *PodAntiAffinitySpec) DeepCopy() *PodAntiAffinitySpec { - if in == nil { - return nil - } - out := new(PodAntiAffinitySpec) - in.DeepCopyInto(out) - return out -} - -// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. -func (in *TLSSpec) DeepCopyInto(out *TLSSpec) { - *out = *in - return -} - -// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TLSSpec. -func (in *TLSSpec) DeepCopy() *TLSSpec { - if in == nil { - return nil - } - out := new(TLSSpec) - in.DeepCopyInto(out) - return out -} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/crunchy_bridgecluster_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/crunchy_bridgecluster_types.go new file mode 100644 index 0000000000..0b94a4dae1 --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/crunchy_bridgecluster_types.go @@ -0,0 +1,239 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import ( + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// CrunchyBridgeClusterSpec defines the desired state of CrunchyBridgeCluster +// to be managed by Crunchy Data Bridge +type CrunchyBridgeClusterSpec struct { + // +optional + Metadata *Metadata `json:"metadata,omitempty"` + + // Whether the cluster is high availability, + // meaning that it has a secondary it can fail over to quickly + // in case the primary becomes unavailable. + // +kubebuilder:validation:Required + IsHA bool `json:"isHa"` + + // Whether the cluster is protected. 
Protected clusters can't be destroyed until + // their protected flag is removed + // +kubebuilder:validation:Optional + IsProtected bool `json:"isProtected,omitempty"` + + // The name of the cluster + // --- + // According to Bridge API/GUI errors, + // "Field name should be between 5 and 50 characters in length, containing only unicode characters, unicode numbers, hyphens, spaces, or underscores, and starting with a character", and ending with a character or number. + // +kubebuilder:validation:MinLength=5 + // +kubebuilder:validation:MaxLength=50 + // +kubebuilder:validation:Pattern=`^[A-Za-z][A-Za-z0-9\-_ ]*[A-Za-z0-9]$` + // +kubebuilder:validation:Required + // +kubebuilder:validation:Type=string + ClusterName string `json:"clusterName"` + + // The ID of the cluster's plan. Determines instance, CPU, and memory. + // +kubebuilder:validation:Required + Plan string `json:"plan"` + + // The ID of the cluster's major Postgres version. + // Currently Bridge offers 13-17 + // +kubebuilder:validation:Required + // +kubebuilder:validation:Minimum=13 + // +kubebuilder:validation:Maximum=17 + // +operator-sdk:csv:customresourcedefinitions:type=spec,order=1 + PostgresVersion int `json:"majorVersion"` + + // The cloud provider where the cluster is located. + // Currently Bridge offers aws, azure, and gcp only + // +kubebuilder:validation:Required + // +kubebuilder:validation:Enum={aws,azure,gcp} + // +kubebuilder:validation:XValidation:rule=`self == oldSelf`,message="immutable" + Provider string `json:"provider"` + + // The provider region where the cluster is located. + // +kubebuilder:validation:Required + // +kubebuilder:validation:XValidation:rule=`self == oldSelf`,message="immutable" + Region string `json:"region"` + + // Roles for which to create Secrets that contain their credentials which + // are retrieved from the Bridge API. An empty list creates no role secrets. + // Removing a role from this list does NOT drop the role nor revoke their + // access, but it will delete that role's secret from the kube cluster. + // +kubebuilder:validation:Optional + // +listType=map + // +listMapKey=name + Roles []*CrunchyBridgeClusterRoleSpec `json:"roles,omitempty"` + + // The name of the secret containing the API key and team id + // +kubebuilder:validation:Required + Secret string `json:"secret"` + + // The amount of storage available to the cluster in gigabytes. + // The amount must be an integer, followed by Gi (gibibytes) or G (gigabytes) to match Kubernetes conventions. + // If the amount is given in Gi, we round to the nearest G value. + // The minimum value allowed by Bridge is 10 GB. + // The maximum value allowed by Bridge is 65535 GB. + // +kubebuilder:validation:Required + Storage resource.Quantity `json:"storage"` +} + +type CrunchyBridgeClusterRoleSpec struct { + // Name of the role within Crunchy Bridge. + // More info: https://docs.crunchybridge.com/concepts/users + // +kubebuilder:validation:Required + Name string `json:"name"` + + // The name of the Secret that will hold the role credentials. + // +kubebuilder:validation:Required + // +kubebuilder:validation:Pattern=`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$` + // +kubebuilder:validation:MaxLength=253 + // +kubebuilder:validation:Type=string + SecretName string `json:"secretName"` +} + +// CrunchyBridgeClusterStatus defines the observed state of CrunchyBridgeCluster +type CrunchyBridgeClusterStatus struct { + // The name of the cluster in Bridge. 
+ // +optional + ClusterName string `json:"name,omitempty"` + + // conditions represent the observations of postgres cluster's current state. + // +optional + // +listType=map + // +listMapKey=type + // +operator-sdk:csv:customresourcedefinitions:type=status,xDescriptors={"urn:alm:descriptor:io.kubernetes.conditions"} + Conditions []metav1.Condition `json:"conditions,omitempty"` + + // The Hostname of the postgres cluster in Bridge, provided by Bridge API and null until then. + // +optional + Host string `json:"host,omitempty"` + + // The ID of the postgres cluster in Bridge, provided by Bridge API and null until then. + // +optional + ID string `json:"id,omitempty"` + + // Whether the cluster is high availability, meaning that it has a secondary it can fail + // over to quickly in case the primary becomes unavailable. + // +optional + IsHA *bool `json:"isHa"` + + // Whether the cluster is protected. Protected clusters can't be destroyed until + // their protected flag is removed + // +optional + IsProtected *bool `json:"isProtected"` + + // The cluster's major Postgres version. + // +optional + MajorVersion int `json:"majorVersion"` + + // observedGeneration represents the .metadata.generation on which the status was based. + // +optional + // +kubebuilder:validation:Minimum=0 + ObservedGeneration int64 `json:"observedGeneration,omitempty"` + + // The cluster upgrade as represented by Bridge + // +optional + OngoingUpgrade []*UpgradeOperation `json:"ongoingUpgrade,omitempty"` + + // The ID of the cluster's plan. Determines instance, CPU, and memory. + // +optional + Plan string `json:"plan"` + + // Most recent, raw responses from Bridge API + // +optional + // +kubebuilder:pruning:PreserveUnknownFields + // +kubebuilder:validation:Schemaless + // +kubebuilder:validation:Type=object + Responses APIResponses `json:"responses"` + + // State of cluster in Bridge. + // +optional + State string `json:"state,omitempty"` + + // The amount of storage available to the cluster. + // +optional + Storage *resource.Quantity `json:"storage"` +} + +type APIResponses struct { + Cluster SchemalessObject `json:"cluster,omitempty"` + Status SchemalessObject `json:"status,omitempty"` + Upgrade SchemalessObject `json:"upgrade,omitempty"` +} + +type ClusterUpgrade struct { + Operations []*UpgradeOperation `json:"operations,omitempty"` +} + +type UpgradeOperation struct { + Flavor string `json:"flavor"` + StartingFrom string `json:"starting_from"` + State string `json:"state"` +} + +// TODO(crunchybridgecluster) Think through conditions +// CrunchyBridgeClusterStatus condition types. +const ( + ConditionUnknown = "" + ConditionUpgrading = "Upgrading" + ConditionReady = "Ready" + ConditionDeleting = "Deleting" +) + +// +kubebuilder:object:root=true +// +kubebuilder:subresource:status +// +operator-sdk:csv:customresourcedefinitions:resources={{ConfigMap,v1},{Secret,v1},{Service,v1},{CronJob,v1beta1},{Deployment,v1},{Job,v1},{StatefulSet,v1},{PersistentVolumeClaim,v1}} + +// CrunchyBridgeCluster is the Schema for the crunchybridgeclusters API +type CrunchyBridgeCluster struct { + // ObjectMeta.Name is a DNS subdomain. 
+ // - https://docs.k8s.io/concepts/overview/working-with-objects/names/#dns-subdomain-names + // - https://releases.k8s.io/v1.21.0/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/validator.go#L60 + + // In Bridge json, meta.name is "name" + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + // NOTE(cbandy): Every CrunchyBridgeCluster needs a Spec, but it is optional here + // so ObjectMeta can be managed independently. + + Spec CrunchyBridgeClusterSpec `json:"spec,omitempty"` + Status CrunchyBridgeClusterStatus `json:"status,omitempty"` +} + +// Default implements "sigs.k8s.io/controller-runtime/pkg/webhook.Defaulter" so +// a webhook can be registered for the type. +// - https://book.kubebuilder.io/reference/webhook-overview.html +func (c *CrunchyBridgeCluster) Default() { + if len(c.APIVersion) == 0 { + c.APIVersion = GroupVersion.String() + } + if len(c.Kind) == 0 { + c.Kind = "CrunchyBridgeCluster" + } +} + +// +kubebuilder:object:root=true + +// CrunchyBridgeClusterList contains a list of CrunchyBridgeCluster +type CrunchyBridgeClusterList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata,omitempty"` + Items []CrunchyBridgeCluster `json:"items"` +} + +func init() { + SchemeBuilder.Register(&CrunchyBridgeCluster{}, &CrunchyBridgeClusterList{}) +} + +func NewCrunchyBridgeCluster() *CrunchyBridgeCluster { + cluster := &CrunchyBridgeCluster{} + cluster.SetGroupVersionKind(GroupVersion.WithKind("CrunchyBridgeCluster")) + return cluster +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/groupversion_info.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/groupversion_info.go new file mode 100644 index 0000000000..15773a1815 --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/groupversion_info.go @@ -0,0 +1,24 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +// Package v1beta1 contains API Schema definitions for the postgres-operator v1beta1 API group +// +kubebuilder:object:generate=true +// +groupName=postgres-operator.crunchydata.com +package v1beta1 + +import ( + "k8s.io/apimachinery/pkg/runtime/schema" + "sigs.k8s.io/controller-runtime/pkg/scheme" +) + +var ( + // GroupVersion is group version used to register these objects + GroupVersion = schema.GroupVersion{Group: "postgres-operator.crunchydata.com", Version: "v1beta1"} + + // SchemeBuilder is used to add go types to the GroupVersionKind scheme + SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion} + + // AddToScheme adds the types in this group-version to the given scheme. + AddToScheme = SchemeBuilder.AddToScheme +) diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/patroni_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/patroni_types.go new file mode 100644 index 0000000000..2f01399372 --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/patroni_types.go @@ -0,0 +1,117 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +type PatroniSpec struct { + // Patroni dynamic configuration settings. Changes to this value will be + // automatically reloaded without validation. Changes to certain PostgreSQL + // parameters cause PostgreSQL to restart. 
+ // More info: https://patroni.readthedocs.io/en/latest/dynamic_configuration.html + // +optional + // +kubebuilder:pruning:PreserveUnknownFields + // +kubebuilder:validation:Schemaless + // +kubebuilder:validation:Type=object + DynamicConfiguration SchemalessObject `json:"dynamicConfiguration,omitempty"` + + // TTL of the cluster leader lock. "Think of it as the + // length of time before initiation of the automatic failover process." + // Changing this value causes PostgreSQL to restart. + // +optional + // +kubebuilder:default=30 + // +kubebuilder:validation:Minimum=3 + LeaderLeaseDurationSeconds *int32 `json:"leaderLeaseDurationSeconds,omitempty"` + + // The port on which Patroni should listen. + // Changing this value causes PostgreSQL to restart. + // +optional + // +kubebuilder:default=8008 + // +kubebuilder:validation:Minimum=1024 + Port *int32 `json:"port,omitempty"` + + // The interval for refreshing the leader lock and applying + // dynamicConfiguration. Must be less than leaderLeaseDurationSeconds. + // Changing this value causes PostgreSQL to restart. + // +optional + // +kubebuilder:default=10 + // +kubebuilder:validation:Minimum=1 + SyncPeriodSeconds *int32 `json:"syncPeriodSeconds,omitempty"` + + // Switchover gives options to perform ad hoc switchovers in a PostgresCluster. + // +optional + Switchover *PatroniSwitchover `json:"switchover,omitempty"` + + // TODO(cbandy): Add UseConfigMaps bool, default false. + // TODO(cbandy): Allow other DCS: etcd, raft, etc? + // N.B. changing this will cause downtime. + // - https://patroni.readthedocs.io/en/latest/kubernetes.html +} + +type PatroniSwitchover struct { + + // Whether or not the operator should allow switchovers in a PostgresCluster + // +required + Enabled bool `json:"enabled"` + + // The instance that should become primary during a switchover. This field is + // optional when Type is "Switchover" and required when Type is "Failover". + // When it is not specified, a healthy replica is automatically selected. + // +optional + TargetInstance *string `json:"targetInstance,omitempty"` + + // Type of switchover to perform. Valid options are Switchover and Failover. + // "Switchover" changes the primary instance of a healthy PostgresCluster. + // "Failover" forces a particular instance to be primary, regardless of other + // factors. A TargetInstance must be specified to failover. + // NOTE: The Failover type is reserved as the "last resort" case. + // +kubebuilder:validation:Enum={Switchover,Failover} + // +kubebuilder:default:=Switchover + // +optional + Type string `json:"type,omitempty"` +} + +// PatroniSwitchover types. 
+const ( + PatroniSwitchoverTypeFailover = "Failover" + PatroniSwitchoverTypeSwitchover = "Switchover" +) + +// Default sets the default values for certain Patroni configuration attributes, +// including: +// - Lock Lease Duration +// - Patroni's API port +// - Frequency of syncing with Kube API +func (s *PatroniSpec) Default() { + if s.LeaderLeaseDurationSeconds == nil { + s.LeaderLeaseDurationSeconds = new(int32) + *s.LeaderLeaseDurationSeconds = 30 + } + if s.Port == nil { + s.Port = new(int32) + *s.Port = 8008 + } + if s.SyncPeriodSeconds == nil { + s.SyncPeriodSeconds = new(int32) + *s.SyncPeriodSeconds = 10 + } +} + +type PatroniStatus struct { + + // - "database_system_identifier" of https://github.com/zalando/patroni/blob/v2.0.1/docs/rest_api.rst#monitoring-endpoint + // - "system_identifier" of https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-PG-CONTROL-SYSTEM + // - "systemid" of https://www.postgresql.org/docs/current/protocol-replication.html + + // The PostgreSQL system identifier reported by Patroni. + // +optional + SystemIdentifier string `json:"systemIdentifier,omitempty"` + + // Tracks the execution of the switchover requests. + // +optional + Switchover *string `json:"switchover,omitempty"` + + // Tracks the current timeline during switchovers + // +optional + SwitchoverTimeline *int64 `json:"switchoverTimeline,omitempty"` +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgadmin_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgadmin_types.go new file mode 100644 index 0000000000..06c7321bc4 --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgadmin_types.go @@ -0,0 +1,109 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import ( + corev1 "k8s.io/api/core/v1" +) + +// PGAdminConfiguration represents pgAdmin configuration files. +type PGAdminConfiguration struct { + // Files allows the user to mount projected volumes into the pgAdmin + // container so that files can be referenced by pgAdmin as needed. + Files []corev1.VolumeProjection `json:"files,omitempty"` + + // A Secret containing the value for the LDAP_BIND_PASSWORD setting. + // More info: https://www.pgadmin.org/docs/pgadmin4/latest/ldap.html + // +optional + LDAPBindPassword *corev1.SecretKeySelector `json:"ldapBindPassword,omitempty"` + + // Settings for the pgAdmin server process. Keys should be uppercase and + // values must be constants. + // More info: https://www.pgadmin.org/docs/pgadmin4/latest/config_py.html + // +optional + // +kubebuilder:pruning:PreserveUnknownFields + // +kubebuilder:validation:Schemaless + // +kubebuilder:validation:Type=object + Settings SchemalessObject `json:"settings,omitempty"` +} + +// PGAdminPodSpec defines the desired state of a pgAdmin deployment. +type PGAdminPodSpec struct { + // +optional + Metadata *Metadata `json:"metadata,omitempty"` + + // Scheduling constraints of a pgAdmin pod. Changing this value causes + // pgAdmin to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + // +optional + Affinity *corev1.Affinity `json:"affinity,omitempty"` + + // Configuration settings for the pgAdmin process. Changes to any of these + // values will be loaded without validation. Be careful, as + // you may put pgAdmin into an unusable state. + // +optional + Config PGAdminConfiguration `json:"config,omitempty"` + + // Defines a PersistentVolumeClaim for pgAdmin data. 
+ // More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes + // +kubebuilder:validation:Required + DataVolumeClaimSpec corev1.PersistentVolumeClaimSpec `json:"dataVolumeClaimSpec"` + + // Name of a container image that can run pgAdmin 4. Changing this value causes + // pgAdmin to restart. The image may also be set using the RELATED_IMAGE_PGADMIN + // environment variable. + // More info: https://kubernetes.io/docs/concepts/containers/images + // +optional + Image string `json:"image,omitempty"` + + // Priority class name for the pgAdmin pod. Changing this value causes pgAdmin + // to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + // +optional + PriorityClassName *string `json:"priorityClassName,omitempty"` + + // Number of desired pgAdmin pods. + // +optional + // +kubebuilder:default=1 + // +kubebuilder:validation:Minimum=0 + // +kubebuilder:validation:Maximum=1 + Replicas *int32 `json:"replicas,omitempty"` + + // Compute resources of a pgAdmin container. Changing this value causes + // pgAdmin to restart. + // More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers + // +optional + Resources corev1.ResourceRequirements `json:"resources,omitempty"` + + // Specification of the service that exposes pgAdmin. + // +optional + Service *ServiceSpec `json:"service,omitempty"` + + // Tolerations of a pgAdmin pod. Changing this value causes pgAdmin to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + // +optional + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` + + // Topology spread constraints of a pgAdmin pod. Changing this value causes + // pgAdmin to restart. + // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ + // +optional + TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"` +} + +// Default sets the port and replica count for pgAdmin if not set +func (s *PGAdminPodSpec) Default() { + if s.Replicas == nil { + s.Replicas = new(int32) + *s.Replicas = 1 + } +} + +// PGAdminPodStatus represents the observed state of a pgAdmin deployment. +type PGAdminPodStatus struct { + + // Hash that indicates which users have been installed into pgAdmin. + UsersRevision string `json:"usersRevision,omitempty"` +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgbackrest_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgbackrest_types.go new file mode 100644 index 0000000000..3e3098a602 --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgbackrest_types.go @@ -0,0 +1,474 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import ( + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// PGBackRestJobStatus contains information about the state of a pgBackRest Job. +type PGBackRestJobStatus struct { + + // A unique identifier for the manual backup as provided using the "pgbackrest-backup" + // annotation when initiating a backup. + // +kubebuilder:validation:Required + ID string `json:"id"` + + // Specifies whether or not the Job is finished executing (does not indicate success or + // failure). + // +kubebuilder:validation:Required + Finished bool `json:"finished"` + + // Represents the time the manual backup Job was acknowledged by the Job controller. 
+ // It is represented in RFC3339 form and is in UTC. + // +optional + StartTime *metav1.Time `json:"startTime,omitempty"` + + // Represents the time the manual backup Job was determined by the Job controller + // to be completed. This field is only set if the backup completed successfully. + // Additionally, it is represented in RFC3339 form and is in UTC. + // +optional + CompletionTime *metav1.Time `json:"completionTime,omitempty"` + + // The number of actively running manual backup Pods. + // +optional + Active int32 `json:"active,omitempty"` + + // The number of Pods for the manual backup Job that reached the "Succeeded" phase. + // +optional + Succeeded int32 `json:"succeeded,omitempty"` + + // The number of Pods for the manual backup Job that reached the "Failed" phase. + // +optional + Failed int32 `json:"failed,omitempty"` +} + +type PGBackRestScheduledBackupStatus struct { + + // The name of the associated pgBackRest scheduled backup CronJob + // +kubebuilder:validation:Optional + CronJobName string `json:"cronJobName,omitempty"` + + // The name of the associated pgBackRest repository + // +kubebuilder:validation:Optional + RepoName string `json:"repo,omitempty"` + + // The pgBackRest backup type for this Job + // +kubebuilder:validation:Optional + Type string `json:"type,omitempty"` + + // Represents the time the manual backup Job was acknowledged by the Job controller. + // It is represented in RFC3339 form and is in UTC. + // +optional + StartTime *metav1.Time `json:"startTime,omitempty"` + + // Represents the time the manual backup Job was determined by the Job controller + // to be completed. This field is only set if the backup completed successfully. + // Additionally, it is represented in RFC3339 form and is in UTC. + // +optional + CompletionTime *metav1.Time `json:"completionTime,omitempty"` + + // The number of actively running manual backup Pods. + // +optional + Active int32 `json:"active,omitempty"` + + // The number of Pods for the manual backup Job that reached the "Succeeded" phase. + // +optional + Succeeded int32 `json:"succeeded,omitempty"` + + // The number of Pods for the manual backup Job that reached the "Failed" phase. + // +optional + Failed int32 `json:"failed,omitempty"` +} + +// PGBackRestArchive defines a pgBackRest archive configuration +type PGBackRestArchive struct { + // +optional + Metadata *Metadata `json:"metadata,omitempty"` + + // Projected volumes containing custom pgBackRest configuration. These files are mounted + // under "/etc/pgbackrest/conf.d" alongside any pgBackRest configuration generated by the + // PostgreSQL Operator: + // https://pgbackrest.org/configuration.html + // +optional + Configuration []corev1.VolumeProjection `json:"configuration,omitempty"` + + // Global pgBackRest configuration settings. These settings are included in the "global" + // section of the pgBackRest configuration generated by the PostgreSQL Operator, and then + // mounted under "/etc/pgbackrest/conf.d": + // https://pgbackrest.org/configuration.html + // +optional + Global map[string]string `json:"global,omitempty"` + + // The image name to use for pgBackRest containers. Utilized to run + // pgBackRest repository hosts and backups. 
The image may also be set using + // the RELATED_IMAGE_PGBACKREST environment variable + // +optional + Image string `json:"image,omitempty"` + + // Jobs field allows configuration for all backup jobs + // +optional + Jobs *BackupJobs `json:"jobs,omitempty"` + + // Defines a pgBackRest repository + // +kubebuilder:validation:MinItems=1 + // +listType=map + // +listMapKey=name + Repos []PGBackRestRepo `json:"repos"` + + // Defines configuration for a pgBackRest dedicated repository host. This section is only + // applicable if at least one "volume" (i.e. PVC-based) repository is defined in the "repos" + // section, therefore enabling a dedicated repository host Deployment. + // +optional + RepoHost *PGBackRestRepoHost `json:"repoHost,omitempty"` + + // Defines details for manual pgBackRest backup Jobs + // +optional + Manual *PGBackRestManualBackup `json:"manual,omitempty"` + + // Defines details for performing an in-place restore using pgBackRest + // +optional + Restore *PGBackRestRestore `json:"restore,omitempty"` + + // Configuration for pgBackRest sidecar containers + // +optional + Sidecars *PGBackRestSidecars `json:"sidecars,omitempty"` +} + +// PGBackRestSidecars defines the configuration for pgBackRest sidecar containers +type PGBackRestSidecars struct { + // Defines the configuration for the pgBackRest sidecar container + // +optional + PGBackRest *Sidecar `json:"pgbackrest,omitempty"` + + // Defines the configuration for the pgBackRest config sidecar container + // +optional + PGBackRestConfig *Sidecar `json:"pgbackrestConfig,omitempty"` +} + +type BackupJobs struct { + // Resource limits for backup jobs. Includes manual, scheduled and replica + // create backups + // +optional + Resources corev1.ResourceRequirements `json:"resources,omitempty"` + + // Priority class name for the pgBackRest backup Job pods. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + // +optional + PriorityClassName *string `json:"priorityClassName,omitempty"` + + // Scheduling constraints of pgBackRest backup Job pods. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + // +optional + Affinity *corev1.Affinity `json:"affinity,omitempty"` + + // Tolerations of pgBackRest backup Job pods. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + // +optional + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` + + // Limit the lifetime of a Job that has finished. + // More info: https://kubernetes.io/docs/concepts/workloads/controllers/job + // +optional + // +kubebuilder:validation:Minimum=60 + TTLSecondsAfterFinished *int32 `json:"ttlSecondsAfterFinished,omitempty"` +} + +// PGBackRestManualBackup contains information that is used for creating a +// pgBackRest backup that is invoked manually (i.e. it's unscheduled). +type PGBackRestManualBackup struct { + // The name of the pgBackRest repo to run the backup command against. + // +kubebuilder:validation:Required + // +kubebuilder:validation:Pattern=^repo[1-4] + RepoName string `json:"repoName"` + + // Command line options to include when running the pgBackRest backup command. + // https://pgbackrest.org/command.html#command-backup + // +optional + Options []string `json:"options,omitempty"` +} + +// PGBackRestRepoHost represents a pgBackRest dedicated repository host +type PGBackRestRepoHost struct { + + // Scheduling constraints of the Dedicated repo host pod. + // Changing this value causes repo host to restart. 
+ // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + // +optional + Affinity *corev1.Affinity `json:"affinity,omitempty"` + + // Priority class name for the pgBackRest repo host pod. Changing this value + // causes PostgreSQL to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + // +optional + PriorityClassName *string `json:"priorityClassName,omitempty"` + + // Resource requirements for a pgBackRest repository host + // +optional + Resources corev1.ResourceRequirements `json:"resources,omitempty"` + + // Tolerations of a PgBackRest repo host pod. Changing this value causes a restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + // +optional + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` + + // Topology spread constraints of a Dedicated repo host pod. Changing this + // value causes the repo host to restart. + // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ + // +optional + TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"` + + // ConfigMap containing custom SSH configuration. + // Deprecated: Repository hosts use mTLS for encryption, authentication, and authorization. + // +optional + SSHConfiguration *corev1.ConfigMapProjection `json:"sshConfigMap,omitempty"` + + // Secret containing custom SSH keys. + // Deprecated: Repository hosts use mTLS for encryption, authentication, and authorization. + // +optional + SSHSecret *corev1.SecretProjection `json:"sshSecret,omitempty"` +} + +// PGBackRestRestore defines an in-place restore for the PostgresCluster. +type PGBackRestRestore struct { + + // Whether or not in-place pgBackRest restores are enabled for this PostgresCluster. + // +kubebuilder:default=false + Enabled *bool `json:"enabled"` + + *PostgresClusterDataSource `json:",inline"` +} + +// PGBackRestBackupSchedules defines a pgBackRest scheduled backup +type PGBackRestBackupSchedules struct { + // Validation set to minimum length of six to account for @daily option + + // Defines the Cron schedule for a full pgBackRest backup. + // Follows the standard Cron schedule syntax: + // https://k8s.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax + // +optional + // +kubebuilder:validation:MinLength=6 + Full *string `json:"full,omitempty"` + + // Defines the Cron schedule for a differential pgBackRest backup. + // Follows the standard Cron schedule syntax: + // https://k8s.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax + // +optional + // +kubebuilder:validation:MinLength=6 + Differential *string `json:"differential,omitempty"` + + // Defines the Cron schedule for an incremental pgBackRest backup. 
+ // Follows the standard Cron schedule syntax: + // https://k8s.io/docs/concepts/workloads/controllers/cron-jobs/#cron-schedule-syntax + // +optional + // +kubebuilder:validation:MinLength=6 + Incremental *string `json:"incremental,omitempty"` +} + +// PGBackRestStatus defines the status of pgBackRest within a PostgresCluster +type PGBackRestStatus struct { + + // Status information for manual backups + // +optional + ManualBackup *PGBackRestJobStatus `json:"manualBackup,omitempty"` + + // Status information for scheduled backups + // +optional + ScheduledBackups []PGBackRestScheduledBackupStatus `json:"scheduledBackups,omitempty"` + + // Status information for the pgBackRest dedicated repository host + // +optional + RepoHost *RepoHostStatus `json:"repoHost,omitempty"` + + // Status information for pgBackRest repositories + // +optional + // +listType=map + // +listMapKey=name + Repos []RepoStatus `json:"repos,omitempty"` + + // Status information for in-place restores + // +optional + Restore *PGBackRestJobStatus `json:"restore,omitempty"` +} + +// PGBackRestRepo represents a pgBackRest repository. Only one of its members may be specified. +type PGBackRestRepo struct { + // Please note that as a Union type that follows OpenAPI 3.0 'oneOf' semantics, the following KEP + // will be applicable once implemented: + // https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1027-api-unions + + // The name of the repository + // +kubebuilder:validation:Required + // +kubebuilder:validation:Pattern=^repo[1-4] + Name string `json:"name"` + + // Defines the schedules for the pgBackRest backups + // Full, Differential and Incremental backup types are supported: + // https://pgbackrest.org/user-guide.html#concept/backup + // +optional + BackupSchedules *PGBackRestBackupSchedules `json:"schedules,omitempty"` + + // Represents a pgBackRest repository that is created using Azure storage + // +optional + Azure *RepoAzure `json:"azure,omitempty"` + + // Represents a pgBackRest repository that is created using Google Cloud Storage + // +optional + GCS *RepoGCS `json:"gcs,omitempty"` + + // RepoS3 represents a pgBackRest repository that is created using AWS S3 (or S3-compatible) + // storage + // +optional + S3 *RepoS3 `json:"s3,omitempty"` + + // Represents a pgBackRest repository that is created using a PersistentVolumeClaim + // +optional + Volume *RepoPVC `json:"volume,omitempty"` +} + +// RepoHostStatus defines the status of a pgBackRest repository host +type RepoHostStatus struct { + metav1.TypeMeta `json:",inline"` + + // Whether or not the pgBackRest repository host is ready for use + // +optional + Ready bool `json:"ready"` +} + +// RepoPVC represents a pgBackRest repository that is created using a PersistentVolumeClaim +type RepoPVC struct { + + // Defines a PersistentVolumeClaim spec used to create and/or bind a volume + // --- + // +kubebuilder:validation:Required + // + // NOTE(validation): Every PVC must have at least one accessMode. NOTE(KEP-4153) + // TODO(k8s-1.28): fieldPath=`.accessModes`,reason="FieldValueRequired" + // - https://releases.k8s.io/v1.25.0/pkg/apis/core/validation/validation.go#L2098-L2100 + // - https://releases.k8s.io/v1.31.0/pkg/apis/core/validation/validation.go#L2292-L2294 + // +kubebuilder:validation:XValidation:rule=`has(self.accessModes) && size(self.accessModes) > 0`,message=`missing accessModes` + // + // NOTE(validation): Every PVC must have a positive storage request. 
NOTE(KEP-4153)
+ // TODO(k8s-1.28): fieldPath=`.resources.requests.storage`,reason="FieldValueRequired"
+ // - https://releases.k8s.io/v1.25.0/pkg/apis/core/validation/validation.go#L2126-L2133
+ // - https://releases.k8s.io/v1.31.0/pkg/apis/core/validation/validation.go#L2318-L2325
+ // +kubebuilder:validation:XValidation:rule=`has(self.resources) && has(self.resources.requests) && has(self.resources.requests.storage)`,message=`missing storage request`
+ VolumeClaimSpec corev1.PersistentVolumeClaimSpec `json:"volumeClaimSpec"`
+}
+
+// RepoAzure represents a pgBackRest repository that is created using Azure storage
+type RepoAzure struct {
+
+ // The Azure container utilized for the repository
+ // +kubebuilder:validation:Required
+ Container string `json:"container"`
+}
+
+// RepoGCS represents a pgBackRest repository that is created using Google Cloud Storage
+type RepoGCS struct {
+
+ // The GCS bucket utilized for the repository
+ // +kubebuilder:validation:Required
+ Bucket string `json:"bucket"`
+}
+
+// RepoS3 represents a pgBackRest repository that is created using AWS S3 (or S3-compatible)
+// storage
+type RepoS3 struct {
+
+ // The S3 bucket utilized for the repository
+ // +kubebuilder:validation:Required
+ Bucket string `json:"bucket"`
+
+ // A valid endpoint corresponding to the specified region
+ // +kubebuilder:validation:Required
+ Endpoint string `json:"endpoint"`
+
+ // The region corresponding to the S3 bucket
+ // +kubebuilder:validation:Required
+ Region string `json:"region"`
+}
+
+// RepoStatus defines the status of a pgBackRest repository
+type RepoStatus struct {
+
+ // The name of the pgBackRest repository
+ // +kubebuilder:validation:Required
+ Name string `json:"name"`
+
+ // Whether or not the pgBackRest repository PersistentVolumeClaim is bound to a volume
+ // +optional
+ Bound bool `json:"bound,omitempty"`
+
+ // The name of the volume containing the pgBackRest repository
+ // +optional
+ VolumeName string `json:"volume,omitempty"`
+
+ // Specifies whether or not a stanza has been successfully created for the repository
+ // +optional
+ StanzaCreated bool `json:"stanzaCreated"`
+
+ // ReplicaCreateBackupComplete indicates whether a backup exists in the repository as needed
+ // to bootstrap replicas.
+ ReplicaCreateBackupComplete bool `json:"replicaCreateBackupComplete,omitempty"`
+
+ // A hash of the required fields in the spec for defining an Azure, GCS or S3 repository.
+ // Utilized to detect changes to these fields and then execute pgBackRest stanza-create
+ // commands accordingly.
+ // +optional
+ RepoOptionsHash string `json:"repoOptionsHash,omitempty"`
+}
+
+// PGBackRestDataSource defines a pgBackRest configuration specifically for restoring from a cloud-based data source
+type PGBackRestDataSource struct {
+ // Projected volumes containing custom pgBackRest configuration. These files are mounted
+ // under "/etc/pgbackrest/conf.d" alongside any pgBackRest configuration generated by the
+ // PostgreSQL Operator:
+ // https://pgbackrest.org/configuration.html
+ // +optional
+ Configuration []corev1.VolumeProjection `json:"configuration,omitempty"`
+
+ // Global pgBackRest configuration settings.
These settings are included in the "global" + // section of the pgBackRest configuration generated by the PostgreSQL Operator, and then + // mounted under "/etc/pgbackrest/conf.d": + // https://pgbackrest.org/configuration.html + // +optional + Global map[string]string `json:"global,omitempty"` + + // Defines a pgBackRest repository + // +kubebuilder:validation:Required + Repo PGBackRestRepo `json:"repo"` + + // The name of an existing pgBackRest stanza to use as the data source for the new PostgresCluster. + // Defaults to `db` if not provided. + // +kubebuilder:default="db" + Stanza string `json:"stanza"` + + // Command line options to include when running the pgBackRest restore command. + // https://pgbackrest.org/command.html#command-restore + // +optional + Options []string `json:"options,omitempty"` + + // Resource requirements for the pgBackRest restore Job. + // +optional + Resources corev1.ResourceRequirements `json:"resources,omitempty"` + + // Scheduling constraints of the pgBackRest restore Job. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + // +optional + Affinity *corev1.Affinity `json:"affinity,omitempty"` + + // Priority class name for the pgBackRest restore Job pod. Changing this + // value causes PostgreSQL to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + // +optional + PriorityClassName *string `json:"priorityClassName,omitempty"` + + // Tolerations of the pgBackRest restore Job. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + // +optional + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgbouncer_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgbouncer_types.go new file mode 100644 index 0000000000..e940a9300d --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgbouncer_types.go @@ -0,0 +1,168 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import ( + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/util/intstr" +) + +// PGBouncerConfiguration represents PgBouncer configuration files. +type PGBouncerConfiguration struct { + + // Files to mount under "/etc/pgbouncer". When specified, settings in the + // "pgbouncer.ini" file are loaded before all others. From there, other + // files may be included by absolute path. Changing these references causes + // PgBouncer to restart, but changes to the file contents are automatically + // reloaded. + // More info: https://www.pgbouncer.org/config.html#include-directive + // +optional + Files []corev1.VolumeProjection `json:"files,omitempty"` + + // NOTE(cbandy): map[string]string fields are not presented in the OpenShift + // web console: https://github.com/openshift/console/issues/9538 + + // Settings that apply to the entire PgBouncer process. + // More info: https://www.pgbouncer.org/config.html + // +optional + Global map[string]string `json:"global,omitempty"` + + // PgBouncer database definitions. The key is the database requested by a + // client while the value is a libpq-styled connection string. The special + // key "*" acts as a fallback. When this field is empty, PgBouncer is + // configured with a single "*" entry that connects to the primary + // PostgreSQL instance. 
+ // More info: https://www.pgbouncer.org/config.html#section-databases + // +optional + Databases map[string]string `json:"databases,omitempty"` + + // Connection settings specific to particular users. + // More info: https://www.pgbouncer.org/config.html#section-users + // +optional + Users map[string]string `json:"users,omitempty"` +} + +// PGBouncerPodSpec defines the desired state of a PgBouncer connection pooler. +type PGBouncerPodSpec struct { + // +optional + Metadata *Metadata `json:"metadata,omitempty"` + + // Scheduling constraints of a PgBouncer pod. Changing this value causes + // PgBouncer to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + // +optional + Affinity *corev1.Affinity `json:"affinity,omitempty"` + + // Configuration settings for the PgBouncer process. Changes to any of these + // values will be automatically reloaded without validation. Be careful, as + // you may put PgBouncer into an unusable state. + // More info: https://www.pgbouncer.org/usage.html#reload + // +optional + Config PGBouncerConfiguration `json:"config,omitempty"` + + // Custom sidecars for a PgBouncer pod. Changing this value causes + // PgBouncer to restart. + // +optional + Containers []corev1.Container `json:"containers,omitempty"` + + // A secret projection containing a certificate and key with which to encrypt + // connections to PgBouncer. The "tls.crt", "tls.key", and "ca.crt" paths must + // be PEM-encoded certificates and keys. Changing this value causes PgBouncer + // to restart. + // More info: https://kubernetes.io/docs/concepts/configuration/secret/#projection-of-secret-keys-to-specific-paths + // +optional + CustomTLSSecret *corev1.SecretProjection `json:"customTLSSecret,omitempty"` + + // Name of a container image that can run PgBouncer 1.15 or newer. Changing + // this value causes PgBouncer to restart. The image may also be set using + // the RELATED_IMAGE_PGBOUNCER environment variable. + // More info: https://kubernetes.io/docs/concepts/containers/images + // +optional + Image string `json:"image,omitempty"` + + // Port on which PgBouncer should listen for client connections. Changing + // this value causes PgBouncer to restart. + // +optional + // +kubebuilder:default=5432 + // +kubebuilder:validation:Minimum=1024 + Port *int32 `json:"port,omitempty"` + + // Priority class name for the pgBouncer pod. Changing this value causes + // PostgreSQL to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + // +optional + PriorityClassName *string `json:"priorityClassName,omitempty"` + + // Number of desired PgBouncer pods. + // +optional + // +kubebuilder:default=1 + // +kubebuilder:validation:Minimum=0 + Replicas *int32 `json:"replicas,omitempty"` + + // Minimum number of pods that should be available at a time. + // Defaults to one when the replicas field is greater than one. + // +optional + MinAvailable *intstr.IntOrString `json:"minAvailable,omitempty"` + + // Compute resources of a PgBouncer container. Changing this value causes + // PgBouncer to restart. + // More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers + // +optional + Resources corev1.ResourceRequirements `json:"resources,omitempty"` + + // Specification of the service that exposes PgBouncer. 
+ // +optional + Service *ServiceSpec `json:"service,omitempty"` + + // Configuration for pgBouncer sidecar containers + // +optional + Sidecars *PGBouncerSidecars `json:"sidecars,omitempty"` + + // Tolerations of a PgBouncer pod. Changing this value causes PgBouncer to + // restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + // +optional + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` + + // Topology spread constraints of a PgBouncer pod. Changing this value causes + // PgBouncer to restart. + // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ + // +optional + TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"` +} + +// PGBouncerSidecars defines the configuration for pgBouncer sidecar containers +type PGBouncerSidecars struct { + // Defines the configuration for the pgBouncer config sidecar container + // +optional + PGBouncerConfig *Sidecar `json:"pgbouncerConfig,omitempty"` +} + +// Default returns the default port for PgBouncer (5432) if a port is not +// explicitly set +func (s *PGBouncerPodSpec) Default() { + if s.Port == nil { + s.Port = new(int32) + *s.Port = 5432 + } + + if s.Replicas == nil { + s.Replicas = new(int32) + *s.Replicas = 1 + } +} + +type PGBouncerPodStatus struct { + + // Identifies the revision of PgBouncer assets that have been installed into + // PostgreSQL. + PostgreSQLRevision string `json:"postgresRevision,omitempty"` + + // Total number of ready pods. + ReadyReplicas int32 `json:"readyReplicas,omitempty"` + + // Total number of non-terminated pods. + Replicas int32 `json:"replicas,omitempty"` +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgmonitor_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgmonitor_types.go new file mode 100644 index 0000000000..f2cd78335a --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgmonitor_types.go @@ -0,0 +1,39 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import corev1 "k8s.io/api/core/v1" + +// PGMonitorSpec defines the desired state of the pgMonitor tool suite +type PGMonitorSpec struct { + // +optional + Exporter *ExporterSpec `json:"exporter,omitempty"` +} + +type ExporterSpec struct { + + // Projected volumes containing custom PostgreSQL Exporter configuration. Currently supports + // the customization of PostgreSQL Exporter queries. If a "queries.yml" file is detected in + // any volume projected using this field, it will be loaded using the "extend.query-path" flag: + // https://github.com/prometheus-community/postgres_exporter#flags + // Changing the values of field causes PostgreSQL and the exporter to restart. + // +optional + Configuration []corev1.VolumeProjection `json:"configuration,omitempty"` + + // Projected secret containing custom TLS certificates to encrypt output from the exporter + // web server + // +optional + CustomTLSSecret *corev1.SecretProjection `json:"customTLSSecret,omitempty"` + + // The image name to use for crunchy-postgres-exporter containers. The image may + // also be set using the RELATED_IMAGE_PGEXPORTER environment variable. + // +optional + Image string `json:"image,omitempty"` + + // Changing this value causes PostgreSQL and the exporter to restart. 
+ // More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers + // +optional + Resources corev1.ResourceRequirements `json:"resources,omitempty"` +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgupgrade_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgupgrade_types.go new file mode 100644 index 0000000000..8e99f8239f --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/pgupgrade_types.go @@ -0,0 +1,132 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import ( + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// PGUpgradeSpec defines the desired state of PGUpgrade +type PGUpgradeSpec struct { + + // +optional + Metadata *Metadata `json:"metadata,omitempty"` + + // The name of the cluster to be updated + // +required + // +kubebuilder:validation:MinLength=1 + PostgresClusterName string `json:"postgresClusterName"` + + // The image name to use for major PostgreSQL upgrades. + // +optional + Image *string `json:"image,omitempty"` + + // ImagePullPolicy is used to determine when Kubernetes will attempt to + // pull (download) container images. + // More info: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy + // +kubebuilder:validation:Enum={Always,Never,IfNotPresent} + // +optional + ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"` + + // TODO(benjaminjb) Check the behavior: does updating ImagePullSecrets cause + // all running PGUpgrade pods to restart? + + // The image pull secrets used to pull from a private registry. + // Changing this value causes all running PGUpgrade pods to restart. + // https://k8s.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + // +optional + ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"` + + // TODO(benjaminjb): define webhook validation to make sure + // `fromPostgresVersion` is below `toPostgresVersion` + // or leverage other validation rules, such as the Common Expression Language + // rules currently in alpha as of Kubernetes 1.23 + // - https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules + + // The major version of PostgreSQL before the upgrade. + // +kubebuilder:validation:Required + // +kubebuilder:validation:Minimum=11 + // +kubebuilder:validation:Maximum=17 + FromPostgresVersion int `json:"fromPostgresVersion"` + + // TODO(benjaminjb): define webhook validation to make sure + // `fromPostgresVersion` is below `toPostgresVersion` + // or leverage other validation rules, such as the Common Expression Language + // rules currently in alpha as of Kubernetes 1.23 + + // The major version of PostgreSQL to be upgraded to. + // +kubebuilder:validation:Required + // +kubebuilder:validation:Minimum=11 + // +kubebuilder:validation:Maximum=17 + ToPostgresVersion int `json:"toPostgresVersion"` + + // The image name to use for PostgreSQL containers after upgrade. + // When omitted, the value comes from an operator environment variable. + // +optional + ToPostgresImage string `json:"toPostgresImage,omitempty"` + + // Resource requirements for the PGUpgrade container. + // +optional + Resources corev1.ResourceRequirements `json:"resources,omitempty"` + + // Scheduling constraints of the PGUpgrade pod. 
+ // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + // +optional + Affinity *corev1.Affinity `json:"affinity,omitempty"` + + // TODO(benjaminjb) Check the behavior: does updating PriorityClassName cause + // PGUpgrade to restart? + + // Priority class name for the PGUpgrade pod. Changing this + // value causes PGUpgrade pod to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + // +optional + PriorityClassName *string `json:"priorityClassName,omitempty"` + + // Tolerations of the PGUpgrade pod. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + // +optional + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` +} + +// PGUpgradeStatus defines the observed state of PGUpgrade +type PGUpgradeStatus struct { + // conditions represent the observations of PGUpgrade's current state. + // +optional + // +listType=map + // +listMapKey=type + Conditions []metav1.Condition `json:"conditions,omitempty"` + + // observedGeneration represents the .metadata.generation on which the status was based. + // +optional + // +kubebuilder:validation:Minimum=0 + ObservedGeneration int64 `json:"observedGeneration,omitempty"` +} + +//+kubebuilder:object:root=true +//+kubebuilder:subresource:status + +// PGUpgrade is the Schema for the pgupgrades API +type PGUpgrade struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + Spec PGUpgradeSpec `json:"spec,omitempty"` + Status PGUpgradeStatus `json:"status,omitempty"` +} + +//+kubebuilder:object:root=true + +// PGUpgradeList contains a list of PGUpgrade +type PGUpgradeList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata,omitempty"` + Items []PGUpgrade `json:"items"` +} + +func init() { + SchemeBuilder.Register(&PGUpgrade{}, &PGUpgradeList{}) +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgres_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgres_types.go new file mode 100644 index 0000000000..b7baa72942 --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgres_types.go @@ -0,0 +1,62 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +// PostgreSQL identifiers are limited in length but may contain any character. +// More info: https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS +// +// +kubebuilder:validation:MinLength=1 +// +kubebuilder:validation:MaxLength=63 +type PostgresIdentifier string + +type PostgresPasswordSpec struct { + // Type of password to generate. Defaults to ASCII. Valid options are ASCII + // and AlphaNumeric. + // "ASCII" passwords contain letters, numbers, and symbols from the US-ASCII character set. + // "AlphaNumeric" passwords contain letters and numbers from the US-ASCII character set. + // +kubebuilder:default=ASCII + // +kubebuilder:validation:Enum={ASCII,AlphaNumeric} + Type string `json:"type"` +} + +// PostgresPasswordSpec types. +const ( + PostgresPasswordTypeAlphaNumeric = "AlphaNumeric" + PostgresPasswordTypeASCII = "ASCII" +) + +type PostgresUserSpec struct { + + // This value goes into the name of a corev1.Secret and a label value, so + // it must match both IsDNS1123Subdomain and IsValidLabelValue. The pattern + // below is IsDNS1123Subdomain without any dots, U+002E. + + // The name of this PostgreSQL user. 
The value may contain only lowercase + // letters, numbers, and hyphen so that it fits into Kubernetes metadata. + // +kubebuilder:validation:Pattern=`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$` + // +kubebuilder:validation:Type=string + Name PostgresIdentifier `json:"name"` + + // Databases to which this user can connect and create objects. Removing a + // database from this list does NOT revoke access. This field is ignored for + // the "postgres" user. + // +listType=set + // +optional + Databases []PostgresIdentifier `json:"databases,omitempty"` + + // ALTER ROLE options except for PASSWORD. This field is ignored for the + // "postgres" user. + // More info: https://www.postgresql.org/docs/current/role-attributes.html + // +kubebuilder:validation:MaxLength=200 + // +kubebuilder:validation:Pattern=`^[^;]*$` + // +kubebuilder:validation:XValidation:rule=`!self.matches("(?i:PASSWORD)")`,message="cannot assign password" + // +kubebuilder:validation:XValidation:rule=`!self.matches("(?:--|/[*]|[*]/)")`,message="cannot contain comments" + // +optional + Options string `json:"options,omitempty"` + + // Properties of the password generated for this user. + // +optional + Password *PostgresPasswordSpec `json:"password,omitempty"` +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_test.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_test.go new file mode 100644 index 0000000000..83396902d0 --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_test.go @@ -0,0 +1,244 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import ( + "reflect" + "strings" + "testing" + + "gotest.tools/v3/assert" + "sigs.k8s.io/controller-runtime/pkg/webhook" + "sigs.k8s.io/yaml" +) + +func TestPostgresClusterWebhooks(t *testing.T) { + var _ webhook.Defaulter = new(PostgresCluster) +} + +func TestPostgresClusterDefault(t *testing.T) { + t.Run("TypeMeta", func(t *testing.T) { + var cluster PostgresCluster + cluster.Default() + + assert.Equal(t, cluster.APIVersion, GroupVersion.String()) + assert.Equal(t, cluster.Kind, reflect.TypeOf(cluster).Name()) + }) + + t.Run("no instance sets", func(t *testing.T) { + var cluster PostgresCluster + cluster.Default() + + b, err := yaml.Marshal(cluster) + assert.NilError(t, err) + assert.DeepEqual(t, string(b), strings.TrimSpace(` +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + creationTimestamp: null +spec: + backups: + pgbackrest: + repos: null + config: {} + instances: null + patroni: + leaderLeaseDurationSeconds: 30 + port: 8008 + syncPeriodSeconds: 10 + port: 5432 + postgresVersion: 0 +status: + monitoring: {} + patroni: {} + postgresVersion: 0 + proxy: + pgBouncer: {} + `)+"\n") + }) + + t.Run("one instance set", func(t *testing.T) { + var cluster PostgresCluster + cluster.Spec.InstanceSets = []PostgresInstanceSetSpec{{}} + cluster.Default() + + b, err := yaml.Marshal(cluster) + assert.NilError(t, err) + assert.DeepEqual(t, string(b), strings.TrimSpace(` +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + creationTimestamp: null +spec: + backups: + pgbackrest: + repos: null + config: {} + instances: + - dataVolumeClaimSpec: + resources: {} + name: "00" + replicas: 1 + resources: {} + patroni: + leaderLeaseDurationSeconds: 30 + port: 8008 + syncPeriodSeconds: 10 + port: 5432 + postgresVersion: 0 +status: + monitoring: {} + patroni: {} + postgresVersion: 
0 + proxy: + pgBouncer: {} + `)+"\n") + }) + + t.Run("empty proxy", func(t *testing.T) { + var cluster PostgresCluster + cluster.Spec.Proxy = new(PostgresProxySpec) + cluster.Default() + + b, err := yaml.Marshal(cluster.Spec.Proxy) + assert.NilError(t, err) + assert.DeepEqual(t, string(b), "pgBouncer: null\n") + }) + + t.Run("PgBouncer proxy", func(t *testing.T) { + var cluster PostgresCluster + cluster.Spec.Proxy = &PostgresProxySpec{PGBouncer: &PGBouncerPodSpec{}} + cluster.Default() + + b, err := yaml.Marshal(cluster.Spec.Proxy) + assert.NilError(t, err) + assert.DeepEqual(t, string(b), strings.TrimSpace(` +pgBouncer: + config: {} + port: 5432 + replicas: 1 + resources: {} + `)+"\n") + }) +} + +func TestPostgresInstanceSetSpecDefault(t *testing.T) { + var spec PostgresInstanceSetSpec + spec.Default(5) + + b, err := yaml.Marshal(spec) + assert.NilError(t, err) + assert.DeepEqual(t, string(b), strings.TrimSpace(` +dataVolumeClaimSpec: + resources: {} +name: "05" +replicas: 1 +resources: {} + `)+"\n") +} + +func TestMetadataGetLabels(t *testing.T) { + for _, test := range []struct { + m Metadata + mp *Metadata + expect map[string]string + description string + }{{ + expect: map[string]string(nil), + description: "meta is defined but unset", + }, { + m: Metadata{}, + mp: &Metadata{}, + expect: map[string]string(nil), + description: "metadata is empty", + }, { + m: Metadata{Labels: map[string]string{}}, + mp: &Metadata{Labels: map[string]string{}}, + expect: map[string]string{}, + description: "metadata contains empty label set", + }, { + m: Metadata{Labels: map[string]string{ + "test": "label", + }}, + mp: &Metadata{Labels: map[string]string{ + "test": "label", + }}, + expect: map[string]string{ + "test": "label", + }, + description: "metadata contains labels", + }, { + m: Metadata{Labels: map[string]string{ + "test": "label", + "test2": "label2", + }}, + mp: &Metadata{Labels: map[string]string{ + "test": "label", + "test2": "label2", + }}, + expect: map[string]string{ + "test": "label", + "test2": "label2", + }, + description: "metadata contains multiple labels", + }} { + t.Run(test.description, func(t *testing.T) { + assert.DeepEqual(t, test.m.GetLabelsOrNil(), test.expect) + assert.DeepEqual(t, test.mp.GetLabelsOrNil(), test.expect) + }) + } +} + +func TestMetadataGetAnnotations(t *testing.T) { + for _, test := range []struct { + m Metadata + mp *Metadata + expect map[string]string + description string + }{{ + expect: map[string]string(nil), + description: "meta is defined but unset", + }, { + m: Metadata{}, + mp: &Metadata{}, + expect: map[string]string(nil), + description: "metadata is empty", + }, { + m: Metadata{Annotations: map[string]string{}}, + mp: &Metadata{Annotations: map[string]string{}}, + expect: map[string]string{}, + description: "metadata contains empty annotation set", + }, { + m: Metadata{Annotations: map[string]string{ + "test": "annotation", + }}, + mp: &Metadata{Annotations: map[string]string{ + "test": "annotation", + }}, + expect: map[string]string{ + "test": "annotation", + }, + description: "metadata contains annotations", + }, { + m: Metadata{Annotations: map[string]string{ + "test": "annotation", + "test2": "annotation2", + }}, + mp: &Metadata{Annotations: map[string]string{ + "test": "annotation", + "test2": "annotation2", + }}, + expect: map[string]string{ + "test": "annotation", + "test2": "annotation2", + }, + description: "metadata contains multiple annotations", + }} { + t.Run(test.description, func(t *testing.T) { + assert.DeepEqual(t, 
test.m.GetAnnotationsOrNil(), test.expect) + assert.DeepEqual(t, test.mp.GetAnnotationsOrNil(), test.expect) + }) + } +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_types.go new file mode 100644 index 0000000000..54e42baa3b --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/postgrescluster_types.go @@ -0,0 +1,748 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import ( + "fmt" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" +) + +// PostgresClusterSpec defines the desired state of PostgresCluster +type PostgresClusterSpec struct { + // +optional + Metadata *Metadata `json:"metadata,omitempty"` + + // Specifies a data source for bootstrapping the PostgreSQL cluster. + // +optional + DataSource *DataSource `json:"dataSource,omitempty"` + + // PostgreSQL backup configuration + // +optional + Backups Backups `json:"backups,omitempty"` + + // The secret containing the Certificates and Keys to encrypt PostgreSQL + // traffic will need to contain the server TLS certificate, TLS key and the + // Certificate Authority certificate with the data keys set to tls.crt, + // tls.key and ca.crt, respectively. It will then be mounted as a volume + // projection to the '/pgconf/tls' directory. For more information on + // Kubernetes secret projections, please see + // https://k8s.io/docs/concepts/configuration/secret/#projection-of-secret-keys-to-specific-paths + // NOTE: If CustomTLSSecret is provided, CustomReplicationClientTLSSecret + // MUST be provided and the ca.crt provided must be the same. + // +optional + CustomTLSSecret *corev1.SecretProjection `json:"customTLSSecret,omitempty"` + + // The secret containing the replication client certificates and keys for + // secure connections to the PostgreSQL server. It will need to contain the + // client TLS certificate, TLS key and the Certificate Authority certificate + // with the data keys set to tls.crt, tls.key and ca.crt, respectively. + // NOTE: If CustomReplicationClientTLSSecret is provided, CustomTLSSecret + // MUST be provided and the ca.crt provided must be the same. + // +optional + CustomReplicationClientTLSSecret *corev1.SecretProjection `json:"customReplicationTLSSecret,omitempty"` + + // DatabaseInitSQL defines a ConfigMap containing custom SQL that will + // be run after the cluster is initialized. This ConfigMap must be in the same + // namespace as the cluster. + // +optional + DatabaseInitSQL *DatabaseInitSQL `json:"databaseInitSQL,omitempty"` + // Whether or not the PostgreSQL cluster should use the defined default + // scheduling constraints. If the field is unset or false, the default + // scheduling constraints will be used in addition to any custom constraints + // provided. + // +optional + DisableDefaultPodScheduling *bool `json:"disableDefaultPodScheduling,omitempty"` + + // The image name to use for PostgreSQL containers. When omitted, the value + // comes from an operator environment variable. For standard PostgreSQL images, + // the format is RELATED_IMAGE_POSTGRES_{postgresVersion}, + // e.g. RELATED_IMAGE_POSTGRES_13. For PostGIS enabled PostgreSQL images, + // the format is RELATED_IMAGE_POSTGRES_{postgresVersion}_GIS_{postGISVersion}, + // e.g. RELATED_IMAGE_POSTGRES_13_GIS_3.1. 
+ // +optional + // +operator-sdk:csv:customresourcedefinitions:type=spec,order=1 + Image string `json:"image,omitempty"` + + // ImagePullPolicy is used to determine when Kubernetes will attempt to + // pull (download) container images. + // More info: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy + // +kubebuilder:validation:Enum={Always,Never,IfNotPresent} + // +optional + ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"` + + // The image pull secrets used to pull from a private registry + // Changing this value causes all running pods to restart. + // https://k8s.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + // +optional + ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"` + + // Specifies one or more sets of PostgreSQL pods that replicate data for + // this cluster. + // +listType=map + // +listMapKey=name + // +kubebuilder:validation:MinItems=1 + // +operator-sdk:csv:customresourcedefinitions:type=spec,order=2 + InstanceSets []PostgresInstanceSetSpec `json:"instances"` + + // Whether or not the PostgreSQL cluster is being deployed to an OpenShift + // environment. If the field is unset, the operator will automatically + // detect the environment. + // +optional + OpenShift *bool `json:"openshift,omitempty"` + + // +optional + Patroni *PatroniSpec `json:"patroni,omitempty"` + + // Suspends the rollout and reconciliation of changes made to the + // PostgresCluster spec. + // +optional + Paused *bool `json:"paused,omitempty"` + + // The port on which PostgreSQL should listen. + // +optional + // +kubebuilder:default=5432 + // +kubebuilder:validation:Minimum=1024 + Port *int32 `json:"port,omitempty"` + + // The major version of PostgreSQL installed in the PostgreSQL image + // +kubebuilder:validation:Required + // +kubebuilder:validation:Minimum=11 + // +kubebuilder:validation:Maximum=17 + // +operator-sdk:csv:customresourcedefinitions:type=spec,order=1 + PostgresVersion int `json:"postgresVersion"` + + // The PostGIS extension version installed in the PostgreSQL image. + // When image is not set, indicates a PostGIS enabled image will be used. + // +optional + PostGISVersion string `json:"postGISVersion,omitempty"` + + // The specification of a proxy that connects to PostgreSQL. + // +optional + Proxy *PostgresProxySpec `json:"proxy,omitempty"` + + // The specification of a user interface that connects to PostgreSQL. + // +optional + UserInterface *UserInterfaceSpec `json:"userInterface,omitempty"` + + // The specification of monitoring tools that connect to PostgreSQL + // +optional + Monitoring *MonitoringSpec `json:"monitoring,omitempty"` + + // Specification of the service that exposes the PostgreSQL primary instance. + // +optional + Service *ServiceSpec `json:"service,omitempty"` + + // Specification of the service that exposes PostgreSQL replica instances + // +optional + ReplicaService *ServiceSpec `json:"replicaService,omitempty"` + + // Whether or not the PostgreSQL cluster should be stopped. + // When this is true, workloads are scaled to zero and CronJobs + // are suspended. + // Other resources, such as Services and Volumes, remain in place. + // +optional + Shutdown *bool `json:"shutdown,omitempty"` + + // Run this cluster as a read-only copy of an existing cluster or archive. + // +optional + Standby *PostgresStandbySpec `json:"standby,omitempty"` + + // A list of group IDs applied to the process of a container. 
These can be + // useful when accessing shared file systems with constrained permissions. + // More info: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context + // --- + // +kubebuilder:validation:Optional + // + // Containers should not run with a root GID. + // - https://kubernetes.io/docs/concepts/security/pod-security-standards/ + // +kubebuilder:validation:items:Minimum=1 + // + // Supplementary GIDs must fit within int32. + // - https://releases.k8s.io/v1.18.0/pkg/apis/core/validation/validation.go#L3659-L3663 + // - https://releases.k8s.io/v1.22.0/pkg/apis/core/validation/validation.go#L3923-L3927 + // +kubebuilder:validation:items:Maximum=2147483647 + SupplementalGroups []int64 `json:"supplementalGroups,omitempty"` + + // Users to create inside PostgreSQL and the databases they should access. + // The default creates one user that can access one database matching the + // PostgresCluster name. An empty list creates no users. Removing a user + // from this list does NOT drop the user nor revoke their access. + // +listType=map + // +listMapKey=name + // +kubebuilder:validation:MaxItems=64 + // +optional + Users []PostgresUserSpec `json:"users,omitempty"` + + Config PostgresAdditionalConfig `json:"config,omitempty"` +} + +// DataSource defines data sources for a new PostgresCluster. +type DataSource struct { + // Defines a pgBackRest cloud-based data source that can be used to pre-populate the + // PostgreSQL data directory for a new PostgreSQL cluster using a pgBackRest restore. + // The PGBackRest field is incompatible with the PostgresCluster field: only one + // data source can be used for pre-populating a new PostgreSQL cluster + // +optional + PGBackRest *PGBackRestDataSource `json:"pgbackrest,omitempty"` + + // Defines a pgBackRest data source that can be used to pre-populate the PostgreSQL data + // directory for a new PostgreSQL cluster using a pgBackRest restore. + // The PGBackRest field is incompatible with the PostgresCluster field: only one + // data source can be used for pre-populating a new PostgreSQL cluster + // +optional + PostgresCluster *PostgresClusterDataSource `json:"postgresCluster,omitempty"` + + // Defines any existing volumes to reuse for this PostgresCluster. + // +optional + Volumes *DataSourceVolumes `json:"volumes,omitempty"` +} + +// DataSourceVolumes defines any existing volumes to reuse for this PostgresCluster. +type DataSourceVolumes struct { + // Defines the existing pgData volume and directory to use in the current + // PostgresCluster. + // +optional + PGDataVolume *DataSourceVolume `json:"pgDataVolume,omitempty"` + + // Defines the existing pg_wal volume and directory to use in the current + // PostgresCluster. Note that a defined pg_wal volume MUST be accompanied by + // a pgData volume. + // +optional + PGWALVolume *DataSourceVolume `json:"pgWALVolume,omitempty"` + + // Defines the existing pgBackRest repo volume and directory to use in the + // current PostgresCluster. + // +optional + PGBackRestVolume *DataSourceVolume `json:"pgBackRestVolume,omitempty"` +} + +// DataSourceVolume defines the PVC name and data directory path for an existing cluster volume. +type DataSourceVolume struct { + // The existing PVC name. + PVCName string `json:"pvcName"` + + // The existing directory. When not set, a move Job is not created for the + // associated volume. 
+ // +optional + Directory string `json:"directory,omitempty"` +} + +// DatabaseInitSQL defines a ConfigMap containing custom SQL that will +// be run after the cluster is initialized. This ConfigMap must be in the same +// namespace as the cluster. +type DatabaseInitSQL struct { + // Name is the name of a ConfigMap + // +required + Name string `json:"name"` + + // Key is the ConfigMap data key that points to a SQL string + // +required + Key string `json:"key"` +} + +// PostgresClusterDataSource defines a data source for bootstrapping PostgreSQL clusters using a +// an existing PostgresCluster. +type PostgresClusterDataSource struct { + + // The name of an existing PostgresCluster to use as the data source for the new PostgresCluster. + // Defaults to the name of the PostgresCluster being created if not provided. + // +optional + ClusterName string `json:"clusterName,omitempty"` + + // The namespace of the cluster specified as the data source using the clusterName field. + // Defaults to the namespace of the PostgresCluster being created if not provided. + // +optional + ClusterNamespace string `json:"clusterNamespace,omitempty"` + + // The name of the pgBackRest repo within the source PostgresCluster that contains the backups + // that should be utilized to perform a pgBackRest restore when initializing the data source + // for the new PostgresCluster. + // +kubebuilder:validation:Required + // +kubebuilder:validation:Pattern=^repo[1-4] + RepoName string `json:"repoName"` + + // Command line options to include when running the pgBackRest restore command. + // https://pgbackrest.org/command.html#command-restore + // +optional + Options []string `json:"options,omitempty"` + + // Resource requirements for the pgBackRest restore Job. + // +optional + Resources corev1.ResourceRequirements `json:"resources,omitempty"` + + // Scheduling constraints of the pgBackRest restore Job. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + // +optional + Affinity *corev1.Affinity `json:"affinity,omitempty"` + + // Priority class name for the pgBackRest restore Job pod. Changing this + // value causes PostgreSQL to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + // +optional + PriorityClassName *string `json:"priorityClassName,omitempty"` + + // Tolerations of the pgBackRest restore Job. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + // +optional + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` +} + +// Default defines several key default values for a Postgres cluster. +func (s *PostgresClusterSpec) Default() { + for i := range s.InstanceSets { + s.InstanceSets[i].Default(i) + } + + if s.Patroni == nil { + s.Patroni = new(PatroniSpec) + } + s.Patroni.Default() + + if s.Port == nil { + s.Port = new(int32) + *s.Port = 5432 + } + + if s.Proxy != nil { + s.Proxy.Default() + } + + if s.UserInterface != nil { + s.UserInterface.Default() + } +} + +// Backups defines a PostgreSQL archive configuration +type Backups struct { + + // pgBackRest archive configuration + // +optional + PGBackRest PGBackRestArchive `json:"pgbackrest"` + + // VolumeSnapshot configuration + // +optional + Snapshots *VolumeSnapshots `json:"snapshots,omitempty"` +} + +// PostgresClusterStatus defines the observed state of PostgresCluster +type PostgresClusterStatus struct { + + // Identifies the databases that have been installed into PostgreSQL. 
+ DatabaseRevision string `json:"databaseRevision,omitempty"` + + // Current state of PostgreSQL instances. + // +listType=map + // +listMapKey=name + // +optional + InstanceSets []PostgresInstanceSetStatus `json:"instances,omitempty"` + + // +optional + Patroni PatroniStatus `json:"patroni,omitempty"` + + // Status information for pgBackRest + // +optional + PGBackRest *PGBackRestStatus `json:"pgbackrest,omitempty"` + + // +optional + RegistrationRequired *RegistrationRequirementStatus `json:"registrationRequired,omitempty"` + + // +optional + TokenRequired string `json:"tokenRequired,omitempty"` + + // Stores the current PostgreSQL major version following a successful + // major PostgreSQL upgrade. + // +optional + PostgresVersion int `json:"postgresVersion"` + + // Current state of the PostgreSQL proxy. + // +optional + Proxy PostgresProxyStatus `json:"proxy,omitempty"` + + // The instance that should be started first when bootstrapping and/or starting a + // PostgresCluster. + // +optional + StartupInstance string `json:"startupInstance,omitempty"` + + // The instance set associated with the startupInstance + // +optional + StartupInstanceSet string `json:"startupInstanceSet,omitempty"` + + // Current state of the PostgreSQL user interface. + // +optional + UserInterface *PostgresUserInterfaceStatus `json:"userInterface,omitempty"` + + // Identifies the users that have been installed into PostgreSQL. + UsersRevision string `json:"usersRevision,omitempty"` + + // Current state of PostgreSQL cluster monitoring tool configuration + // +optional + Monitoring MonitoringStatus `json:"monitoring,omitempty"` + + // DatabaseInitSQL state of custom database initialization in the cluster + // +optional + DatabaseInitSQL *string `json:"databaseInitSQL,omitempty"` + + // observedGeneration represents the .metadata.generation on which the status was based. + // +optional + // +kubebuilder:validation:Minimum=0 + ObservedGeneration int64 `json:"observedGeneration,omitempty"` + + // conditions represent the observations of postgrescluster's current state. + // Known .status.conditions.type are: "PersistentVolumeResizing", + // "Progressing", "ProxyAvailable" + // +optional + // +listType=map + // +listMapKey=type + // +operator-sdk:csv:customresourcedefinitions:type=status,xDescriptors={"urn:alm:descriptor:io.kubernetes.conditions"} + Conditions []metav1.Condition `json:"conditions,omitempty"` +} + +// PostgresClusterStatus condition types. +const ( + PersistentVolumeResizing = "PersistentVolumeResizing" + PostgresClusterProgressing = "Progressing" + ProxyAvailable = "ProxyAvailable" + Registered = "Registered" +) + +type PostgresInstanceSetSpec struct { + // +optional + Metadata *Metadata `json:"metadata,omitempty"` + + // This value goes into the name of an appsv1.StatefulSet, the hostname of + // a corev1.Pod, and label values. The pattern below is IsDNS1123Label + // wrapped in "()?" to accommodate the empty default. + // + // The Pods created by a StatefulSet have a "controller-revision-hash" label + // comprised of the StatefulSet name, a dash, and a 10-character hash. + // The length below is derived from limitations on label values: + // + // 63 (max) ≥ len(cluster) + 1 (dash) + // + len(set) + 1 (dash) + 4 (id) + // + 1 (dash) + 10 (hash) + // + // See: https://issue.k8s.io/64023 + + // Name that associates this set of PostgreSQL pods. This field is optional + // when only one instance set is defined. Each instance set in a cluster + // must have a unique name. 
The combined length of this and the cluster name + // must be 46 characters or less. + // +optional + // +kubebuilder:default="" + // +kubebuilder:validation:Pattern=`^([a-z0-9]([-a-z0-9]*[a-z0-9])?)?$` + Name string `json:"name"` + + // Scheduling constraints of a PostgreSQL pod. Changing this value causes + // PostgreSQL to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + // +optional + Affinity *corev1.Affinity `json:"affinity,omitempty"` + + // Custom sidecars for PostgreSQL instance pods. Changing this value causes + // PostgreSQL to restart. + // +optional + Containers []corev1.Container `json:"containers,omitempty"` + + // Defines a PersistentVolumeClaim for PostgreSQL data. + // More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes + // --- + // +kubebuilder:validation:Required + // + // NOTE(validation): Every PVC must have at least one accessMode. NOTE(KEP-4153) + // TODO(k8s-1.28): fieldPath=`.accessModes`,reason="FieldValueRequired" + // - https://releases.k8s.io/v1.25.0/pkg/apis/core/validation/validation.go#L2098-L2100 + // - https://releases.k8s.io/v1.31.0/pkg/apis/core/validation/validation.go#L2292-L2294 + // +kubebuilder:validation:XValidation:rule=`has(self.accessModes) && size(self.accessModes) > 0`,message=`missing accessModes` + // + // NOTE(validation): Every PVC must have a positive storage request. NOTE(KEP-4153) + // TODO(k8s-1.28): fieldPath=`.resources.requests.storage`,reason="FieldValueRequired" + // - https://releases.k8s.io/v1.25.0/pkg/apis/core/validation/validation.go#L2126-L2133 + // - https://releases.k8s.io/v1.31.0/pkg/apis/core/validation/validation.go#L2318-L2325 + // +kubebuilder:validation:XValidation:rule=`has(self.resources) && has(self.resources.requests) && has(self.resources.requests.storage)`,message=`missing storage request` + DataVolumeClaimSpec corev1.PersistentVolumeClaimSpec `json:"dataVolumeClaimSpec"` + + // Priority class name for the PostgreSQL pod. Changing this value causes + // PostgreSQL to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + // +optional + PriorityClassName *string `json:"priorityClassName,omitempty"` + + // Number of desired PostgreSQL pods. + // +optional + // +kubebuilder:default=1 + // +kubebuilder:validation:Minimum=1 + Replicas *int32 `json:"replicas,omitempty"` + + // Minimum number of pods that should be available at a time. + // Defaults to one when the replicas field is greater than one. + // +optional + MinAvailable *intstr.IntOrString `json:"minAvailable,omitempty"` + + // Compute resources of a PostgreSQL container. + // +optional + Resources corev1.ResourceRequirements `json:"resources,omitempty"` + + // Configuration for instance sidecar containers + // +optional + Sidecars *InstanceSidecars `json:"sidecars,omitempty"` + + // Tolerations of a PostgreSQL pod. Changing this value causes PostgreSQL to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + // +optional + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` + + // Topology spread constraints of a PostgreSQL pod. Changing this value causes + // PostgreSQL to restart. 
+ // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ + // +optional + TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"` + + // Defines a separate PersistentVolumeClaim for PostgreSQL's write-ahead log. + // More info: https://www.postgresql.org/docs/current/wal.html + // --- + // +kubebuilder:validation:Optional + // + // NOTE(validation): Every PVC must have at least one accessMode. NOTE(KEP-4153) + // TODO(k8s-1.28): fieldPath=`.accessModes`,reason="FieldValueRequired" + // - https://releases.k8s.io/v1.25.0/pkg/apis/core/validation/validation.go#L2098-L2100 + // - https://releases.k8s.io/v1.31.0/pkg/apis/core/validation/validation.go#L2292-L2294 + // +kubebuilder:validation:XValidation:rule=`has(self.accessModes) && size(self.accessModes) > 0`,message=`missing accessModes` + // + // NOTE(validation): Every PVC must have a positive storage request. NOTE(KEP-4153) + // TODO(k8s-1.28): fieldPath=`.resources.requests.storage`,reason="FieldValueRequired" + // - https://releases.k8s.io/v1.25.0/pkg/apis/core/validation/validation.go#L2126-L2133 + // - https://releases.k8s.io/v1.31.0/pkg/apis/core/validation/validation.go#L2318-L2325 + // +kubebuilder:validation:XValidation:rule=`has(self.resources) && has(self.resources.requests) && has(self.resources.requests.storage)`,message=`missing storage request` + WALVolumeClaimSpec *corev1.PersistentVolumeClaimSpec `json:"walVolumeClaimSpec,omitempty"` + + // The list of tablespaces volumes to mount for this postgrescluster + // This field requires enabling TablespaceVolumes feature gate + // +listType=map + // +listMapKey=name + // +optional + TablespaceVolumes []TablespaceVolume `json:"tablespaceVolumes,omitempty"` +} + +type TablespaceVolume struct { + // This value goes into + // a. the name of a corev1.PersistentVolumeClaim, + // b. a label value, and + // c. a path name. + // So it must match both IsDNS1123Subdomain and IsValidLabelValue; + // and be valid as a file path. + + // The name for the tablespace, used as the path name for the volume. + // Must be unique in the instance set since they become the directory names. + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + // +kubebuilder:validation:Pattern=`^[a-z][a-z0-9]*$` + // +kubebuilder:validation:Type=string + Name string `json:"name"` + + // Defines a PersistentVolumeClaim for a tablespace. + // More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes + // --- + // +kubebuilder:validation:Required + // + // NOTE(validation): Every PVC must have at least one accessMode. NOTE(KEP-4153) + // TODO(k8s-1.28): fieldPath=`.accessModes`,reason="FieldValueRequired" + // - https://releases.k8s.io/v1.25.0/pkg/apis/core/validation/validation.go#L2098-L2100 + // - https://releases.k8s.io/v1.31.0/pkg/apis/core/validation/validation.go#L2292-L2294 + // +kubebuilder:validation:XValidation:rule=`has(self.accessModes) && size(self.accessModes) > 0`,message=`missing accessModes` + // + // NOTE(validation): Every PVC must have a positive storage request. 
NOTE(KEP-4153) + // TODO(k8s-1.28): fieldPath=`.resources.requests.storage`,reason="FieldValueRequired" + // - https://releases.k8s.io/v1.25.0/pkg/apis/core/validation/validation.go#L2126-L2133 + // - https://releases.k8s.io/v1.31.0/pkg/apis/core/validation/validation.go#L2318-L2325 + // +kubebuilder:validation:XValidation:rule=`has(self.resources) && has(self.resources.requests) && has(self.resources.requests.storage)`,message=`missing storage request` + DataVolumeClaimSpec corev1.PersistentVolumeClaimSpec `json:"dataVolumeClaimSpec"` +} + +// InstanceSidecars defines the configuration for instance sidecar containers +type InstanceSidecars struct { + // Defines the configuration for the replica cert copy sidecar container + // +optional + ReplicaCertCopy *Sidecar `json:"replicaCertCopy,omitempty"` +} + +// Default sets the default values for an instance set spec, including the name +// suffix and number of replicas. +func (s *PostgresInstanceSetSpec) Default(i int) { + if s.Name == "" { + s.Name = fmt.Sprintf("%02d", i) + } + if s.Replicas == nil { + s.Replicas = new(int32) + *s.Replicas = 1 + } +} + +type PostgresInstanceSetStatus struct { + Name string `json:"name"` + + // Total number of ready pods. + // +optional + ReadyReplicas int32 `json:"readyReplicas,omitempty"` + + // Total number of pods. + // +optional + Replicas int32 `json:"replicas,omitempty"` + + // Total number of pods that have the desired specification. + // +optional + UpdatedReplicas int32 `json:"updatedReplicas,omitempty"` + + // Desired Size of the pgData volume + // +optional + DesiredPGDataVolume map[string]string `json:"desiredPGDataVolume,omitempty"` +} + +// PostgresProxySpec is a union of the supported PostgreSQL proxies. +type PostgresProxySpec struct { + + // Defines a PgBouncer proxy and connection pooler. + PGBouncer *PGBouncerPodSpec `json:"pgBouncer"` +} + +// Default sets the defaults for any proxies that are set. +func (s *PostgresProxySpec) Default() { + if s.PGBouncer != nil { + s.PGBouncer.Default() + } +} + +type RegistrationRequirementStatus struct { + PGOVersion string `json:"pgoVersion,omitempty"` +} + +type PostgresProxyStatus struct { + PGBouncer PGBouncerPodStatus `json:"pgBouncer,omitempty"` +} + +// PostgresStandbySpec defines if/how the cluster should be a hot standby. +type PostgresStandbySpec struct { + // Whether or not the PostgreSQL cluster should be read-only. When this is + // true, WAL files are applied from a pgBackRest repository or another + // PostgreSQL server. + // +optional + // +kubebuilder:default=true + Enabled bool `json:"enabled"` + + // The name of the pgBackRest repository to follow for WAL files. + // +optional + // +kubebuilder:validation:Pattern=^repo[1-4] + RepoName string `json:"repoName,omitempty"` + + // Network address of the PostgreSQL server to follow via streaming replication. + // +optional + Host string `json:"host,omitempty"` + + // Network port of the PostgreSQL server to follow via streaming replication. + // +optional + // +kubebuilder:validation:Minimum=1024 + Port *int32 `json:"port,omitempty"` +} + +// UserInterfaceSpec is a union of the supported PostgreSQL user interfaces. +type UserInterfaceSpec struct { + + // Defines a pgAdmin user interface. + PGAdmin *PGAdminPodSpec `json:"pgAdmin"` +} + +// Default sets the defaults for any user interfaces that are set. 
+func (s *UserInterfaceSpec) Default() { + if s.PGAdmin != nil { + s.PGAdmin.Default() + } +} + +// PostgresUserInterfaceStatus is a union of the supported PostgreSQL user +// interface statuses. +type PostgresUserInterfaceStatus struct { + + // The state of the pgAdmin user interface. + PGAdmin PGAdminPodStatus `json:"pgAdmin,omitempty"` +} + +type PostgresAdditionalConfig struct { + Files []corev1.VolumeProjection `json:"files,omitempty"` +} + +// +kubebuilder:object:root=true +// +kubebuilder:subresource:status +// +operator-sdk:csv:customresourcedefinitions:resources={{ConfigMap,v1},{Secret,v1},{Service,v1},{CronJob,v1beta1},{Deployment,v1},{Job,v1},{StatefulSet,v1},{PersistentVolumeClaim,v1}} + +// PostgresCluster is the Schema for the postgresclusters API +type PostgresCluster struct { + // ObjectMeta.Name is a DNS subdomain. + // - https://docs.k8s.io/concepts/overview/working-with-objects/names/#dns-subdomain-names + // - https://releases.k8s.io/v1.21.0/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/validator.go#L60 + + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + // NOTE(cbandy): Every PostgresCluster needs a Spec, but it is optional here + // so ObjectMeta can be managed independently. + + Spec PostgresClusterSpec `json:"spec,omitempty"` + Status PostgresClusterStatus `json:"status,omitempty"` +} + +// Default implements "sigs.k8s.io/controller-runtime/pkg/webhook.Defaulter" so +// a webhook can be registered for the type. +// - https://book.kubebuilder.io/reference/webhook-overview.html +func (c *PostgresCluster) Default() { + if len(c.APIVersion) == 0 { + c.APIVersion = GroupVersion.String() + } + if len(c.Kind) == 0 { + c.Kind = "PostgresCluster" + } + c.Spec.Default() +} + +// +kubebuilder:object:root=true + +// PostgresClusterList contains a list of PostgresCluster +type PostgresClusterList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata,omitempty"` + Items []PostgresCluster `json:"items"` +} + +func init() { + SchemeBuilder.Register(&PostgresCluster{}, &PostgresClusterList{}) +} + +// MonitoringSpec is a union of the supported PostgreSQL Monitoring tools +type MonitoringSpec struct { + // +optional + PGMonitor *PGMonitorSpec `json:"pgmonitor,omitempty"` +} + +// MonitoringStatus is the current state of PostgreSQL cluster monitoring tool +// configuration +type MonitoringStatus struct { + // +optional + ExporterConfiguration string `json:"exporterConfiguration,omitempty"` +} + +func NewPostgresCluster() *PostgresCluster { + cluster := &PostgresCluster{} + cluster.SetGroupVersionKind(GroupVersion.WithKind("PostgresCluster")) + return cluster +} + +// VolumeSnapshots defines the configuration for VolumeSnapshots +type VolumeSnapshots struct { + // Name of the VolumeSnapshotClass that should be used by VolumeSnapshots + // +kubebuilder:validation:Required + // +kubebuilder:validation:MinLength=1 + VolumeSnapshotClassName string `json:"volumeSnapshotClassName"` +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/shared_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/shared_types.go new file mode 100644 index 0000000000..1dc4e3627e --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/shared_types.go @@ -0,0 +1,93 @@ +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. 
+// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import ( + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/runtime" +) + +// SchemalessObject is a map compatible with JSON object. +// +// Use with the following markers: +// - kubebuilder:pruning:PreserveUnknownFields +// - kubebuilder:validation:Schemaless +// - kubebuilder:validation:Type=object +type SchemalessObject map[string]any + +// DeepCopy creates a new SchemalessObject by copying the receiver. +func (in *SchemalessObject) DeepCopy() *SchemalessObject { + if in == nil { + return nil + } + out := new(SchemalessObject) + *out = runtime.DeepCopyJSON(*in) + return out +} + +type ServiceSpec struct { + // +optional + Metadata *Metadata `json:"metadata,omitempty"` + + // The port on which this service is exposed when type is NodePort or + // LoadBalancer. Value must be in-range and not in use or the operation will + // fail. If unspecified, a port will be allocated if this Service requires one. + // - https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + // +optional + NodePort *int32 `json:"nodePort,omitempty"` + + // More info: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types + // + // +optional + // +kubebuilder:default=ClusterIP + // +kubebuilder:validation:Enum={ClusterIP,NodePort,LoadBalancer} + Type string `json:"type"` + + // More info: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-policies + // + // +optional + // +kubebuilder:validation:Enum={Cluster,Local} + InternalTrafficPolicy *corev1.ServiceInternalTrafficPolicyType `json:"internalTrafficPolicy,omitempty"` + + // More info: https://kubernetes.io/docs/concepts/services-networking/service/#traffic-policies + // + // +optional + // +kubebuilder:validation:Enum={Cluster,Local} + ExternalTrafficPolicy *corev1.ServiceExternalTrafficPolicyType `json:"externalTrafficPolicy,omitempty"` +} + +// Sidecar defines the configuration of a sidecar container +type Sidecar struct { + // Resource requirements for a sidecar container + // +optional + Resources *corev1.ResourceRequirements `json:"resources,omitempty"` +} + +// Metadata contains metadata for custom resources +type Metadata struct { + // +optional + Labels map[string]string `json:"labels,omitempty"` + + // +optional + Annotations map[string]string `json:"annotations,omitempty"` +} + +// GetLabelsOrNil gets labels from a Metadata pointer, if Metadata +// hasn't been set return nil +func (meta *Metadata) GetLabelsOrNil() map[string]string { + if meta == nil { + return nil + } + return meta.Labels +} + +// GetAnnotationsOrNil gets annotations from a Metadata pointer, if Metadata +// hasn't been set return nil +func (meta *Metadata) GetAnnotationsOrNil() map[string]string { + if meta == nil { + return nil + } + return meta.Annotations +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/shared_types_test.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/shared_types_test.go new file mode 100644 index 0000000000..96cd4da073 --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/shared_types_test.go @@ -0,0 +1,59 @@ +// Copyright 2022 - 2024 Crunchy Data Solutions, Inc. 
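A small sketch of the nil-safe Metadata accessors above, assuming the same package import path as the previous example; both accessors tolerate an unset *Metadata, so callers can read labels and annotations without their own nil checks:

package main

import (
	"fmt"

	v1beta1 "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1"
)

func main() {
	// Nil receiver: both accessors return nil rather than panicking.
	var unset *v1beta1.Metadata
	fmt.Println(unset.GetLabelsOrNil() == nil, unset.GetAnnotationsOrNil() == nil) // true true

	// Set receiver: the underlying maps are returned as-is.
	// The label key and value here are hypothetical, for illustration only.
	set := &v1beta1.Metadata{Labels: map[string]string{"app": "demo"}}
	fmt.Println(set.GetLabelsOrNil()["app"]) // demo
}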
+// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import ( + "reflect" + "testing" + + "gotest.tools/v3/assert" + "sigs.k8s.io/yaml" +) + +func TestSchemalessObjectDeepCopy(t *testing.T) { + t.Parallel() + + var n *SchemalessObject + assert.DeepEqual(t, n, n.DeepCopy()) + + var z SchemalessObject + assert.DeepEqual(t, z, *z.DeepCopy()) + + var one SchemalessObject + assert.NilError(t, yaml.Unmarshal( + []byte(`{ str: value, num: 1, arr: [a, 2, true] }`), &one, + )) + + // reflect and go-cmp agree the original and copy are equivalent. + same := *one.DeepCopy() + assert.DeepEqual(t, one, same) + assert.Assert(t, reflect.DeepEqual(one, same)) + + // Changes to the copy do not affect the original. + { + change := *one.DeepCopy() + change["str"] = "banana" + assert.Assert(t, reflect.DeepEqual(one, same)) + assert.Assert(t, !reflect.DeepEqual(one, change)) + } + { + change := *one.DeepCopy() + change["num"] = 99 + assert.Assert(t, reflect.DeepEqual(one, same)) + assert.Assert(t, !reflect.DeepEqual(one, change)) + } + { + change := *one.DeepCopy() + change["arr"].([]any)[0] = "rock" + assert.Assert(t, reflect.DeepEqual(one, same)) + assert.Assert(t, !reflect.DeepEqual(one, change)) + } + { + change := *one.DeepCopy() + change["arr"] = append(change["arr"].([]any), "more") + assert.Assert(t, reflect.DeepEqual(one, same)) + assert.Assert(t, !reflect.DeepEqual(one, change)) + } +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/standalone_pgadmin_types.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/standalone_pgadmin_types.go new file mode 100644 index 0000000000..4fbc90a3b9 --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/standalone_pgadmin_types.go @@ -0,0 +1,219 @@ +// Copyright 2023 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +package v1beta1 + +import ( + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" +) + +// PGAdminConfiguration represents pgAdmin configuration files. +type StandalonePGAdminConfiguration struct { + // Files allows the user to mount projected volumes into the pgAdmin + // container so that files can be referenced by pgAdmin as needed. + // +optional + Files []corev1.VolumeProjection `json:"files,omitempty"` + + // A Secret containing the value for the CONFIG_DATABASE_URI setting. + // More info: https://www.pgadmin.org/docs/pgadmin4/latest/external_database.html + // +optional + ConfigDatabaseURI *corev1.SecretKeySelector `json:"configDatabaseURI,omitempty"` + + // Settings for the gunicorn server. + // More info: https://docs.gunicorn.org/en/latest/settings.html + // +optional + // +kubebuilder:pruning:PreserveUnknownFields + // +kubebuilder:validation:Schemaless + // +kubebuilder:validation:Type=object + Gunicorn SchemalessObject `json:"gunicorn,omitempty"` + + // A Secret containing the value for the LDAP_BIND_PASSWORD setting. + // More info: https://www.pgadmin.org/docs/pgadmin4/latest/ldap.html + // +optional + LDAPBindPassword *corev1.SecretKeySelector `json:"ldapBindPassword,omitempty"` + + // Settings for the pgAdmin server process. Keys should be uppercase and + // values must be constants. 
+ // More info: https://www.pgadmin.org/docs/pgadmin4/latest/config_py.html + // +optional + // +kubebuilder:pruning:PreserveUnknownFields + // +kubebuilder:validation:Schemaless + // +kubebuilder:validation:Type=object + Settings SchemalessObject `json:"settings,omitempty"` +} + +// PGAdminSpec defines the desired state of PGAdmin +type PGAdminSpec struct { + + // +optional + Metadata *Metadata `json:"metadata,omitempty"` + + // Configuration settings for the pgAdmin process. Changes to any of these + // values will be loaded without validation. Be careful, as + // you may put pgAdmin into an unusable state. + // +optional + Config StandalonePGAdminConfiguration `json:"config,omitempty"` + + // Defines a PersistentVolumeClaim for pgAdmin data. + // More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes + // +kubebuilder:validation:Required + DataVolumeClaimSpec corev1.PersistentVolumeClaimSpec `json:"dataVolumeClaimSpec"` + + // The image name to use for pgAdmin instance. + // +optional + Image *string `json:"image,omitempty"` + + // ImagePullPolicy is used to determine when Kubernetes will attempt to + // pull (download) container images. + // More info: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy + // +kubebuilder:validation:Enum={Always,Never,IfNotPresent} + // +optional + ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"` + + // The image pull secrets used to pull from a private registry. + // Changing this value causes all running PGAdmin pods to restart. + // https://k8s.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + // +optional + ImagePullSecrets []corev1.LocalObjectReference `json:"imagePullSecrets,omitempty"` + + // Resource requirements for the PGAdmin container. + // +optional + Resources corev1.ResourceRequirements `json:"resources,omitempty"` + + // Scheduling constraints of the PGAdmin pod. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node + // +optional + Affinity *corev1.Affinity `json:"affinity,omitempty"` + + // Priority class name for the PGAdmin pod. Changing this + // value causes PGAdmin pod to restart. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/ + // +optional + PriorityClassName *string `json:"priorityClassName,omitempty"` + + // Tolerations of the PGAdmin pod. + // More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration + // +optional + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` + + // ServerGroups for importing PostgresClusters to pgAdmin. + // To create a pgAdmin with no selectors, leave this field empty. + // A pgAdmin created with no `ServerGroups` will not automatically + // add any servers through discovery. PostgresClusters can still be + // added manually. + // +optional + ServerGroups []ServerGroup `json:"serverGroups"` + + // pgAdmin users that are managed via the PGAdmin spec. Users can still + // be added via the pgAdmin GUI, but those users will not show up here. + // +listType=map + // +listMapKey=username + // +optional + Users []PGAdminUser `json:"users,omitempty"` + + // ServiceName will be used as the name of a ClusterIP service pointing + // to the pgAdmin pod and port. If the service already exists, PGO will + // update the service. For more information about services reference + // the Kubernetes and CrunchyData documentation. 
+ // https://kubernetes.io/docs/concepts/services-networking/service/ + // +optional + ServiceName string `json:"serviceName,omitempty"` +} + +// +kubebuilder:validation:XValidation:rule=`[has(self.postgresClusterName),has(self.postgresClusterSelector)].exists_one(x,x)`,message=`exactly one of "postgresClusterName" or "postgresClusterSelector" is required` +type ServerGroup struct { + // The name for the ServerGroup in pgAdmin. + // Must be unique in the pgAdmin's ServerGroups since it becomes the ServerGroup name in pgAdmin. + // +kubebuilder:validation:Required + Name string `json:"name"` + + // PostgresClusterName selects one cluster to add to pgAdmin by name. + // +kubebuilder:validation:Optional + PostgresClusterName string `json:"postgresClusterName,omitempty"` + + // PostgresClusterSelector selects clusters to dynamically add to pgAdmin by matching labels. + // An empty selector like `{}` will select ALL clusters in the namespace. + // +kubebuilder:validation:Optional + PostgresClusterSelector metav1.LabelSelector `json:"postgresClusterSelector,omitempty"` +} + +type PGAdminUser struct { + // A reference to the secret that holds the user's password. + // +kubebuilder:validation:Required + PasswordRef *corev1.SecretKeySelector `json:"passwordRef"` + + // Role determines whether the user has admin privileges or not. + // Defaults to User. Valid options are Administrator and User. + // +kubebuilder:validation:Enum={Administrator,User} + // +optional + Role string `json:"role,omitempty"` + + // The username for User in pgAdmin. + // Must be unique in the pgAdmin's users list. + // +kubebuilder:validation:Required + Username string `json:"username"` +} + +// PGAdminStatus defines the observed state of PGAdmin +type PGAdminStatus struct { + + // conditions represent the observations of pgAdmin's current state. + // Known .status.conditions.type is: "PersistentVolumeResizing" + // +optional + // +listType=map + // +listMapKey=type + // +operator-sdk:csv:customresourcedefinitions:type=status,xDescriptors={"urn:alm:descriptor:io.kubernetes.conditions"} + Conditions []metav1.Condition `json:"conditions,omitempty"` + + // ImageSHA represents the image SHA for the container running pgAdmin. + // +optional + ImageSHA string `json:"imageSHA,omitempty"` + + // MajorVersion represents the major version of the running pgAdmin. + // +optional + MajorVersion int `json:"majorVersion,omitempty"` + + // observedGeneration represents the .metadata.generation on which the status was based. + // +optional + // +kubebuilder:validation:Minimum=0 + ObservedGeneration int64 `json:"observedGeneration,omitempty"` +} + +//+kubebuilder:object:root=true +//+kubebuilder:subresource:status + +// PGAdmin is the Schema for the PGAdmin API +type PGAdmin struct { + metav1.TypeMeta `json:",inline"` + metav1.ObjectMeta `json:"metadata,omitempty"` + + Spec PGAdminSpec `json:"spec,omitempty"` + Status PGAdminStatus `json:"status,omitempty"` +} + +// Default implements "sigs.k8s.io/controller-runtime/pkg/webhook.Defaulter" so +// a webhook can be registered for the type. 
+// - https://book.kubebuilder.io/reference/webhook-overview.html +func (p *PGAdmin) Default() { + if len(p.APIVersion) == 0 { + p.APIVersion = GroupVersion.String() + } + if len(p.Kind) == 0 { + p.Kind = "PGAdmin" + } +} + +//+kubebuilder:object:root=true + +// PGAdminList contains a list of PGAdmin +type PGAdminList struct { + metav1.TypeMeta `json:",inline"` + metav1.ListMeta `json:"metadata,omitempty"` + Items []PGAdmin `json:"items"` +} + +func init() { + SchemeBuilder.Register(&PGAdmin{}, &PGAdminList{}) +} diff --git a/pkg/apis/postgres-operator.crunchydata.com/v1beta1/zz_generated.deepcopy.go b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/zz_generated.deepcopy.go new file mode 100644 index 0000000000..fa32069d0f --- /dev/null +++ b/pkg/apis/postgres-operator.crunchydata.com/v1beta1/zz_generated.deepcopy.go @@ -0,0 +1,2327 @@ +//go:build !ignore_autogenerated + +// Copyright 2021 - 2024 Crunchy Data Solutions, Inc. +// +// SPDX-License-Identifier: Apache-2.0 + +// Code generated by controller-gen. DO NOT EDIT. + +package v1beta1 + +import ( + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/util/intstr" +) + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *APIResponses) DeepCopyInto(out *APIResponses) { + *out = *in + in.Cluster.DeepCopyInto(&out.Cluster) + in.Status.DeepCopyInto(&out.Status) + in.Upgrade.DeepCopyInto(&out.Upgrade) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new APIResponses. +func (in *APIResponses) DeepCopy() *APIResponses { + if in == nil { + return nil + } + out := new(APIResponses) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *BackupJobs) DeepCopyInto(out *BackupJobs) { + *out = *in + in.Resources.DeepCopyInto(&out.Resources) + if in.PriorityClassName != nil { + in, out := &in.PriorityClassName, &out.PriorityClassName + *out = new(string) + **out = **in + } + if in.Affinity != nil { + in, out := &in.Affinity, &out.Affinity + *out = new(corev1.Affinity) + (*in).DeepCopyInto(*out) + } + if in.Tolerations != nil { + in, out := &in.Tolerations, &out.Tolerations + *out = make([]corev1.Toleration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.TTLSecondsAfterFinished != nil { + in, out := &in.TTLSecondsAfterFinished, &out.TTLSecondsAfterFinished + *out = new(int32) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BackupJobs. +func (in *BackupJobs) DeepCopy() *BackupJobs { + if in == nil { + return nil + } + out := new(BackupJobs) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Backups) DeepCopyInto(out *Backups) { + *out = *in + in.PGBackRest.DeepCopyInto(&out.PGBackRest) + if in.Snapshots != nil { + in, out := &in.Snapshots, &out.Snapshots + *out = new(VolumeSnapshots) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Backups. +func (in *Backups) DeepCopy() *Backups { + if in == nil { + return nil + } + out := new(Backups) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
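As a rough illustration of the ServerGroup union and the PGAdmin defaulting defined above (the cluster and label names used here are hypothetical): each group sets exactly one of PostgresClusterName or PostgresClusterSelector, which the exists_one CEL rule enforces at admission time.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	v1beta1 "github.com/crunchydata/postgres-operator/pkg/apis/postgres-operator.crunchydata.com/v1beta1"
)

func main() {
	pgadmin := v1beta1.PGAdmin{
		Spec: v1beta1.PGAdminSpec{
			ServerGroups: []v1beta1.ServerGroup{
				// Adds one cluster by name ("hippo" is a hypothetical name).
				{Name: "by-name", PostgresClusterName: "hippo"},
				// Adds clusters dynamically by label; an empty selector ({})
				// would select every cluster in the namespace.
				{Name: "by-label", PostgresClusterSelector: metav1.LabelSelector{
					MatchLabels: map[string]string{"team": "analytics"},
				}},
				// Setting both fields, or neither, on a single group is what
				// the exists_one XValidation rule rejects.
			},
		},
	}

	// Default fills in APIVersion and Kind when they are empty.
	pgadmin.Default()
	fmt.Println(pgadmin.APIVersion, pgadmin.Kind)
}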
in must be non-nil. +func (in *ClusterUpgrade) DeepCopyInto(out *ClusterUpgrade) { + *out = *in + if in.Operations != nil { + in, out := &in.Operations, &out.Operations + *out = make([]*UpgradeOperation, len(*in)) + for i := range *in { + if (*in)[i] != nil { + in, out := &(*in)[i], &(*out)[i] + *out = new(UpgradeOperation) + **out = **in + } + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterUpgrade. +func (in *ClusterUpgrade) DeepCopy() *ClusterUpgrade { + if in == nil { + return nil + } + out := new(ClusterUpgrade) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *CrunchyBridgeCluster) DeepCopyInto(out *CrunchyBridgeCluster) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + in.Status.DeepCopyInto(&out.Status) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CrunchyBridgeCluster. +func (in *CrunchyBridgeCluster) DeepCopy() *CrunchyBridgeCluster { + if in == nil { + return nil + } + out := new(CrunchyBridgeCluster) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *CrunchyBridgeCluster) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *CrunchyBridgeClusterList) DeepCopyInto(out *CrunchyBridgeClusterList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]CrunchyBridgeCluster, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CrunchyBridgeClusterList. +func (in *CrunchyBridgeClusterList) DeepCopy() *CrunchyBridgeClusterList { + if in == nil { + return nil + } + out := new(CrunchyBridgeClusterList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *CrunchyBridgeClusterList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *CrunchyBridgeClusterRoleSpec) DeepCopyInto(out *CrunchyBridgeClusterRoleSpec) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CrunchyBridgeClusterRoleSpec. +func (in *CrunchyBridgeClusterRoleSpec) DeepCopy() *CrunchyBridgeClusterRoleSpec { + if in == nil { + return nil + } + out := new(CrunchyBridgeClusterRoleSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *CrunchyBridgeClusterSpec) DeepCopyInto(out *CrunchyBridgeClusterSpec) { + *out = *in + if in.Metadata != nil { + in, out := &in.Metadata, &out.Metadata + *out = new(Metadata) + (*in).DeepCopyInto(*out) + } + if in.Roles != nil { + in, out := &in.Roles, &out.Roles + *out = make([]*CrunchyBridgeClusterRoleSpec, len(*in)) + for i := range *in { + if (*in)[i] != nil { + in, out := &(*in)[i], &(*out)[i] + *out = new(CrunchyBridgeClusterRoleSpec) + **out = **in + } + } + } + out.Storage = in.Storage.DeepCopy() +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CrunchyBridgeClusterSpec. +func (in *CrunchyBridgeClusterSpec) DeepCopy() *CrunchyBridgeClusterSpec { + if in == nil { + return nil + } + out := new(CrunchyBridgeClusterSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *CrunchyBridgeClusterStatus) DeepCopyInto(out *CrunchyBridgeClusterStatus) { + *out = *in + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]v1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.IsHA != nil { + in, out := &in.IsHA, &out.IsHA + *out = new(bool) + **out = **in + } + if in.IsProtected != nil { + in, out := &in.IsProtected, &out.IsProtected + *out = new(bool) + **out = **in + } + if in.OngoingUpgrade != nil { + in, out := &in.OngoingUpgrade, &out.OngoingUpgrade + *out = make([]*UpgradeOperation, len(*in)) + for i := range *in { + if (*in)[i] != nil { + in, out := &(*in)[i], &(*out)[i] + *out = new(UpgradeOperation) + **out = **in + } + } + } + in.Responses.DeepCopyInto(&out.Responses) + if in.Storage != nil { + in, out := &in.Storage, &out.Storage + x := (*in).DeepCopy() + *out = &x + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CrunchyBridgeClusterStatus. +func (in *CrunchyBridgeClusterStatus) DeepCopy() *CrunchyBridgeClusterStatus { + if in == nil { + return nil + } + out := new(CrunchyBridgeClusterStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DataSource) DeepCopyInto(out *DataSource) { + *out = *in + if in.PGBackRest != nil { + in, out := &in.PGBackRest, &out.PGBackRest + *out = new(PGBackRestDataSource) + (*in).DeepCopyInto(*out) + } + if in.PostgresCluster != nil { + in, out := &in.PostgresCluster, &out.PostgresCluster + *out = new(PostgresClusterDataSource) + (*in).DeepCopyInto(*out) + } + if in.Volumes != nil { + in, out := &in.Volumes, &out.Volumes + *out = new(DataSourceVolumes) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataSource. +func (in *DataSource) DeepCopy() *DataSource { + if in == nil { + return nil + } + out := new(DataSource) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DataSourceVolume) DeepCopyInto(out *DataSourceVolume) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataSourceVolume. 
+func (in *DataSourceVolume) DeepCopy() *DataSourceVolume { + if in == nil { + return nil + } + out := new(DataSourceVolume) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DataSourceVolumes) DeepCopyInto(out *DataSourceVolumes) { + *out = *in + if in.PGDataVolume != nil { + in, out := &in.PGDataVolume, &out.PGDataVolume + *out = new(DataSourceVolume) + **out = **in + } + if in.PGWALVolume != nil { + in, out := &in.PGWALVolume, &out.PGWALVolume + *out = new(DataSourceVolume) + **out = **in + } + if in.PGBackRestVolume != nil { + in, out := &in.PGBackRestVolume, &out.PGBackRestVolume + *out = new(DataSourceVolume) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataSourceVolumes. +func (in *DataSourceVolumes) DeepCopy() *DataSourceVolumes { + if in == nil { + return nil + } + out := new(DataSourceVolumes) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *DatabaseInitSQL) DeepCopyInto(out *DatabaseInitSQL) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DatabaseInitSQL. +func (in *DatabaseInitSQL) DeepCopy() *DatabaseInitSQL { + if in == nil { + return nil + } + out := new(DatabaseInitSQL) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ExporterSpec) DeepCopyInto(out *ExporterSpec) { + *out = *in + if in.Configuration != nil { + in, out := &in.Configuration, &out.Configuration + *out = make([]corev1.VolumeProjection, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.CustomTLSSecret != nil { + in, out := &in.CustomTLSSecret, &out.CustomTLSSecret + *out = new(corev1.SecretProjection) + (*in).DeepCopyInto(*out) + } + in.Resources.DeepCopyInto(&out.Resources) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExporterSpec. +func (in *ExporterSpec) DeepCopy() *ExporterSpec { + if in == nil { + return nil + } + out := new(ExporterSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *InstanceSidecars) DeepCopyInto(out *InstanceSidecars) { + *out = *in + if in.ReplicaCertCopy != nil { + in, out := &in.ReplicaCertCopy, &out.ReplicaCertCopy + *out = new(Sidecar) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new InstanceSidecars. +func (in *InstanceSidecars) DeepCopy() *InstanceSidecars { + if in == nil { + return nil + } + out := new(InstanceSidecars) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *Metadata) DeepCopyInto(out *Metadata) { + *out = *in + if in.Labels != nil { + in, out := &in.Labels, &out.Labels + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } + if in.Annotations != nil { + in, out := &in.Annotations, &out.Annotations + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Metadata. +func (in *Metadata) DeepCopy() *Metadata { + if in == nil { + return nil + } + out := new(Metadata) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *MonitoringSpec) DeepCopyInto(out *MonitoringSpec) { + *out = *in + if in.PGMonitor != nil { + in, out := &in.PGMonitor, &out.PGMonitor + *out = new(PGMonitorSpec) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MonitoringSpec. +func (in *MonitoringSpec) DeepCopy() *MonitoringSpec { + if in == nil { + return nil + } + out := new(MonitoringSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *MonitoringStatus) DeepCopyInto(out *MonitoringStatus) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MonitoringStatus. +func (in *MonitoringStatus) DeepCopy() *MonitoringStatus { + if in == nil { + return nil + } + out := new(MonitoringStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGAdmin) DeepCopyInto(out *PGAdmin) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + in.Status.DeepCopyInto(&out.Status) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGAdmin. +func (in *PGAdmin) DeepCopy() *PGAdmin { + if in == nil { + return nil + } + out := new(PGAdmin) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *PGAdmin) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGAdminConfiguration) DeepCopyInto(out *PGAdminConfiguration) { + *out = *in + if in.Files != nil { + in, out := &in.Files, &out.Files + *out = make([]corev1.VolumeProjection, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.LDAPBindPassword != nil { + in, out := &in.LDAPBindPassword, &out.LDAPBindPassword + *out = new(corev1.SecretKeySelector) + (*in).DeepCopyInto(*out) + } + in.Settings.DeepCopyInto(&out.Settings) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGAdminConfiguration. +func (in *PGAdminConfiguration) DeepCopy() *PGAdminConfiguration { + if in == nil { + return nil + } + out := new(PGAdminConfiguration) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PGAdminList) DeepCopyInto(out *PGAdminList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]PGAdmin, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGAdminList. +func (in *PGAdminList) DeepCopy() *PGAdminList { + if in == nil { + return nil + } + out := new(PGAdminList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *PGAdminList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGAdminPodSpec) DeepCopyInto(out *PGAdminPodSpec) { + *out = *in + if in.Metadata != nil { + in, out := &in.Metadata, &out.Metadata + *out = new(Metadata) + (*in).DeepCopyInto(*out) + } + if in.Affinity != nil { + in, out := &in.Affinity, &out.Affinity + *out = new(corev1.Affinity) + (*in).DeepCopyInto(*out) + } + in.Config.DeepCopyInto(&out.Config) + in.DataVolumeClaimSpec.DeepCopyInto(&out.DataVolumeClaimSpec) + if in.PriorityClassName != nil { + in, out := &in.PriorityClassName, &out.PriorityClassName + *out = new(string) + **out = **in + } + if in.Replicas != nil { + in, out := &in.Replicas, &out.Replicas + *out = new(int32) + **out = **in + } + in.Resources.DeepCopyInto(&out.Resources) + if in.Service != nil { + in, out := &in.Service, &out.Service + *out = new(ServiceSpec) + (*in).DeepCopyInto(*out) + } + if in.Tolerations != nil { + in, out := &in.Tolerations, &out.Tolerations + *out = make([]corev1.Toleration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.TopologySpreadConstraints != nil { + in, out := &in.TopologySpreadConstraints, &out.TopologySpreadConstraints + *out = make([]corev1.TopologySpreadConstraint, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGAdminPodSpec. +func (in *PGAdminPodSpec) DeepCopy() *PGAdminPodSpec { + if in == nil { + return nil + } + out := new(PGAdminPodSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGAdminPodStatus) DeepCopyInto(out *PGAdminPodStatus) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGAdminPodStatus. +func (in *PGAdminPodStatus) DeepCopy() *PGAdminPodStatus { + if in == nil { + return nil + } + out := new(PGAdminPodStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PGAdminSpec) DeepCopyInto(out *PGAdminSpec) { + *out = *in + if in.Metadata != nil { + in, out := &in.Metadata, &out.Metadata + *out = new(Metadata) + (*in).DeepCopyInto(*out) + } + in.Config.DeepCopyInto(&out.Config) + in.DataVolumeClaimSpec.DeepCopyInto(&out.DataVolumeClaimSpec) + if in.Image != nil { + in, out := &in.Image, &out.Image + *out = new(string) + **out = **in + } + if in.ImagePullSecrets != nil { + in, out := &in.ImagePullSecrets, &out.ImagePullSecrets + *out = make([]corev1.LocalObjectReference, len(*in)) + copy(*out, *in) + } + in.Resources.DeepCopyInto(&out.Resources) + if in.Affinity != nil { + in, out := &in.Affinity, &out.Affinity + *out = new(corev1.Affinity) + (*in).DeepCopyInto(*out) + } + if in.PriorityClassName != nil { + in, out := &in.PriorityClassName, &out.PriorityClassName + *out = new(string) + **out = **in + } + if in.Tolerations != nil { + in, out := &in.Tolerations, &out.Tolerations + *out = make([]corev1.Toleration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.ServerGroups != nil { + in, out := &in.ServerGroups, &out.ServerGroups + *out = make([]ServerGroup, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Users != nil { + in, out := &in.Users, &out.Users + *out = make([]PGAdminUser, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGAdminSpec. +func (in *PGAdminSpec) DeepCopy() *PGAdminSpec { + if in == nil { + return nil + } + out := new(PGAdminSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGAdminStatus) DeepCopyInto(out *PGAdminStatus) { + *out = *in + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]v1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGAdminStatus. +func (in *PGAdminStatus) DeepCopy() *PGAdminStatus { + if in == nil { + return nil + } + out := new(PGAdminStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGAdminUser) DeepCopyInto(out *PGAdminUser) { + *out = *in + if in.PasswordRef != nil { + in, out := &in.PasswordRef, &out.PasswordRef + *out = new(corev1.SecretKeySelector) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGAdminUser. +func (in *PGAdminUser) DeepCopy() *PGAdminUser { + if in == nil { + return nil + } + out := new(PGAdminUser) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PGBackRestArchive) DeepCopyInto(out *PGBackRestArchive) { + *out = *in + if in.Metadata != nil { + in, out := &in.Metadata, &out.Metadata + *out = new(Metadata) + (*in).DeepCopyInto(*out) + } + if in.Configuration != nil { + in, out := &in.Configuration, &out.Configuration + *out = make([]corev1.VolumeProjection, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Global != nil { + in, out := &in.Global, &out.Global + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } + if in.Jobs != nil { + in, out := &in.Jobs, &out.Jobs + *out = new(BackupJobs) + (*in).DeepCopyInto(*out) + } + if in.Repos != nil { + in, out := &in.Repos, &out.Repos + *out = make([]PGBackRestRepo, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.RepoHost != nil { + in, out := &in.RepoHost, &out.RepoHost + *out = new(PGBackRestRepoHost) + (*in).DeepCopyInto(*out) + } + if in.Manual != nil { + in, out := &in.Manual, &out.Manual + *out = new(PGBackRestManualBackup) + (*in).DeepCopyInto(*out) + } + if in.Restore != nil { + in, out := &in.Restore, &out.Restore + *out = new(PGBackRestRestore) + (*in).DeepCopyInto(*out) + } + if in.Sidecars != nil { + in, out := &in.Sidecars, &out.Sidecars + *out = new(PGBackRestSidecars) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestArchive. +func (in *PGBackRestArchive) DeepCopy() *PGBackRestArchive { + if in == nil { + return nil + } + out := new(PGBackRestArchive) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGBackRestBackupSchedules) DeepCopyInto(out *PGBackRestBackupSchedules) { + *out = *in + if in.Full != nil { + in, out := &in.Full, &out.Full + *out = new(string) + **out = **in + } + if in.Differential != nil { + in, out := &in.Differential, &out.Differential + *out = new(string) + **out = **in + } + if in.Incremental != nil { + in, out := &in.Incremental, &out.Incremental + *out = new(string) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestBackupSchedules. +func (in *PGBackRestBackupSchedules) DeepCopy() *PGBackRestBackupSchedules { + if in == nil { + return nil + } + out := new(PGBackRestBackupSchedules) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PGBackRestDataSource) DeepCopyInto(out *PGBackRestDataSource) { + *out = *in + if in.Configuration != nil { + in, out := &in.Configuration, &out.Configuration + *out = make([]corev1.VolumeProjection, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Global != nil { + in, out := &in.Global, &out.Global + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } + in.Repo.DeepCopyInto(&out.Repo) + if in.Options != nil { + in, out := &in.Options, &out.Options + *out = make([]string, len(*in)) + copy(*out, *in) + } + in.Resources.DeepCopyInto(&out.Resources) + if in.Affinity != nil { + in, out := &in.Affinity, &out.Affinity + *out = new(corev1.Affinity) + (*in).DeepCopyInto(*out) + } + if in.PriorityClassName != nil { + in, out := &in.PriorityClassName, &out.PriorityClassName + *out = new(string) + **out = **in + } + if in.Tolerations != nil { + in, out := &in.Tolerations, &out.Tolerations + *out = make([]corev1.Toleration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestDataSource. +func (in *PGBackRestDataSource) DeepCopy() *PGBackRestDataSource { + if in == nil { + return nil + } + out := new(PGBackRestDataSource) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGBackRestJobStatus) DeepCopyInto(out *PGBackRestJobStatus) { + *out = *in + if in.StartTime != nil { + in, out := &in.StartTime, &out.StartTime + *out = (*in).DeepCopy() + } + if in.CompletionTime != nil { + in, out := &in.CompletionTime, &out.CompletionTime + *out = (*in).DeepCopy() + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestJobStatus. +func (in *PGBackRestJobStatus) DeepCopy() *PGBackRestJobStatus { + if in == nil { + return nil + } + out := new(PGBackRestJobStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGBackRestManualBackup) DeepCopyInto(out *PGBackRestManualBackup) { + *out = *in + if in.Options != nil { + in, out := &in.Options, &out.Options + *out = make([]string, len(*in)) + copy(*out, *in) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestManualBackup. +func (in *PGBackRestManualBackup) DeepCopy() *PGBackRestManualBackup { + if in == nil { + return nil + } + out := new(PGBackRestManualBackup) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PGBackRestRepo) DeepCopyInto(out *PGBackRestRepo) { + *out = *in + if in.BackupSchedules != nil { + in, out := &in.BackupSchedules, &out.BackupSchedules + *out = new(PGBackRestBackupSchedules) + (*in).DeepCopyInto(*out) + } + if in.Azure != nil { + in, out := &in.Azure, &out.Azure + *out = new(RepoAzure) + **out = **in + } + if in.GCS != nil { + in, out := &in.GCS, &out.GCS + *out = new(RepoGCS) + **out = **in + } + if in.S3 != nil { + in, out := &in.S3, &out.S3 + *out = new(RepoS3) + **out = **in + } + if in.Volume != nil { + in, out := &in.Volume, &out.Volume + *out = new(RepoPVC) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestRepo. +func (in *PGBackRestRepo) DeepCopy() *PGBackRestRepo { + if in == nil { + return nil + } + out := new(PGBackRestRepo) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGBackRestRepoHost) DeepCopyInto(out *PGBackRestRepoHost) { + *out = *in + if in.Affinity != nil { + in, out := &in.Affinity, &out.Affinity + *out = new(corev1.Affinity) + (*in).DeepCopyInto(*out) + } + if in.PriorityClassName != nil { + in, out := &in.PriorityClassName, &out.PriorityClassName + *out = new(string) + **out = **in + } + in.Resources.DeepCopyInto(&out.Resources) + if in.Tolerations != nil { + in, out := &in.Tolerations, &out.Tolerations + *out = make([]corev1.Toleration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.TopologySpreadConstraints != nil { + in, out := &in.TopologySpreadConstraints, &out.TopologySpreadConstraints + *out = make([]corev1.TopologySpreadConstraint, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.SSHConfiguration != nil { + in, out := &in.SSHConfiguration, &out.SSHConfiguration + *out = new(corev1.ConfigMapProjection) + (*in).DeepCopyInto(*out) + } + if in.SSHSecret != nil { + in, out := &in.SSHSecret, &out.SSHSecret + *out = new(corev1.SecretProjection) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestRepoHost. +func (in *PGBackRestRepoHost) DeepCopy() *PGBackRestRepoHost { + if in == nil { + return nil + } + out := new(PGBackRestRepoHost) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGBackRestRestore) DeepCopyInto(out *PGBackRestRestore) { + *out = *in + if in.Enabled != nil { + in, out := &in.Enabled, &out.Enabled + *out = new(bool) + **out = **in + } + if in.PostgresClusterDataSource != nil { + in, out := &in.PostgresClusterDataSource, &out.PostgresClusterDataSource + *out = new(PostgresClusterDataSource) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestRestore. +func (in *PGBackRestRestore) DeepCopy() *PGBackRestRestore { + if in == nil { + return nil + } + out := new(PGBackRestRestore) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PGBackRestScheduledBackupStatus) DeepCopyInto(out *PGBackRestScheduledBackupStatus) { + *out = *in + if in.StartTime != nil { + in, out := &in.StartTime, &out.StartTime + *out = (*in).DeepCopy() + } + if in.CompletionTime != nil { + in, out := &in.CompletionTime, &out.CompletionTime + *out = (*in).DeepCopy() + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestScheduledBackupStatus. +func (in *PGBackRestScheduledBackupStatus) DeepCopy() *PGBackRestScheduledBackupStatus { + if in == nil { + return nil + } + out := new(PGBackRestScheduledBackupStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGBackRestSidecars) DeepCopyInto(out *PGBackRestSidecars) { + *out = *in + if in.PGBackRest != nil { + in, out := &in.PGBackRest, &out.PGBackRest + *out = new(Sidecar) + (*in).DeepCopyInto(*out) + } + if in.PGBackRestConfig != nil { + in, out := &in.PGBackRestConfig, &out.PGBackRestConfig + *out = new(Sidecar) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestSidecars. +func (in *PGBackRestSidecars) DeepCopy() *PGBackRestSidecars { + if in == nil { + return nil + } + out := new(PGBackRestSidecars) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGBackRestStatus) DeepCopyInto(out *PGBackRestStatus) { + *out = *in + if in.ManualBackup != nil { + in, out := &in.ManualBackup, &out.ManualBackup + *out = new(PGBackRestJobStatus) + (*in).DeepCopyInto(*out) + } + if in.ScheduledBackups != nil { + in, out := &in.ScheduledBackups, &out.ScheduledBackups + *out = make([]PGBackRestScheduledBackupStatus, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.RepoHost != nil { + in, out := &in.RepoHost, &out.RepoHost + *out = new(RepoHostStatus) + **out = **in + } + if in.Repos != nil { + in, out := &in.Repos, &out.Repos + *out = make([]RepoStatus, len(*in)) + copy(*out, *in) + } + if in.Restore != nil { + in, out := &in.Restore, &out.Restore + *out = new(PGBackRestJobStatus) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBackRestStatus. +func (in *PGBackRestStatus) DeepCopy() *PGBackRestStatus { + if in == nil { + return nil + } + out := new(PGBackRestStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PGBouncerConfiguration) DeepCopyInto(out *PGBouncerConfiguration) { + *out = *in + if in.Files != nil { + in, out := &in.Files, &out.Files + *out = make([]corev1.VolumeProjection, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.Global != nil { + in, out := &in.Global, &out.Global + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } + if in.Databases != nil { + in, out := &in.Databases, &out.Databases + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } + if in.Users != nil { + in, out := &in.Users, &out.Users + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBouncerConfiguration. +func (in *PGBouncerConfiguration) DeepCopy() *PGBouncerConfiguration { + if in == nil { + return nil + } + out := new(PGBouncerConfiguration) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGBouncerPodSpec) DeepCopyInto(out *PGBouncerPodSpec) { + *out = *in + if in.Metadata != nil { + in, out := &in.Metadata, &out.Metadata + *out = new(Metadata) + (*in).DeepCopyInto(*out) + } + if in.Affinity != nil { + in, out := &in.Affinity, &out.Affinity + *out = new(corev1.Affinity) + (*in).DeepCopyInto(*out) + } + in.Config.DeepCopyInto(&out.Config) + if in.Containers != nil { + in, out := &in.Containers, &out.Containers + *out = make([]corev1.Container, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.CustomTLSSecret != nil { + in, out := &in.CustomTLSSecret, &out.CustomTLSSecret + *out = new(corev1.SecretProjection) + (*in).DeepCopyInto(*out) + } + if in.Port != nil { + in, out := &in.Port, &out.Port + *out = new(int32) + **out = **in + } + if in.PriorityClassName != nil { + in, out := &in.PriorityClassName, &out.PriorityClassName + *out = new(string) + **out = **in + } + if in.Replicas != nil { + in, out := &in.Replicas, &out.Replicas + *out = new(int32) + **out = **in + } + if in.MinAvailable != nil { + in, out := &in.MinAvailable, &out.MinAvailable + *out = new(intstr.IntOrString) + **out = **in + } + in.Resources.DeepCopyInto(&out.Resources) + if in.Service != nil { + in, out := &in.Service, &out.Service + *out = new(ServiceSpec) + (*in).DeepCopyInto(*out) + } + if in.Sidecars != nil { + in, out := &in.Sidecars, &out.Sidecars + *out = new(PGBouncerSidecars) + (*in).DeepCopyInto(*out) + } + if in.Tolerations != nil { + in, out := &in.Tolerations, &out.Tolerations + *out = make([]corev1.Toleration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.TopologySpreadConstraints != nil { + in, out := &in.TopologySpreadConstraints, &out.TopologySpreadConstraints + *out = make([]corev1.TopologySpreadConstraint, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBouncerPodSpec. +func (in *PGBouncerPodSpec) DeepCopy() *PGBouncerPodSpec { + if in == nil { + return nil + } + out := new(PGBouncerPodSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PGBouncerPodStatus) DeepCopyInto(out *PGBouncerPodStatus) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBouncerPodStatus. +func (in *PGBouncerPodStatus) DeepCopy() *PGBouncerPodStatus { + if in == nil { + return nil + } + out := new(PGBouncerPodStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGBouncerSidecars) DeepCopyInto(out *PGBouncerSidecars) { + *out = *in + if in.PGBouncerConfig != nil { + in, out := &in.PGBouncerConfig, &out.PGBouncerConfig + *out = new(Sidecar) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGBouncerSidecars. +func (in *PGBouncerSidecars) DeepCopy() *PGBouncerSidecars { + if in == nil { + return nil + } + out := new(PGBouncerSidecars) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGMonitorSpec) DeepCopyInto(out *PGMonitorSpec) { + *out = *in + if in.Exporter != nil { + in, out := &in.Exporter, &out.Exporter + *out = new(ExporterSpec) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGMonitorSpec. +func (in *PGMonitorSpec) DeepCopy() *PGMonitorSpec { + if in == nil { + return nil + } + out := new(PGMonitorSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGUpgrade) DeepCopyInto(out *PGUpgrade) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + in.Status.DeepCopyInto(&out.Status) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGUpgrade. +func (in *PGUpgrade) DeepCopy() *PGUpgrade { + if in == nil { + return nil + } + out := new(PGUpgrade) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *PGUpgrade) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGUpgradeList) DeepCopyInto(out *PGUpgradeList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]PGUpgrade, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGUpgradeList. +func (in *PGUpgradeList) DeepCopy() *PGUpgradeList { + if in == nil { + return nil + } + out := new(PGUpgradeList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *PGUpgradeList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PGUpgradeSpec) DeepCopyInto(out *PGUpgradeSpec) { + *out = *in + if in.Metadata != nil { + in, out := &in.Metadata, &out.Metadata + *out = new(Metadata) + (*in).DeepCopyInto(*out) + } + if in.Image != nil { + in, out := &in.Image, &out.Image + *out = new(string) + **out = **in + } + if in.ImagePullSecrets != nil { + in, out := &in.ImagePullSecrets, &out.ImagePullSecrets + *out = make([]corev1.LocalObjectReference, len(*in)) + copy(*out, *in) + } + in.Resources.DeepCopyInto(&out.Resources) + if in.Affinity != nil { + in, out := &in.Affinity, &out.Affinity + *out = new(corev1.Affinity) + (*in).DeepCopyInto(*out) + } + if in.PriorityClassName != nil { + in, out := &in.PriorityClassName, &out.PriorityClassName + *out = new(string) + **out = **in + } + if in.Tolerations != nil { + in, out := &in.Tolerations, &out.Tolerations + *out = make([]corev1.Toleration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGUpgradeSpec. +func (in *PGUpgradeSpec) DeepCopy() *PGUpgradeSpec { + if in == nil { + return nil + } + out := new(PGUpgradeSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PGUpgradeStatus) DeepCopyInto(out *PGUpgradeStatus) { + *out = *in + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]v1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PGUpgradeStatus. +func (in *PGUpgradeStatus) DeepCopy() *PGUpgradeStatus { + if in == nil { + return nil + } + out := new(PGUpgradeStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PatroniSpec) DeepCopyInto(out *PatroniSpec) { + *out = *in + in.DynamicConfiguration.DeepCopyInto(&out.DynamicConfiguration) + if in.LeaderLeaseDurationSeconds != nil { + in, out := &in.LeaderLeaseDurationSeconds, &out.LeaderLeaseDurationSeconds + *out = new(int32) + **out = **in + } + if in.Port != nil { + in, out := &in.Port, &out.Port + *out = new(int32) + **out = **in + } + if in.SyncPeriodSeconds != nil { + in, out := &in.SyncPeriodSeconds, &out.SyncPeriodSeconds + *out = new(int32) + **out = **in + } + if in.Switchover != nil { + in, out := &in.Switchover, &out.Switchover + *out = new(PatroniSwitchover) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PatroniSpec. +func (in *PatroniSpec) DeepCopy() *PatroniSpec { + if in == nil { + return nil + } + out := new(PatroniSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PatroniStatus) DeepCopyInto(out *PatroniStatus) { + *out = *in + if in.Switchover != nil { + in, out := &in.Switchover, &out.Switchover + *out = new(string) + **out = **in + } + if in.SwitchoverTimeline != nil { + in, out := &in.SwitchoverTimeline, &out.SwitchoverTimeline + *out = new(int64) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PatroniStatus. 
+func (in *PatroniStatus) DeepCopy() *PatroniStatus { + if in == nil { + return nil + } + out := new(PatroniStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PatroniSwitchover) DeepCopyInto(out *PatroniSwitchover) { + *out = *in + if in.TargetInstance != nil { + in, out := &in.TargetInstance, &out.TargetInstance + *out = new(string) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PatroniSwitchover. +func (in *PatroniSwitchover) DeepCopy() *PatroniSwitchover { + if in == nil { + return nil + } + out := new(PatroniSwitchover) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresAdditionalConfig) DeepCopyInto(out *PostgresAdditionalConfig) { + *out = *in + if in.Files != nil { + in, out := &in.Files, &out.Files + *out = make([]corev1.VolumeProjection, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresAdditionalConfig. +func (in *PostgresAdditionalConfig) DeepCopy() *PostgresAdditionalConfig { + if in == nil { + return nil + } + out := new(PostgresAdditionalConfig) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresCluster) DeepCopyInto(out *PostgresCluster) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + in.Spec.DeepCopyInto(&out.Spec) + in.Status.DeepCopyInto(&out.Status) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresCluster. +func (in *PostgresCluster) DeepCopy() *PostgresCluster { + if in == nil { + return nil + } + out := new(PostgresCluster) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *PostgresCluster) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresClusterDataSource) DeepCopyInto(out *PostgresClusterDataSource) { + *out = *in + if in.Options != nil { + in, out := &in.Options, &out.Options + *out = make([]string, len(*in)) + copy(*out, *in) + } + in.Resources.DeepCopyInto(&out.Resources) + if in.Affinity != nil { + in, out := &in.Affinity, &out.Affinity + *out = new(corev1.Affinity) + (*in).DeepCopyInto(*out) + } + if in.PriorityClassName != nil { + in, out := &in.PriorityClassName, &out.PriorityClassName + *out = new(string) + **out = **in + } + if in.Tolerations != nil { + in, out := &in.Tolerations, &out.Tolerations + *out = make([]corev1.Toleration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresClusterDataSource. +func (in *PostgresClusterDataSource) DeepCopy() *PostgresClusterDataSource { + if in == nil { + return nil + } + out := new(PostgresClusterDataSource) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. 
in must be non-nil. +func (in *PostgresClusterList) DeepCopyInto(out *PostgresClusterList) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ListMeta.DeepCopyInto(&out.ListMeta) + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]PostgresCluster, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresClusterList. +func (in *PostgresClusterList) DeepCopy() *PostgresClusterList { + if in == nil { + return nil + } + out := new(PostgresClusterList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *PostgresClusterList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresClusterSpec) DeepCopyInto(out *PostgresClusterSpec) { + *out = *in + if in.Metadata != nil { + in, out := &in.Metadata, &out.Metadata + *out = new(Metadata) + (*in).DeepCopyInto(*out) + } + if in.DataSource != nil { + in, out := &in.DataSource, &out.DataSource + *out = new(DataSource) + (*in).DeepCopyInto(*out) + } + in.Backups.DeepCopyInto(&out.Backups) + if in.CustomTLSSecret != nil { + in, out := &in.CustomTLSSecret, &out.CustomTLSSecret + *out = new(corev1.SecretProjection) + (*in).DeepCopyInto(*out) + } + if in.CustomReplicationClientTLSSecret != nil { + in, out := &in.CustomReplicationClientTLSSecret, &out.CustomReplicationClientTLSSecret + *out = new(corev1.SecretProjection) + (*in).DeepCopyInto(*out) + } + if in.DatabaseInitSQL != nil { + in, out := &in.DatabaseInitSQL, &out.DatabaseInitSQL + *out = new(DatabaseInitSQL) + **out = **in + } + if in.DisableDefaultPodScheduling != nil { + in, out := &in.DisableDefaultPodScheduling, &out.DisableDefaultPodScheduling + *out = new(bool) + **out = **in + } + if in.ImagePullSecrets != nil { + in, out := &in.ImagePullSecrets, &out.ImagePullSecrets + *out = make([]corev1.LocalObjectReference, len(*in)) + copy(*out, *in) + } + if in.InstanceSets != nil { + in, out := &in.InstanceSets, &out.InstanceSets + *out = make([]PostgresInstanceSetSpec, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.OpenShift != nil { + in, out := &in.OpenShift, &out.OpenShift + *out = new(bool) + **out = **in + } + if in.Patroni != nil { + in, out := &in.Patroni, &out.Patroni + *out = new(PatroniSpec) + (*in).DeepCopyInto(*out) + } + if in.Paused != nil { + in, out := &in.Paused, &out.Paused + *out = new(bool) + **out = **in + } + if in.Port != nil { + in, out := &in.Port, &out.Port + *out = new(int32) + **out = **in + } + if in.Proxy != nil { + in, out := &in.Proxy, &out.Proxy + *out = new(PostgresProxySpec) + (*in).DeepCopyInto(*out) + } + if in.UserInterface != nil { + in, out := &in.UserInterface, &out.UserInterface + *out = new(UserInterfaceSpec) + (*in).DeepCopyInto(*out) + } + if in.Monitoring != nil { + in, out := &in.Monitoring, &out.Monitoring + *out = new(MonitoringSpec) + (*in).DeepCopyInto(*out) + } + if in.Service != nil { + in, out := &in.Service, &out.Service + *out = new(ServiceSpec) + (*in).DeepCopyInto(*out) + } + if in.ReplicaService != nil { + in, out := &in.ReplicaService, &out.ReplicaService + *out = new(ServiceSpec) + (*in).DeepCopyInto(*out) + } + if in.Shutdown != nil { + in, out := &in.Shutdown, &out.Shutdown + 
*out = new(bool) + **out = **in + } + if in.Standby != nil { + in, out := &in.Standby, &out.Standby + *out = new(PostgresStandbySpec) + (*in).DeepCopyInto(*out) + } + if in.SupplementalGroups != nil { + in, out := &in.SupplementalGroups, &out.SupplementalGroups + *out = make([]int64, len(*in)) + copy(*out, *in) + } + if in.Users != nil { + in, out := &in.Users, &out.Users + *out = make([]PostgresUserSpec, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + in.Config.DeepCopyInto(&out.Config) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresClusterSpec. +func (in *PostgresClusterSpec) DeepCopy() *PostgresClusterSpec { + if in == nil { + return nil + } + out := new(PostgresClusterSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresClusterStatus) DeepCopyInto(out *PostgresClusterStatus) { + *out = *in + if in.InstanceSets != nil { + in, out := &in.InstanceSets, &out.InstanceSets + *out = make([]PostgresInstanceSetStatus, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + in.Patroni.DeepCopyInto(&out.Patroni) + if in.PGBackRest != nil { + in, out := &in.PGBackRest, &out.PGBackRest + *out = new(PGBackRestStatus) + (*in).DeepCopyInto(*out) + } + if in.RegistrationRequired != nil { + in, out := &in.RegistrationRequired, &out.RegistrationRequired + *out = new(RegistrationRequirementStatus) + **out = **in + } + out.Proxy = in.Proxy + if in.UserInterface != nil { + in, out := &in.UserInterface, &out.UserInterface + *out = new(PostgresUserInterfaceStatus) + **out = **in + } + out.Monitoring = in.Monitoring + if in.DatabaseInitSQL != nil { + in, out := &in.DatabaseInitSQL, &out.DatabaseInitSQL + *out = new(string) + **out = **in + } + if in.Conditions != nil { + in, out := &in.Conditions, &out.Conditions + *out = make([]v1.Condition, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresClusterStatus. +func (in *PostgresClusterStatus) DeepCopy() *PostgresClusterStatus { + if in == nil { + return nil + } + out := new(PostgresClusterStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *PostgresInstanceSetSpec) DeepCopyInto(out *PostgresInstanceSetSpec) { + *out = *in + if in.Metadata != nil { + in, out := &in.Metadata, &out.Metadata + *out = new(Metadata) + (*in).DeepCopyInto(*out) + } + if in.Affinity != nil { + in, out := &in.Affinity, &out.Affinity + *out = new(corev1.Affinity) + (*in).DeepCopyInto(*out) + } + if in.Containers != nil { + in, out := &in.Containers, &out.Containers + *out = make([]corev1.Container, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + in.DataVolumeClaimSpec.DeepCopyInto(&out.DataVolumeClaimSpec) + if in.PriorityClassName != nil { + in, out := &in.PriorityClassName, &out.PriorityClassName + *out = new(string) + **out = **in + } + if in.Replicas != nil { + in, out := &in.Replicas, &out.Replicas + *out = new(int32) + **out = **in + } + if in.MinAvailable != nil { + in, out := &in.MinAvailable, &out.MinAvailable + *out = new(intstr.IntOrString) + **out = **in + } + in.Resources.DeepCopyInto(&out.Resources) + if in.Sidecars != nil { + in, out := &in.Sidecars, &out.Sidecars + *out = new(InstanceSidecars) + (*in).DeepCopyInto(*out) + } + if in.Tolerations != nil { + in, out := &in.Tolerations, &out.Tolerations + *out = make([]corev1.Toleration, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.TopologySpreadConstraints != nil { + in, out := &in.TopologySpreadConstraints, &out.TopologySpreadConstraints + *out = make([]corev1.TopologySpreadConstraint, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.WALVolumeClaimSpec != nil { + in, out := &in.WALVolumeClaimSpec, &out.WALVolumeClaimSpec + *out = new(corev1.PersistentVolumeClaimSpec) + (*in).DeepCopyInto(*out) + } + if in.TablespaceVolumes != nil { + in, out := &in.TablespaceVolumes, &out.TablespaceVolumes + *out = make([]TablespaceVolume, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresInstanceSetSpec. +func (in *PostgresInstanceSetSpec) DeepCopy() *PostgresInstanceSetSpec { + if in == nil { + return nil + } + out := new(PostgresInstanceSetSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresInstanceSetStatus) DeepCopyInto(out *PostgresInstanceSetStatus) { + *out = *in + if in.DesiredPGDataVolume != nil { + in, out := &in.DesiredPGDataVolume, &out.DesiredPGDataVolume + *out = make(map[string]string, len(*in)) + for key, val := range *in { + (*out)[key] = val + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresInstanceSetStatus. +func (in *PostgresInstanceSetStatus) DeepCopy() *PostgresInstanceSetStatus { + if in == nil { + return nil + } + out := new(PostgresInstanceSetStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresPasswordSpec) DeepCopyInto(out *PostgresPasswordSpec) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresPasswordSpec. 
+func (in *PostgresPasswordSpec) DeepCopy() *PostgresPasswordSpec { + if in == nil { + return nil + } + out := new(PostgresPasswordSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresProxySpec) DeepCopyInto(out *PostgresProxySpec) { + *out = *in + if in.PGBouncer != nil { + in, out := &in.PGBouncer, &out.PGBouncer + *out = new(PGBouncerPodSpec) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresProxySpec. +func (in *PostgresProxySpec) DeepCopy() *PostgresProxySpec { + if in == nil { + return nil + } + out := new(PostgresProxySpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresProxyStatus) DeepCopyInto(out *PostgresProxyStatus) { + *out = *in + out.PGBouncer = in.PGBouncer +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresProxyStatus. +func (in *PostgresProxyStatus) DeepCopy() *PostgresProxyStatus { + if in == nil { + return nil + } + out := new(PostgresProxyStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresStandbySpec) DeepCopyInto(out *PostgresStandbySpec) { + *out = *in + if in.Port != nil { + in, out := &in.Port, &out.Port + *out = new(int32) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresStandbySpec. +func (in *PostgresStandbySpec) DeepCopy() *PostgresStandbySpec { + if in == nil { + return nil + } + out := new(PostgresStandbySpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresUserInterfaceStatus) DeepCopyInto(out *PostgresUserInterfaceStatus) { + *out = *in + out.PGAdmin = in.PGAdmin +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresUserInterfaceStatus. +func (in *PostgresUserInterfaceStatus) DeepCopy() *PostgresUserInterfaceStatus { + if in == nil { + return nil + } + out := new(PostgresUserInterfaceStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *PostgresUserSpec) DeepCopyInto(out *PostgresUserSpec) { + *out = *in + if in.Databases != nil { + in, out := &in.Databases, &out.Databases + *out = make([]PostgresIdentifier, len(*in)) + copy(*out, *in) + } + if in.Password != nil { + in, out := &in.Password, &out.Password + *out = new(PostgresPasswordSpec) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PostgresUserSpec. +func (in *PostgresUserSpec) DeepCopy() *PostgresUserSpec { + if in == nil { + return nil + } + out := new(PostgresUserSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *RegistrationRequirementStatus) DeepCopyInto(out *RegistrationRequirementStatus) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RegistrationRequirementStatus. 
+func (in *RegistrationRequirementStatus) DeepCopy() *RegistrationRequirementStatus { + if in == nil { + return nil + } + out := new(RegistrationRequirementStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *RepoAzure) DeepCopyInto(out *RepoAzure) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RepoAzure. +func (in *RepoAzure) DeepCopy() *RepoAzure { + if in == nil { + return nil + } + out := new(RepoAzure) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *RepoGCS) DeepCopyInto(out *RepoGCS) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RepoGCS. +func (in *RepoGCS) DeepCopy() *RepoGCS { + if in == nil { + return nil + } + out := new(RepoGCS) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *RepoHostStatus) DeepCopyInto(out *RepoHostStatus) { + *out = *in + out.TypeMeta = in.TypeMeta +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RepoHostStatus. +func (in *RepoHostStatus) DeepCopy() *RepoHostStatus { + if in == nil { + return nil + } + out := new(RepoHostStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *RepoPVC) DeepCopyInto(out *RepoPVC) { + *out = *in + in.VolumeClaimSpec.DeepCopyInto(&out.VolumeClaimSpec) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RepoPVC. +func (in *RepoPVC) DeepCopy() *RepoPVC { + if in == nil { + return nil + } + out := new(RepoPVC) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *RepoS3) DeepCopyInto(out *RepoS3) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RepoS3. +func (in *RepoS3) DeepCopy() *RepoS3 { + if in == nil { + return nil + } + out := new(RepoS3) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *RepoStatus) DeepCopyInto(out *RepoStatus) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RepoStatus. +func (in *RepoStatus) DeepCopy() *RepoStatus { + if in == nil { + return nil + } + out := new(RepoStatus) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in SchemalessObject) DeepCopyInto(out *SchemalessObject) { + { + in := &in + clone := in.DeepCopy() + *out = *clone + } +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ServerGroup) DeepCopyInto(out *ServerGroup) { + *out = *in + in.PostgresClusterSelector.DeepCopyInto(&out.PostgresClusterSelector) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ServerGroup. 
+func (in *ServerGroup) DeepCopy() *ServerGroup { + if in == nil { + return nil + } + out := new(ServerGroup) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *ServiceSpec) DeepCopyInto(out *ServiceSpec) { + *out = *in + if in.Metadata != nil { + in, out := &in.Metadata, &out.Metadata + *out = new(Metadata) + (*in).DeepCopyInto(*out) + } + if in.NodePort != nil { + in, out := &in.NodePort, &out.NodePort + *out = new(int32) + **out = **in + } + if in.InternalTrafficPolicy != nil { + in, out := &in.InternalTrafficPolicy, &out.InternalTrafficPolicy + *out = new(corev1.ServiceInternalTrafficPolicy) + **out = **in + } + if in.ExternalTrafficPolicy != nil { + in, out := &in.ExternalTrafficPolicy, &out.ExternalTrafficPolicy + *out = new(corev1.ServiceExternalTrafficPolicy) + **out = **in + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ServiceSpec. +func (in *ServiceSpec) DeepCopy() *ServiceSpec { + if in == nil { + return nil + } + out := new(ServiceSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Sidecar) DeepCopyInto(out *Sidecar) { + *out = *in + if in.Resources != nil { + in, out := &in.Resources, &out.Resources + *out = new(corev1.ResourceRequirements) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Sidecar. +func (in *Sidecar) DeepCopy() *Sidecar { + if in == nil { + return nil + } + out := new(Sidecar) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *StandalonePGAdminConfiguration) DeepCopyInto(out *StandalonePGAdminConfiguration) { + *out = *in + if in.Files != nil { + in, out := &in.Files, &out.Files + *out = make([]corev1.VolumeProjection, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } + if in.ConfigDatabaseURI != nil { + in, out := &in.ConfigDatabaseURI, &out.ConfigDatabaseURI + *out = new(corev1.SecretKeySelector) + (*in).DeepCopyInto(*out) + } + in.Gunicorn.DeepCopyInto(&out.Gunicorn) + if in.LDAPBindPassword != nil { + in, out := &in.LDAPBindPassword, &out.LDAPBindPassword + *out = new(corev1.SecretKeySelector) + (*in).DeepCopyInto(*out) + } + in.Settings.DeepCopyInto(&out.Settings) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StandalonePGAdminConfiguration. +func (in *StandalonePGAdminConfiguration) DeepCopy() *StandalonePGAdminConfiguration { + if in == nil { + return nil + } + out := new(StandalonePGAdminConfiguration) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *TablespaceVolume) DeepCopyInto(out *TablespaceVolume) { + *out = *in + in.DataVolumeClaimSpec.DeepCopyInto(&out.DataVolumeClaimSpec) +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TablespaceVolume. +func (in *TablespaceVolume) DeepCopy() *TablespaceVolume { + if in == nil { + return nil + } + out := new(TablespaceVolume) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. 
+func (in *UpgradeOperation) DeepCopyInto(out *UpgradeOperation) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new UpgradeOperation. +func (in *UpgradeOperation) DeepCopy() *UpgradeOperation { + if in == nil { + return nil + } + out := new(UpgradeOperation) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *UserInterfaceSpec) DeepCopyInto(out *UserInterfaceSpec) { + *out = *in + if in.PGAdmin != nil { + in, out := &in.PGAdmin, &out.PGAdmin + *out = new(PGAdminPodSpec) + (*in).DeepCopyInto(*out) + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new UserInterfaceSpec. +func (in *UserInterfaceSpec) DeepCopy() *UserInterfaceSpec { + if in == nil { + return nil + } + out := new(UserInterfaceSpec) + in.DeepCopyInto(out) + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *VolumeSnapshots) DeepCopyInto(out *VolumeSnapshots) { + *out = *in +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VolumeSnapshots. +func (in *VolumeSnapshots) DeepCopy() *VolumeSnapshots { + if in == nil { + return nil + } + out := new(VolumeSnapshots) + in.DeepCopyInto(out) + return out +} diff --git a/pkg/apiservermsgs/backrestmsgs.go b/pkg/apiservermsgs/backrestmsgs.go deleted file mode 100644 index 12d72844b9..0000000000 --- a/pkg/apiservermsgs/backrestmsgs.go +++ /dev/null @@ -1,133 +0,0 @@ -package apiservermsgs - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// CreateBackrestBackupResponse ... -// swagger:model -type CreateBackrestBackupResponse struct { - Results []string - Status -} - -// CreateBackrestBackupRequest ... 
-// swagger:model -type CreateBackrestBackupRequest struct { - Namespace string - Args []string - Selector string - BackupOpts string - BackrestStorageType string -} - -// PgBackRestInfo and its associated structs are available for parsing the info -// that comes from the output of the "pgbackrest info --output json" command -type PgBackRestInfo struct { - Archives []PgBackRestInfoArchive `json:"archive"` - Backups []PgBackRestInfoBackup `json:"backup"` - Cipher string `json:"cipher"` - DBs []PgBackRestInfoDB `json:"db"` - Name string `json:"name"` - Status PgBackRestInfoStatus `json:"status"` -} - -type PgBackRestInfoArchive struct { - DB PgBackRestInfoDB `json:"db"` - ID string `json:"id"` - Max string `json:"max"` - Min string `json:"min"` -} - -type PgBackRestInfoBackup struct { - Archive PgBackRestInfoBackupArchive `json:"archive"` - Backrest PgBackRestInfoBackupBackrest `json:"backrest"` - Database PgBackRestInfoDB `json:"database"` - Info PgBackRestInfoBackupInfo `json:"info"` - Label string `json:"label"` - Prior string `json:"prior"` - Reference []string `json:"reference"` - Timestamp PgBackRestInfoBackupTimestamp `json:"timestamp"` - Type string `json:"type"` -} - -type PgBackRestInfoBackupArchive struct { - Start string `json:"start"` - Stop string `json:"stop"` -} - -type PgBackRestInfoBackupBackrest struct { - Format int `json:"format"` - Version string `json:"version"` -} - -type PgBackRestInfoBackupInfo struct { - Delta int64 `json:"delta"` - Repository PgBackRestInfoBackupInfoRepository `json:"repository"` - Size int64 `json:"size"` -} - -type PgBackRestInfoBackupInfoRepository struct { - Delta int64 `json:"delta"` - Size int64 `json:"size"` -} - -type PgBackRestInfoBackupTimestamp struct { - Start int64 `json:"start"` - Stop int64 `json:"stop"` -} - -type PgBackRestInfoDB struct { - ID int `json:"id"` - SystemID int64 `json:"system-id,omitempty"` - Version string `json:"version,omitempty"` -} - -type PgBackRestInfoStatus struct { - Code int `json:"code"` - Message string `json:"message"` -} - -// ShowBackrestDetail ... -// swagger:model -type ShowBackrestDetail struct { - Name string - Info []PgBackRestInfo - StorageType string -} - -// ShowBackrestResponse ... -// swagger:model -type ShowBackrestResponse struct { - Items []ShowBackrestDetail - Status -} - -// RestoreResponse ... -// swagger:model -type RestoreResponse struct { - Results []string - Status -} - -// RestoreRequest ... -// swagger:model -type RestoreRequest struct { - Namespace string - FromCluster string - RestoreOpts string - PITRTarget string - NodeLabel string - BackrestStorageType string -} diff --git a/pkg/apiservermsgs/catmsgs.go b/pkg/apiservermsgs/catmsgs.go deleted file mode 100644 index ded313371f..0000000000 --- a/pkg/apiservermsgs/catmsgs.go +++ /dev/null @@ -1,30 +0,0 @@ -package apiservermsgs - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// CatResponse ... -// swagger:model -type CatResponse struct { - Results []string - Status -} - -// CatRequest ... 
-// swagger:model -type CatRequest struct { - Namespace string - Args []string -} diff --git a/pkg/apiservermsgs/clonemsgs.go b/pkg/apiservermsgs/clonemsgs.go deleted file mode 100644 index 7f78139af1..0000000000 --- a/pkg/apiservermsgs/clonemsgs.go +++ /dev/null @@ -1,48 +0,0 @@ -package apiservermsgs - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// CloneRequest ... -// swagger:model -type CloneRequest struct { - // BackrestPVCSize, if set, is the size of the PVC to use for the pgBackRest - // repository if local storage is being used - BackrestPVCSize string - // BackrestStorageSource contains the accepted values for where pgBackRest - // repository storage exists ("local", "s3" or both) - BackrestStorageSource string - ClientVersion string - // EnableMetrics enables metrics support in the target cluster - EnableMetrics bool - Namespace string - // PVCSize, if set, is the size of the PVC to use for the primary and any - // replicas - PVCSize string - // SourceClusterName is the name of the source PostgreSQL cluster being used - // for the clone - SourceClusterName string - // TargetClusterName is the name of the target PostgreSQL cluster that the - // PostgreSQL cluster will be cloned to - TargetClusterName string -} - -// CloneReseponse -// swagger:model -type CloneResponse struct { - Status - TargetClusterName string - WorkflowID string -} diff --git a/pkg/apiservermsgs/clustermsgs.go b/pkg/apiservermsgs/clustermsgs.go deleted file mode 100644 index cfe07dfdd7..0000000000 --- a/pkg/apiservermsgs/clustermsgs.go +++ /dev/null @@ -1,532 +0,0 @@ -package apiservermsgs - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" -) - -// ShowClusterRequest shows cluster -// -// swagger:model -type ShowClusterRequest struct { - // Name of the cluster to show - // required: true - Clustername string `json:"clustername"` - // Selector of the cluster to show - Selector string `json:"selector"` - // Image tag of the cluster - Ccpimagetag string `json:"ccpimagetag"` - // Version of API client - // required: true - ClientVersion string `json:"clientversion"` - // Namespace to search - // required: true - Namespace string `json:"namespace"` - // Shows all clusters - AllFlag bool `json:"allflag"` -} - -// CreateClusterRequest -// -// swagger:model -type CreateClusterRequest struct { - Name string `json:"Name"` - Namespace string - NodeLabel string - PasswordLength int - PasswordSuperuser string - PasswordReplication string - Password string - SecretFrom string - UserLabels string - Tablespaces []ClusterTablespaceDetail - Policies string - CCPImage string - CCPImageTag string - CCPImagePrefix string - PGOImagePrefix string - ReplicaCount int - ServiceType string - MetricsFlag bool - // ExporterCPULimit, if specified, is the value of the max CPU for a - // Crunchy Postgres Exporter sidecar container - ExporterCPULimit string - // ExporterCPURequest, if specified, is the value of how much CPU should be - // requested for a Crunchy Postgres Exporter sidecar container. Defaults to - // not being requested - ExporterCPURequest string - // ExporterMemoryLimit is the value of of the limit of how much RAM a - // Crunchy Postgres Exporter sidecar container should use - ExporterMemoryLimit string - // ExporterMemoryRequest, if specified, is the value of how much RAM should - // be requested for a Crunchy Postgres Exporter sidecar container. Defaults - // to the server specified default - ExporterMemoryRequest string - // ExporterCPULimit, if specified, is the value of the max amount of CPU - // to be utilized for a Crunchy Postgres Exporter sidecar container - BadgerFlag bool - AutofailFlag bool - ArchiveFlag bool - BackrestStorageType string - //BackrestRestoreFrom string - PgbouncerFlag bool - // PgBouncerReplicas represents the total number of pgBouncer pods to deploy with a - // PostgreSQL cluster. Only works if PgbouncerFlag is set, and if so, it must - // be at least 1. 
If 0 is passed in, it will automatically be set to 1 - PgBouncerReplicas int32 - CustomConfig string - StorageConfig string - WALStorageConfig string - ReplicaStorageConfig string - // Version of API client - // required: true - ClientVersion string - PodAntiAffinity string - PodAntiAffinityPgBackRest string - PodAntiAffinityPgBouncer string - SyncReplication *bool - BackrestConfig string - BackrestS3Key string - BackrestS3KeySecret string - BackrestS3Bucket string - BackrestS3Region string - BackrestS3Endpoint string - BackrestS3URIStyle string - BackrestS3VerifyTLS UpdateBackrestS3VerifyTLS - Standby bool - BackrestRepoPath string - - // allow the user to set custom sizes for PVCs - // PVCSize applies to the primary/replica storage specs - PVCSize string - // BackrestPVCSize applies to the pgBackRest storage spec - BackrestPVCSize string - // WALPVCSize applies to the WAL storage spec - WALPVCSize string - - // Username is an optional parameter that allows the user to override the - // default user name to use for the PostgreSQL cluster - Username string - // ShowSystemAccounts is an optional parameter than when set to true, will - // also show the results of the available system accounts (e.g. the PostgreSQL - // superuser) - ShowSystemAccounts bool - // Database is an optional parameter that allows the user to specify the name - // of the initial database that is created - Database string - // TLSOnly indicates that a PostgreSQL cluster should be deployed with only - // TLS connections accepted. Requires that TLSSecret and CASecret are set - TLSOnly bool - // TLSSecret is the name of the secret that contains the keypair required to - // deploy a TLS-enabled PostgreSQL cluster - TLSSecret string - // CASecret is the name of the secret that contains the CA to use along with - // the TLS keypair for deploying a TLS-enabled PostgreSQL cluster - CASecret string - // ReplicationTLSSecret is the name of the secret that contains the keypair - // used for having instances in a PostgreSQL cluster authenticate each another - // using certificate-based authentication. The CN of the certificate must - // either be "primaryuser" (the current name of the replication user) OR - // have a mapping to primaryuser in the pg_ident file. The - // ReplicationTLSSecret must be verifable by the certificate chain in the - // CASecret - ReplicationTLSSecret string - // CPULimit is the value of the max CPU utilization for a Pod that has a - // PostgreSQL cluster - CPULimit string - // CPURequest is the value of how much CPU should be requested for deploying - // the PostgreSQL cluster - CPURequest string - // MemoryLimit is the value of of the limit of how much RAM a Pod with a - // PostgreSQL instance should use. At this time we do not recommend setting - // this. - MemoryLimit string - // MemoryRequest is the value of how much RAM should be requested for - // deploying the PostgreSQL cluster - MemoryRequest string - // PgBouncerCPULimit, if specified, is the value of the max CPU for a - // pgBouncer pod - PgBouncerCPULimit string - // PgBouncerCPURequest, if specified, is the value of how much CPU should be - // requested for deploying pgBouncer instances. Defaults to not being - // requested - PgBouncerCPURequest string - // PgBouncerMemoryLimit is the value of of the limit of how much RAM a Pod - // with a pgBouncer should use - PgBouncerMemoryLimit string - // PgBouncerMemoryRequest, if specified, is the value of how much RAM should - // be requested for deploying pgBouncer instances. 
Defaults to the server - // specified default - PgBouncerMemoryRequest string - // BackrestCPULimit, if specified, is the value of the max amount of CPU - // to be utilized for a pgBackRest Pod - BackrestCPULimit string - // BackrestCPURequest, if specified, is the value of how much CPU should be - // requested the pgBackRest repository. Defaults to not being requested - BackrestCPURequest string - // BackrestMemoryLimit, if specified is the max amount of memory a pgBackRest - // Pod should use - BackrestMemoryLimit string - // BackrestMemoryRequest, if specified, is the value of how much RAM should - // be requested for the pgBackRest repository. Defaults to the server - // specified default - BackrestMemoryRequest string - // BackrestStorageConfig sets the storage configuration to use for the - // pgBackRest local repository. This overrides the value in pgo.yaml, though - // the value of BackrestPVCSize can override the PVC size set in this - // storage config - BackrestStorageConfig string - // BackrestS3CASecretName specifies the name of a secret to use for the - // pgBackRest S3 CA instead of the default - BackrestS3CASecretName string - // PGDataSourceSpec defines the data source that should be used to populate the initial PGDATA - // directory when bootstrapping a new PostgreSQL cluster - PGDataSource crv1.PGDataSourceSpec - // Annotations provide any custom annotations for a cluster - Annotations crv1.ClusterAnnotations `json:"annotations"` -} - -// CreateClusterDetail provides details about the PostgreSQL cluster that is -// created -// -// swagger:model -type CreateClusterDetail struct { - // Database is the name of the database that is initially created for users to - // connect to - Database string - // Name is the name of the PostgreSQL cluster - Name string - // Users contain an array of users along with their credentials - Users []CreateClusterDetailUser - // WorkflowID matches up to the WorkflowID of the cluster - WorkflowID string -} - -// CreateClusterDetailUser provides information about an individual PostgreSQL -// user, such as password -// -// swagger:model -type CreateClusterDetailUser struct { - // Password is the password used for this username, but it may be empty based - // on what data is allowed to be returned by the server - Password string - // Username is the username in PostgreSQL for the user - Username string -} - -// CreateClusterResponse -// -// swagger:model -type CreateClusterResponse struct { - Result CreateClusterDetail `json:"result"` - Status `json:"status"` -} - -// ShowClusterService -// -// swagger:model -type ShowClusterService struct { - Name string - Data string - ClusterIP string - ExternalIP string - ClusterName string - Pgbouncer bool - BackrestRepo bool -} - -const PodTypePrimary = "primary" -const PodTypeReplica = "replica" -const PodTypePgbouncer = "pgbouncer" -const PodTypePgbackrest = "pgbackrest" -const PodTypeBackup = "backup" -const PodTypeUnknown = "unknown" - -// ShowClusterPod -// -// swagger:model -type ShowClusterPod struct { - Name string - Phase string - NodeName string - PVC []ShowClusterPodPVC - ReadyStatus string - Ready bool - Primary bool - Type string -} - -// ShowClusterPodPVC contains information about a PVC that is bound to a Pod -// -// swagger:model -type ShowClusterPodPVC struct { - // Capacity is the total storage capacity available. 
This comes from a - // Kubernetes resource Quantity string - Capacity string - - // Name is the name of the PVC - Name string -} - -// ShowClusterDeployment -// -// swagger:model -type ShowClusterDeployment struct { - Name string - PolicyLabels []string -} - -// ShowClusterReplica -// -// swagger:model -type ShowClusterReplica struct { - Name string -} - -// ShowClusterDetail ... -// -// swagger:model -type ShowClusterDetail struct { - // Defines the Cluster using a Crunchy Pgcluster crd - Cluster crv1.Pgcluster `json:"cluster"` - Deployments []ShowClusterDeployment - Pods []ShowClusterPod - Services []ShowClusterService - Replicas []ShowClusterReplica - Standby bool -} - -// ShowClusterResponse ... -// -// swagger:model -type ShowClusterResponse struct { - // results from show cluster - Results []ShowClusterDetail - // status of response - Status -} - -// DeleteClusterRequest ... -// swagger:model -type DeleteClusterRequest struct { - Clustername string - Selector string - // Version of API client - // required: true - ClientVersion string - Namespace string - AllFlag bool - DeleteBackups bool - DeleteData bool -} - -// DeleteClusterResponse ... -// swagger:model -type DeleteClusterResponse struct { - Results []string - Status -} - -// set the types for updating the Autofail status -type UpdateClusterAutofailStatus int - -// set the different values around updating the autofail configuration -const ( - UpdateClusterAutofailDoNothing UpdateClusterAutofailStatus = iota - UpdateClusterAutofailEnable - UpdateClusterAutofailDisable -) - -// UpdateClusterStandbyStatus defines the types for updating the Standby status -type UpdateClusterStandbyStatus int - -// set the different values around updating the standby configuration -const ( - UpdateClusterStandbyDoNothing UpdateClusterStandbyStatus = iota - UpdateClusterStandbyEnable - UpdateClusterStandbyDisable -) - -// UpdateBackrestS3VerifyTLS defines the types for updating the S3 TLS verification configuration -type UpdateBackrestS3VerifyTLS int - -// set the different values around updating the S3 TLS verification configuration -const ( - UpdateBackrestS3VerifyTLSDoNothing UpdateBackrestS3VerifyTLS = iota - UpdateBackrestS3VerifyTLSEnable - UpdateBackrestS3VerifyTLSDisable -) - -// UpdateClusterRequest ... -// swagger:model -type UpdateClusterRequest struct { - Clustername []string - Selector string - // Version of API client - // required: true - ClientVersion string - Namespace string - AllFlag bool - // Annotations provide any custom annotations for a cluster - Annotations crv1.ClusterAnnotations `json:"annotations"` - Autofail UpdateClusterAutofailStatus - // BackrestCPULimit, if specified, is the value of the max amount of CPU - // to be utilized for a pgBackRest Pod - BackrestCPULimit string - // BackrestCPURequest, if specified, is the value of how much CPU should be - // requested the pgBackRest repository. Defaults to not being requested - BackrestCPURequest string - // BackrestMemoryLimit, if specified is the max amount of memory a pgBackRest - // Pod should use - BackrestMemoryLimit string - // BackrestMemoryRequest, if specified, is the value of how much RAM should - // be requested for the pgBackRest repository. - BackrestMemoryRequest string - // ExporterCPULimit, if specified, is the value of the max amount of CPU - // to be utilized for a Crunchy Postgres Exporter instance - ExporterCPULimit string - // ExporterCPURequest, if specified, is the value of how much CPU should be - // requested the Crunchy Postgres Exporter. 
Defaults to not being requested - ExporterCPURequest string - // ExporterMemoryLimit, if specified is the max amount of memory a Crunchy - // Postgres Exporter instance should use - ExporterMemoryLimit string - // ExporterMemoryRequest, if specified, is the value of how much RAM should - // be requested for the Crunchy Postgres Exporter instance. - ExporterMemoryRequest string - // CPULimit is the value of the max CPU utilization for a Pod that has a - // PostgreSQL cluster - CPULimit string - // CPURequest is the value of how much CPU should be requested for deploying - // the PostgreSQL cluster - CPURequest string - // MemoryLimit is the value of of the limit of how much RAM a Pod with a - // PostgreSQL instance should use. At this time we do not recommend setting - // this. - MemoryLimit string - // MemoryRequest is the value of how much RAM should be requested for - // deploying the PostgreSQL cluster - MemoryRequest string - Standby UpdateClusterStandbyStatus - Startup bool - Shutdown bool - Tablespaces []ClusterTablespaceDetail -} - -// UpdateClusterResponse ... -// swagger:model -type UpdateClusterResponse struct { - Results []string - Status -} - -// ClusterTestRequest ... -// swagger:model -type ClusterTestRequest struct { - Clustername string - Selector string - // Version of API client - // required: true - ClientVersion string - Namespace string - AllFlag bool -} - -// a collection of constants used to enumerate the output for -// ClusterTestDetail => InstanceType -const ( - ClusterTestInstanceTypePrimary = "primary" - ClusterTestInstanceTypeReplica = "replica" - ClusterTestInstanceTypePGBouncer = "pgbouncer" - ClusterTestInstanceTypeBackups = "backups" - ClusterTestInstanceTypeUnknown = "unknown" -) - -// ClusterTestDetail provides the output of an individual test that is performed -// on either a PostgreSQL instance (i.e. pod) or a service endpoint that is used -// to connect to the instances - -// swagger:model -type ClusterTestDetail struct { - Available bool // true if the object being tested is available (ready) - Message string // a descriptive message that can be displayed with - InstanceType string // an enumerated set of what this instance can be, e.g. "primary" -} - -// ClusterTestResult contains the output for a test on a single PostgreSQL -// cluster. This includes the endpoints (i.e. how to connect to instances -// in a cluster) and the instances themselves (which are pods) -// swagger:model -type ClusterTestResult struct { - ClusterName string - Endpoints []ClusterTestDetail // a list of endpoints - Instances []ClusterTestDetail // a list of instances (pods) -} - -// ClusterTestResponse ... -// swagger:model -type ClusterTestResponse struct { - Results []ClusterTestResult - Status -} - -// ScaleQueryTargetSpec -// swagger:model -type ScaleQueryTargetSpec struct { - Name string // the name of the PostgreSQL instance - Node string // the node that the instance is running on - ReplicationLag int // how far behind the instance is behind the primary, in MB - Status string // the current status of the instance - Timeline int // the timeline the replica is on; timelines are adjusted after failover events - PendingRestart bool // whether or not a restart is pending for the target -} - -// ScaleQueryResponse -// swagger:model -type ScaleQueryResponse struct { - Results []ScaleQueryTargetSpec - Status - Standby bool -} - -// ScaleDownResponse -// swagger:model -type ScaleDownResponse struct { - Results []string - Status -} - -// ClusterScaleResponse ... 
-// swagger:model -type ClusterScaleResponse struct { - Results []string - Status -} - -// ClusterTablespaceDetail contains details required to create a tablespace -// swagger:model -type ClusterTablespaceDetail struct { - // Name is the name of the tablespace. Becomes the name of the tablespace in - // PostgreSQL - Name string - // optional: allows for the specification of the size of the PVC for the - // tablespace, overriding the value that is in "StorageClass" - PVCSize string - // StorageConfig is the name of the storage config to use for the tablespace, - // e.g. "nfsstorage", that is specified in the pgo.yaml configuration - StorageConfig string -} diff --git a/pkg/apiservermsgs/common.go b/pkg/apiservermsgs/common.go deleted file mode 100644 index cce7f6be40..0000000000 --- a/pkg/apiservermsgs/common.go +++ /dev/null @@ -1,55 +0,0 @@ -package apiservermsgs - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -const PGO_VERSION = "4.5.0" - -// Ok status -const Ok = "ok" - -// Error code string -const Error = "error" - -// UpgradeError is the error used for when a command is tried against a cluster that has not -// been upgraded to the current Operator version -const UpgradeError = " has not yet been upgraded. Please upgrade the cluster before running this Postgres Operator command." - -// Status ... -// swagger:model Status -type Status struct { - // status code - Code string - // status message - Msg string -} - -// Syntactic sugar for consistency and readibility -func (s *Status) SetError(msg string) { - s.Code = Error - s.Msg = msg -} - -// BasicAuthCredentials ... -// swagger:model BasicAuthCredentials -type BasicAuthCredentials struct { - Username string - Password string - APIServerURL string -} - -func (b BasicAuthCredentials) HasUsernameAndPassword() bool { - return len(b.Username) > 0 && len(b.Password) > 0 -} diff --git a/pkg/apiservermsgs/configmsgs.go b/pkg/apiservermsgs/configmsgs.go deleted file mode 100644 index 06ed680008..0000000000 --- a/pkg/apiservermsgs/configmsgs.go +++ /dev/null @@ -1,27 +0,0 @@ -package apiservermsgs - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - "github.com/crunchydata/postgres-operator/internal/config" -) - -// ShowConfigResponse ... 
-// swagger:model -type ShowConfigResponse struct { - Result config.PgoConfig - Status -} diff --git a/pkg/apiservermsgs/dfmsgs.go b/pkg/apiservermsgs/dfmsgs.go deleted file mode 100644 index 22541840e7..0000000000 --- a/pkg/apiservermsgs/dfmsgs.go +++ /dev/null @@ -1,58 +0,0 @@ -package apiservermsgs - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -type DfPVCType int - -// DfShowAllSelector is a value that is used to represent "all" -const DfShowAllSelector = "*" - -// the DfPVCType selectors help to display determine what type of PVC is being -// analyzed as part of the DF command -const ( - PVCTypePostgreSQL DfPVCType = iota - PVCTypepgBackRest - PVCTypeTablespace - PVCTypeWriteAheadLog -) - -// DfRequest contains the parameters that can be used to get disk utilization -// for PostgreSQL clusters -// swagger:model -type DfRequest struct { - ClientVersion string - Namespace string - Selector string -} - -// DfDetail returns specific information about the utilization of a PVC -// swagger:model -type DfDetail struct { - InstanceName string - PodName string - PVCType DfPVCType - PVCName string - PVCUsed int64 - PVCCapacity int64 -} - -// DfResponse returns the results of how PVCs are being utilized, or an error -// message -// swagger:model -type DfResponse struct { - Results []DfDetail - Status -} diff --git a/pkg/apiservermsgs/failovermsgs.go b/pkg/apiservermsgs/failovermsgs.go deleted file mode 100644 index bfeefcb49a..0000000000 --- a/pkg/apiservermsgs/failovermsgs.go +++ /dev/null @@ -1,59 +0,0 @@ -package apiservermsgs - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// FailoverTargetSpec -// swagger:model -type FailoverTargetSpec struct { - Name string // the name of the PostgreSQL instance - Node string // the node that the instance is running on - ReplicationLag int // how far behind the instance is behind the primary, in MB - Status string // the current status of the instance - Timeline int // the timeline the replica is on; timelines are adjusted after failover events - PendingRestart bool // whether or not a restart is pending for the target -} - -// QueryFailoverResponse ... -// swagger:model -type QueryFailoverResponse struct { - Results []FailoverTargetSpec - Status - Standby bool -} - -// CreateFailoverResponse ... -// swagger:model -type CreateFailoverResponse struct { - Results []string - Targets string - Status -} - -// CreateFailoverRequest ... 
-// swagger:model -type CreateFailoverRequest struct { - Namespace string - ClusterName string - Target string - ClientVersion string -} - -// QueryFailoverRequest ... -// swagger:model -type QueryFailoverRequest struct { - ClusterName string - ClientVersion string -} diff --git a/pkg/apiservermsgs/labelmsgs.go b/pkg/apiservermsgs/labelmsgs.go deleted file mode 100644 index eabf3e8ecf..0000000000 --- a/pkg/apiservermsgs/labelmsgs.go +++ /dev/null @@ -1,45 +0,0 @@ -package apiservermsgs - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// LabelRequest ... -// swagger:model -type LabelRequest struct { - Selector string - Namespace string - Args []string - LabelCmdLabel string - DryRun bool - DeleteLabel bool - ClientVersion string -} - -// DeleteLabelRequest ... -// swagger:model -type DeleteLabelRequest struct { - Selector string - Namespace string - Args []string - LabelCmdLabel string - ClientVersion string -} - -// LabelResponse ... -// swagger:model -type LabelResponse struct { - Results []string - Status -} diff --git a/pkg/apiservermsgs/namespacemsgs.go b/pkg/apiservermsgs/namespacemsgs.go deleted file mode 100644 index 3921604a00..0000000000 --- a/pkg/apiservermsgs/namespacemsgs.go +++ /dev/null @@ -1,86 +0,0 @@ -package apiservermsgs - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// NamespaceResult ... -// swagger:model -type NamespaceResult struct { - Namespace string - InstallationAccess bool - UserAccess bool -} - -// ShowNamespaceRequest ... -// swagger:model -type ShowNamespaceRequest struct { - Args []string - AllFlag bool - ClientVersion string -} - -// ShowNamespaceResponse ... -// swagger:model -type ShowNamespaceResponse struct { - Username string - Results []NamespaceResult - Status -} - -// UpdateNamespaceRequest ... -// swagger:model -type UpdateNamespaceRequest struct { - Args []string - ClientVersion string -} - -// UpdateNamespaceResponse ... -// swagger:model -type UpdateNamespaceResponse struct { - Results []string - Status -} - -// CreateNamespaceRequest ... -// swagger:model -type CreateNamespaceRequest struct { - Args []string - Namespace string - ClientVersion string -} - -// CreateNamespaceResponse ... -// swagger:model -type CreateNamespaceResponse struct { - Results []string - Status -} - -// DeleteNamespaceRequest ... 
-// swagger:model -type DeleteNamespaceRequest struct { - Args []string - Selector string - Namespace string - AllFlag bool - ClientVersion string -} - -// DeleteNamespaceResponse ... -// swagger:model -type DeleteNamespaceResponse struct { - Results []string - Status -} diff --git a/pkg/apiservermsgs/pgadminmsgs.go b/pkg/apiservermsgs/pgadminmsgs.go deleted file mode 100644 index 5d68b9352d..0000000000 --- a/pkg/apiservermsgs/pgadminmsgs.go +++ /dev/null @@ -1,101 +0,0 @@ -package apiservermsgs - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// CreatePgAdminRequest ... -// swagger:model -type CreatePgAdminRequest struct { - Args []string - ClientVersion string - Namespace string - Selector string -} - -// CreatePgAdminResponse ... -// swagger:model -type CreatePgAdminResponse struct { - Results []string - Status -} - -// DeletePgAdminRequest ... -// swagger:model -type DeletePgAdminRequest struct { - Args []string - Selector string - Namespace string - ClientVersion string - Uninstall bool -} - -// DeletePgAdminResponse ... -// swagger:model -type DeletePgAdminResponse struct { - Results []string - Status -} - -// ShowPgAdminDetail is the specific information about a pgAdmin deployment -// for a cluster -// -// swagger:model -type ShowPgAdminDetail struct { - // ClusterName is the name of the PostgreSQL cluster associated with this - // pgAdmin deployment - ClusterName string - // HasPgAdmin is set to true if there is a pgAdmin deployment with this - // cluster, otherwise its false - HasPgAdmin bool - // ServiceClusterIP contains the ClusterIP address of the Service - ServiceClusterIP string - // ServiceExternalIP contains the external IP address of the Service, if it - // is assigned - ServiceExternalIP string - // ServiceName contains the name of the Kubernetes Service - ServiceName string - // Users contains the list of users configured for pgAdmin login - Users []string -} - -// ShowPgAdminRequest contains the attributes for requesting information about -// a pgAdmin deployment -// -// swagger:model -type ShowPgAdminRequest struct { - // ClientVersion is the required parameter that includes the version of the - // Operator that is requesting - ClientVersion string - - // ClusterNames contains one or more names of cluster to be queried to show - // information about their pgAdmin deployment - ClusterNames []string - - // Namespace is the namespace to perform the query in - Namespace string - - // Selector is optional and contains a selector to gather information about - // a PostgreSQL cluster's pgAdmin - Selector string -} - -// ShowPgAdminResponse contains the attributes that are part of the response -// from the pgAdmin request, i.e. 
pgAdmin information -// -// swagger:model -type ShowPgAdminResponse struct { - Results []ShowPgAdminDetail - Status -} diff --git a/pkg/apiservermsgs/pgbouncermsgs.go b/pkg/apiservermsgs/pgbouncermsgs.go deleted file mode 100644 index 0feab5f15e..0000000000 --- a/pkg/apiservermsgs/pgbouncermsgs.go +++ /dev/null @@ -1,194 +0,0 @@ -package apiservermsgs - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// CreatePgbouncerRequest ... -// swagger:model -type CreatePgbouncerRequest struct { - Args []string - ClientVersion string - // CPULimit, if specified, is the max CPU that should be used on a pgBouncer - // Pod. Defaults to not being set. - CPULimit string - // CPURequest, if specified, is the value of how much CPU should be - // requested for deploying pgBouncer instances. Defaults to not being - // requested - CPURequest string - // MemoryLimit, if specified, is the max CPU that should be used on a - // pgBouncer Pod. Defaults to not being set. - MemoryLimit string - // MemoryRequest, if specified, is the value of how much RAM should - // be requested for deploying pgBouncer instances. Defaults to the server - // specified default - MemoryRequest string - Namespace string - // Replicas represents the total number of pgBouncer pods to deploy with a - // PostgreSQL cluster. Must be at least 1. If 0 is passed in, it will - // automatically be set to 1 - Replicas int32 - Selector string -} - -// CreatePgbouncerResponse ... -// swagger:model -type CreatePgbouncerResponse struct { - Results []string - Status -} - -// DeletePgbouncerRequest ... -// swagger:model -type DeletePgbouncerRequest struct { - Args []string - Selector string - Namespace string - ClientVersion string - Uninstall bool -} - -// DeletePgbouncerResponse ... 
-// swagger:model -type DeletePgbouncerResponse struct { - Results []string - Status -} - -// ShowPgBouncerDetail is the specific information about a pgBouncer deployment -// for a cluster -// -// swagger:model -type ShowPgBouncerDetail struct { - // ClusterName is the name of the PostgreSQL cluster associated with this - // pgBouncer deployment - ClusterName string - // HasPgBouncer is set to true if there is a pgBouncer deployment with this - // cluster, otherwise its false - HasPgBouncer bool - // Password contains the password for the pgBouncer service account - Password string - // ServiceClusterIP contains the ClusterIP address of the Service - ServiceClusterIP string - // ServiceExternalIP contains the external IP address of the Service, if it - // is assigned - ServiceExternalIP string - // ServiceName contains the name of the Kubernetes Service - ServiceName string - // Username is the username for the pgBouncer service account - Username string -} - -// ShowPgBouncerRequest contains the attributes for requesting information about -// a pgBouncer deployment -// -// swagger:model -type ShowPgBouncerRequest struct { - // ClientVersion is the required parameter that includes the version of the - // Operator that is requesting - ClientVersion string - - // ClusterNames contains one or more names of cluster to be queried to show - // information about their pgBouncer deployment - ClusterNames []string - - // Namespace is the namespace to perform the query in - Namespace string - - // Selector is optional and contains a selector to gather information about - // a PostgreSQL cluster's pgBouncer - Selector string -} - -// ShowPgBouncerResponse contains the attributes that are part of the response -// from the pgBouncer request, i.e. pgBouncer information -// -// swagger:model -type ShowPgBouncerResponse struct { - Results []ShowPgBouncerDetail - Status -} - -// UpdatePgBouncerDetail is the specific information about the pgBouncer update -// request for each deployment -// -// swagger:model -type UpdatePgBouncerDetail struct { - // ClusterName is the name of the PostgreSQL cluster associated with this - // pgBouncer deployment - ClusterName string - // Error is set to true if there is an error. HasPgbouncer == false is not - // an error - Error bool - // ErrorMessage contains an error message if there is an error - ErrorMessage string - // HasPgBouncer is set to true if there is a pgBouncer deployment with this - // cluster, otherwise its false - HasPgBouncer bool -} - -// UpdatePgBouncerRequest contains the attributes for updating a pgBouncer -// deployment -// -// swagger:model -type UpdatePgBouncerRequest struct { - // ClientVersion is the required parameter that includes the version of the - // Operator that is requesting - ClientVersion string - - // ClusterNames contains one or more names of pgBouncer deployments to be - // updated - ClusterNames []string - - // CPULimit, if specified, is the max CPU that should be used on a pgBouncer - // Pod. Defaults to not being set. - CPULimit string - - // CPURequest, if specified, is the value of how much CPU should be - // requested for deploying pgBouncer instances. Defaults to not being - // requested - CPURequest string - - // MemoryLimit, if specified, is the max CPU that should be used on a - // pgBouncer Pod. Defaults to not being set. - MemoryLimit string - - // MemoryRequest, if specified, is the value of how much RAM should - // be requested for deploying pgBouncer instances. 
Defaults to the server - // specified default - MemoryRequest string - - // Namespace is the namespace to perform the query in - Namespace string - - // Replicas represents the total number of pgBouncer pods to deploy with a - // PostgreSQL cluster. Must be at least 1. If 0 is passed in, it is ignored - Replicas int32 - - // RotatePassword is used to rotate the password for the "pgbouncer" service - // account - RotatePassword bool - - // Selector is optional and contains a selector for pgBouncer deployments that - // are to be updated - Selector string -} - -// UpdatePgBouncerResponse contains the resulting output of the update request -// -// swagger:model -type UpdatePgBouncerResponse struct { - Results []UpdatePgBouncerDetail - Status -} diff --git a/pkg/apiservermsgs/pgdumpmsgs.go b/pkg/apiservermsgs/pgdumpmsgs.go deleted file mode 100644 index e247fca304..0000000000 --- a/pkg/apiservermsgs/pgdumpmsgs.go +++ /dev/null @@ -1,97 +0,0 @@ -package apiservermsgs - -import ( - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" -) - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// CreatepgDumpBackupResponse ... -// swagger:model -type CreatepgDumpBackupResponse struct { - Results []string - Status -} - -// CreatepgDumpBackup ... -// swagger:model -type CreatepgDumpBackupRequest struct { - Namespace string - Args []string - Selector string - PGDumpDB string - PVCName string - StorageConfig string - BackupOpts string -} - -// ShowpgDumpDetail -// swagger:model -type ShowpgDumpDetail struct { - Name string - Info string -} - -// PgRestoreResponse -// swagger:model -type PgRestoreResponse struct { - Results []string - Status -} - -// PgRestoreRequest ... -// swagger:model -type PgRestoreRequest struct { - Namespace string - FromCluster string - FromPVC string - PGDumpDB string - RestoreOpts string - PITRTarget string - NodeLabel string -} - -// NOTE: these are ported over from legacy functionality - -// ShowBackupResponse ... -// swagger:model -type ShowBackupResponse struct { - BackupList PgbackupList - Status -} - -// PgbackupList ... -// swagger:model -type PgbackupList struct { - Items []Pgbackup `json:"items"` -} - -// Pgbackup ... -// swagger:model -type Pgbackup struct { - CreationTimestamp string - Namespace string `json:"namespace"` - Name string `json:"name"` - StorageSpec crv1.PgStorageSpec `json:"storagespec"` - CCPImageTag string `json:"ccpimagetag"` - BackupHost string `json:"backuphost"` - BackupUserSecret string `json:"backupusersecret"` - BackupPort string `json:"backupport"` - BackupStatus string `json:"backupstatus"` - BackupPVC string `json:"backuppvc"` - BackupOpts string `json:"backupopts"` - Toc map[string]string `json:"toc"` -} diff --git a/pkg/apiservermsgs/pgorolemsgs.go b/pkg/apiservermsgs/pgorolemsgs.go deleted file mode 100644 index 1f62efa1ab..0000000000 --- a/pkg/apiservermsgs/pgorolemsgs.go +++ /dev/null @@ -1,87 +0,0 @@ -package apiservermsgs - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. 
-Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// ShowPgoroleRequest ... -// swagger:model -type ShowPgoroleRequest struct { - Namespace string - AllFlag bool - ClientVersion string - PgoroleName []string -} - -// PgroleInfo ... -// swagger:model -type PgoroleInfo struct { - Name string - Permissions string -} - -// ShowPgoroleResponse ... -// swagger:model -type ShowPgoroleResponse struct { - RoleInfo []PgoroleInfo - Status -} - -// CreatePgoroleRequest ... -// swagger:model -type CreatePgoroleRequest struct { - PgoroleName string - PgorolePermissions string - Namespace string - ClientVersion string -} - -// CreatePgoroleResponse ... -// swagger:model -type CreatePgoroleResponse struct { - Status -} - -// UpdatePgoroleRequest ... -// swagger:model -type UpdatePgoroleRequest struct { - Name string - PgorolePermissions string - PgoroleName string - ChangePermissions bool - Namespace string - ClientVersion string -} - -// ApplyPgoroleResponse ... -// swagger:model -type UpdatePgoroleResponse struct { - Status -} - -// DeletePgoroleRequest ... -// swagger:model -type DeletePgoroleRequest struct { - PgoroleName []string - Namespace string - AllFlag bool - ClientVersion string -} - -// DeletePgoroleResponse ... -// swagger:model -type DeletePgoroleResponse struct { - Results []string - Status -} diff --git a/pkg/apiservermsgs/pgousermsgs.go b/pkg/apiservermsgs/pgousermsgs.go deleted file mode 100644 index 815f8f1fdf..0000000000 --- a/pkg/apiservermsgs/pgousermsgs.go +++ /dev/null @@ -1,93 +0,0 @@ -package apiservermsgs - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// ShowPgouserRequest ... -// swagger:model -type ShowPgouserRequest struct { - Namespace string - AllFlag bool - ClientVersion string - PgouserName []string -} - -// PgouserInfo ... -// swagger:model -type PgouserInfo struct { - Username string - Role []string - Namespace []string -} - -// ShowPgouserResponse ... -// swagger:model -type ShowPgouserResponse struct { - UserInfo []PgouserInfo - Status -} - -// CreatePgouserRequest ... -// swagger:model -type CreatePgouserRequest struct { - PgouserName string - PgouserPassword string - PgouserRoles string - AllNamespaces bool - PgouserNamespaces string - Namespace string - ClientVersion string -} - -// CreatePgouserResponse ... -// swagger:model -type CreatePgouserResponse struct { - Status -} - -// UpdatePgouserRequest ... 
-// swagger:model -type UpdatePgouserRequest struct { - Name string - PgouserRoles string - PgouserNamespaces string - AllNamespaces bool - PgouserPassword string - PgouserName string - Namespace string - ClientVersion string -} - -// ApplyPgouserResponse ... -// swagger:model -type UpdatePgouserResponse struct { - Status -} - -// DeletePgouserRequest ... -// swagger:model -type DeletePgouserRequest struct { - PgouserName []string - Namespace string - AllFlag bool - ClientVersion string -} - -// DeletePgouserResponse ... -// swagger:model -type DeletePgouserResponse struct { - Results []string - Status -} diff --git a/pkg/apiservermsgs/policymsgs.go b/pkg/apiservermsgs/policymsgs.go deleted file mode 100644 index ec3e7cf2f9..0000000000 --- a/pkg/apiservermsgs/policymsgs.go +++ /dev/null @@ -1,93 +0,0 @@ -package apiservermsgs - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -import ( - crv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" -) - -// ShowPolicyRequest ... -// swagger:model -type ShowPolicyRequest struct { - Selector string - Namespace string - AllFlag bool - ClientVersion string - Policyname string -} - -// CreatePolicyRequest ... -// swagger:model -type CreatePolicyRequest struct { - Name string - URL string - SQL string - Namespace string - ClientVersion string -} - -// CreatePolicyResponse ... -// swagger:model -type CreatePolicyResponse struct { - Status -} - -// ApplyPolicyRequest ... -// swagger:model -type ApplyPolicyRequest struct { - Name string - Selector string - DryRun bool - Namespace string - ClientVersion string -} - -// ApplyPolicyResponse ... -// swagger:model -type ApplyPolicyResponse struct { - Name []string - Status -} - -// ApplyResults ... -// swagger:model -type ApplyResults struct { - Results []string -} - -// ShowPolicyResponse ... -// swagger:model -type ShowPolicyResponse struct { - PolicyList crv1.PgpolicyList - Status -} - -// DeletePolicyRequest ... -// swagger:model -type DeletePolicyRequest struct { - Selector string - Namespace string - AllFlag bool - ClientVersion string - PolicyName string -} - -// DeletePolicyResponse ... -// swagger:model -type DeletePolicyResponse struct { - Results []string - Status -} diff --git a/pkg/apiservermsgs/pvcmsgs.go b/pkg/apiservermsgs/pvcmsgs.go deleted file mode 100644 index f59ddd7983..0000000000 --- a/pkg/apiservermsgs/pvcmsgs.go +++ /dev/null @@ -1,40 +0,0 @@ -package apiservermsgs - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-See the License for the specific language governing permissions and -limitations under the License. -*/ - -// ShowPVCRequest ... -// swagger:model -type ShowPVCRequest struct { - ClusterName string - Selector string - ClientVersion string - Namespace string - AllFlag bool -} - -// ShowPVCResponse ... -// swagger:model -type ShowPVCResponse struct { - Results []ShowPVCResponseResult - Status -} - -// ShowPVCResponseResult contains a semi structured result of information -// about a PVC in a cluster -type ShowPVCResponseResult struct { - ClusterName string - PVCName string -} diff --git a/pkg/apiservermsgs/reloadmsgs.go b/pkg/apiservermsgs/reloadmsgs.go deleted file mode 100644 index 34a3738399..0000000000 --- a/pkg/apiservermsgs/reloadmsgs.go +++ /dev/null @@ -1,31 +0,0 @@ -package apiservermsgs - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// ReloadResponse ... -// swagger:model -type ReloadResponse struct { - Results []string - Status -} - -// ReloadRequest ... -// swagger:model -type ReloadRequest struct { - Namespace string - Args []string - Selector string -} diff --git a/pkg/apiservermsgs/restartmsgs.go b/pkg/apiservermsgs/restartmsgs.go deleted file mode 100644 index c0b32d3d00..0000000000 --- a/pkg/apiservermsgs/restartmsgs.go +++ /dev/null @@ -1,81 +0,0 @@ -package apiservermsgs - -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// RestartResponse is the response generated for a request to restart a cluster. -// swagger:model -type RestartResponse struct { - Result RestartDetail - Status -} - -// RestartDetail defines the details for a cluster restart request, specifically -// information about each instance restarted as a result of the request. -// swagger:model -type RestartDetail struct { - ClusterName string - Instances []InstanceDetail - Error bool - ErrorMessage string -} - -// InstanceDetail defines the details of an instance within a cluster restarted as a result -// of a cluster restart request. This includes the name of each instance, along with any -// errors that may have occurred while attempting to restart an instance. -type InstanceDetail struct { - InstanceName string - Error bool - ErrorMessage string -} - -// RestartRequest defines a request to restart a cluster, or one or more targets (i.e. 
-// instances) within a cluster -// swagger:model -type RestartRequest struct { - Namespace string - ClusterName string - Targets []string - ClientVersion string -} - -// QueryRestartRequest defines a request to query a specific cluster for available restart targets. -// swagger:model -type QueryRestartRequest struct { - ClusterName string - ClientVersion string -} - -// QueryRestartResponse is the response generated when querying the available instances within a -// cluster in order to perform a restart against a specific target. -// swagger:model -type QueryRestartResponse struct { - Results []RestartTargetSpec - Status - Standby bool -} - -// RestartTargetSpec defines the details for a specific restart target identified while querying a -// cluster for available targets (i.e. instances). -// swagger:model -type RestartTargetSpec struct { - Name string // the name of the PostgreSQL instance - Node string // the node that the instance is running on - ReplicationLag int // how far the instance is behind the primary, in MB - Status string // the current status of the instance - Timeline int // the timeline the replica is on; timelines are adjusted after failover events - PendingRestart bool // whether or not a restart is pending for the target - Role string // the role of the specific instance -} diff --git a/pkg/apiservermsgs/schedulemsgs.go b/pkg/apiservermsgs/schedulemsgs.go deleted file mode 100644 index 4b037a5992..0000000000 --- a/pkg/apiservermsgs/schedulemsgs.go +++ /dev/null @@ -1,74 +0,0 @@ -package apiservermsgs - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// CreateScheduleRequest ... -// swagger:model -type CreateScheduleRequest struct { - ClusterName string - Name string - Namespace string - Schedule string - ScheduleType string - Selector string - PGBackRestType string - BackrestStorageType string - PVCName string - ScheduleOptions string - StorageConfig string - PolicyName string - Database string - Secret string -} - -// CreateScheduleResponse ... -// swagger:model -type CreateScheduleResponse struct { - Results []string - Status -} - -// DeleteScheduleRequest ... -// swagger:model -type DeleteScheduleRequest struct { - Namespace string - ScheduleName string - ClusterName string - Selector string -} - -// ShowScheduleRequest ... -// swagger:model -type ShowScheduleRequest struct { - Namespace string - ScheduleName string - ClusterName string - Selector string -} - -// DeleteScheduleResponse ... -// swagger:model -type DeleteScheduleResponse struct { - Results []string - Status -} - -// ShowScheduleResponse ... -// swagger:model -type ShowScheduleResponse struct { - Results []string - Status -} diff --git a/pkg/apiservermsgs/statusmsgs.go b/pkg/apiservermsgs/statusmsgs.go deleted file mode 100644 index 94994c75b9..0000000000 --- a/pkg/apiservermsgs/statusmsgs.go +++ /dev/null @@ -1,52 +0,0 @@ -package apiservermsgs - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. 
-Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// NodeInfo ... -// swagger:model -type NodeInfo struct { - Name string - Status string - Labels map[string]string -} - -// KeyValue ... -// swagger:model -type KeyValue struct { - Key string - Value int -} - -// StatusDetail ... -// this aggregated status comes from the pgo-status container -// by means of a volume mounted json blob it generates -// swagger:model -type StatusDetail struct { - NumDatabases int - NumClaims int - VolumeCap string - DbTags map[string]int - NotReady []string - Nodes []NodeInfo - Labels []KeyValue -} - -// ShowClusterResponse ... -// swagger:model -type StatusResponse struct { - Result StatusDetail - Status -} diff --git a/pkg/apiservermsgs/upgrademsgs.go b/pkg/apiservermsgs/upgrademsgs.go deleted file mode 100644 index ab7fecc47a..0000000000 --- a/pkg/apiservermsgs/upgrademsgs.go +++ /dev/null @@ -1,35 +0,0 @@ -package apiservermsgs - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// CreateUpgradeRequest ... -// swagger:model -type CreateUpgradeRequest struct { - Args []string - Selector string - Namespace string - ClientVersion string - IgnoreValidation bool - UpgradeCCPImageTag string -} - -// CreateUpgradeResponse ... -// swagger:model -type CreateUpgradeResponse struct { - Results []string - Status - WorkflowID string -} diff --git a/pkg/apiservermsgs/usermsgs.go b/pkg/apiservermsgs/usermsgs.go deleted file mode 100644 index 1f0ba56295..0000000000 --- a/pkg/apiservermsgs/usermsgs.go +++ /dev/null @@ -1,166 +0,0 @@ -package apiservermsgs - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "errors" - - pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password" -) - -type UpdateClusterLoginState int - -// set the different values around whether or not to disable/enable a user's -// ability to login -const ( - UpdateUserLoginDoNothing UpdateClusterLoginState = iota - UpdateUserLoginEnable - UpdateUserLoginDisable -) - -var ( - // ErrPasswordTypeInvalid is used when a string that's not included in - // PasswordTypeStrings is used - ErrPasswordTypeInvalid = errors.New("invalid password type. choices are (md5, scram-sha-256)") -) - -// passwordTypeStrings is a mapping of strings of password types to their -// corresponding value of the structured password type -var passwordTypeStrings = map[string]pgpassword.PasswordType{ - "": pgpassword.MD5, - "md5": pgpassword.MD5, - "scram": pgpassword.SCRAM, - "scram-sha-256": pgpassword.SCRAM, -} - -// CreateUserRequest contains the parameters that are passed in when an Operator -// user requests to create a new PostgreSQL user -// swagger:model -type CreateUserRequest struct { - AllFlag bool - Clusters []string - ClientVersion string - ManagedUser bool - Namespace string - Password string - PasswordAgeDays int - PasswordLength int - // PasswordType is one of "md5" or "scram-sha-256", defaults to "md5" - PasswordType string - Selector string - Username string -} - -// CreateUserResponse is the response to a create user request -// swagger:model -type CreateUserResponse struct { - Results []UserResponseDetail - Status -} - -// DeleteUserRequest contains the parameters that are used to delete PostgreSQL -// users from clusters -// swagger:model -type DeleteUserRequest struct { - AllFlag bool - ClientVersion string - Clusters []string - Namespace string - Selector string - Username string -} - -// DeleteUserResponse contains the results from trying to delete PostgreSQL -// users from clusters. The content in this will be much sparser than the -// others -// swagger:model -type DeleteUserResponse struct { - Results []UserResponseDetail - Status -} - -// ShowUserRequest finds information about users in various PostgreSQL clusters -// swagger:model -type ShowUserRequest struct { - AllFlag bool - Clusters []string - ClientVersion string - Expired int - Namespace string - Selector string - ShowSystemAccounts bool -} - -// ShowUsersResponse ... -// swagger:model -type ShowUserResponse struct { - Results []UserResponseDetail - Status -} - -// UpdateUserRequest is the API to allow an Operator user to update information -// about a PostgreSQL user -// swagger:model -type UpdateUserRequest struct { - AllFlag bool - ClientVersion string - Clusters []string - Expired int - ExpireUser bool - LoginState UpdateClusterLoginState - ManagedUser bool - Namespace string - Password string - PasswordAgeDays int - PasswordLength int - // PasswordType is one of "md5" or "scram-sha-256", defaults to "md5" - PasswordType string - PasswordValidAlways bool - RotatePassword bool - Selector string - Username string -} - -// UpdateUserResponse contains the response after an update user request -// swagger:model -type UpdateUserResponse struct { - Results []UserResponseDetail - Status -} - -// UserResponseDetail returns specific information about the user that -// was updated, including password, expiration time, etc. 
-// swagger:model -type UserResponseDetail struct { - ClusterName string - Error bool - ErrorMessage string - Password string - Username string - ValidUntil string -} - -// GetPasswordType returns the enumerated password type based on the string, and -// an error if it cannot match one -func GetPasswordType(passwordTypeStr string) (pgpassword.PasswordType, error) { - passwordType, ok := passwordTypeStrings[passwordTypeStr] - - if !ok { - return passwordType, ErrPasswordTypeInvalid - } - - return passwordType, nil -} diff --git a/pkg/apiservermsgs/usermsgs_test.go b/pkg/apiservermsgs/usermsgs_test.go deleted file mode 100644 index d2f70388fc..0000000000 --- a/pkg/apiservermsgs/usermsgs_test.go +++ /dev/null @@ -1,63 +0,0 @@ -package apiservermsgs - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "testing" - - pgpassword "github.com/crunchydata/postgres-operator/internal/postgres/password" -) - -func TestGetPasswordType(t *testing.T) { - t.Run("valid", func(t *testing.T) { - tests := map[string]pgpassword.PasswordType{ - "": pgpassword.MD5, - "md5": pgpassword.MD5, - "scram": pgpassword.SCRAM, - "scram-sha-256": pgpassword.SCRAM, - } - - for passwordTypeStr, expected := range tests { - t.Run(passwordTypeStr, func(t *testing.T) { - passwordType, err := GetPasswordType(passwordTypeStr) - - if err != nil { - t.Error(err) - return - } - - if passwordType != expected { - t.Errorf("password type %q should yield %d", passwordTypeStr, expected) - } - }) - } - }) - - t.Run("invalid", func(t *testing.T) { - tests := map[string]error{ - "magic": ErrPasswordTypeInvalid, - "scram-sha-512": ErrPasswordTypeInvalid, - } - - for passwordTypeStr, expected := range tests { - t.Run(passwordTypeStr, func(t *testing.T) { - if _, err := GetPasswordType(passwordTypeStr); err != expected { - t.Errorf("password type %q should yield error %q", passwordTypeStr, expected.Error()) - } - }) - } - }) -} diff --git a/pkg/apiservermsgs/versionmsgs.go b/pkg/apiservermsgs/versionmsgs.go deleted file mode 100644 index 7685221c44..0000000000 --- a/pkg/apiservermsgs/versionmsgs.go +++ /dev/null @@ -1,23 +0,0 @@ -package apiservermsgs - -/* -Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// VersionResponse ... 
-// swagger:model -type VersionResponse struct { - Version string - Status -} diff --git a/pkg/apiservermsgs/watchmsgs.go b/pkg/apiservermsgs/watchmsgs.go deleted file mode 100644 index 9d50a81ccd..0000000000 --- a/pkg/apiservermsgs/watchmsgs.go +++ /dev/null @@ -1,31 +0,0 @@ -package apiservermsgs - -/* -Copyright 2019 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// WatchRequest ... -// swagger:model -type WatchRequest struct { - Topics []string - ClientVersion string - Namespace string -} - -// WatchResponse ... -// swagger:model -type WatchResponse struct { - Results []string - Status -} diff --git a/pkg/apiservermsgs/workflowmsgs.go b/pkg/apiservermsgs/workflowmsgs.go deleted file mode 100644 index 2908d75347..0000000000 --- a/pkg/apiservermsgs/workflowmsgs.go +++ /dev/null @@ -1,30 +0,0 @@ -package apiservermsgs - -/* -Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// ShowWorkflowDetail ... -// swagger:model -type ShowWorkflowDetail struct { - ClusterName string - Parameters map[string]string -} - -// ShowWorkflowResponse ... -// swagger:model -type ShowWorkflowResponse struct { - Results ShowWorkflowDetail - Status -} diff --git a/pkg/events/eventing.go b/pkg/events/eventing.go deleted file mode 100644 index 66fc353117..0000000000 --- a/pkg/events/eventing.go +++ /dev/null @@ -1,91 +0,0 @@ -package events - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "errors" - "fmt" - crunchylog "github.com/crunchydata/postgres-operator/internal/logging" - "github.com/nsqio/go-nsq" - log "github.com/sirupsen/logrus" - "os" - "reflect" - "time" -) - -// String returns the string form for a given LogLevel -func Publish(e EventInterface) error { - //Add logging configuration - crunchylog.CrunchyLogger(crunchylog.SetParameters()) - eventAddr := os.Getenv("EVENT_ADDR") - if eventAddr == "" { - return errors.New("EVENT_ADDR not set") - } - if os.Getenv("DISABLE_EVENTING") == "true" { - log.Debugf("eventing disabled") - return nil - } - - cfg := nsq.NewConfig() - if cfg == nil { - } - //cfg.UserAgent = fmt.Sprintf("to_nsq/%s go-nsq/%s", version.Binary, nsq.VERSION) - cfg.UserAgent = fmt.Sprintf("go-nsq/%s", nsq.VERSION) - - log.Debugf("publishing %s message %s", reflect.TypeOf(e), e.String()) - log.Debugf("header %s ", e.GetHeader().String()) - - header := e.GetHeader() - header.Timestamp = time.Now() - - b, err := json.MarshalIndent(e, "", " ") - if err != nil { - log.Errorf("Error: %s", err) - return err - } - log.Debug(string(b)) - - var producer *nsq.Producer - producer, err = nsq.NewProducer(eventAddr, cfg) - if err != nil { - log.Errorf("Error: %s", err) - return err - } - - topics := e.GetHeader().Topic - if len(topics) == 0 { - log.Errorf("Error: topics list is empty and is required to publish") - return err - } - - for i := 0; i < len(topics); i++ { - err = producer.Publish(topics[i], b) - if err != nil { - log.Errorf("Error: %s", err) - return err - } - } - - //always publish to the All topic - err = producer.Publish(EventTopicAll, b) - if err != nil { - log.Errorf("Error: %s", err) - return err - } - - return nil -} diff --git a/pkg/events/eventtype.go b/pkg/events/eventtype.go deleted file mode 100644 index 15b1bcd726..0000000000 --- a/pkg/events/eventtype.go +++ /dev/null @@ -1,682 +0,0 @@ -package events - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" - "time" -) - -const ( - EventTopicAll = "alltopic" - EventTopicCluster = "clustertopic" - EventTopicBackup = "backuptopic" - EventTopicLoad = "loadtopic" - EventTopicUser = "postgresusertopic" - EventTopicPolicy = "policytopic" - EventTopicPgAdmin = "pgadmintopic" - EventTopicPgbouncer = "pgbouncertopic" - EventTopicPGO = "pgotopic" - EventTopicPGOUser = "pgousertopic" - EventTopicUpgrade = "upgradetopic" -) -const ( - EventReloadCluster = "ReloadCluster" - EventPrimaryNotReady = "PrimaryNotReady" - EventPrimaryDeleted = "PrimaryDeleted" - EventCloneCluster = "CloneCluster" - EventCloneClusterCompleted = "CloneClusterCompleted" - EventCloneClusterFailure = "CloneClusterFailure" - EventCreateCluster = "CreateCluster" - EventCreateClusterCompleted = "CreateClusterCompleted" - EventCreateClusterFailure = "CreateClusterFailure" - EventScaleCluster = "ScaleCluster" - EventScaleClusterFailure = "ScaleClusterFailure" - EventScaleDownCluster = "ScaleDownCluster" - EventShutdownCluster = "ShutdownCluster" - EventFailoverCluster = "FailoverCluster" - EventFailoverClusterCompleted = "FailoverClusterCompleted" - EventRestoreCluster = "RestoreCluster" - EventRestoreClusterCompleted = "RestoreClusterCompleted" - EventUpgradeCluster = "UpgradeCluster" - EventUpgradeClusterCreateSubmitted = "UpgradeClusterCreateSubmitted" - EventUpgradeClusterFailure = "UpgradeClusterFailure" - EventDeleteCluster = "DeleteCluster" - EventDeleteClusterCompleted = "DeleteClusterCompleted" - EventCreateLabel = "CreateLabel" - - EventCreateBackup = "CreateBackup" - EventCreateBackupCompleted = "CreateBackupCompleted" - - EventCreatePolicy = "CreatePolicy" - EventApplyPolicy = "ApplyPolicy" - EventDeletePolicy = "DeletePolicy" - - EventCreatePgAdmin = "CreatePgAdmin" - EventDeletePgAdmin = "DeletePgAdmin" - - EventCreatePgbouncer = "CreatePgbouncer" - EventDeletePgbouncer = "DeletePgbouncer" - EventUpdatePgbouncer = "UpdatePgbouncer" - - EventPGOCreateUser = "PGOCreateUser" - EventPGOUpdateUser = "PGOUpdateUser" - EventPGODeleteUser = "PGODeleteUser" - EventPGOCreateRole = "PGOCreateRole" - EventPGOUpdateRole = "PGOUpdateRole" - EventPGODeleteRole = "PGODeleteRole" - EventPGOStart = "PGOStart" - EventPGOStop = "PGOStop" - EventPGOUpdateConfig = "PGOUpdateConfig" - EventPGODeleteNamespace = "PGODeleteNamespace" - EventPGOCreateNamespace = "PGOCreateNamespace" - - EventStandbyEnabled = "StandbyEnabled" - EventStandbyDisabled = "StandbyDisabled" -) - -type EventHeader struct { - EventType string `json:eventtype` - Namespace string `json:"namespace"` - Username string `json:"username"` - Timestamp time.Time `json:"timestamp"` - Topic []string `json:"topic"` -} - -func (lvl EventHeader) String() string { - msg := fmt.Sprintf("Event %s - ns [%s] - user [%s] topics [%v] timestamp [%s]", lvl.EventType, lvl.Namespace, lvl.Username, lvl.Topic, lvl.Timestamp) - return msg -} - -type EventInterface interface { - GetHeader() EventHeader - String() string -} - -//-------- -type EventReloadClusterFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventReloadClusterFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventReloadClusterFormat) String() string { - msg := fmt.Sprintf("Event %s - (reload) name %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventCloneClusterFailureFormat struct { - EventHeader `json:"eventheader"` - SourceClusterName string `json:"sourceClusterName"` - 
TargetClusterName string `json:"targetClusterName"` - ErrorMessage string `json:"errormessage"` - WorkflowID string `json:"workflowid"` -} - -func (p EventCloneClusterFailureFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCloneClusterFailureFormat) String() string { - return fmt.Sprintf( - "Event %s - (clone cluster failure) sourceclustername %s targetclustername %s workflow %s error %s", - lvl.EventHeader, lvl.SourceClusterName, lvl.TargetClusterName, lvl.WorkflowID, lvl.ErrorMessage) -} - -//---------------------------- -type EventCloneClusterFormat struct { - EventHeader `json:"eventheader"` - SourceClusterName string `json:"sourceClusterName"` - TargetClusterName string `json:"targetClusterName"` - WorkflowID string `json:"workflowid"` -} - -func (p EventCloneClusterFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCloneClusterFormat) String() string { - return fmt.Sprintf( - "Event %s - (Clone cluster) sourceclustername %s targetclustername %s workflow %s", - lvl.EventHeader, lvl.SourceClusterName, lvl.TargetClusterName, lvl.WorkflowID) -} - -//---------------------------- -type EventCloneClusterCompletedFormat struct { - EventHeader `json:"eventheader"` - SourceClusterName string `json:"sourceClusterName"` - TargetClusterName string `json:"targetClusterName"` - WorkflowID string `json:"workflowid"` -} - -func (p EventCloneClusterCompletedFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCloneClusterCompletedFormat) String() string { - return fmt.Sprintf( - "Event %s - (Clone cluster completed) sourceclustername %s targetclustername %s workflow %s", - lvl.EventHeader, lvl.SourceClusterName, lvl.TargetClusterName, lvl.WorkflowID) -} - -//---------------------------- -type EventCreateClusterFailureFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - ErrorMessage string `json:"errormessage"` - WorkflowID string `json:"workflowid"` -} - -func (p EventCreateClusterFailureFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCreateClusterFailureFormat) String() string { - msg := fmt.Sprintf("Event %s - (create cluster failure) clustername %s workflow %s error %s", lvl.EventHeader, lvl.Clustername, lvl.WorkflowID, lvl.ErrorMessage) - return msg -} - -//---------------------------- -type EventCreateClusterFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - WorkflowID string `json:"workflowid"` -} - -func (p EventCreateClusterFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCreateClusterFormat) String() string { - msg := fmt.Sprintf("Event %s - (create cluster) clustername %s workflow %s", lvl.EventHeader, lvl.Clustername, lvl.WorkflowID) - return msg -} - -//---------------------------- -type EventCreateClusterCompletedFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - WorkflowID string `json:"workflowid"` -} - -func (p EventCreateClusterCompletedFormat) GetHeader() EventHeader { - return p.EventHeader -} -func (lvl EventCreateClusterCompletedFormat) String() string { - msg := fmt.Sprintf("Event %s - (create cluster completed) clustername %s workflow %s", lvl.EventHeader, lvl.Clustername, lvl.WorkflowID) - return msg -} - -//---------------------------- -type EventScaleClusterFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - Replicaname string `json:"replicaname"` -} - -func (p 
EventScaleClusterFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventScaleClusterFormat) String() string { - msg := fmt.Sprintf("Event %s (scale) - clustername %s - replicaname %s", lvl.EventHeader, lvl.Clustername, lvl.Replicaname) - return msg -} - -//---------------------------- -type EventScaleClusterFailureFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - Replicaname string `json:"replicaname"` - ErrorMessage string `json:"errormessage"` -} - -func (p EventScaleClusterFailureFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventScaleClusterFailureFormat) String() string { - msg := fmt.Sprintf("Event %s (scale failure) - clustername %s - replicaname %s error %s", lvl.EventHeader, lvl.Clustername, lvl.Replicaname, lvl.ErrorMessage) - return msg -} - -//---------------------------- -type EventScaleDownClusterFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - Replicaname string `json:"replicaname"` -} - -func (p EventScaleDownClusterFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventScaleDownClusterFormat) String() string { - msg := fmt.Sprintf("Event %s (scaledown) - clustername %s - replicaname %s", lvl.EventHeader, lvl.Clustername, lvl.Replicaname) - return msg -} - -//---------------------------- -type EventFailoverClusterFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - Target string `json:"target"` -} - -func (p EventFailoverClusterFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventFailoverClusterFormat) String() string { - msg := fmt.Sprintf("Event %s (failover) - clustername %s - target %s", lvl.EventHeader, lvl.Clustername, lvl.Target) - return msg -} - -//---------------------------- -type EventFailoverClusterCompletedFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - Target string `json:"target"` -} - -func (p EventFailoverClusterCompletedFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventFailoverClusterCompletedFormat) String() string { - msg := fmt.Sprintf("Event %s (failover completed) - clustername %s - target %s", lvl.EventHeader, lvl.Clustername, lvl.Target) - return msg -} - -//---------------------------- -type EventUpgradeClusterFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - WorkflowID string `json:"workflowid"` -} - -func (p EventUpgradeClusterFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventUpgradeClusterFormat) String() string { - msg := fmt.Sprintf("Event %s (upgrade) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventUpgradeClusterCreateFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - WorkflowID string `json:"workflowid"` -} - -func (p EventUpgradeClusterCreateFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventUpgradeClusterCreateFormat) String() string { - msg := fmt.Sprintf("Event %s (upgraded pgcluster submitted for creation) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventUpgradeClusterFailureFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - WorkflowID string `json:"workflowid"` - ErrorMessage string `json:"errormessage"` -} - -func (p 
EventUpgradeClusterFailureFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventUpgradeClusterFailureFormat) String() string { - return fmt.Sprintf( - "Event %s - (upgrade cluster failure) clustername %s workflow %s error %s", - lvl.EventHeader, lvl.Clustername, lvl.WorkflowID, lvl.ErrorMessage) -} - -//---------------------------- -type EventDeleteClusterFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventDeleteClusterFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventDeleteClusterFormat) String() string { - msg := fmt.Sprintf("Event %s (delete) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventDeleteClusterCompletedFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventDeleteClusterCompletedFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventDeleteClusterCompletedFormat) String() string { - msg := fmt.Sprintf("Event %s (delete completed) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventCreateBackupFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - BackupType string `json:"backuptype"` -} - -func (p EventCreateBackupFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCreateBackupFormat) String() string { - msg := fmt.Sprintf("Event %s (create backup) - clustername %s - backuptype %s", lvl.EventHeader, lvl.Clustername, lvl.BackupType) - return msg -} - -//---------------------------- -type EventCreateBackupCompletedFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - BackupType string `json:"backuptype"` - Path string `json:"path"` -} - -func (p EventCreateBackupCompletedFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCreateBackupCompletedFormat) String() string { - msg := fmt.Sprintf("Event %s (create backup completed) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventCreateLabelFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - Label string `json:"label"` -} - -func (p EventCreateLabelFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCreateLabelFormat) String() string { - msg := fmt.Sprintf("Event %s (create label) - clustername %s - label [%s]", lvl.EventHeader, lvl.Clustername, lvl.Label) - return msg -} - -//---------------------------- -type EventCreatePolicyFormat struct { - EventHeader `json:"eventheader"` - Policyname string `json:"policyname"` -} - -func (p EventCreatePolicyFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCreatePolicyFormat) String() string { - msg := fmt.Sprintf("Event %s (create policy) - policy [%s]", lvl.EventHeader, lvl.Policyname) - return msg -} - -//---------------------------- -type EventDeletePolicyFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - Policyname string `json:"policyname"` -} - -func (p EventDeletePolicyFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventDeletePolicyFormat) String() string { - msg := fmt.Sprintf("Event %s (delete policy) - clustername %s - policy [%s]", lvl.EventHeader, lvl.Clustername, lvl.Policyname) - return msg -} - 
-//---------------------------- -type EventApplyPolicyFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - Policyname string `json:"policyname"` -} - -func (p EventApplyPolicyFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventApplyPolicyFormat) String() string { - msg := fmt.Sprintf("Event %s (apply policy) - clustername %s - policy [%s]", lvl.EventHeader, lvl.Clustername, lvl.Policyname) - return msg -} - -//---------------------------- -type EventCreatePgAdminFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventCreatePgAdminFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCreatePgAdminFormat) String() string { - msg := fmt.Sprintf("Event %s (create pgAdmin) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventDeletePgAdminFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventDeletePgAdminFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventDeletePgAdminFormat) String() string { - msg := fmt.Sprintf("Event %s (delete pgAdmin) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventCreatePgbouncerFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventCreatePgbouncerFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventCreatePgbouncerFormat) String() string { - msg := fmt.Sprintf("Event %s (create pgbouncer) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventDeletePgbouncerFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventDeletePgbouncerFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventDeletePgbouncerFormat) String() string { - msg := fmt.Sprintf("Event %s (delete pgbouncer) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventUpdatePgbouncerFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventUpdatePgbouncerFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventUpdatePgbouncerFormat) String() string { - msg := fmt.Sprintf("Event %s (update pgbouncer) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventRestoreClusterFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventRestoreClusterFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventRestoreClusterFormat) String() string { - msg := fmt.Sprintf("Event %s (restore) - clustername %s ", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventRestoreClusterCompletedFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventRestoreClusterCompletedFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventRestoreClusterCompletedFormat) String() string { - msg := fmt.Sprintf("Event %s (restore completed) - clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventPrimaryNotReadyFormat struct { - EventHeader 
`json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventPrimaryNotReadyFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPrimaryNotReadyFormat) String() string { - msg := fmt.Sprintf("Event %s - (primary not ready) clustername %s", lvl.EventHeader, lvl.Clustername) - return msg -} - -//---------------------------- -type EventPrimaryDeletedFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` - Deploymentname string `json:"deploymentname"` -} - -func (p EventPrimaryDeletedFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPrimaryDeletedFormat) String() string { - msg := fmt.Sprintf("Event %s - (primary deleted) clustername %s deployment %s", lvl.EventHeader, lvl.Clustername, lvl.Deploymentname) - return msg -} - -//---------------------------- -type EventClusterShutdownFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventClusterShutdownFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventClusterShutdownFormat) String() string { - msg := fmt.Sprintf("Event %s - (cluster shutdown) clustername %s", lvl.EventHeader, - lvl.Clustername) - return msg -} - -//---------------------------- -type EventStandbyEnabledFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventStandbyEnabledFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventStandbyEnabledFormat) String() string { - msg := fmt.Sprintf("Event %s - (standby mode enabled) clustername %s", lvl.EventHeader, - lvl.Clustername) - return msg -} - -//---------------------------- -type EventStandbyDisabledFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventStandbyDisabledFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventStandbyDisabledFormat) String() string { - msg := fmt.Sprintf("Event %s - (standby mode disabled) clustername %s", lvl.EventHeader, - lvl.Clustername) - return msg -} - -//---------------------------- -type EventShutdownClusterFormat struct { - EventHeader `json:"eventheader"` - Clustername string `json:"clustername"` -} - -func (p EventShutdownClusterFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventShutdownClusterFormat) String() string { - msg := fmt.Sprintf("Event %s - (cluster shutdown) clustername %s", lvl.EventHeader, - lvl.Clustername) - return msg -} diff --git a/pkg/events/pgoeventtype.go b/pkg/events/pgoeventtype.go deleted file mode 100644 index 75c076b311..0000000000 --- a/pkg/events/pgoeventtype.go +++ /dev/null @@ -1,182 +0,0 @@ -package events - -/* - Copyright 2019 - 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "fmt" -) - -//-------- -type EventPGOCreateUserFormat struct { - EventHeader `json:"eventheader"` - CreatedUsername string `json:"createdusername"` -} - -func (p EventPGOCreateUserFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGOCreateUserFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo create user) %s - created by %s", lvl.EventHeader, lvl.CreatedUsername, lvl.EventHeader.Username) - return msg -} - -//-------- -type EventPGOUpdateUserFormat struct { - EventHeader `json:"eventheader"` - UpdatedUsername string `json:"updatedusername"` -} - -func (p EventPGOUpdateUserFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGOUpdateUserFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo update user) %s - updated by %s", lvl.EventHeader, lvl.UpdatedUsername, lvl.EventHeader.Username) - return msg -} - -//-------- -type EventPGODeleteUserFormat struct { - EventHeader `json:"eventheader"` - DeletedUsername string `json:"deletedusername"` -} - -func (p EventPGODeleteUserFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGODeleteUserFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo delete user) %s - deleted by %s", lvl.EventHeader, lvl.DeletedUsername, lvl.EventHeader.Username) - return msg -} - -//-------- -type EventPGOStartFormat struct { - EventHeader `json:"eventheader"` -} - -func (p EventPGOStartFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGOStartFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo start) ", lvl.EventHeader) - return msg -} - -//-------- -type EventPGOStopFormat struct { - EventHeader `json:"eventheader"` -} - -func (p EventPGOStopFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGOStopFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo stop) ", lvl.EventHeader) - return msg -} - -//-------- -type EventPGOUpdateConfigFormat struct { - EventHeader `json:"eventheader"` -} - -func (p EventPGOUpdateConfigFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGOUpdateConfigFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo update config) ", lvl.EventHeader) - return msg -} - -//-------- -type EventPGOCreateRoleFormat struct { - EventHeader `json:"eventheader"` - CreatedRolename string `json:"createdrolename"` -} - -func (p EventPGOCreateRoleFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGOCreateRoleFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo create role) %s - created by %s", lvl.EventHeader, lvl.CreatedRolename, lvl.EventHeader.Username) - return msg -} - -//-------- -type EventPGOUpdateRoleFormat struct { - EventHeader `json:"eventheader"` - UpdatedRolename string `json:"updatedrolename"` -} - -func (p EventPGOUpdateRoleFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGOUpdateRoleFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo update role) %s - updated by %s", lvl.EventHeader, lvl.UpdatedRolename, lvl.EventHeader.Username) - return msg -} - -//-------- -type EventPGODeleteRoleFormat struct { - EventHeader `json:"eventheader"` - DeletedRolename string `json:"deletedRolename"` -} - -func (p EventPGODeleteRoleFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGODeleteRoleFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo delete role) %s - deleted by %s", lvl.EventHeader, 
lvl.DeletedRolename, lvl.EventHeader.Username) - return msg -} - -//-------- -type EventPGOCreateNamespaceFormat struct { - EventHeader `json:"eventheader"` - CreatedNamespace string `json:"creatednamespace"` -} - -func (p EventPGOCreateNamespaceFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGOCreateNamespaceFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo create namespace) %s - created by %s", lvl.EventHeader, lvl.CreatedNamespace, lvl.EventHeader.Username) - return msg -} - -//-------- -type EventPGODeleteNamespaceFormat struct { - EventHeader `json:"eventheader"` - DeletedNamespace string `json:"deletednamespace"` -} - -func (p EventPGODeleteNamespaceFormat) GetHeader() EventHeader { - return p.EventHeader -} - -func (lvl EventPGODeleteNamespaceFormat) String() string { - msg := fmt.Sprintf("Event %s - (pgo delete namespace) %s - deleted by %s", lvl.EventHeader, lvl.DeletedNamespace, lvl.EventHeader.Username) - return msg -} diff --git a/pkg/generated/clientset/versioned/clientset.go b/pkg/generated/clientset/versioned/clientset.go deleted file mode 100644 index 7c5e89774a..0000000000 --- a/pkg/generated/clientset/versioned/clientset.go +++ /dev/null @@ -1,96 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package versioned - -import ( - "fmt" - - crunchydatav1 "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/typed/crunchydata.com/v1" - discovery "k8s.io/client-go/discovery" - rest "k8s.io/client-go/rest" - flowcontrol "k8s.io/client-go/util/flowcontrol" -) - -type Interface interface { - Discovery() discovery.DiscoveryInterface - CrunchydataV1() crunchydatav1.CrunchydataV1Interface -} - -// Clientset contains the clients for groups. Each group has exactly one -// version included in a Clientset. -type Clientset struct { - *discovery.DiscoveryClient - crunchydataV1 *crunchydatav1.CrunchydataV1Client -} - -// CrunchydataV1 retrieves the CrunchydataV1Client -func (c *Clientset) CrunchydataV1() crunchydatav1.CrunchydataV1Interface { - return c.crunchydataV1 -} - -// Discovery retrieves the DiscoveryClient -func (c *Clientset) Discovery() discovery.DiscoveryInterface { - if c == nil { - return nil - } - return c.DiscoveryClient -} - -// NewForConfig creates a new Clientset for the given config. -// If config's RateLimiter is not set and QPS and Burst are acceptable, -// NewForConfig will generate a rate-limiter in configShallowCopy. 
-func NewForConfig(c *rest.Config) (*Clientset, error) { - configShallowCopy := *c - if configShallowCopy.RateLimiter == nil && configShallowCopy.QPS > 0 { - if configShallowCopy.Burst <= 0 { - return nil, fmt.Errorf("Burst is required to be greater than 0 when RateLimiter is not set and QPS is set to greater than 0") - } - configShallowCopy.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(configShallowCopy.QPS, configShallowCopy.Burst) - } - var cs Clientset - var err error - cs.crunchydataV1, err = crunchydatav1.NewForConfig(&configShallowCopy) - if err != nil { - return nil, err - } - - cs.DiscoveryClient, err = discovery.NewDiscoveryClientForConfig(&configShallowCopy) - if err != nil { - return nil, err - } - return &cs, nil -} - -// NewForConfigOrDie creates a new Clientset for the given config and -// panics if there is an error in the config. -func NewForConfigOrDie(c *rest.Config) *Clientset { - var cs Clientset - cs.crunchydataV1 = crunchydatav1.NewForConfigOrDie(c) - - cs.DiscoveryClient = discovery.NewDiscoveryClientForConfigOrDie(c) - return &cs -} - -// New creates a new Clientset for the given RESTClient. -func New(c rest.Interface) *Clientset { - var cs Clientset - cs.crunchydataV1 = crunchydatav1.New(c) - - cs.DiscoveryClient = discovery.NewDiscoveryClient(c) - return &cs -} diff --git a/pkg/generated/clientset/versioned/doc.go b/pkg/generated/clientset/versioned/doc.go deleted file mode 100644 index e2534c0fe7..0000000000 --- a/pkg/generated/clientset/versioned/doc.go +++ /dev/null @@ -1,19 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -// This package has the automatically generated clientset. -package versioned diff --git a/pkg/generated/clientset/versioned/fake/clientset_generated.go b/pkg/generated/clientset/versioned/fake/clientset_generated.go deleted file mode 100644 index 384d0e7737..0000000000 --- a/pkg/generated/clientset/versioned/fake/clientset_generated.go +++ /dev/null @@ -1,81 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. 
- -package fake - -import ( - clientset "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - crunchydatav1 "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/typed/crunchydata.com/v1" - fakecrunchydatav1 "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake" - "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/watch" - "k8s.io/client-go/discovery" - fakediscovery "k8s.io/client-go/discovery/fake" - "k8s.io/client-go/testing" -) - -// NewSimpleClientset returns a clientset that will respond with the provided objects. -// It's backed by a very simple object tracker that processes creates, updates and deletions as-is, -// without applying any validations and/or defaults. It shouldn't be considered a replacement -// for a real clientset and is mostly useful in simple unit tests. -func NewSimpleClientset(objects ...runtime.Object) *Clientset { - o := testing.NewObjectTracker(scheme, codecs.UniversalDecoder()) - for _, obj := range objects { - if err := o.Add(obj); err != nil { - panic(err) - } - } - - cs := &Clientset{tracker: o} - cs.discovery = &fakediscovery.FakeDiscovery{Fake: &cs.Fake} - cs.AddReactor("*", "*", testing.ObjectReaction(o)) - cs.AddWatchReactor("*", func(action testing.Action) (handled bool, ret watch.Interface, err error) { - gvr := action.GetResource() - ns := action.GetNamespace() - watch, err := o.Watch(gvr, ns) - if err != nil { - return false, nil, err - } - return true, watch, nil - }) - - return cs -} - -// Clientset implements clientset.Interface. Meant to be embedded into a -// struct to get a default implementation. This makes faking out just the method -// you want to test easier. -type Clientset struct { - testing.Fake - discovery *fakediscovery.FakeDiscovery - tracker testing.ObjectTracker -} - -func (c *Clientset) Discovery() discovery.DiscoveryInterface { - return c.discovery -} - -func (c *Clientset) Tracker() testing.ObjectTracker { - return c.tracker -} - -var _ clientset.Interface = &Clientset{} - -// CrunchydataV1 retrieves the CrunchydataV1Client -func (c *Clientset) CrunchydataV1() crunchydatav1.CrunchydataV1Interface { - return &fakecrunchydatav1.FakeCrunchydataV1{Fake: &c.Fake} -} diff --git a/pkg/generated/clientset/versioned/fake/doc.go b/pkg/generated/clientset/versioned/fake/doc.go deleted file mode 100644 index 6318a06f3c..0000000000 --- a/pkg/generated/clientset/versioned/fake/doc.go +++ /dev/null @@ -1,19 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -// This package has the automatically generated fake clientset. -package fake diff --git a/pkg/generated/clientset/versioned/fake/register.go b/pkg/generated/clientset/versioned/fake/register.go deleted file mode 100644 index 26c69c1594..0000000000 --- a/pkg/generated/clientset/versioned/fake/register.go +++ /dev/null @@ -1,55 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. 
-Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package fake - -import ( - crunchydatav1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - runtime "k8s.io/apimachinery/pkg/runtime" - schema "k8s.io/apimachinery/pkg/runtime/schema" - serializer "k8s.io/apimachinery/pkg/runtime/serializer" - utilruntime "k8s.io/apimachinery/pkg/util/runtime" -) - -var scheme = runtime.NewScheme() -var codecs = serializer.NewCodecFactory(scheme) -var parameterCodec = runtime.NewParameterCodec(scheme) -var localSchemeBuilder = runtime.SchemeBuilder{ - crunchydatav1.AddToScheme, -} - -// AddToScheme adds all types of this clientset into the given scheme. This allows composition -// of clientsets, like in: -// -// import ( -// "k8s.io/client-go/kubernetes" -// clientsetscheme "k8s.io/client-go/kubernetes/scheme" -// aggregatorclientsetscheme "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/scheme" -// ) -// -// kclientset, _ := kubernetes.NewForConfig(c) -// _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme) -// -// After this, RawExtensions in Kubernetes types will serialize kube-aggregator types -// correctly. -var AddToScheme = localSchemeBuilder.AddToScheme - -func init() { - v1.AddToGroupVersion(scheme, schema.GroupVersion{Version: "v1"}) - utilruntime.Must(AddToScheme(scheme)) -} diff --git a/pkg/generated/clientset/versioned/scheme/doc.go b/pkg/generated/clientset/versioned/scheme/doc.go deleted file mode 100644 index 462fec5e30..0000000000 --- a/pkg/generated/clientset/versioned/scheme/doc.go +++ /dev/null @@ -1,19 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -// This package contains the scheme of the automatically generated clientset. -package scheme diff --git a/pkg/generated/clientset/versioned/scheme/register.go b/pkg/generated/clientset/versioned/scheme/register.go deleted file mode 100644 index 4850f74045..0000000000 --- a/pkg/generated/clientset/versioned/scheme/register.go +++ /dev/null @@ -1,55 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package scheme - -import ( - crunchydatav1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - runtime "k8s.io/apimachinery/pkg/runtime" - schema "k8s.io/apimachinery/pkg/runtime/schema" - serializer "k8s.io/apimachinery/pkg/runtime/serializer" - utilruntime "k8s.io/apimachinery/pkg/util/runtime" -) - -var Scheme = runtime.NewScheme() -var Codecs = serializer.NewCodecFactory(Scheme) -var ParameterCodec = runtime.NewParameterCodec(Scheme) -var localSchemeBuilder = runtime.SchemeBuilder{ - crunchydatav1.AddToScheme, -} - -// AddToScheme adds all types of this clientset into the given scheme. This allows composition -// of clientsets, like in: -// -// import ( -// "k8s.io/client-go/kubernetes" -// clientsetscheme "k8s.io/client-go/kubernetes/scheme" -// aggregatorclientsetscheme "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset/scheme" -// ) -// -// kclientset, _ := kubernetes.NewForConfig(c) -// _ = aggregatorclientsetscheme.AddToScheme(clientsetscheme.Scheme) -// -// After this, RawExtensions in Kubernetes types will serialize kube-aggregator types -// correctly. -var AddToScheme = localSchemeBuilder.AddToScheme - -func init() { - v1.AddToGroupVersion(Scheme, schema.GroupVersion{Version: "v1"}) - utilruntime.Must(AddToScheme(Scheme)) -} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/crunchydata.com_client.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/crunchydata.com_client.go deleted file mode 100644 index aac71b2aa3..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/crunchydata.com_client.go +++ /dev/null @@ -1,103 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package v1 - -import ( - v1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/scheme" - rest "k8s.io/client-go/rest" -) - -type CrunchydataV1Interface interface { - RESTClient() rest.Interface - PgclustersGetter - PgpoliciesGetter - PgreplicasGetter - PgtasksGetter -} - -// CrunchydataV1Client is used to interact with features provided by the crunchydata.com group. 
-type CrunchydataV1Client struct { - restClient rest.Interface -} - -func (c *CrunchydataV1Client) Pgclusters(namespace string) PgclusterInterface { - return newPgclusters(c, namespace) -} - -func (c *CrunchydataV1Client) Pgpolicies(namespace string) PgpolicyInterface { - return newPgpolicies(c, namespace) -} - -func (c *CrunchydataV1Client) Pgreplicas(namespace string) PgreplicaInterface { - return newPgreplicas(c, namespace) -} - -func (c *CrunchydataV1Client) Pgtasks(namespace string) PgtaskInterface { - return newPgtasks(c, namespace) -} - -// NewForConfig creates a new CrunchydataV1Client for the given config. -func NewForConfig(c *rest.Config) (*CrunchydataV1Client, error) { - config := *c - if err := setConfigDefaults(&config); err != nil { - return nil, err - } - client, err := rest.RESTClientFor(&config) - if err != nil { - return nil, err - } - return &CrunchydataV1Client{client}, nil -} - -// NewForConfigOrDie creates a new CrunchydataV1Client for the given config and -// panics if there is an error in the config. -func NewForConfigOrDie(c *rest.Config) *CrunchydataV1Client { - client, err := NewForConfig(c) - if err != nil { - panic(err) - } - return client -} - -// New creates a new CrunchydataV1Client for the given RESTClient. -func New(c rest.Interface) *CrunchydataV1Client { - return &CrunchydataV1Client{c} -} - -func setConfigDefaults(config *rest.Config) error { - gv := v1.SchemeGroupVersion - config.GroupVersion = &gv - config.APIPath = "/apis" - config.NegotiatedSerializer = scheme.Codecs.WithoutConversion() - - if config.UserAgent == "" { - config.UserAgent = rest.DefaultKubernetesUserAgent() - } - - return nil -} - -// RESTClient returns a RESTClient that is used to communicate -// with API server by this client implementation. -func (c *CrunchydataV1Client) RESTClient() rest.Interface { - if c == nil { - return nil - } - return c.restClient -} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/doc.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/doc.go deleted file mode 100644 index b7311c21af..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/doc.go +++ /dev/null @@ -1,19 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -// This package has the automatically generated typed clients. -package v1 diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/doc.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/doc.go deleted file mode 100644 index 759d8fff95..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/doc.go +++ /dev/null @@ -1,19 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -// Package fake has the automatically generated clients. -package fake diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_crunchydata.com_client.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_crunchydata.com_client.go deleted file mode 100644 index f8d6b6b350..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_crunchydata.com_client.go +++ /dev/null @@ -1,51 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package fake - -import ( - v1 "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/typed/crunchydata.com/v1" - rest "k8s.io/client-go/rest" - testing "k8s.io/client-go/testing" -) - -type FakeCrunchydataV1 struct { - *testing.Fake -} - -func (c *FakeCrunchydataV1) Pgclusters(namespace string) v1.PgclusterInterface { - return &FakePgclusters{c, namespace} -} - -func (c *FakeCrunchydataV1) Pgpolicies(namespace string) v1.PgpolicyInterface { - return &FakePgpolicies{c, namespace} -} - -func (c *FakeCrunchydataV1) Pgreplicas(namespace string) v1.PgreplicaInterface { - return &FakePgreplicas{c, namespace} -} - -func (c *FakeCrunchydataV1) Pgtasks(namespace string) v1.PgtaskInterface { - return &FakePgtasks{c, namespace} -} - -// RESTClient returns a RESTClient that is used to communicate -// with API server by this client implementation. -func (c *FakeCrunchydataV1) RESTClient() rest.Interface { - var ret *rest.RESTClient - return ret -} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgcluster.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgcluster.go deleted file mode 100644 index 516955d577..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgcluster.go +++ /dev/null @@ -1,139 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. 
- -package fake - -import ( - crunchydatacomv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - labels "k8s.io/apimachinery/pkg/labels" - schema "k8s.io/apimachinery/pkg/runtime/schema" - types "k8s.io/apimachinery/pkg/types" - watch "k8s.io/apimachinery/pkg/watch" - testing "k8s.io/client-go/testing" -) - -// FakePgclusters implements PgclusterInterface -type FakePgclusters struct { - Fake *FakeCrunchydataV1 - ns string -} - -var pgclustersResource = schema.GroupVersionResource{Group: "crunchydata.com", Version: "v1", Resource: "pgclusters"} - -var pgclustersKind = schema.GroupVersionKind{Group: "crunchydata.com", Version: "v1", Kind: "Pgcluster"} - -// Get takes name of the pgcluster, and returns the corresponding pgcluster object, and an error if there is any. -func (c *FakePgclusters) Get(name string, options v1.GetOptions) (result *crunchydatacomv1.Pgcluster, err error) { - obj, err := c.Fake. - Invokes(testing.NewGetAction(pgclustersResource, c.ns, name), &crunchydatacomv1.Pgcluster{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgcluster), err -} - -// List takes label and field selectors, and returns the list of Pgclusters that match those selectors. -func (c *FakePgclusters) List(opts v1.ListOptions) (result *crunchydatacomv1.PgclusterList, err error) { - obj, err := c.Fake. - Invokes(testing.NewListAction(pgclustersResource, pgclustersKind, c.ns, opts), &crunchydatacomv1.PgclusterList{}) - - if obj == nil { - return nil, err - } - - label, _, _ := testing.ExtractFromListOptions(opts) - if label == nil { - label = labels.Everything() - } - list := &crunchydatacomv1.PgclusterList{ListMeta: obj.(*crunchydatacomv1.PgclusterList).ListMeta} - for _, item := range obj.(*crunchydatacomv1.PgclusterList).Items { - if label.Matches(labels.Set(item.Labels)) { - list.Items = append(list.Items, item) - } - } - return list, err -} - -// Watch returns a watch.Interface that watches the requested pgclusters. -func (c *FakePgclusters) Watch(opts v1.ListOptions) (watch.Interface, error) { - return c.Fake. - InvokesWatch(testing.NewWatchAction(pgclustersResource, c.ns, opts)) - -} - -// Create takes the representation of a pgcluster and creates it. Returns the server's representation of the pgcluster, and an error, if there is any. -func (c *FakePgclusters) Create(pgcluster *crunchydatacomv1.Pgcluster) (result *crunchydatacomv1.Pgcluster, err error) { - obj, err := c.Fake. - Invokes(testing.NewCreateAction(pgclustersResource, c.ns, pgcluster), &crunchydatacomv1.Pgcluster{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgcluster), err -} - -// Update takes the representation of a pgcluster and updates it. Returns the server's representation of the pgcluster, and an error, if there is any. -func (c *FakePgclusters) Update(pgcluster *crunchydatacomv1.Pgcluster) (result *crunchydatacomv1.Pgcluster, err error) { - obj, err := c.Fake. - Invokes(testing.NewUpdateAction(pgclustersResource, c.ns, pgcluster), &crunchydatacomv1.Pgcluster{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgcluster), err -} - -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). -func (c *FakePgclusters) UpdateStatus(pgcluster *crunchydatacomv1.Pgcluster) (*crunchydatacomv1.Pgcluster, error) { - obj, err := c.Fake. 
- Invokes(testing.NewUpdateSubresourceAction(pgclustersResource, "status", c.ns, pgcluster), &crunchydatacomv1.Pgcluster{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgcluster), err -} - -// Delete takes name of the pgcluster and deletes it. Returns an error if one occurs. -func (c *FakePgclusters) Delete(name string, options *v1.DeleteOptions) error { - _, err := c.Fake. - Invokes(testing.NewDeleteAction(pgclustersResource, c.ns, name), &crunchydatacomv1.Pgcluster{}) - - return err -} - -// DeleteCollection deletes a collection of objects. -func (c *FakePgclusters) DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error { - action := testing.NewDeleteCollectionAction(pgclustersResource, c.ns, listOptions) - - _, err := c.Fake.Invokes(action, &crunchydatacomv1.PgclusterList{}) - return err -} - -// Patch applies the patch and returns the patched pgcluster. -func (c *FakePgclusters) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *crunchydatacomv1.Pgcluster, err error) { - obj, err := c.Fake. - Invokes(testing.NewPatchSubresourceAction(pgclustersResource, c.ns, name, pt, data, subresources...), &crunchydatacomv1.Pgcluster{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgcluster), err -} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgpolicy.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgpolicy.go deleted file mode 100644 index f44e8a4ebb..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgpolicy.go +++ /dev/null @@ -1,139 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package fake - -import ( - crunchydatacomv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - labels "k8s.io/apimachinery/pkg/labels" - schema "k8s.io/apimachinery/pkg/runtime/schema" - types "k8s.io/apimachinery/pkg/types" - watch "k8s.io/apimachinery/pkg/watch" - testing "k8s.io/client-go/testing" -) - -// FakePgpolicies implements PgpolicyInterface -type FakePgpolicies struct { - Fake *FakeCrunchydataV1 - ns string -} - -var pgpoliciesResource = schema.GroupVersionResource{Group: "crunchydata.com", Version: "v1", Resource: "pgpolicies"} - -var pgpoliciesKind = schema.GroupVersionKind{Group: "crunchydata.com", Version: "v1", Kind: "Pgpolicy"} - -// Get takes name of the pgpolicy, and returns the corresponding pgpolicy object, and an error if there is any. -func (c *FakePgpolicies) Get(name string, options v1.GetOptions) (result *crunchydatacomv1.Pgpolicy, err error) { - obj, err := c.Fake. 
- Invokes(testing.NewGetAction(pgpoliciesResource, c.ns, name), &crunchydatacomv1.Pgpolicy{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgpolicy), err -} - -// List takes label and field selectors, and returns the list of Pgpolicies that match those selectors. -func (c *FakePgpolicies) List(opts v1.ListOptions) (result *crunchydatacomv1.PgpolicyList, err error) { - obj, err := c.Fake. - Invokes(testing.NewListAction(pgpoliciesResource, pgpoliciesKind, c.ns, opts), &crunchydatacomv1.PgpolicyList{}) - - if obj == nil { - return nil, err - } - - label, _, _ := testing.ExtractFromListOptions(opts) - if label == nil { - label = labels.Everything() - } - list := &crunchydatacomv1.PgpolicyList{ListMeta: obj.(*crunchydatacomv1.PgpolicyList).ListMeta} - for _, item := range obj.(*crunchydatacomv1.PgpolicyList).Items { - if label.Matches(labels.Set(item.Labels)) { - list.Items = append(list.Items, item) - } - } - return list, err -} - -// Watch returns a watch.Interface that watches the requested pgpolicies. -func (c *FakePgpolicies) Watch(opts v1.ListOptions) (watch.Interface, error) { - return c.Fake. - InvokesWatch(testing.NewWatchAction(pgpoliciesResource, c.ns, opts)) - -} - -// Create takes the representation of a pgpolicy and creates it. Returns the server's representation of the pgpolicy, and an error, if there is any. -func (c *FakePgpolicies) Create(pgpolicy *crunchydatacomv1.Pgpolicy) (result *crunchydatacomv1.Pgpolicy, err error) { - obj, err := c.Fake. - Invokes(testing.NewCreateAction(pgpoliciesResource, c.ns, pgpolicy), &crunchydatacomv1.Pgpolicy{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgpolicy), err -} - -// Update takes the representation of a pgpolicy and updates it. Returns the server's representation of the pgpolicy, and an error, if there is any. -func (c *FakePgpolicies) Update(pgpolicy *crunchydatacomv1.Pgpolicy) (result *crunchydatacomv1.Pgpolicy, err error) { - obj, err := c.Fake. - Invokes(testing.NewUpdateAction(pgpoliciesResource, c.ns, pgpolicy), &crunchydatacomv1.Pgpolicy{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgpolicy), err -} - -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). -func (c *FakePgpolicies) UpdateStatus(pgpolicy *crunchydatacomv1.Pgpolicy) (*crunchydatacomv1.Pgpolicy, error) { - obj, err := c.Fake. - Invokes(testing.NewUpdateSubresourceAction(pgpoliciesResource, "status", c.ns, pgpolicy), &crunchydatacomv1.Pgpolicy{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgpolicy), err -} - -// Delete takes name of the pgpolicy and deletes it. Returns an error if one occurs. -func (c *FakePgpolicies) Delete(name string, options *v1.DeleteOptions) error { - _, err := c.Fake. - Invokes(testing.NewDeleteAction(pgpoliciesResource, c.ns, name), &crunchydatacomv1.Pgpolicy{}) - - return err -} - -// DeleteCollection deletes a collection of objects. -func (c *FakePgpolicies) DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error { - action := testing.NewDeleteCollectionAction(pgpoliciesResource, c.ns, listOptions) - - _, err := c.Fake.Invokes(action, &crunchydatacomv1.PgpolicyList{}) - return err -} - -// Patch applies the patch and returns the patched pgpolicy. 
-func (c *FakePgpolicies) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *crunchydatacomv1.Pgpolicy, err error) { - obj, err := c.Fake. - Invokes(testing.NewPatchSubresourceAction(pgpoliciesResource, c.ns, name, pt, data, subresources...), &crunchydatacomv1.Pgpolicy{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgpolicy), err -} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgreplica.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgreplica.go deleted file mode 100644 index d6dc4fbd40..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgreplica.go +++ /dev/null @@ -1,139 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package fake - -import ( - crunchydatacomv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - labels "k8s.io/apimachinery/pkg/labels" - schema "k8s.io/apimachinery/pkg/runtime/schema" - types "k8s.io/apimachinery/pkg/types" - watch "k8s.io/apimachinery/pkg/watch" - testing "k8s.io/client-go/testing" -) - -// FakePgreplicas implements PgreplicaInterface -type FakePgreplicas struct { - Fake *FakeCrunchydataV1 - ns string -} - -var pgreplicasResource = schema.GroupVersionResource{Group: "crunchydata.com", Version: "v1", Resource: "pgreplicas"} - -var pgreplicasKind = schema.GroupVersionKind{Group: "crunchydata.com", Version: "v1", Kind: "Pgreplica"} - -// Get takes name of the pgreplica, and returns the corresponding pgreplica object, and an error if there is any. -func (c *FakePgreplicas) Get(name string, options v1.GetOptions) (result *crunchydatacomv1.Pgreplica, err error) { - obj, err := c.Fake. - Invokes(testing.NewGetAction(pgreplicasResource, c.ns, name), &crunchydatacomv1.Pgreplica{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgreplica), err -} - -// List takes label and field selectors, and returns the list of Pgreplicas that match those selectors. -func (c *FakePgreplicas) List(opts v1.ListOptions) (result *crunchydatacomv1.PgreplicaList, err error) { - obj, err := c.Fake. - Invokes(testing.NewListAction(pgreplicasResource, pgreplicasKind, c.ns, opts), &crunchydatacomv1.PgreplicaList{}) - - if obj == nil { - return nil, err - } - - label, _, _ := testing.ExtractFromListOptions(opts) - if label == nil { - label = labels.Everything() - } - list := &crunchydatacomv1.PgreplicaList{ListMeta: obj.(*crunchydatacomv1.PgreplicaList).ListMeta} - for _, item := range obj.(*crunchydatacomv1.PgreplicaList).Items { - if label.Matches(labels.Set(item.Labels)) { - list.Items = append(list.Items, item) - } - } - return list, err -} - -// Watch returns a watch.Interface that watches the requested pgreplicas. -func (c *FakePgreplicas) Watch(opts v1.ListOptions) (watch.Interface, error) { - return c.Fake. 
- InvokesWatch(testing.NewWatchAction(pgreplicasResource, c.ns, opts)) - -} - -// Create takes the representation of a pgreplica and creates it. Returns the server's representation of the pgreplica, and an error, if there is any. -func (c *FakePgreplicas) Create(pgreplica *crunchydatacomv1.Pgreplica) (result *crunchydatacomv1.Pgreplica, err error) { - obj, err := c.Fake. - Invokes(testing.NewCreateAction(pgreplicasResource, c.ns, pgreplica), &crunchydatacomv1.Pgreplica{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgreplica), err -} - -// Update takes the representation of a pgreplica and updates it. Returns the server's representation of the pgreplica, and an error, if there is any. -func (c *FakePgreplicas) Update(pgreplica *crunchydatacomv1.Pgreplica) (result *crunchydatacomv1.Pgreplica, err error) { - obj, err := c.Fake. - Invokes(testing.NewUpdateAction(pgreplicasResource, c.ns, pgreplica), &crunchydatacomv1.Pgreplica{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgreplica), err -} - -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). -func (c *FakePgreplicas) UpdateStatus(pgreplica *crunchydatacomv1.Pgreplica) (*crunchydatacomv1.Pgreplica, error) { - obj, err := c.Fake. - Invokes(testing.NewUpdateSubresourceAction(pgreplicasResource, "status", c.ns, pgreplica), &crunchydatacomv1.Pgreplica{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgreplica), err -} - -// Delete takes name of the pgreplica and deletes it. Returns an error if one occurs. -func (c *FakePgreplicas) Delete(name string, options *v1.DeleteOptions) error { - _, err := c.Fake. - Invokes(testing.NewDeleteAction(pgreplicasResource, c.ns, name), &crunchydatacomv1.Pgreplica{}) - - return err -} - -// DeleteCollection deletes a collection of objects. -func (c *FakePgreplicas) DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error { - action := testing.NewDeleteCollectionAction(pgreplicasResource, c.ns, listOptions) - - _, err := c.Fake.Invokes(action, &crunchydatacomv1.PgreplicaList{}) - return err -} - -// Patch applies the patch and returns the patched pgreplica. -func (c *FakePgreplicas) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *crunchydatacomv1.Pgreplica, err error) { - obj, err := c.Fake. - Invokes(testing.NewPatchSubresourceAction(pgreplicasResource, c.ns, name, pt, data, subresources...), &crunchydatacomv1.Pgreplica{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgreplica), err -} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgtask.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgtask.go deleted file mode 100644 index 2db70f152f..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/fake/fake_pgtask.go +++ /dev/null @@ -1,139 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package fake - -import ( - crunchydatacomv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - labels "k8s.io/apimachinery/pkg/labels" - schema "k8s.io/apimachinery/pkg/runtime/schema" - types "k8s.io/apimachinery/pkg/types" - watch "k8s.io/apimachinery/pkg/watch" - testing "k8s.io/client-go/testing" -) - -// FakePgtasks implements PgtaskInterface -type FakePgtasks struct { - Fake *FakeCrunchydataV1 - ns string -} - -var pgtasksResource = schema.GroupVersionResource{Group: "crunchydata.com", Version: "v1", Resource: "pgtasks"} - -var pgtasksKind = schema.GroupVersionKind{Group: "crunchydata.com", Version: "v1", Kind: "Pgtask"} - -// Get takes name of the pgtask, and returns the corresponding pgtask object, and an error if there is any. -func (c *FakePgtasks) Get(name string, options v1.GetOptions) (result *crunchydatacomv1.Pgtask, err error) { - obj, err := c.Fake. - Invokes(testing.NewGetAction(pgtasksResource, c.ns, name), &crunchydatacomv1.Pgtask{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgtask), err -} - -// List takes label and field selectors, and returns the list of Pgtasks that match those selectors. -func (c *FakePgtasks) List(opts v1.ListOptions) (result *crunchydatacomv1.PgtaskList, err error) { - obj, err := c.Fake. - Invokes(testing.NewListAction(pgtasksResource, pgtasksKind, c.ns, opts), &crunchydatacomv1.PgtaskList{}) - - if obj == nil { - return nil, err - } - - label, _, _ := testing.ExtractFromListOptions(opts) - if label == nil { - label = labels.Everything() - } - list := &crunchydatacomv1.PgtaskList{ListMeta: obj.(*crunchydatacomv1.PgtaskList).ListMeta} - for _, item := range obj.(*crunchydatacomv1.PgtaskList).Items { - if label.Matches(labels.Set(item.Labels)) { - list.Items = append(list.Items, item) - } - } - return list, err -} - -// Watch returns a watch.Interface that watches the requested pgtasks. -func (c *FakePgtasks) Watch(opts v1.ListOptions) (watch.Interface, error) { - return c.Fake. - InvokesWatch(testing.NewWatchAction(pgtasksResource, c.ns, opts)) - -} - -// Create takes the representation of a pgtask and creates it. Returns the server's representation of the pgtask, and an error, if there is any. -func (c *FakePgtasks) Create(pgtask *crunchydatacomv1.Pgtask) (result *crunchydatacomv1.Pgtask, err error) { - obj, err := c.Fake. - Invokes(testing.NewCreateAction(pgtasksResource, c.ns, pgtask), &crunchydatacomv1.Pgtask{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgtask), err -} - -// Update takes the representation of a pgtask and updates it. Returns the server's representation of the pgtask, and an error, if there is any. -func (c *FakePgtasks) Update(pgtask *crunchydatacomv1.Pgtask) (result *crunchydatacomv1.Pgtask, err error) { - obj, err := c.Fake. - Invokes(testing.NewUpdateAction(pgtasksResource, c.ns, pgtask), &crunchydatacomv1.Pgtask{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgtask), err -} - -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). -func (c *FakePgtasks) UpdateStatus(pgtask *crunchydatacomv1.Pgtask) (*crunchydatacomv1.Pgtask, error) { - obj, err := c.Fake. 
- Invokes(testing.NewUpdateSubresourceAction(pgtasksResource, "status", c.ns, pgtask), &crunchydatacomv1.Pgtask{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgtask), err -} - -// Delete takes name of the pgtask and deletes it. Returns an error if one occurs. -func (c *FakePgtasks) Delete(name string, options *v1.DeleteOptions) error { - _, err := c.Fake. - Invokes(testing.NewDeleteAction(pgtasksResource, c.ns, name), &crunchydatacomv1.Pgtask{}) - - return err -} - -// DeleteCollection deletes a collection of objects. -func (c *FakePgtasks) DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error { - action := testing.NewDeleteCollectionAction(pgtasksResource, c.ns, listOptions) - - _, err := c.Fake.Invokes(action, &crunchydatacomv1.PgtaskList{}) - return err -} - -// Patch applies the patch and returns the patched pgtask. -func (c *FakePgtasks) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *crunchydatacomv1.Pgtask, err error) { - obj, err := c.Fake. - Invokes(testing.NewPatchSubresourceAction(pgtasksResource, c.ns, name, pt, data, subresources...), &crunchydatacomv1.Pgtask{}) - - if obj == nil { - return nil, err - } - return obj.(*crunchydatacomv1.Pgtask), err -} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/generated_expansion.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/generated_expansion.go deleted file mode 100644 index 066f811e51..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/generated_expansion.go +++ /dev/null @@ -1,26 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package v1 - -type PgclusterExpansion interface{} - -type PgpolicyExpansion interface{} - -type PgreplicaExpansion interface{} - -type PgtaskExpansion interface{} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgcluster.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgcluster.go deleted file mode 100644 index 035712a6ef..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgcluster.go +++ /dev/null @@ -1,190 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. 
- -package v1 - -import ( - "time" - - v1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - scheme "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/scheme" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - types "k8s.io/apimachinery/pkg/types" - watch "k8s.io/apimachinery/pkg/watch" - rest "k8s.io/client-go/rest" -) - -// PgclustersGetter has a method to return a PgclusterInterface. -// A group's client should implement this interface. -type PgclustersGetter interface { - Pgclusters(namespace string) PgclusterInterface -} - -// PgclusterInterface has methods to work with Pgcluster resources. -type PgclusterInterface interface { - Create(*v1.Pgcluster) (*v1.Pgcluster, error) - Update(*v1.Pgcluster) (*v1.Pgcluster, error) - UpdateStatus(*v1.Pgcluster) (*v1.Pgcluster, error) - Delete(name string, options *metav1.DeleteOptions) error - DeleteCollection(options *metav1.DeleteOptions, listOptions metav1.ListOptions) error - Get(name string, options metav1.GetOptions) (*v1.Pgcluster, error) - List(opts metav1.ListOptions) (*v1.PgclusterList, error) - Watch(opts metav1.ListOptions) (watch.Interface, error) - Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Pgcluster, err error) - PgclusterExpansion -} - -// pgclusters implements PgclusterInterface -type pgclusters struct { - client rest.Interface - ns string -} - -// newPgclusters returns a Pgclusters -func newPgclusters(c *CrunchydataV1Client, namespace string) *pgclusters { - return &pgclusters{ - client: c.RESTClient(), - ns: namespace, - } -} - -// Get takes name of the pgcluster, and returns the corresponding pgcluster object, and an error if there is any. -func (c *pgclusters) Get(name string, options metav1.GetOptions) (result *v1.Pgcluster, err error) { - result = &v1.Pgcluster{} - err = c.client.Get(). - Namespace(c.ns). - Resource("pgclusters"). - Name(name). - VersionedParams(&options, scheme.ParameterCodec). - Do(). - Into(result) - return -} - -// List takes label and field selectors, and returns the list of Pgclusters that match those selectors. -func (c *pgclusters) List(opts metav1.ListOptions) (result *v1.PgclusterList, err error) { - var timeout time.Duration - if opts.TimeoutSeconds != nil { - timeout = time.Duration(*opts.TimeoutSeconds) * time.Second - } - result = &v1.PgclusterList{} - err = c.client.Get(). - Namespace(c.ns). - Resource("pgclusters"). - VersionedParams(&opts, scheme.ParameterCodec). - Timeout(timeout). - Do(). - Into(result) - return -} - -// Watch returns a watch.Interface that watches the requested pgclusters. -func (c *pgclusters) Watch(opts metav1.ListOptions) (watch.Interface, error) { - var timeout time.Duration - if opts.TimeoutSeconds != nil { - timeout = time.Duration(*opts.TimeoutSeconds) * time.Second - } - opts.Watch = true - return c.client.Get(). - Namespace(c.ns). - Resource("pgclusters"). - VersionedParams(&opts, scheme.ParameterCodec). - Timeout(timeout). - Watch() -} - -// Create takes the representation of a pgcluster and creates it. Returns the server's representation of the pgcluster, and an error, if there is any. -func (c *pgclusters) Create(pgcluster *v1.Pgcluster) (result *v1.Pgcluster, err error) { - result = &v1.Pgcluster{} - err = c.client.Post(). - Namespace(c.ns). - Resource("pgclusters"). - Body(pgcluster). - Do(). - Into(result) - return -} - -// Update takes the representation of a pgcluster and updates it. 
Returns the server's representation of the pgcluster, and an error, if there is any. -func (c *pgclusters) Update(pgcluster *v1.Pgcluster) (result *v1.Pgcluster, err error) { - result = &v1.Pgcluster{} - err = c.client.Put(). - Namespace(c.ns). - Resource("pgclusters"). - Name(pgcluster.Name). - Body(pgcluster). - Do(). - Into(result) - return -} - -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). - -func (c *pgclusters) UpdateStatus(pgcluster *v1.Pgcluster) (result *v1.Pgcluster, err error) { - result = &v1.Pgcluster{} - err = c.client.Put(). - Namespace(c.ns). - Resource("pgclusters"). - Name(pgcluster.Name). - SubResource("status"). - Body(pgcluster). - Do(). - Into(result) - return -} - -// Delete takes name of the pgcluster and deletes it. Returns an error if one occurs. -func (c *pgclusters) Delete(name string, options *metav1.DeleteOptions) error { - return c.client.Delete(). - Namespace(c.ns). - Resource("pgclusters"). - Name(name). - Body(options). - Do(). - Error() -} - -// DeleteCollection deletes a collection of objects. -func (c *pgclusters) DeleteCollection(options *metav1.DeleteOptions, listOptions metav1.ListOptions) error { - var timeout time.Duration - if listOptions.TimeoutSeconds != nil { - timeout = time.Duration(*listOptions.TimeoutSeconds) * time.Second - } - return c.client.Delete(). - Namespace(c.ns). - Resource("pgclusters"). - VersionedParams(&listOptions, scheme.ParameterCodec). - Timeout(timeout). - Body(options). - Do(). - Error() -} - -// Patch applies the patch and returns the patched pgcluster. -func (c *pgclusters) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Pgcluster, err error) { - result = &v1.Pgcluster{} - err = c.client.Patch(pt). - Namespace(c.ns). - Resource("pgclusters"). - SubResource(subresources...). - Name(name). - Body(data). - Do(). - Into(result) - return -} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgpolicy.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgpolicy.go deleted file mode 100644 index 402b99f523..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgpolicy.go +++ /dev/null @@ -1,190 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package v1 - -import ( - "time" - - v1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - scheme "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/scheme" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - types "k8s.io/apimachinery/pkg/types" - watch "k8s.io/apimachinery/pkg/watch" - rest "k8s.io/client-go/rest" -) - -// PgpoliciesGetter has a method to return a PgpolicyInterface. -// A group's client should implement this interface. 
-type PgpoliciesGetter interface { - Pgpolicies(namespace string) PgpolicyInterface -} - -// PgpolicyInterface has methods to work with Pgpolicy resources. -type PgpolicyInterface interface { - Create(*v1.Pgpolicy) (*v1.Pgpolicy, error) - Update(*v1.Pgpolicy) (*v1.Pgpolicy, error) - UpdateStatus(*v1.Pgpolicy) (*v1.Pgpolicy, error) - Delete(name string, options *metav1.DeleteOptions) error - DeleteCollection(options *metav1.DeleteOptions, listOptions metav1.ListOptions) error - Get(name string, options metav1.GetOptions) (*v1.Pgpolicy, error) - List(opts metav1.ListOptions) (*v1.PgpolicyList, error) - Watch(opts metav1.ListOptions) (watch.Interface, error) - Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Pgpolicy, err error) - PgpolicyExpansion -} - -// pgpolicies implements PgpolicyInterface -type pgpolicies struct { - client rest.Interface - ns string -} - -// newPgpolicies returns a Pgpolicies -func newPgpolicies(c *CrunchydataV1Client, namespace string) *pgpolicies { - return &pgpolicies{ - client: c.RESTClient(), - ns: namespace, - } -} - -// Get takes name of the pgpolicy, and returns the corresponding pgpolicy object, and an error if there is any. -func (c *pgpolicies) Get(name string, options metav1.GetOptions) (result *v1.Pgpolicy, err error) { - result = &v1.Pgpolicy{} - err = c.client.Get(). - Namespace(c.ns). - Resource("pgpolicies"). - Name(name). - VersionedParams(&options, scheme.ParameterCodec). - Do(). - Into(result) - return -} - -// List takes label and field selectors, and returns the list of Pgpolicies that match those selectors. -func (c *pgpolicies) List(opts metav1.ListOptions) (result *v1.PgpolicyList, err error) { - var timeout time.Duration - if opts.TimeoutSeconds != nil { - timeout = time.Duration(*opts.TimeoutSeconds) * time.Second - } - result = &v1.PgpolicyList{} - err = c.client.Get(). - Namespace(c.ns). - Resource("pgpolicies"). - VersionedParams(&opts, scheme.ParameterCodec). - Timeout(timeout). - Do(). - Into(result) - return -} - -// Watch returns a watch.Interface that watches the requested pgpolicies. -func (c *pgpolicies) Watch(opts metav1.ListOptions) (watch.Interface, error) { - var timeout time.Duration - if opts.TimeoutSeconds != nil { - timeout = time.Duration(*opts.TimeoutSeconds) * time.Second - } - opts.Watch = true - return c.client.Get(). - Namespace(c.ns). - Resource("pgpolicies"). - VersionedParams(&opts, scheme.ParameterCodec). - Timeout(timeout). - Watch() -} - -// Create takes the representation of a pgpolicy and creates it. Returns the server's representation of the pgpolicy, and an error, if there is any. -func (c *pgpolicies) Create(pgpolicy *v1.Pgpolicy) (result *v1.Pgpolicy, err error) { - result = &v1.Pgpolicy{} - err = c.client.Post(). - Namespace(c.ns). - Resource("pgpolicies"). - Body(pgpolicy). - Do(). - Into(result) - return -} - -// Update takes the representation of a pgpolicy and updates it. Returns the server's representation of the pgpolicy, and an error, if there is any. -func (c *pgpolicies) Update(pgpolicy *v1.Pgpolicy) (result *v1.Pgpolicy, err error) { - result = &v1.Pgpolicy{} - err = c.client.Put(). - Namespace(c.ns). - Resource("pgpolicies"). - Name(pgpolicy.Name). - Body(pgpolicy). - Do(). - Into(result) - return -} - -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). 
- -func (c *pgpolicies) UpdateStatus(pgpolicy *v1.Pgpolicy) (result *v1.Pgpolicy, err error) { - result = &v1.Pgpolicy{} - err = c.client.Put(). - Namespace(c.ns). - Resource("pgpolicies"). - Name(pgpolicy.Name). - SubResource("status"). - Body(pgpolicy). - Do(). - Into(result) - return -} - -// Delete takes name of the pgpolicy and deletes it. Returns an error if one occurs. -func (c *pgpolicies) Delete(name string, options *metav1.DeleteOptions) error { - return c.client.Delete(). - Namespace(c.ns). - Resource("pgpolicies"). - Name(name). - Body(options). - Do(). - Error() -} - -// DeleteCollection deletes a collection of objects. -func (c *pgpolicies) DeleteCollection(options *metav1.DeleteOptions, listOptions metav1.ListOptions) error { - var timeout time.Duration - if listOptions.TimeoutSeconds != nil { - timeout = time.Duration(*listOptions.TimeoutSeconds) * time.Second - } - return c.client.Delete(). - Namespace(c.ns). - Resource("pgpolicies"). - VersionedParams(&listOptions, scheme.ParameterCodec). - Timeout(timeout). - Body(options). - Do(). - Error() -} - -// Patch applies the patch and returns the patched pgpolicy. -func (c *pgpolicies) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Pgpolicy, err error) { - result = &v1.Pgpolicy{} - err = c.client.Patch(pt). - Namespace(c.ns). - Resource("pgpolicies"). - SubResource(subresources...). - Name(name). - Body(data). - Do(). - Into(result) - return -} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgreplica.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgreplica.go deleted file mode 100644 index 88fb060a69..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgreplica.go +++ /dev/null @@ -1,190 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package v1 - -import ( - "time" - - v1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - scheme "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/scheme" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - types "k8s.io/apimachinery/pkg/types" - watch "k8s.io/apimachinery/pkg/watch" - rest "k8s.io/client-go/rest" -) - -// PgreplicasGetter has a method to return a PgreplicaInterface. -// A group's client should implement this interface. -type PgreplicasGetter interface { - Pgreplicas(namespace string) PgreplicaInterface -} - -// PgreplicaInterface has methods to work with Pgreplica resources. 
-type PgreplicaInterface interface { - Create(*v1.Pgreplica) (*v1.Pgreplica, error) - Update(*v1.Pgreplica) (*v1.Pgreplica, error) - UpdateStatus(*v1.Pgreplica) (*v1.Pgreplica, error) - Delete(name string, options *metav1.DeleteOptions) error - DeleteCollection(options *metav1.DeleteOptions, listOptions metav1.ListOptions) error - Get(name string, options metav1.GetOptions) (*v1.Pgreplica, error) - List(opts metav1.ListOptions) (*v1.PgreplicaList, error) - Watch(opts metav1.ListOptions) (watch.Interface, error) - Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Pgreplica, err error) - PgreplicaExpansion -} - -// pgreplicas implements PgreplicaInterface -type pgreplicas struct { - client rest.Interface - ns string -} - -// newPgreplicas returns a Pgreplicas -func newPgreplicas(c *CrunchydataV1Client, namespace string) *pgreplicas { - return &pgreplicas{ - client: c.RESTClient(), - ns: namespace, - } -} - -// Get takes name of the pgreplica, and returns the corresponding pgreplica object, and an error if there is any. -func (c *pgreplicas) Get(name string, options metav1.GetOptions) (result *v1.Pgreplica, err error) { - result = &v1.Pgreplica{} - err = c.client.Get(). - Namespace(c.ns). - Resource("pgreplicas"). - Name(name). - VersionedParams(&options, scheme.ParameterCodec). - Do(). - Into(result) - return -} - -// List takes label and field selectors, and returns the list of Pgreplicas that match those selectors. -func (c *pgreplicas) List(opts metav1.ListOptions) (result *v1.PgreplicaList, err error) { - var timeout time.Duration - if opts.TimeoutSeconds != nil { - timeout = time.Duration(*opts.TimeoutSeconds) * time.Second - } - result = &v1.PgreplicaList{} - err = c.client.Get(). - Namespace(c.ns). - Resource("pgreplicas"). - VersionedParams(&opts, scheme.ParameterCodec). - Timeout(timeout). - Do(). - Into(result) - return -} - -// Watch returns a watch.Interface that watches the requested pgreplicas. -func (c *pgreplicas) Watch(opts metav1.ListOptions) (watch.Interface, error) { - var timeout time.Duration - if opts.TimeoutSeconds != nil { - timeout = time.Duration(*opts.TimeoutSeconds) * time.Second - } - opts.Watch = true - return c.client.Get(). - Namespace(c.ns). - Resource("pgreplicas"). - VersionedParams(&opts, scheme.ParameterCodec). - Timeout(timeout). - Watch() -} - -// Create takes the representation of a pgreplica and creates it. Returns the server's representation of the pgreplica, and an error, if there is any. -func (c *pgreplicas) Create(pgreplica *v1.Pgreplica) (result *v1.Pgreplica, err error) { - result = &v1.Pgreplica{} - err = c.client.Post(). - Namespace(c.ns). - Resource("pgreplicas"). - Body(pgreplica). - Do(). - Into(result) - return -} - -// Update takes the representation of a pgreplica and updates it. Returns the server's representation of the pgreplica, and an error, if there is any. -func (c *pgreplicas) Update(pgreplica *v1.Pgreplica) (result *v1.Pgreplica, err error) { - result = &v1.Pgreplica{} - err = c.client.Put(). - Namespace(c.ns). - Resource("pgreplicas"). - Name(pgreplica.Name). - Body(pgreplica). - Do(). - Into(result) - return -} - -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). - -func (c *pgreplicas) UpdateStatus(pgreplica *v1.Pgreplica) (result *v1.Pgreplica, err error) { - result = &v1.Pgreplica{} - err = c.client.Put(). - Namespace(c.ns). - Resource("pgreplicas"). - Name(pgreplica.Name). 
- SubResource("status"). - Body(pgreplica). - Do(). - Into(result) - return -} - -// Delete takes name of the pgreplica and deletes it. Returns an error if one occurs. -func (c *pgreplicas) Delete(name string, options *metav1.DeleteOptions) error { - return c.client.Delete(). - Namespace(c.ns). - Resource("pgreplicas"). - Name(name). - Body(options). - Do(). - Error() -} - -// DeleteCollection deletes a collection of objects. -func (c *pgreplicas) DeleteCollection(options *metav1.DeleteOptions, listOptions metav1.ListOptions) error { - var timeout time.Duration - if listOptions.TimeoutSeconds != nil { - timeout = time.Duration(*listOptions.TimeoutSeconds) * time.Second - } - return c.client.Delete(). - Namespace(c.ns). - Resource("pgreplicas"). - VersionedParams(&listOptions, scheme.ParameterCodec). - Timeout(timeout). - Body(options). - Do(). - Error() -} - -// Patch applies the patch and returns the patched pgreplica. -func (c *pgreplicas) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Pgreplica, err error) { - result = &v1.Pgreplica{} - err = c.client.Patch(pt). - Namespace(c.ns). - Resource("pgreplicas"). - SubResource(subresources...). - Name(name). - Body(data). - Do(). - Into(result) - return -} diff --git a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgtask.go b/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgtask.go deleted file mode 100644 index 25b2cd1055..0000000000 --- a/pkg/generated/clientset/versioned/typed/crunchydata.com/v1/pgtask.go +++ /dev/null @@ -1,190 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by client-gen. DO NOT EDIT. - -package v1 - -import ( - "time" - - v1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - scheme "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned/scheme" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - types "k8s.io/apimachinery/pkg/types" - watch "k8s.io/apimachinery/pkg/watch" - rest "k8s.io/client-go/rest" -) - -// PgtasksGetter has a method to return a PgtaskInterface. -// A group's client should implement this interface. -type PgtasksGetter interface { - Pgtasks(namespace string) PgtaskInterface -} - -// PgtaskInterface has methods to work with Pgtask resources. 
-type PgtaskInterface interface { - Create(*v1.Pgtask) (*v1.Pgtask, error) - Update(*v1.Pgtask) (*v1.Pgtask, error) - UpdateStatus(*v1.Pgtask) (*v1.Pgtask, error) - Delete(name string, options *metav1.DeleteOptions) error - DeleteCollection(options *metav1.DeleteOptions, listOptions metav1.ListOptions) error - Get(name string, options metav1.GetOptions) (*v1.Pgtask, error) - List(opts metav1.ListOptions) (*v1.PgtaskList, error) - Watch(opts metav1.ListOptions) (watch.Interface, error) - Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Pgtask, err error) - PgtaskExpansion -} - -// pgtasks implements PgtaskInterface -type pgtasks struct { - client rest.Interface - ns string -} - -// newPgtasks returns a Pgtasks -func newPgtasks(c *CrunchydataV1Client, namespace string) *pgtasks { - return &pgtasks{ - client: c.RESTClient(), - ns: namespace, - } -} - -// Get takes name of the pgtask, and returns the corresponding pgtask object, and an error if there is any. -func (c *pgtasks) Get(name string, options metav1.GetOptions) (result *v1.Pgtask, err error) { - result = &v1.Pgtask{} - err = c.client.Get(). - Namespace(c.ns). - Resource("pgtasks"). - Name(name). - VersionedParams(&options, scheme.ParameterCodec). - Do(). - Into(result) - return -} - -// List takes label and field selectors, and returns the list of Pgtasks that match those selectors. -func (c *pgtasks) List(opts metav1.ListOptions) (result *v1.PgtaskList, err error) { - var timeout time.Duration - if opts.TimeoutSeconds != nil { - timeout = time.Duration(*opts.TimeoutSeconds) * time.Second - } - result = &v1.PgtaskList{} - err = c.client.Get(). - Namespace(c.ns). - Resource("pgtasks"). - VersionedParams(&opts, scheme.ParameterCodec). - Timeout(timeout). - Do(). - Into(result) - return -} - -// Watch returns a watch.Interface that watches the requested pgtasks. -func (c *pgtasks) Watch(opts metav1.ListOptions) (watch.Interface, error) { - var timeout time.Duration - if opts.TimeoutSeconds != nil { - timeout = time.Duration(*opts.TimeoutSeconds) * time.Second - } - opts.Watch = true - return c.client.Get(). - Namespace(c.ns). - Resource("pgtasks"). - VersionedParams(&opts, scheme.ParameterCodec). - Timeout(timeout). - Watch() -} - -// Create takes the representation of a pgtask and creates it. Returns the server's representation of the pgtask, and an error, if there is any. -func (c *pgtasks) Create(pgtask *v1.Pgtask) (result *v1.Pgtask, err error) { - result = &v1.Pgtask{} - err = c.client.Post(). - Namespace(c.ns). - Resource("pgtasks"). - Body(pgtask). - Do(). - Into(result) - return -} - -// Update takes the representation of a pgtask and updates it. Returns the server's representation of the pgtask, and an error, if there is any. -func (c *pgtasks) Update(pgtask *v1.Pgtask) (result *v1.Pgtask, err error) { - result = &v1.Pgtask{} - err = c.client.Put(). - Namespace(c.ns). - Resource("pgtasks"). - Name(pgtask.Name). - Body(pgtask). - Do(). - Into(result) - return -} - -// UpdateStatus was generated because the type contains a Status member. -// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus(). - -func (c *pgtasks) UpdateStatus(pgtask *v1.Pgtask) (result *v1.Pgtask, err error) { - result = &v1.Pgtask{} - err = c.client.Put(). - Namespace(c.ns). - Resource("pgtasks"). - Name(pgtask.Name). - SubResource("status"). - Body(pgtask). - Do(). - Into(result) - return -} - -// Delete takes name of the pgtask and deletes it. Returns an error if one occurs. 
-func (c *pgtasks) Delete(name string, options *metav1.DeleteOptions) error { - return c.client.Delete(). - Namespace(c.ns). - Resource("pgtasks"). - Name(name). - Body(options). - Do(). - Error() -} - -// DeleteCollection deletes a collection of objects. -func (c *pgtasks) DeleteCollection(options *metav1.DeleteOptions, listOptions metav1.ListOptions) error { - var timeout time.Duration - if listOptions.TimeoutSeconds != nil { - timeout = time.Duration(*listOptions.TimeoutSeconds) * time.Second - } - return c.client.Delete(). - Namespace(c.ns). - Resource("pgtasks"). - VersionedParams(&listOptions, scheme.ParameterCodec). - Timeout(timeout). - Body(options). - Do(). - Error() -} - -// Patch applies the patch and returns the patched pgtask. -func (c *pgtasks) Patch(name string, pt types.PatchType, data []byte, subresources ...string) (result *v1.Pgtask, err error) { - result = &v1.Pgtask{} - err = c.client.Patch(pt). - Namespace(c.ns). - Resource("pgtasks"). - SubResource(subresources...). - Name(name). - Body(data). - Do(). - Into(result) - return -} diff --git a/pkg/generated/informers/externalversions/crunchydata.com/interface.go b/pkg/generated/informers/externalversions/crunchydata.com/interface.go deleted file mode 100644 index dfe44a0fcb..0000000000 --- a/pkg/generated/informers/externalversions/crunchydata.com/interface.go +++ /dev/null @@ -1,45 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by informer-gen. DO NOT EDIT. - -package crunchydata - -import ( - v1 "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com/v1" - internalinterfaces "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/internalinterfaces" -) - -// Interface provides access to each of this group's versions. -type Interface interface { - // V1 provides access to shared informers for resources in V1. - V1() v1.Interface -} - -type group struct { - factory internalinterfaces.SharedInformerFactory - namespace string - tweakListOptions internalinterfaces.TweakListOptionsFunc -} - -// New returns a new Interface. -func New(f internalinterfaces.SharedInformerFactory, namespace string, tweakListOptions internalinterfaces.TweakListOptionsFunc) Interface { - return &group{factory: f, namespace: namespace, tweakListOptions: tweakListOptions} -} - -// V1 returns a new v1.Interface. -func (g *group) V1() v1.Interface { - return v1.New(g.factory, g.namespace, g.tweakListOptions) -} diff --git a/pkg/generated/informers/externalversions/crunchydata.com/v1/interface.go b/pkg/generated/informers/externalversions/crunchydata.com/v1/interface.go deleted file mode 100644 index c34a37f8e7..0000000000 --- a/pkg/generated/informers/externalversions/crunchydata.com/v1/interface.go +++ /dev/null @@ -1,65 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. 
-You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by informer-gen. DO NOT EDIT. - -package v1 - -import ( - internalinterfaces "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/internalinterfaces" -) - -// Interface provides access to all the informers in this group version. -type Interface interface { - // Pgclusters returns a PgclusterInformer. - Pgclusters() PgclusterInformer - // Pgpolicies returns a PgpolicyInformer. - Pgpolicies() PgpolicyInformer - // Pgreplicas returns a PgreplicaInformer. - Pgreplicas() PgreplicaInformer - // Pgtasks returns a PgtaskInformer. - Pgtasks() PgtaskInformer -} - -type version struct { - factory internalinterfaces.SharedInformerFactory - namespace string - tweakListOptions internalinterfaces.TweakListOptionsFunc -} - -// New returns a new Interface. -func New(f internalinterfaces.SharedInformerFactory, namespace string, tweakListOptions internalinterfaces.TweakListOptionsFunc) Interface { - return &version{factory: f, namespace: namespace, tweakListOptions: tweakListOptions} -} - -// Pgclusters returns a PgclusterInformer. -func (v *version) Pgclusters() PgclusterInformer { - return &pgclusterInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions} -} - -// Pgpolicies returns a PgpolicyInformer. -func (v *version) Pgpolicies() PgpolicyInformer { - return &pgpolicyInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions} -} - -// Pgreplicas returns a PgreplicaInformer. -func (v *version) Pgreplicas() PgreplicaInformer { - return &pgreplicaInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions} -} - -// Pgtasks returns a PgtaskInformer. -func (v *version) Pgtasks() PgtaskInformer { - return &pgtaskInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions} -} diff --git a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgcluster.go b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgcluster.go deleted file mode 100644 index 92f0d9a6a9..0000000000 --- a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgcluster.go +++ /dev/null @@ -1,88 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by informer-gen. DO NOT EDIT. 
- -package v1 - -import ( - time "time" - - crunchydatacomv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - versioned "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - internalinterfaces "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/internalinterfaces" - v1 "github.com/crunchydata/postgres-operator/pkg/generated/listers/crunchydata.com/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - runtime "k8s.io/apimachinery/pkg/runtime" - watch "k8s.io/apimachinery/pkg/watch" - cache "k8s.io/client-go/tools/cache" -) - -// PgclusterInformer provides access to a shared informer and lister for -// Pgclusters. -type PgclusterInformer interface { - Informer() cache.SharedIndexInformer - Lister() v1.PgclusterLister -} - -type pgclusterInformer struct { - factory internalinterfaces.SharedInformerFactory - tweakListOptions internalinterfaces.TweakListOptionsFunc - namespace string -} - -// NewPgclusterInformer constructs a new informer for Pgcluster type. -// Always prefer using an informer factory to get a shared informer instead of getting an independent -// one. This reduces memory footprint and number of connections to the server. -func NewPgclusterInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer { - return NewFilteredPgclusterInformer(client, namespace, resyncPeriod, indexers, nil) -} - -// NewFilteredPgclusterInformer constructs a new informer for Pgcluster type. -// Always prefer using an informer factory to get a shared informer instead of getting an independent -// one. This reduces memory footprint and number of connections to the server. -func NewFilteredPgclusterInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer { - return cache.NewSharedIndexInformer( - &cache.ListWatch{ - ListFunc: func(options metav1.ListOptions) (runtime.Object, error) { - if tweakListOptions != nil { - tweakListOptions(&options) - } - return client.CrunchydataV1().Pgclusters(namespace).List(options) - }, - WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) { - if tweakListOptions != nil { - tweakListOptions(&options) - } - return client.CrunchydataV1().Pgclusters(namespace).Watch(options) - }, - }, - &crunchydatacomv1.Pgcluster{}, - resyncPeriod, - indexers, - ) -} - -func (f *pgclusterInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer { - return NewFilteredPgclusterInformer(client, f.namespace, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions) -} - -func (f *pgclusterInformer) Informer() cache.SharedIndexInformer { - return f.factory.InformerFor(&crunchydatacomv1.Pgcluster{}, f.defaultInformer) -} - -func (f *pgclusterInformer) Lister() v1.PgclusterLister { - return v1.NewPgclusterLister(f.Informer().GetIndexer()) -} diff --git a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgpolicy.go b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgpolicy.go deleted file mode 100644 index ea70fa720d..0000000000 --- a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgpolicy.go +++ /dev/null @@ -1,88 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. 
-Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by informer-gen. DO NOT EDIT. - -package v1 - -import ( - time "time" - - crunchydatacomv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - versioned "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - internalinterfaces "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/internalinterfaces" - v1 "github.com/crunchydata/postgres-operator/pkg/generated/listers/crunchydata.com/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - runtime "k8s.io/apimachinery/pkg/runtime" - watch "k8s.io/apimachinery/pkg/watch" - cache "k8s.io/client-go/tools/cache" -) - -// PgpolicyInformer provides access to a shared informer and lister for -// Pgpolicies. -type PgpolicyInformer interface { - Informer() cache.SharedIndexInformer - Lister() v1.PgpolicyLister -} - -type pgpolicyInformer struct { - factory internalinterfaces.SharedInformerFactory - tweakListOptions internalinterfaces.TweakListOptionsFunc - namespace string -} - -// NewPgpolicyInformer constructs a new informer for Pgpolicy type. -// Always prefer using an informer factory to get a shared informer instead of getting an independent -// one. This reduces memory footprint and number of connections to the server. -func NewPgpolicyInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer { - return NewFilteredPgpolicyInformer(client, namespace, resyncPeriod, indexers, nil) -} - -// NewFilteredPgpolicyInformer constructs a new informer for Pgpolicy type. -// Always prefer using an informer factory to get a shared informer instead of getting an independent -// one. This reduces memory footprint and number of connections to the server. 
-func NewFilteredPgpolicyInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer { - return cache.NewSharedIndexInformer( - &cache.ListWatch{ - ListFunc: func(options metav1.ListOptions) (runtime.Object, error) { - if tweakListOptions != nil { - tweakListOptions(&options) - } - return client.CrunchydataV1().Pgpolicies(namespace).List(options) - }, - WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) { - if tweakListOptions != nil { - tweakListOptions(&options) - } - return client.CrunchydataV1().Pgpolicies(namespace).Watch(options) - }, - }, - &crunchydatacomv1.Pgpolicy{}, - resyncPeriod, - indexers, - ) -} - -func (f *pgpolicyInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer { - return NewFilteredPgpolicyInformer(client, f.namespace, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions) -} - -func (f *pgpolicyInformer) Informer() cache.SharedIndexInformer { - return f.factory.InformerFor(&crunchydatacomv1.Pgpolicy{}, f.defaultInformer) -} - -func (f *pgpolicyInformer) Lister() v1.PgpolicyLister { - return v1.NewPgpolicyLister(f.Informer().GetIndexer()) -} diff --git a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgreplica.go b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgreplica.go deleted file mode 100644 index 99332793ac..0000000000 --- a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgreplica.go +++ /dev/null @@ -1,88 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by informer-gen. DO NOT EDIT. - -package v1 - -import ( - time "time" - - crunchydatacomv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - versioned "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - internalinterfaces "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/internalinterfaces" - v1 "github.com/crunchydata/postgres-operator/pkg/generated/listers/crunchydata.com/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - runtime "k8s.io/apimachinery/pkg/runtime" - watch "k8s.io/apimachinery/pkg/watch" - cache "k8s.io/client-go/tools/cache" -) - -// PgreplicaInformer provides access to a shared informer and lister for -// Pgreplicas. -type PgreplicaInformer interface { - Informer() cache.SharedIndexInformer - Lister() v1.PgreplicaLister -} - -type pgreplicaInformer struct { - factory internalinterfaces.SharedInformerFactory - tweakListOptions internalinterfaces.TweakListOptionsFunc - namespace string -} - -// NewPgreplicaInformer constructs a new informer for Pgreplica type. -// Always prefer using an informer factory to get a shared informer instead of getting an independent -// one. This reduces memory footprint and number of connections to the server. 
-func NewPgreplicaInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer { - return NewFilteredPgreplicaInformer(client, namespace, resyncPeriod, indexers, nil) -} - -// NewFilteredPgreplicaInformer constructs a new informer for Pgreplica type. -// Always prefer using an informer factory to get a shared informer instead of getting an independent -// one. This reduces memory footprint and number of connections to the server. -func NewFilteredPgreplicaInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer { - return cache.NewSharedIndexInformer( - &cache.ListWatch{ - ListFunc: func(options metav1.ListOptions) (runtime.Object, error) { - if tweakListOptions != nil { - tweakListOptions(&options) - } - return client.CrunchydataV1().Pgreplicas(namespace).List(options) - }, - WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) { - if tweakListOptions != nil { - tweakListOptions(&options) - } - return client.CrunchydataV1().Pgreplicas(namespace).Watch(options) - }, - }, - &crunchydatacomv1.Pgreplica{}, - resyncPeriod, - indexers, - ) -} - -func (f *pgreplicaInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer { - return NewFilteredPgreplicaInformer(client, f.namespace, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions) -} - -func (f *pgreplicaInformer) Informer() cache.SharedIndexInformer { - return f.factory.InformerFor(&crunchydatacomv1.Pgreplica{}, f.defaultInformer) -} - -func (f *pgreplicaInformer) Lister() v1.PgreplicaLister { - return v1.NewPgreplicaLister(f.Informer().GetIndexer()) -} diff --git a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgtask.go b/pkg/generated/informers/externalversions/crunchydata.com/v1/pgtask.go deleted file mode 100644 index bf1cbd60a8..0000000000 --- a/pkg/generated/informers/externalversions/crunchydata.com/v1/pgtask.go +++ /dev/null @@ -1,88 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by informer-gen. DO NOT EDIT. - -package v1 - -import ( - time "time" - - crunchydatacomv1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - versioned "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - internalinterfaces "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/internalinterfaces" - v1 "github.com/crunchydata/postgres-operator/pkg/generated/listers/crunchydata.com/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - runtime "k8s.io/apimachinery/pkg/runtime" - watch "k8s.io/apimachinery/pkg/watch" - cache "k8s.io/client-go/tools/cache" -) - -// PgtaskInformer provides access to a shared informer and lister for -// Pgtasks. 
-type PgtaskInformer interface { - Informer() cache.SharedIndexInformer - Lister() v1.PgtaskLister -} - -type pgtaskInformer struct { - factory internalinterfaces.SharedInformerFactory - tweakListOptions internalinterfaces.TweakListOptionsFunc - namespace string -} - -// NewPgtaskInformer constructs a new informer for Pgtask type. -// Always prefer using an informer factory to get a shared informer instead of getting an independent -// one. This reduces memory footprint and number of connections to the server. -func NewPgtaskInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer { - return NewFilteredPgtaskInformer(client, namespace, resyncPeriod, indexers, nil) -} - -// NewFilteredPgtaskInformer constructs a new informer for Pgtask type. -// Always prefer using an informer factory to get a shared informer instead of getting an independent -// one. This reduces memory footprint and number of connections to the server. -func NewFilteredPgtaskInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer { - return cache.NewSharedIndexInformer( - &cache.ListWatch{ - ListFunc: func(options metav1.ListOptions) (runtime.Object, error) { - if tweakListOptions != nil { - tweakListOptions(&options) - } - return client.CrunchydataV1().Pgtasks(namespace).List(options) - }, - WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) { - if tweakListOptions != nil { - tweakListOptions(&options) - } - return client.CrunchydataV1().Pgtasks(namespace).Watch(options) - }, - }, - &crunchydatacomv1.Pgtask{}, - resyncPeriod, - indexers, - ) -} - -func (f *pgtaskInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer { - return NewFilteredPgtaskInformer(client, f.namespace, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions) -} - -func (f *pgtaskInformer) Informer() cache.SharedIndexInformer { - return f.factory.InformerFor(&crunchydatacomv1.Pgtask{}, f.defaultInformer) -} - -func (f *pgtaskInformer) Lister() v1.PgtaskLister { - return v1.NewPgtaskLister(f.Informer().GetIndexer()) -} diff --git a/pkg/generated/informers/externalversions/factory.go b/pkg/generated/informers/externalversions/factory.go deleted file mode 100644 index 56886a005a..0000000000 --- a/pkg/generated/informers/externalversions/factory.go +++ /dev/null @@ -1,179 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by informer-gen. DO NOT EDIT. 
- -package externalversions - -import ( - reflect "reflect" - sync "sync" - time "time" - - versioned "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - crunchydatacom "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/crunchydata.com" - internalinterfaces "github.com/crunchydata/postgres-operator/pkg/generated/informers/externalversions/internalinterfaces" - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - runtime "k8s.io/apimachinery/pkg/runtime" - schema "k8s.io/apimachinery/pkg/runtime/schema" - cache "k8s.io/client-go/tools/cache" -) - -// SharedInformerOption defines the functional option type for SharedInformerFactory. -type SharedInformerOption func(*sharedInformerFactory) *sharedInformerFactory - -type sharedInformerFactory struct { - client versioned.Interface - namespace string - tweakListOptions internalinterfaces.TweakListOptionsFunc - lock sync.Mutex - defaultResync time.Duration - customResync map[reflect.Type]time.Duration - - informers map[reflect.Type]cache.SharedIndexInformer - // startedInformers is used for tracking which informers have been started. - // This allows Start() to be called multiple times safely. - startedInformers map[reflect.Type]bool -} - -// WithCustomResyncConfig sets a custom resync period for the specified informer types. -func WithCustomResyncConfig(resyncConfig map[v1.Object]time.Duration) SharedInformerOption { - return func(factory *sharedInformerFactory) *sharedInformerFactory { - for k, v := range resyncConfig { - factory.customResync[reflect.TypeOf(k)] = v - } - return factory - } -} - -// WithTweakListOptions sets a custom filter on all listers of the configured SharedInformerFactory. -func WithTweakListOptions(tweakListOptions internalinterfaces.TweakListOptionsFunc) SharedInformerOption { - return func(factory *sharedInformerFactory) *sharedInformerFactory { - factory.tweakListOptions = tweakListOptions - return factory - } -} - -// WithNamespace limits the SharedInformerFactory to the specified namespace. -func WithNamespace(namespace string) SharedInformerOption { - return func(factory *sharedInformerFactory) *sharedInformerFactory { - factory.namespace = namespace - return factory - } -} - -// NewSharedInformerFactory constructs a new instance of sharedInformerFactory for all namespaces. -func NewSharedInformerFactory(client versioned.Interface, defaultResync time.Duration) SharedInformerFactory { - return NewSharedInformerFactoryWithOptions(client, defaultResync) -} - -// NewFilteredSharedInformerFactory constructs a new instance of sharedInformerFactory. -// Listers obtained via this SharedInformerFactory will be subject to the same filters -// as specified here. -// Deprecated: Please use NewSharedInformerFactoryWithOptions instead -func NewFilteredSharedInformerFactory(client versioned.Interface, defaultResync time.Duration, namespace string, tweakListOptions internalinterfaces.TweakListOptionsFunc) SharedInformerFactory { - return NewSharedInformerFactoryWithOptions(client, defaultResync, WithNamespace(namespace), WithTweakListOptions(tweakListOptions)) -} - -// NewSharedInformerFactoryWithOptions constructs a new instance of a SharedInformerFactory with additional options. 
-func NewSharedInformerFactoryWithOptions(client versioned.Interface, defaultResync time.Duration, options ...SharedInformerOption) SharedInformerFactory { - factory := &sharedInformerFactory{ - client: client, - namespace: v1.NamespaceAll, - defaultResync: defaultResync, - informers: make(map[reflect.Type]cache.SharedIndexInformer), - startedInformers: make(map[reflect.Type]bool), - customResync: make(map[reflect.Type]time.Duration), - } - - // Apply all options - for _, opt := range options { - factory = opt(factory) - } - - return factory -} - -// Start initializes all requested informers. -func (f *sharedInformerFactory) Start(stopCh <-chan struct{}) { - f.lock.Lock() - defer f.lock.Unlock() - - for informerType, informer := range f.informers { - if !f.startedInformers[informerType] { - go informer.Run(stopCh) - f.startedInformers[informerType] = true - } - } -} - -// WaitForCacheSync waits for all started informers' cache were synced. -func (f *sharedInformerFactory) WaitForCacheSync(stopCh <-chan struct{}) map[reflect.Type]bool { - informers := func() map[reflect.Type]cache.SharedIndexInformer { - f.lock.Lock() - defer f.lock.Unlock() - - informers := map[reflect.Type]cache.SharedIndexInformer{} - for informerType, informer := range f.informers { - if f.startedInformers[informerType] { - informers[informerType] = informer - } - } - return informers - }() - - res := map[reflect.Type]bool{} - for informType, informer := range informers { - res[informType] = cache.WaitForCacheSync(stopCh, informer.HasSynced) - } - return res -} - -// InternalInformerFor returns the SharedIndexInformer for obj using an internal -// client. -func (f *sharedInformerFactory) InformerFor(obj runtime.Object, newFunc internalinterfaces.NewInformerFunc) cache.SharedIndexInformer { - f.lock.Lock() - defer f.lock.Unlock() - - informerType := reflect.TypeOf(obj) - informer, exists := f.informers[informerType] - if exists { - return informer - } - - resyncPeriod, exists := f.customResync[informerType] - if !exists { - resyncPeriod = f.defaultResync - } - - informer = newFunc(f.client, resyncPeriod) - f.informers[informerType] = informer - - return informer -} - -// SharedInformerFactory provides shared informers for resources in all known -// API group versions. -type SharedInformerFactory interface { - internalinterfaces.SharedInformerFactory - ForResource(resource schema.GroupVersionResource) (GenericInformer, error) - WaitForCacheSync(stopCh <-chan struct{}) map[reflect.Type]bool - - Crunchydata() crunchydatacom.Interface -} - -func (f *sharedInformerFactory) Crunchydata() crunchydatacom.Interface { - return crunchydatacom.New(f, f.namespace, f.tweakListOptions) -} diff --git a/pkg/generated/informers/externalversions/generic.go b/pkg/generated/informers/externalversions/generic.go deleted file mode 100644 index 130dd5ad37..0000000000 --- a/pkg/generated/informers/externalversions/generic.go +++ /dev/null @@ -1,67 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -// Code generated by informer-gen. DO NOT EDIT. - -package externalversions - -import ( - "fmt" - - v1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - schema "k8s.io/apimachinery/pkg/runtime/schema" - cache "k8s.io/client-go/tools/cache" -) - -// GenericInformer is type of SharedIndexInformer which will locate and delegate to other -// sharedInformers based on type -type GenericInformer interface { - Informer() cache.SharedIndexInformer - Lister() cache.GenericLister -} - -type genericInformer struct { - informer cache.SharedIndexInformer - resource schema.GroupResource -} - -// Informer returns the SharedIndexInformer. -func (f *genericInformer) Informer() cache.SharedIndexInformer { - return f.informer -} - -// Lister returns the GenericLister. -func (f *genericInformer) Lister() cache.GenericLister { - return cache.NewGenericLister(f.Informer().GetIndexer(), f.resource) -} - -// ForResource gives generic access to a shared informer of the matching type -// TODO extend this to unknown resources with a client pool -func (f *sharedInformerFactory) ForResource(resource schema.GroupVersionResource) (GenericInformer, error) { - switch resource { - // Group=crunchydata.com, Version=v1 - case v1.SchemeGroupVersion.WithResource("pgclusters"): - return &genericInformer{resource: resource.GroupResource(), informer: f.Crunchydata().V1().Pgclusters().Informer()}, nil - case v1.SchemeGroupVersion.WithResource("pgpolicies"): - return &genericInformer{resource: resource.GroupResource(), informer: f.Crunchydata().V1().Pgpolicies().Informer()}, nil - case v1.SchemeGroupVersion.WithResource("pgreplicas"): - return &genericInformer{resource: resource.GroupResource(), informer: f.Crunchydata().V1().Pgreplicas().Informer()}, nil - case v1.SchemeGroupVersion.WithResource("pgtasks"): - return &genericInformer{resource: resource.GroupResource(), informer: f.Crunchydata().V1().Pgtasks().Informer()}, nil - - } - - return nil, fmt.Errorf("no informer found for %v", resource) -} diff --git a/pkg/generated/informers/externalversions/internalinterfaces/factory_interfaces.go b/pkg/generated/informers/externalversions/internalinterfaces/factory_interfaces.go deleted file mode 100644 index 4086ab3a09..0000000000 --- a/pkg/generated/informers/externalversions/internalinterfaces/factory_interfaces.go +++ /dev/null @@ -1,39 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by informer-gen. DO NOT EDIT. - -package internalinterfaces - -import ( - time "time" - - versioned "github.com/crunchydata/postgres-operator/pkg/generated/clientset/versioned" - v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - runtime "k8s.io/apimachinery/pkg/runtime" - cache "k8s.io/client-go/tools/cache" -) - -// NewInformerFunc takes versioned.Interface and time.Duration to return a SharedIndexInformer. 
-type NewInformerFunc func(versioned.Interface, time.Duration) cache.SharedIndexInformer - -// SharedInformerFactory a small interface to allow for adding an informer without an import cycle -type SharedInformerFactory interface { - Start(stopCh <-chan struct{}) - InformerFor(obj runtime.Object, newFunc NewInformerFunc) cache.SharedIndexInformer -} - -// TweakListOptionsFunc is a function that transforms a v1.ListOptions. -type TweakListOptionsFunc func(*v1.ListOptions) diff --git a/pkg/generated/listers/crunchydata.com/v1/expansion_generated.go b/pkg/generated/listers/crunchydata.com/v1/expansion_generated.go deleted file mode 100644 index ca6b77b1a3..0000000000 --- a/pkg/generated/listers/crunchydata.com/v1/expansion_generated.go +++ /dev/null @@ -1,50 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by lister-gen. DO NOT EDIT. - -package v1 - -// PgclusterListerExpansion allows custom methods to be added to -// PgclusterLister. -type PgclusterListerExpansion interface{} - -// PgclusterNamespaceListerExpansion allows custom methods to be added to -// PgclusterNamespaceLister. -type PgclusterNamespaceListerExpansion interface{} - -// PgpolicyListerExpansion allows custom methods to be added to -// PgpolicyLister. -type PgpolicyListerExpansion interface{} - -// PgpolicyNamespaceListerExpansion allows custom methods to be added to -// PgpolicyNamespaceLister. -type PgpolicyNamespaceListerExpansion interface{} - -// PgreplicaListerExpansion allows custom methods to be added to -// PgreplicaLister. -type PgreplicaListerExpansion interface{} - -// PgreplicaNamespaceListerExpansion allows custom methods to be added to -// PgreplicaNamespaceLister. -type PgreplicaNamespaceListerExpansion interface{} - -// PgtaskListerExpansion allows custom methods to be added to -// PgtaskLister. -type PgtaskListerExpansion interface{} - -// PgtaskNamespaceListerExpansion allows custom methods to be added to -// PgtaskNamespaceLister. -type PgtaskNamespaceListerExpansion interface{} diff --git a/pkg/generated/listers/crunchydata.com/v1/pgcluster.go b/pkg/generated/listers/crunchydata.com/v1/pgcluster.go deleted file mode 100644 index 10db1c63a2..0000000000 --- a/pkg/generated/listers/crunchydata.com/v1/pgcluster.go +++ /dev/null @@ -1,93 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by lister-gen. DO NOT EDIT. 
- -package v1 - -import ( - v1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/apimachinery/pkg/labels" - "k8s.io/client-go/tools/cache" -) - -// PgclusterLister helps list Pgclusters. -type PgclusterLister interface { - // List lists all Pgclusters in the indexer. - List(selector labels.Selector) (ret []*v1.Pgcluster, err error) - // Pgclusters returns an object that can list and get Pgclusters. - Pgclusters(namespace string) PgclusterNamespaceLister - PgclusterListerExpansion -} - -// pgclusterLister implements the PgclusterLister interface. -type pgclusterLister struct { - indexer cache.Indexer -} - -// NewPgclusterLister returns a new PgclusterLister. -func NewPgclusterLister(indexer cache.Indexer) PgclusterLister { - return &pgclusterLister{indexer: indexer} -} - -// List lists all Pgclusters in the indexer. -func (s *pgclusterLister) List(selector labels.Selector) (ret []*v1.Pgcluster, err error) { - err = cache.ListAll(s.indexer, selector, func(m interface{}) { - ret = append(ret, m.(*v1.Pgcluster)) - }) - return ret, err -} - -// Pgclusters returns an object that can list and get Pgclusters. -func (s *pgclusterLister) Pgclusters(namespace string) PgclusterNamespaceLister { - return pgclusterNamespaceLister{indexer: s.indexer, namespace: namespace} -} - -// PgclusterNamespaceLister helps list and get Pgclusters. -type PgclusterNamespaceLister interface { - // List lists all Pgclusters in the indexer for a given namespace. - List(selector labels.Selector) (ret []*v1.Pgcluster, err error) - // Get retrieves the Pgcluster from the indexer for a given namespace and name. - Get(name string) (*v1.Pgcluster, error) - PgclusterNamespaceListerExpansion -} - -// pgclusterNamespaceLister implements the PgclusterNamespaceLister -// interface. -type pgclusterNamespaceLister struct { - indexer cache.Indexer - namespace string -} - -// List lists all Pgclusters in the indexer for a given namespace. -func (s pgclusterNamespaceLister) List(selector labels.Selector) (ret []*v1.Pgcluster, err error) { - err = cache.ListAllByNamespace(s.indexer, s.namespace, selector, func(m interface{}) { - ret = append(ret, m.(*v1.Pgcluster)) - }) - return ret, err -} - -// Get retrieves the Pgcluster from the indexer for a given namespace and name. -func (s pgclusterNamespaceLister) Get(name string) (*v1.Pgcluster, error) { - obj, exists, err := s.indexer.GetByKey(s.namespace + "/" + name) - if err != nil { - return nil, err - } - if !exists { - return nil, errors.NewNotFound(v1.Resource("pgcluster"), name) - } - return obj.(*v1.Pgcluster), nil -} diff --git a/pkg/generated/listers/crunchydata.com/v1/pgpolicy.go b/pkg/generated/listers/crunchydata.com/v1/pgpolicy.go deleted file mode 100644 index d996df08ee..0000000000 --- a/pkg/generated/listers/crunchydata.com/v1/pgpolicy.go +++ /dev/null @@ -1,93 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by lister-gen. DO NOT EDIT. 
- -package v1 - -import ( - v1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/apimachinery/pkg/labels" - "k8s.io/client-go/tools/cache" -) - -// PgpolicyLister helps list Pgpolicies. -type PgpolicyLister interface { - // List lists all Pgpolicies in the indexer. - List(selector labels.Selector) (ret []*v1.Pgpolicy, err error) - // Pgpolicies returns an object that can list and get Pgpolicies. - Pgpolicies(namespace string) PgpolicyNamespaceLister - PgpolicyListerExpansion -} - -// pgpolicyLister implements the PgpolicyLister interface. -type pgpolicyLister struct { - indexer cache.Indexer -} - -// NewPgpolicyLister returns a new PgpolicyLister. -func NewPgpolicyLister(indexer cache.Indexer) PgpolicyLister { - return &pgpolicyLister{indexer: indexer} -} - -// List lists all Pgpolicies in the indexer. -func (s *pgpolicyLister) List(selector labels.Selector) (ret []*v1.Pgpolicy, err error) { - err = cache.ListAll(s.indexer, selector, func(m interface{}) { - ret = append(ret, m.(*v1.Pgpolicy)) - }) - return ret, err -} - -// Pgpolicies returns an object that can list and get Pgpolicies. -func (s *pgpolicyLister) Pgpolicies(namespace string) PgpolicyNamespaceLister { - return pgpolicyNamespaceLister{indexer: s.indexer, namespace: namespace} -} - -// PgpolicyNamespaceLister helps list and get Pgpolicies. -type PgpolicyNamespaceLister interface { - // List lists all Pgpolicies in the indexer for a given namespace. - List(selector labels.Selector) (ret []*v1.Pgpolicy, err error) - // Get retrieves the Pgpolicy from the indexer for a given namespace and name. - Get(name string) (*v1.Pgpolicy, error) - PgpolicyNamespaceListerExpansion -} - -// pgpolicyNamespaceLister implements the PgpolicyNamespaceLister -// interface. -type pgpolicyNamespaceLister struct { - indexer cache.Indexer - namespace string -} - -// List lists all Pgpolicies in the indexer for a given namespace. -func (s pgpolicyNamespaceLister) List(selector labels.Selector) (ret []*v1.Pgpolicy, err error) { - err = cache.ListAllByNamespace(s.indexer, s.namespace, selector, func(m interface{}) { - ret = append(ret, m.(*v1.Pgpolicy)) - }) - return ret, err -} - -// Get retrieves the Pgpolicy from the indexer for a given namespace and name. -func (s pgpolicyNamespaceLister) Get(name string) (*v1.Pgpolicy, error) { - obj, exists, err := s.indexer.GetByKey(s.namespace + "/" + name) - if err != nil { - return nil, err - } - if !exists { - return nil, errors.NewNotFound(v1.Resource("pgpolicy"), name) - } - return obj.(*v1.Pgpolicy), nil -} diff --git a/pkg/generated/listers/crunchydata.com/v1/pgreplica.go b/pkg/generated/listers/crunchydata.com/v1/pgreplica.go deleted file mode 100644 index 23632d1ee4..0000000000 --- a/pkg/generated/listers/crunchydata.com/v1/pgreplica.go +++ /dev/null @@ -1,93 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by lister-gen. DO NOT EDIT. 
- -package v1 - -import ( - v1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/apimachinery/pkg/labels" - "k8s.io/client-go/tools/cache" -) - -// PgreplicaLister helps list Pgreplicas. -type PgreplicaLister interface { - // List lists all Pgreplicas in the indexer. - List(selector labels.Selector) (ret []*v1.Pgreplica, err error) - // Pgreplicas returns an object that can list and get Pgreplicas. - Pgreplicas(namespace string) PgreplicaNamespaceLister - PgreplicaListerExpansion -} - -// pgreplicaLister implements the PgreplicaLister interface. -type pgreplicaLister struct { - indexer cache.Indexer -} - -// NewPgreplicaLister returns a new PgreplicaLister. -func NewPgreplicaLister(indexer cache.Indexer) PgreplicaLister { - return &pgreplicaLister{indexer: indexer} -} - -// List lists all Pgreplicas in the indexer. -func (s *pgreplicaLister) List(selector labels.Selector) (ret []*v1.Pgreplica, err error) { - err = cache.ListAll(s.indexer, selector, func(m interface{}) { - ret = append(ret, m.(*v1.Pgreplica)) - }) - return ret, err -} - -// Pgreplicas returns an object that can list and get Pgreplicas. -func (s *pgreplicaLister) Pgreplicas(namespace string) PgreplicaNamespaceLister { - return pgreplicaNamespaceLister{indexer: s.indexer, namespace: namespace} -} - -// PgreplicaNamespaceLister helps list and get Pgreplicas. -type PgreplicaNamespaceLister interface { - // List lists all Pgreplicas in the indexer for a given namespace. - List(selector labels.Selector) (ret []*v1.Pgreplica, err error) - // Get retrieves the Pgreplica from the indexer for a given namespace and name. - Get(name string) (*v1.Pgreplica, error) - PgreplicaNamespaceListerExpansion -} - -// pgreplicaNamespaceLister implements the PgreplicaNamespaceLister -// interface. -type pgreplicaNamespaceLister struct { - indexer cache.Indexer - namespace string -} - -// List lists all Pgreplicas in the indexer for a given namespace. -func (s pgreplicaNamespaceLister) List(selector labels.Selector) (ret []*v1.Pgreplica, err error) { - err = cache.ListAllByNamespace(s.indexer, s.namespace, selector, func(m interface{}) { - ret = append(ret, m.(*v1.Pgreplica)) - }) - return ret, err -} - -// Get retrieves the Pgreplica from the indexer for a given namespace and name. -func (s pgreplicaNamespaceLister) Get(name string) (*v1.Pgreplica, error) { - obj, exists, err := s.indexer.GetByKey(s.namespace + "/" + name) - if err != nil { - return nil, err - } - if !exists { - return nil, errors.NewNotFound(v1.Resource("pgreplica"), name) - } - return obj.(*v1.Pgreplica), nil -} diff --git a/pkg/generated/listers/crunchydata.com/v1/pgtask.go b/pkg/generated/listers/crunchydata.com/v1/pgtask.go deleted file mode 100644 index 94a405754c..0000000000 --- a/pkg/generated/listers/crunchydata.com/v1/pgtask.go +++ /dev/null @@ -1,93 +0,0 @@ -/* -Copyright 2020 Crunchy Data Solutions, Inc. -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -*/ - -// Code generated by lister-gen. DO NOT EDIT. 
- -package v1 - -import ( - v1 "github.com/crunchydata/postgres-operator/pkg/apis/crunchydata.com/v1" - "k8s.io/apimachinery/pkg/api/errors" - "k8s.io/apimachinery/pkg/labels" - "k8s.io/client-go/tools/cache" -) - -// PgtaskLister helps list Pgtasks. -type PgtaskLister interface { - // List lists all Pgtasks in the indexer. - List(selector labels.Selector) (ret []*v1.Pgtask, err error) - // Pgtasks returns an object that can list and get Pgtasks. - Pgtasks(namespace string) PgtaskNamespaceLister - PgtaskListerExpansion -} - -// pgtaskLister implements the PgtaskLister interface. -type pgtaskLister struct { - indexer cache.Indexer -} - -// NewPgtaskLister returns a new PgtaskLister. -func NewPgtaskLister(indexer cache.Indexer) PgtaskLister { - return &pgtaskLister{indexer: indexer} -} - -// List lists all Pgtasks in the indexer. -func (s *pgtaskLister) List(selector labels.Selector) (ret []*v1.Pgtask, err error) { - err = cache.ListAll(s.indexer, selector, func(m interface{}) { - ret = append(ret, m.(*v1.Pgtask)) - }) - return ret, err -} - -// Pgtasks returns an object that can list and get Pgtasks. -func (s *pgtaskLister) Pgtasks(namespace string) PgtaskNamespaceLister { - return pgtaskNamespaceLister{indexer: s.indexer, namespace: namespace} -} - -// PgtaskNamespaceLister helps list and get Pgtasks. -type PgtaskNamespaceLister interface { - // List lists all Pgtasks in the indexer for a given namespace. - List(selector labels.Selector) (ret []*v1.Pgtask, err error) - // Get retrieves the Pgtask from the indexer for a given namespace and name. - Get(name string) (*v1.Pgtask, error) - PgtaskNamespaceListerExpansion -} - -// pgtaskNamespaceLister implements the PgtaskNamespaceLister -// interface. -type pgtaskNamespaceLister struct { - indexer cache.Indexer - namespace string -} - -// List lists all Pgtasks in the indexer for a given namespace. -func (s pgtaskNamespaceLister) List(selector labels.Selector) (ret []*v1.Pgtask, err error) { - err = cache.ListAllByNamespace(s.indexer, s.namespace, selector, func(m interface{}) { - ret = append(ret, m.(*v1.Pgtask)) - }) - return ret, err -} - -// Get retrieves the Pgtask from the indexer for a given namespace and name. -func (s pgtaskNamespaceLister) Get(name string) (*v1.Pgtask, error) { - obj, exists, err := s.indexer.GetByKey(s.namespace + "/" + name) - if err != nil { - return nil, err - } - if !exists { - return nil, errors.NewNotFound(v1.Resource("pgtask"), name) - } - return obj.(*v1.Pgtask), nil -} diff --git a/postgres-operator.go b/postgres-operator.go deleted file mode 100644 index 325303c9a2..0000000000 --- a/postgres-operator.go +++ /dev/null @@ -1,148 +0,0 @@ -package main - -/* -Copyright 2017 - 2020 Crunchy Data -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-*/ - -import ( - "fmt" - "os" - "time" - - "github.com/crunchydata/postgres-operator/internal/config" - "github.com/crunchydata/postgres-operator/internal/controller" - "github.com/crunchydata/postgres-operator/internal/controller/manager" - nscontroller "github.com/crunchydata/postgres-operator/internal/controller/namespace" - crunchylog "github.com/crunchydata/postgres-operator/internal/logging" - "github.com/crunchydata/postgres-operator/internal/ns" - log "github.com/sirupsen/logrus" - - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - kubeinformers "k8s.io/client-go/informers" - "k8s.io/client-go/kubernetes" - "k8s.io/client-go/tools/cache" - "sigs.k8s.io/controller-runtime/pkg/manager/signals" - - "github.com/crunchydata/postgres-operator/internal/kubeapi" - "github.com/crunchydata/postgres-operator/internal/operator" -) - -func main() { - - debugFlag := os.Getenv("CRUNCHY_DEBUG") - //add logging configuration - crunchylog.CrunchyLogger(crunchylog.SetParameters()) - if debugFlag == "true" { - log.SetLevel(log.DebugLevel) - log.Debug("debug flag set to true") - } else { - log.Info("debug flag set to false") - } - - //give time for pgo-event to start up - time.Sleep(time.Duration(5) * time.Second) - - client, err := kubeapi.NewClient() - if err != nil { - log.Error(err) - os.Exit(2) - } - - operator.Initialize(client) - - // Configure namespaces for the Operator. This includes determining the namespace - // operating mode, creating/updating namespaces (if permitted), and obtaining a valid - // list of target namespaces for the operator install - namespaceList, err := operator.SetupNamespaces(client) - if err != nil { - log.Errorf("Error configuring operator namespaces: %v", err) - os.Exit(2) - } - - // set up signals so we handle the first shutdown signal gracefully - stopCh := signals.SetupSignalHandler() - - // create a new controller manager with controllers for all current namespaces and then run - // all of those controllers - controllerManager, err := manager.NewControllerManager(namespaceList, operator.Pgo, - operator.PgoNamespace, operator.InstallationName, operator.NamespaceOperatingMode()) - if err != nil { - log.Error(err) - os.Exit(2) - } - log.Debug("controller manager created") - - // If not using the "disabled" namespace operating mode, start a real namespace controller - // that is able to resond to namespace events in the Kube cluster. If using the "disabled" - // operating mode, then create a fake client containing all namespaces defined for the install - // (i.e. via the NAMESPACE environment variable) and use that to create the namespace - // controller. This allows for namespace and RBAC reconciliation logic to be run in a - // consistent manner regardless of the namespace operating mode being utilized. 
- if operator.NamespaceOperatingMode() != ns.NamespaceOperatingModeDisabled { - if err := createAndStartNamespaceController(client, controllerManager, - stopCh); err != nil { - log.Fatal(err) - } - } else { - fakeClient, err := ns.CreateFakeNamespaceClient(operator.InstallationName) - if err != nil { - log.Fatal(err) - } - if err := createAndStartNamespaceController(fakeClient, controllerManager, - stopCh); err != nil { - log.Fatal(err) - } - } - - defer controllerManager.RemoveAll() - - log.Info("PostgreSQL Operator initialized and running, waiting for signal to exit") - <-stopCh - log.Infof("Signal received, now exiting") -} - -// createAndStartNamespaceController creates a namespace controller and then starts it -func createAndStartNamespaceController(kubeClientset kubernetes.Interface, - controllerManager controller.Manager, stopCh <-chan struct{}) error { - - nsKubeInformerFactory := kubeinformers.NewSharedInformerFactoryWithOptions(kubeClientset, - time.Duration(*operator.Pgo.Pgo.NamespaceRefreshInterval)*time.Second, - kubeinformers.WithTweakListOptions(func(options *metav1.ListOptions) { - options.LabelSelector = fmt.Sprintf("%s=%s,%s=%s", - config.LABEL_VENDOR, config.LABEL_CRUNCHY, - config.LABEL_PGO_INSTALLATION_NAME, operator.InstallationName) - })) - nsController, err := nscontroller.NewNamespaceController(controllerManager, - nsKubeInformerFactory.Core().V1().Namespaces(), - *operator.Pgo.Pgo.NamespaceWorkerCount) - if err != nil { - return err - } - - // start the namespace controller - nsKubeInformerFactory.Start(stopCh) - - if ok := cache.WaitForNamedCacheSync("namespace", stopCh, - nsKubeInformerFactory.Core().V1().Namespaces().Informer().HasSynced); !ok { - return fmt.Errorf("failed waiting for namespace cache to sync") - } - - for i := 0; i < nsController.WorkerCount(); i++ { - go nsController.RunWorker(stopCh) - } - - log.Debug("namespace controller is now running") - - return nil -} diff --git a/pv/create-pv-nfs-label.sh b/pv/create-pv-nfs-label.sh deleted file mode 100755 index a77e3e68e3..0000000000 --- a/pv/create-pv-nfs-label.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash - -# Copyright 2018 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -echo "create the test PV and PVC using the NFS dir" -for i in {1..180} -do - echo "creating PV crunchy-pv$i" - export COUNTER=$i - $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete pv crunchy-pv$i - cat $DIR/crunchy-pv-nfs-label.json | envsubst | $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE create -f - -done diff --git a/pv/create-pv-nfs-legacy.sh b/pv/create-pv-nfs-legacy.sh deleted file mode 100755 index 4850e73652..0000000000 --- a/pv/create-pv-nfs-legacy.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash - -# Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -echo "create the test PV and PVC using the NFS dir" -for i in {1..160} -do - echo "creating PV crunchy-pv$i" - export COUNTER=$i - $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE delete pv crunchy-pv$i - cat $DIR/crunchy-pv-nfs.json | envsubst | $PGO_CMD --namespace=$PGO_OPERATOR_NAMESPACE create -f - -done diff --git a/pv/create-pv-nfs.sh b/pv/create-pv-nfs.sh deleted file mode 100755 index 8b2ef4ab67..0000000000 --- a/pv/create-pv-nfs.sh +++ /dev/null @@ -1,71 +0,0 @@ -#!/bin/bash -# Copyright 2017 - 2020 Crunchy Data Solutions, Inc. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# NOTE: this is script is intended for setting up development environments to -# use NFS as the persistent volume storage area. It is **not** intended for -# production. -# -# This script makes some assumptions, i.e: -# -# - You have sudo -# - You have your NFS filesystem mounted to the location you are running this -# script -# - Your NFS filesystem is mounted to /nfsfileshare -# - Your PV names will be one of "crunchy-pvNNN" where NNN is a natural number -# - Your NFS UID:GID is "nfsnobody:nfsnobody", which correspunds to "65534:65534" -# -# And awaaaay we go... -DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" - -echo "create the test PV and PVC using the NFS dir" -for i in {1..160} -do - PV_NAME="crunchy-pv${i}" - NFS_PV_PATH="/nfsfileshare/${PV_NAME}" - - echo "deleting PV ${PV_NAME}" - $PGO_CMD delete pv "${PV_NAME}" - sudo rm -rf "${NFS_PV_PATH}" - - # this is the manifest used to create the persistent volumes - MANIFEST=$(cat < 63 { - value = value[:63] - } - - // "a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.'" - return strings.Map(func(r rune) rune { - if r == '-' || r == '_' || r == '.' 
|| - ('A' <= r && r <= 'Z') || - ('a' <= r && r <= 'z') || - ('0' <= r && r <= '9') { - return r - } - return '-' - }, value) -} diff --git a/testing/kubeapi/meta_test.go b/testing/kubeapi/meta_test.go deleted file mode 100644 index 05f158df2a..0000000000 --- a/testing/kubeapi/meta_test.go +++ /dev/null @@ -1,24 +0,0 @@ -package kubeapi - -import ( - "strings" - "testing" - - "k8s.io/apimachinery/pkg/util/validation" -) - -func TestSanitizeLabelValue(t *testing.T) { - for _, tt := range []struct{ input, expected string }{ - {"", ""}, - {"a-very-fine-label", "a-very-fine-label"}, - {"TestSomething/With_Underscore/#01", "TestSomething-With_Underscore--01"}, - {strings.Repeat("abc456ghi0", 8), "abc456ghi0abc456ghi0abc456ghi0abc456ghi0abc456ghi0abc456ghi0abc"}, - } { - if errors := validation.IsValidLabelValue(tt.expected); len(errors) != 0 { - t.Fatalf("bug in test: %q is invalid: %v", tt.expected, errors) - } - if actual := SanitizeLabelValue(tt.input); tt.expected != actual { - t.Errorf("expected %q to be %q, got %q", tt.input, tt.expected, actual) - } - } -} diff --git a/testing/kubeapi/namespace.go b/testing/kubeapi/namespace.go deleted file mode 100644 index 2176c0f59e..0000000000 --- a/testing/kubeapi/namespace.go +++ /dev/null @@ -1,18 +0,0 @@ -package kubeapi - -import ( - core_v1 "k8s.io/api/core/v1" - meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -// DeleteNamespace deletes an existing namespace. -func (k *KubeAPI) DeleteNamespace(name string) error { - return k.Client.CoreV1().Namespaces().Delete(name, nil) -} - -// GenerateNamespace creates a new namespace with a random name that begins with prefix. -func (k *KubeAPI) GenerateNamespace(prefix string, labels map[string]string) (*core_v1.Namespace, error) { - return k.Client.CoreV1().Namespaces().Create(&core_v1.Namespace{ - ObjectMeta: meta_v1.ObjectMeta{GenerateName: prefix, Labels: labels}, - }) -} diff --git a/testing/kubeapi/pod.go b/testing/kubeapi/pod.go deleted file mode 100644 index 32f24a6876..0000000000 --- a/testing/kubeapi/pod.go +++ /dev/null @@ -1,39 +0,0 @@ -package kubeapi - -import ( - core_v1 "k8s.io/api/core/v1" - meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/fields" -) - -// IsPodReady returns true if all containers of pod are ready. -func IsPodReady(pod core_v1.Pod) bool { - for _, status := range pod.Status.ContainerStatuses { - if !status.Ready { - return false - } - } - return true -} - -// GetPod returns a pod from the specified namespace. -func (k *KubeAPI) GetPod(namespace, name string) (*core_v1.Pod, error) { - return k.Client.CoreV1().Pods(namespace).Get(name, meta_v1.GetOptions{}) -} - -// ListPods returns pods matching labels, if any. 
-func (k *KubeAPI) ListPods(namespace string, labels map[string]string) ([]core_v1.Pod, error) { - var options meta_v1.ListOptions - - if labels != nil { - options.LabelSelector = fields.Set(labels).String() - } - - list, err := k.Client.CoreV1().Pods(namespace).List(options) - - if list == nil && err != nil { - list = &core_v1.PodList{} - } - - return list.Items, err -} diff --git a/testing/kubeapi/proxy.go b/testing/kubeapi/proxy.go deleted file mode 100644 index 34b4d06c9d..0000000000 --- a/testing/kubeapi/proxy.go +++ /dev/null @@ -1,68 +0,0 @@ -package kubeapi - -import ( - "fmt" - "io/ioutil" - "net" - "net/http" - - "k8s.io/client-go/tools/portforward" - "k8s.io/client-go/transport/spdy" -) - -type Proxy struct { - addr string - err chan error - stop chan struct{} - proxy *portforward.PortForwarder -} - -func (p *Proxy) Close() error { close(p.stop); return <-p.err } -func (p Proxy) LocalAddr() string { return p.addr } - -// PodPortForward proxies TCP connections to a random local port to a port on a -// pod. -func (k *KubeAPI) PodPortForward(namespace, name, port string) (*Proxy, error) { - // portforward.PortForwarder tries to listen on both IPv4 and IPv6 when - // address is "localhost". That'd be great, but it doesn't indicate which - // random port was assigned to which network. Use IPv4 (tcp4) loopback address - // to avoid that ambiguity. - const address = "127.0.0.1" - - request := k.Client.CoreV1().RESTClient().Post(). - Resource("pods").SubResource("portforward"). - Namespace(namespace).Name(name) - - tripper, upgrader, err := spdy.RoundTripperFor(k.Config) - if err != nil { - return nil, err - } - - dialer := spdy.NewDialer(upgrader, &http.Client{Transport: tripper}, "POST", request.URL()) - ready := make(chan struct{}) - - p := Proxy{ - err: make(chan error), - stop: make(chan struct{}), - } - - if p.proxy, err = portforward.NewOnAddresses( - dialer, []string{address}, []string{":" + port}, - p.stop, ready, ioutil.Discard, ioutil.Discard, - ); err != nil { - return nil, err - } - - go func() { p.err <- p.proxy.ForwardPorts() }() - - select { - case err = <-p.err: - return nil, err - case <-ready: - } - - ports, _ := p.proxy.GetPorts() - p.addr = net.JoinHostPort(address, fmt.Sprintf("%d", ports[0].Local)) - - return &p, nil -} diff --git a/testing/kubeapi/pvc.go b/testing/kubeapi/pvc.go deleted file mode 100644 index 66a3670d94..0000000000 --- a/testing/kubeapi/pvc.go +++ /dev/null @@ -1,29 +0,0 @@ -package kubeapi - -import ( - core_v1 "k8s.io/api/core/v1" - meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/fields" -) - -// IsPVCBound returns true if pvc is bound. -func IsPVCBound(pvc core_v1.PersistentVolumeClaim) bool { - return pvc.Status.Phase == core_v1.ClaimBound -} - -// ListPVCs returns persistent volume claims matching labels, if any. 
-func (k *KubeAPI) ListPVCs(namespace string, labels map[string]string) ([]core_v1.PersistentVolumeClaim, error) {
-	var options meta_v1.ListOptions
-
-	if labels != nil {
-		options.LabelSelector = fields.Set(labels).String()
-	}
-
-	list, err := k.Client.CoreV1().PersistentVolumeClaims(namespace).List(options)
-
-	if list == nil && err != nil {
-		list = &core_v1.PersistentVolumeClaimList{}
-	}
-
-	return list.Items, err
-}
diff --git a/testing/kuttl/README.md b/testing/kuttl/README.md
new file mode 100644
index 0000000000..555ce9a26d
--- /dev/null
+++ b/testing/kuttl/README.md
@@ -0,0 +1,92 @@
+# KUTTL
+
+## Installing
+
+Docs for install: https://kuttl.dev/docs/cli.html#setup-the-kuttl-kubectl-plugin
+
+Options:
+  - Download and install the binary
+  - Install the `kubectl krew` [plugin manager](https://github.com/kubernetes-sigs/krew)
+    and `kubectl krew install kuttl`
+
+## Cheat sheet
+
+### Suppressing Noisy Logs
+
+KUTTL gives you the option to suppress events from the test logging output. To enable this feature,
+update the `kuttl` parameter when calling the `make` target:
+
+```
+KUTTL_TEST='kuttl test --suppress-log=events' make check-kuttl
+```
+
+To suppress the events permanently, you can add the following to the KUTTL config (kuttl-test.yaml):
+```
+suppress:
+- events
+```
+
+### Run test suite
+
+Make sure that the operator is running in your Kubernetes environment and that your `kubeconfig` is
+set up. Then run the make targets:
+
+```
+make generate-kuttl check-kuttl
+```
+
+### Running a single test
+
+A single test is considered to be one directory under `kuttl/e2e-generated`, for example
+`kuttl/e2e-generated/restore` is the `restore` test.
+
+There are two ways to run a single test in isolation:
+- using an env var with the make target: `KUTTL_TEST='kuttl test --test <test name>' make check-kuttl`
+- using `kubectl kuttl --test` flag: `kubectl kuttl test testing/kuttl/e2e-generated --test <test name>`
+
+### Writing additional tests
+
+To make it easier to read tests, we want to put our `assert.yaml`/`errors.yaml` files after the
+files that create/update the objects for a step. To achieve this, infix an extra `-` between the
+step number and the object/step name.
+
+For example, if the `00` test step wants to create a cluster and then assert that the cluster is ready,
+the files would be named
+
+```yaml
+00--cluster.yaml # note the extra `-` to ensure that it sorts above the following file
+00-assert.yaml
+```
+
+### Generating tests
+
+KUTTL is good at setting up K8s objects for testing, but does not have a native way to dynamically
+change those K8s objects before applying them. That means that, if we wanted to write a cluster
+connection test for PG 13 and PG 14, we would end up writing two nearly identical tests.
+
+Rather than write those multiple tests, we are using `envsubst` to replace some common variables
+and writing those files to the `testing/kuttl/e2e-generated*` directories.
+
+These templated test files can be generated by setting some variables in the command line and
+calling the `make generate-kuttl` target:
+
+```console
+KUTTL_PG_VERSION=13 KUTTL_POSTGIS_VERSION=3.0 make generate-kuttl
+```
+
+This will loop through the files under the `e2e` and `e2e-other` directories and create matching
+files under the `e2e-generated` and `e2e-generated-other` directories that can be checked for
+correctness before running the tests.
+
+Please note, `make check-kuttl` does not run the `e2e-other` tests.
To run the `postgis-cluster` +test, you can use: + +``` +kubectl kuttl test testing/kuttl/e2e-generated-other/ --timeout=180 --test postgis-cluster` +``` + +To run the `gssapi` test, please see testing/kuttl/e2e-other/gssapi/README.md. + +To prevent errors, we want to set defaults for all the environment variables used in the source +YAML files; so if you add a new test with a new variable, please update the Makefile with a +reasonable/preferred default. diff --git a/testing/kuttl/e2e-other/autogrow-volume/00-assert.yaml b/testing/kuttl/e2e-other/autogrow-volume/00-assert.yaml new file mode 100644 index 0000000000..b4372b75e7 --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/00-assert.yaml @@ -0,0 +1,7 @@ +# Ensure that the default StorageClass supports VolumeExpansion +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + annotations: + storageclass.kubernetes.io/is-default-class: "true" +allowVolumeExpansion: true diff --git a/testing/kuttl/e2e-other/autogrow-volume/01-create.yaml b/testing/kuttl/e2e-other/autogrow-volume/01-create.yaml new file mode 100644 index 0000000000..fc947a538f --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/01-create.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/01-create-cluster.yaml +assert: +- files/01-cluster-and-pvc-created.yaml diff --git a/testing/kuttl/e2e-other/autogrow-volume/02-add-data.yaml b/testing/kuttl/e2e-other/autogrow-volume/02-add-data.yaml new file mode 100644 index 0000000000..261c274a51 --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/02-add-data.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/02-create-data.yaml +assert: +- files/02-create-data-completed.yaml diff --git a/testing/kuttl/e2e-other/autogrow-volume/03-assert.yaml b/testing/kuttl/e2e-other/autogrow-volume/03-assert.yaml new file mode 100644 index 0000000000..ad31b61401 --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/03-assert.yaml @@ -0,0 +1,12 @@ +--- +# Check that annotation is set +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: auto-grow-volume + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/patroni: auto-grow-volume-ha + annotations: + suggested-pgdata-pvc-size: 1461Mi diff --git a/testing/kuttl/e2e-other/autogrow-volume/04-assert.yaml b/testing/kuttl/e2e-other/autogrow-volume/04-assert.yaml new file mode 100644 index 0000000000..d486f9de18 --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/04-assert.yaml @@ -0,0 +1,19 @@ +# We know that the PVC sizes have changed so now we can check that they have been +# updated to have the expected size +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: auto-grow-volume + postgres-operator.crunchydata.com/instance-set: instance1 +spec: + resources: + requests: + storage: 1461Mi +status: + accessModes: + - ReadWriteOnce + capacity: + storage: 2Gi + phase: Bound diff --git a/testing/kuttl/e2e-other/autogrow-volume/05-check-event.yaml b/testing/kuttl/e2e-other/autogrow-volume/05-check-event.yaml new file mode 100644 index 0000000000..475177d242 --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/05-check-event.yaml @@ -0,0 +1,12 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + # Verify expected event has occurred + - script: | + EVENT=$( + kubectl get events 
--namespace="${NAMESPACE}" \ + --field-selector reason="VolumeAutoGrow" --output=jsonpath={.items..message} + ) + + if [[ "${EVENT}" != "pgData volume expansion to 1461Mi requested for auto-grow-volume/instance1." ]]; then exit 1; fi diff --git a/testing/kuttl/e2e-other/autogrow-volume/README.md b/testing/kuttl/e2e-other/autogrow-volume/README.md new file mode 100644 index 0000000000..674bc69b40 --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/README.md @@ -0,0 +1,9 @@ +### AutoGrow Volume + +* 00: Assert the storage class allows volume expansion +* 01: Create and verify PostgresCluster and PVC +* 02: Add data to trigger growth and verify Job completes +* 03: Verify annotation on the instance Pod +* 04: Verify the PVC request has been set and the PVC has grown +* 05: Verify the expansion request Event has been created + Note: This Event should be created between steps 03 and 04 but is checked at the end for timing purposes. diff --git a/testing/kuttl/e2e-other/autogrow-volume/files/01-cluster-and-pvc-created.yaml b/testing/kuttl/e2e-other/autogrow-volume/files/01-cluster-and-pvc-created.yaml new file mode 100644 index 0000000000..17804b8205 --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/files/01-cluster-and-pvc-created.yaml @@ -0,0 +1,27 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: auto-grow-volume +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: auto-grow-volume + postgres-operator.crunchydata.com/instance-set: instance1 +spec: + resources: + requests: + storage: 1Gi +status: + accessModes: + - ReadWriteOnce + capacity: + storage: 1Gi + phase: Bound diff --git a/testing/kuttl/e2e-other/autogrow-volume/files/01-create-cluster.yaml b/testing/kuttl/e2e-other/autogrow-volume/files/01-create-cluster.yaml new file mode 100644 index 0000000000..01eaf7a684 --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/files/01-create-cluster.yaml @@ -0,0 +1,27 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: auto-grow-volume +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + limits: + storage: 2Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e-other/autogrow-volume/files/02-create-data-completed.yaml b/testing/kuttl/e2e-other/autogrow-volume/files/02-create-data-completed.yaml new file mode 100644 index 0000000000..fdb42e68f5 --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/files/02-create-data-completed.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: create-data +status: + succeeded: 1 diff --git a/testing/kuttl/e2e-other/autogrow-volume/files/02-create-data.yaml b/testing/kuttl/e2e-other/autogrow-volume/files/02-create-data.yaml new file mode 100644 index 0000000000..c42f0dec10 --- /dev/null +++ b/testing/kuttl/e2e-other/autogrow-volume/files/02-create-data.yaml @@ -0,0 +1,32 @@ +--- +# Create some data that should be present after resizing. 
+apiVersion: batch/v1 +kind: Job +metadata: + name: create-data + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: auto-grow-volume-pguser-auto-grow-volume, key: uri } } + + # Do not wait indefinitely, but leave enough time to create the data. + - { name: PGCONNECT_TIMEOUT, value: '60' } + + command: + - psql + - $(PGURI) + - --set=ON_ERROR_STOP=1 + - --command + - | # create schema for user and add enough data to get over 75% usage + CREATE SCHEMA "auto-grow-volume" AUTHORIZATION "auto-grow-volume"; + CREATE TABLE big_table AS SELECT 'data' || s AS mydata FROM generate_series(1,6000000) AS s; diff --git a/testing/kuttl/e2e-other/cluster-migrate/01--non-crunchy-cluster.yaml b/testing/kuttl/e2e-other/cluster-migrate/01--non-crunchy-cluster.yaml new file mode 100644 index 0000000000..1ccceb7098 --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/01--non-crunchy-cluster.yaml @@ -0,0 +1,193 @@ +apiVersion: v1 +kind: Secret +metadata: + name: non-crunchy-cluster + labels: + postgres-operator-test: kuttl + app.kubernetes.io/name: postgresql + app.kubernetes.io/instance: non-crunchy-cluster +type: Opaque +stringData: + postgres-password: "SR6kNAFXvX" +--- +apiVersion: v1 +kind: Service +metadata: + name: non-crunchy-cluster-hl + labels: + postgres-operator-test: kuttl + app.kubernetes.io/name: postgresql + app.kubernetes.io/instance: non-crunchy-cluster + app.kubernetes.io/component: primary + service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" +spec: + type: ClusterIP + clusterIP: None + publishNotReadyAddresses: true + ports: + - name: tcp-postgresql + port: 5432 + targetPort: tcp-postgresql + selector: + app.kubernetes.io/name: postgresql + app.kubernetes.io/instance: non-crunchy-cluster + app.kubernetes.io/component: primary +--- +apiVersion: v1 +kind: Service +metadata: + name: non-crunchy-cluster + labels: + postgres-operator-test: kuttl + app.kubernetes.io/name: postgresql + app.kubernetes.io/instance: non-crunchy-cluster + app.kubernetes.io/component: primary +spec: + type: ClusterIP + sessionAffinity: None + ports: + - name: tcp-postgresql + port: 5432 + targetPort: tcp-postgresql + nodePort: null + selector: + app.kubernetes.io/name: postgresql + app.kubernetes.io/instance: non-crunchy-cluster + app.kubernetes.io/component: primary +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: non-crunchy-cluster + labels: + postgres-operator-test: kuttl + app.kubernetes.io/name: postgresql + app.kubernetes.io/instance: non-crunchy-cluster + app.kubernetes.io/component: primary +spec: + replicas: 1 + serviceName: non-crunchy-cluster-hl + updateStrategy: + rollingUpdate: {} + type: RollingUpdate + selector: + matchLabels: + postgres-operator-test: kuttl + app.kubernetes.io/name: postgresql + app.kubernetes.io/instance: non-crunchy-cluster + app.kubernetes.io/component: primary + template: + metadata: + name: non-crunchy-cluster + labels: + postgres-operator-test: kuttl + app.kubernetes.io/name: postgresql + app.kubernetes.io/instance: non-crunchy-cluster + app.kubernetes.io/component: primary + spec: + serviceAccountName: default + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + postgres-operator-test: kuttl + app.kubernetes.io/name: postgresql + 
app.kubernetes.io/instance: non-crunchy-cluster + app.kubernetes.io/component: primary + namespaces: + - "default" + topologyKey: kubernetes.io/hostname + weight: 1 + securityContext: + fsGroup: 1001 + hostNetwork: false + hostIPC: false + containers: + - name: postgresql + image: docker.io/bitnami/postgresql:${KUTTL_BITNAMI_IMAGE_TAG} + imagePullPolicy: "IfNotPresent" + securityContext: + runAsUser: 1001 + env: + - name: BITNAMI_DEBUG + value: "false" + - name: POSTGRESQL_PORT_NUMBER + value: "5432" + - name: POSTGRESQL_VOLUME_DIR + value: "/bitnami/postgresql" + - name: PGDATA + value: "/bitnami/postgresql/data" + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: non-crunchy-cluster + key: postgres-password + - name: POSTGRESQL_ENABLE_LDAP + value: "no" + - name: POSTGRESQL_ENABLE_TLS + value: "no" + - name: POSTGRESQL_LOG_HOSTNAME + value: "false" + - name: POSTGRESQL_LOG_CONNECTIONS + value: "false" + - name: POSTGRESQL_LOG_DISCONNECTIONS + value: "false" + - name: POSTGRESQL_PGAUDIT_LOG_CATALOG + value: "off" + - name: POSTGRESQL_CLIENT_MIN_MESSAGES + value: "error" + - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES + value: "pgaudit" + ports: + - name: tcp-postgresql + containerPort: 5432 + livenessProbe: + failureThreshold: 6 + initialDelaySeconds: 30 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + exec: + command: + - /bin/sh + - -c + - exec pg_isready -U "postgres" -h localhost -p 5432 + readinessProbe: + failureThreshold: 6 + initialDelaySeconds: 5 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + exec: + command: + - /bin/sh + - -c + - -e + - | + exec pg_isready -U "postgres" -h localhost -p 5432 + [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ] + resources: + limits: {} + requests: + cpu: 250m + memory: 256Mi + volumeMounts: + - name: dshm + mountPath: /dev/shm + - name: data + mountPath: /bitnami/postgresql + volumes: + - name: dshm + emptyDir: + medium: Memory + volumeClaimTemplates: + - metadata: + name: data + spec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: "1Gi" diff --git a/testing/kuttl/e2e-other/cluster-migrate/01-assert.yaml b/testing/kuttl/e2e-other/cluster-migrate/01-assert.yaml new file mode 100644 index 0000000000..c45fe79261 --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/01-assert.yaml @@ -0,0 +1,8 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: non-crunchy-cluster +status: + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 diff --git a/testing/kuttl/e2e-other/cluster-migrate/02--create-data.yaml b/testing/kuttl/e2e-other/cluster-migrate/02--create-data.yaml new file mode 100644 index 0000000000..a9b7ebf152 --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/02--create-data.yaml @@ -0,0 +1,30 @@ +--- +# Create some data that will be preserved after migration. +apiVersion: batch/v1 +kind: Job +metadata: + name: original-data + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - { name: PGHOST, value: "non-crunchy-cluster" } + # Do not wait indefinitely. 
+ - { name: PGCONNECT_TIMEOUT, value: '5' } + - { name: PGPASSWORD, valueFrom: { secretKeyRef: { name: non-crunchy-cluster, key: postgres-password } } } + command: + - psql + - --username=postgres + - --dbname=postgres + - --set=ON_ERROR_STOP=1 + - --command + - | + CREATE TABLE IF NOT EXISTS important (data) AS VALUES ('treasure'); diff --git a/testing/kuttl/e2e-other/cluster-migrate/02-assert.yaml b/testing/kuttl/e2e-other/cluster-migrate/02-assert.yaml new file mode 100644 index 0000000000..5115ba97c9 --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/02-assert.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: original-data +status: + succeeded: 1 diff --git a/testing/kuttl/e2e-other/cluster-migrate/03--alter-pv.yaml b/testing/kuttl/e2e-other/cluster-migrate/03--alter-pv.yaml new file mode 100644 index 0000000000..64fa700297 --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/03--alter-pv.yaml @@ -0,0 +1,23 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + set -e + VOLUME_NAME=$( + kubectl get pvc --namespace "${NAMESPACE}" \ + --output=jsonpath={.items..spec.volumeName} + ) + + ORIGINAL_POLICY=$( + kubectl get pv "${VOLUME_NAME}" \ + --output=jsonpath={.spec.persistentVolumeReclaimPolicy} + ) + + kubectl create configmap persistent-volume-reclaim-policy --namespace "${NAMESPACE}" \ + --from-literal=ORIGINAL_POLICY="${ORIGINAL_POLICY}" \ + --from-literal=VOLUME_NAME="${VOLUME_NAME}" + + kubectl patch pv "${VOLUME_NAME}" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' + + kubectl label pv "${VOLUME_NAME}" postgres-operator-test=kuttl app.kubernetes.io/name=postgresql app.kubernetes.io/instance=non-crunchy-cluster test-namespace="${NAMESPACE}" diff --git a/testing/kuttl/e2e-other/cluster-migrate/04--delete.yaml b/testing/kuttl/e2e-other/cluster-migrate/04--delete.yaml new file mode 100644 index 0000000000..ed38b23d9f --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/04--delete.yaml @@ -0,0 +1,15 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: +- apiVersion: apps/v1 + kind: StatefulSet + name: non-crunchy-cluster +- apiVersion: v1 + kind: Service + name: non-crunchy-cluster +- apiVersion: v1 + kind: Service + name: non-crunchy-cluster-hl +- apiVersion: v1 + kind: Secret + name: non-crunchy-cluster diff --git a/testing/kuttl/e2e-other/cluster-migrate/04-errors.yaml b/testing/kuttl/e2e-other/cluster-migrate/04-errors.yaml new file mode 100644 index 0000000000..1767e8040f --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/04-errors.yaml @@ -0,0 +1,4 @@ +apiVersion: v1 +kind: Pod +metadata: + name: non-crunchy-cluster-0 diff --git a/testing/kuttl/e2e-other/cluster-migrate/05--cluster.yaml b/testing/kuttl/e2e-other/cluster-migrate/05--cluster.yaml new file mode 100644 index 0000000000..a81666ed01 --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/05--cluster.yaml @@ -0,0 +1,30 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: cluster-migrate +spec: + dataSource: + volumes: + pgDataVolume: + pvcName: data-non-crunchy-cluster-0 + directory: data + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git 
a/testing/kuttl/e2e-other/cluster-migrate/06-assert.yaml b/testing/kuttl/e2e-other/cluster-migrate/06-assert.yaml new file mode 100644 index 0000000000..1a25966abb --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/06-assert.yaml @@ -0,0 +1,21 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: cluster-migrate +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: cluster-migrate + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: master +status: + phase: Running diff --git a/testing/kuttl/e2e-other/cluster-migrate/07--set-collation.yaml b/testing/kuttl/e2e-other/cluster-migrate/07--set-collation.yaml new file mode 100644 index 0000000000..00eb741f80 --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/07--set-collation.yaml @@ -0,0 +1,23 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + set -e + if [[ ${KUTTL_PG_VERSION} -ge 15 ]]; then + PRIMARY= + while [[ -z "${PRIMARY}" ]]; do + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=cluster-migrate, + postgres-operator.crunchydata.com/role=master' + ) + done + + # Ignore warnings about collation changes. This is DANGEROUS on real data! + # Only do this automatic step in test conditions; with real data, this may cause + # more problems as you may need to reindex. + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" -c database \ + -- psql -qAt --command \ + 'ALTER DATABASE postgres REFRESH COLLATION VERSION; ALTER DATABASE template1 REFRESH COLLATION VERSION;' + fi diff --git a/testing/kuttl/e2e-other/cluster-migrate/08--alter-pv.yaml b/testing/kuttl/e2e-other/cluster-migrate/08--alter-pv.yaml new file mode 100644 index 0000000000..c5edfb4c99 --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/08--alter-pv.yaml @@ -0,0 +1,16 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + set -e + SAVED_DATA=$( + kubectl get configmap persistent-volume-reclaim-policy --namespace "${NAMESPACE}" \ + --output=jsonpath="{.data..['ORIGINAL_POLICY','VOLUME_NAME']}" + ) + + IFS=' ' + read ORIGINAL_POLICY VOLUME_NAME <<< "${SAVED_DATA}" + + kubectl patch pv "${VOLUME_NAME}" -p '{"spec":{"persistentVolumeReclaimPolicy":"'${ORIGINAL_POLICY}'"}}' + diff --git a/testing/kuttl/e2e-other/cluster-migrate/09--check-data.yaml b/testing/kuttl/e2e-other/cluster-migrate/09--check-data.yaml new file mode 100644 index 0000000000..6a46bd8e9a --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/09--check-data.yaml @@ -0,0 +1,23 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + set -e + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=cluster-migrate, + postgres-operator.crunchydata.com/role=master' + ) + + TREASURE=$( + kubectl exec "${PRIMARY}" --namespace "${NAMESPACE}" \ + --container database \ + -- psql -U postgres -qt -c "select data from important" + ) + + if [[ "${TREASURE}" != " treasure" ]]; then + echo "Migration from 3rd-party PG pod failed, result from query: ${TREASURE}" + exit 1 + fi diff --git a/testing/kuttl/e2e-other/cluster-migrate/README.md b/testing/kuttl/e2e-other/cluster-migrate/README.md new 
file mode 100644 index 0000000000..09026f9e8b --- /dev/null +++ b/testing/kuttl/e2e-other/cluster-migrate/README.md @@ -0,0 +1,45 @@ +## Cluster Migrate + +This test was developed to check that users could bypass some known problems when +migrating from a non-Crunchy PostgreSQL image to a Crunchy PostgreSQL image: + +1) it changes the ownership of the data directory (which depends on fsGroup +behavior to change group ownership which is not available in all providers); +2) it makes sure a postgresql.conf file is available, as required by Patroni. + +Important note on *environment*: +As noted above, this work relies on fsGroup, so this test will not work in the current +form in all environments. For instance, this creates a PG cluster with fsGroup set, +which will result in an error in OpenShift. + +Important note on *PV permissions*: +This test involves changing permissions on PersistentVolumes, which may not be available +in all environments to all users (since this is a cluster-wide permission). + +Important note on migrating between different builds of *Postgres 15*: +PG 15 introduced new behavior around database collation versions, which result in errors like: + +``` +WARNING: database \"postgres\" has a collation version mismatch +DETAIL: The database was created using collation version 2.31, but the operating system provides version 2.28 +``` + +This error occurred in `reconcilePostgresDatabases` and prevented PGO from finishing the reconcile +loop. For _testing purposes_, this problem is worked around in steps 06 and 07, which wait for +the PG pod to be ready and then send a command to `REFRESH COLLATION VERSION` on the `postgres` +and `template1` databases (which were the only databases where this error was observed during +testing). + +This solution is fine for testing purposes, but is not a solution that should be done in production +as an automatic step. User intervention and supervision is recommended in that case. 
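+
+For reference, the mismatch can also be inspected by hand before deciding whether to refresh or
+reindex. The following is only a sketch, not part of the test: it assumes a PostgreSQL 15 or newer
+cluster, the `${NAMESPACE}` variable used by the test steps, and the same primary-pod selector the
+test scripts use.
+
+```console
+# Compare each database's recorded collation version with the version the OS currently provides;
+# rows where the two values differ are the databases that will log the mismatch warning.
+PRIMARY=$(kubectl get pod --namespace "${NAMESPACE}" --output name \
+  --selector 'postgres-operator.crunchydata.com/cluster=cluster-migrate,postgres-operator.crunchydata.com/role=master')
+kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" -c database -- psql -qAt --command \
+  'SELECT datname, datcollversion, pg_database_collation_actual_version(oid) FROM pg_database;'
+```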
+ +### Steps + +* 01: Create a non-Crunchy PostgreSQL cluster and wait for it to be ready +* 02: Create data on that cluster +* 03: Alter the Reclaim policy of the PV so that it will survive deletion of the cluster +* 04: Delete the original cluster, leaving the PV +* 05: Create a PGO-managed `postgrescluster` with the remaining PV as the datasource +* 06-07: Wait for the PG pod to be ready and alter the collation (PG 15 only, see above) +* 08: Alter the PV to the original Reclaim policy +* 09: Check that the data successfully migrated diff --git a/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/10--cluster.yaml b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/10--cluster.yaml new file mode 100644 index 0000000000..a3236da358 --- /dev/null +++ b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/10--cluster.yaml @@ -0,0 +1,29 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-switchover-with-timestamp +spec: + postgresVersion: ${KUTTL_PG_VERSION} + patroni: + switchover: + enabled: true + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/10-assert.yaml b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/10-assert.yaml new file mode 100644 index 0000000000..d77e27e307 --- /dev/null +++ b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/10-assert.yaml @@ -0,0 +1,36 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-switchover-with-timestamp +status: + instances: + - name: instance1 + readyReplicas: 2 + replicas: 2 + updatedReplicas: 2 +--- +# Patroni labels and readiness happen separately. +# The next step expects to find pods by their role label; wait for them here. +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp + postgres-operator.crunchydata.com/role: master +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp + postgres-operator.crunchydata.com/role: replica +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create +status: + succeeded: 1 diff --git a/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/11-annotate.yaml b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/11-annotate.yaml new file mode 100644 index 0000000000..844d5f1336 --- /dev/null +++ b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/11-annotate.yaml @@ -0,0 +1,19 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + # Label instance pods with their current role. These labels will stick around + # because switchover does not recreate any pods. 
+ - script: | + kubectl label --namespace="${NAMESPACE}" pods \ + --selector='postgres-operator.crunchydata.com/role=master' \ + 'testing/role-before=master' + - script: | + kubectl label --namespace="${NAMESPACE}" pods \ + --selector='postgres-operator.crunchydata.com/role=replica' \ + 'testing/role-before=replica' + + # Annotate the cluster to trigger a switchover. + - script: | + kubectl annotate --namespace="${NAMESPACE}" postgrescluster/delete-switchover-with-timestamp \ + "postgres-operator.crunchydata.com/trigger-switchover=$(date)" diff --git a/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/12-assert.yaml b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/12-assert.yaml new file mode 100644 index 0000000000..76f0f8dff6 --- /dev/null +++ b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/12-assert.yaml @@ -0,0 +1,32 @@ +--- +# Wait for switchover to finish. A former replica should now be the primary. +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/role: master + testing/role-before: replica +--- +# The former primary should now be a replica. +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/role: replica + testing/role-before: master +--- +# All instances should be healthy. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-switchover-with-timestamp +status: + instances: + - name: instance1 + replicas: 2 + readyReplicas: 2 + updatedReplicas: 2 diff --git a/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/13-delete-cluster-and-check.yaml b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/13-delete-cluster-and-check.yaml new file mode 100644 index 0000000000..45352cca2e --- /dev/null +++ b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/13-delete-cluster-and-check.yaml @@ -0,0 +1,47 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + # Get the names of the current primary and replica -- error if either is blank + # Delete the cluster + # Get the delete event for the pods + # Verify that the replica delete event is greater than the primary delete event + - script: | + PRIMARY=$( + kubectl get pods --namespace="${NAMESPACE}" \ + --selector='postgres-operator.crunchydata.com/role=master' \ + --output=jsonpath={.items..metadata.name} + ) + + REPLICA=$( + kubectl get pods --namespace="${NAMESPACE}" \ + --selector='postgres-operator.crunchydata.com/role=replica' \ + --output=jsonpath={.items..metadata.name} + ) + + echo "DELETE: Found primary ${PRIMARY} and replica ${REPLICA} pods" + + if [ -z "$PRIMARY" ]; then exit 1; fi + if [ -z "$REPLICA" ]; then exit 1; fi + + kubectl delete postgrescluster -n "${NAMESPACE}" delete-switchover-with-timestamp + + kubectl wait "pod/${REPLICA}" --namespace "${NAMESPACE}" --for=delete --timeout=180s + + KILLING_REPLICA_TIMESTAMP=$( + kubectl get events --namespace="${NAMESPACE}" \ + --field-selector reason="Killing",involvedObject.fieldPath="spec.containers{database}",involvedObject.name="${REPLICA}" \ + --output=jsonpath={.items..firstTimestamp} + ) + + kubectl wait "pod/${PRIMARY}" --namespace "${NAMESPACE}" --for=delete --timeout=180s + + KILLING_PRIMARY_TIMESTAMP=$( 
+ kubectl get events --namespace="${NAMESPACE}" \ + --field-selector reason="Killing",involvedObject.fieldPath="spec.containers{database}",involvedObject.name="${PRIMARY}" \ + --output=jsonpath={.items..firstTimestamp} + ) + + echo "DELETE: Found primary ${KILLING_PRIMARY_TIMESTAMP} and replica ${KILLING_REPLICA_TIMESTAMP} timestamps" + + if [[ "${KILLING_PRIMARY_TIMESTAMP}" < "${KILLING_REPLICA_TIMESTAMP}" ]]; then exit 1; fi diff --git a/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/14-errors.yaml b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/14-errors.yaml new file mode 100644 index 0000000000..2a1015824b --- /dev/null +++ b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/14-errors.yaml @@ -0,0 +1,42 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-switchover-with-timestamp +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp +--- +# Patroni DCS objects are not owned by the PostgresCluster. +apiVersion: v1 +kind: Endpoints +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp +--- +apiVersion: v1 +kind: Service +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp +--- +apiVersion: v1 +kind: Secret +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp +--- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-switchover-with-timestamp diff --git a/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/README.md b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/README.md new file mode 100644 index 0000000000..bf914aa6cf --- /dev/null +++ b/testing/kuttl/e2e-other/delete-with-replica-and-check-timestamps/README.md @@ -0,0 +1,7 @@ +This test originally existed as the second test-case in the `delete` KUTTL test. +The test as written was prone to occasional flakes, sometimes due to missing events +(which were being used to check the timestamp of the container delete event). + +After discussion, we decided that this behavior (replica deleting before the primary) +was no longer required in v5, and the decision was made to sequester this test-case for +further testing and refinement. 
\ No newline at end of file diff --git a/testing/kuttl/e2e-other/exporter-append-custom-queries/00--create-cluster.yaml b/testing/kuttl/e2e-other/exporter-append-custom-queries/00--create-cluster.yaml new file mode 100644 index 0000000000..bc515e3534 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-append-custom-queries/00--create-cluster.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/exporter-append-queries-configmap.yaml +- files/exporter-append-queries-cluster.yaml +assert: +- files/exporter-append-queries-cluster-checks.yaml diff --git a/testing/kuttl/e2e-other/exporter-append-custom-queries/00-assert.yaml b/testing/kuttl/e2e-other/exporter-append-custom-queries/00-assert.yaml new file mode 100644 index 0000000000..2655841597 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-append-custom-queries/00-assert.yaml @@ -0,0 +1,50 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# First, check that all containers in the instance pod are ready +# Then, list the query files mounted to the exporter and check for expected files +# Finally, check the contents of the queries to ensure queries.yml was generated correctly +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + check_containers_ready() { bash -ceu 'echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@"; } + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=exporter-append-queries \ + -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + condition_json=$(kubectl get "${pod}" -n "${NAMESPACE}" -o jsonpath="{.status.conditions}") + [ "$condition_json" = "" ] && retry "conditions not found" && exit 1 + { check_containers_ready "$condition_json"; } || { + retry "containers not ready" + exit 1 + } + + queries_files=$( + kubectl exec --namespace "${NAMESPACE}" "${pod}" -c exporter \ + -- ls /conf + ) + + { + contains "${queries_files}" "queries.yml" && + contains "${queries_files}" "defaultQueries.yml" + } || { + echo >&2 'The /conf directory should contain queries.yml and defaultQueries.yml. Instead it has:' + echo "${queries_files}" + exit 1 + } + + master_queries_contents=$( + kubectl exec --namespace "${NAMESPACE}" "${pod}" -c exporter \ + -- cat /tmp/queries.yml + ) + + { + contains "${master_queries_contents}" "# This is a test." && + contains "${master_queries_contents}" "ccp_postgresql_version" + } || { + echo >&2 'The master queries.yml file should contain the contents of both defaultQueries.yml and the custom queries.yml file. Instead it contains:' + echo "${master_queries_contents}" + exit 1 + } diff --git a/testing/kuttl/e2e-other/exporter-append-custom-queries/README.md b/testing/kuttl/e2e-other/exporter-append-custom-queries/README.md new file mode 100644 index 0000000000..a24aa444c7 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-append-custom-queries/README.md @@ -0,0 +1,5 @@ +Exporter - AppendCustomQueries Enabled + +Note: This series of tests depends on PGO being deployed with the AppendCustomQueries feature gate ON. There is a separate set of tests in e2e that tests exporter functionality without the AppendCustomQueries feature. + +When running this test, make sure that the PGO_FEATURE_GATES environment variable is set to "AppendCustomQueries=true" on the PGO Deployment. 
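For reference only, a minimal sketch of how that environment variable might appear on the PGO Deployment. The container name `operator` and the surrounding structure are illustrative assumptions and are not part of this changeset:

```yaml
# Illustrative excerpt only -- container name and placement are assumptions.
spec:
  template:
    spec:
      containers:
        - name: operator
          env:
            - name: PGO_FEATURE_GATES
              value: "AppendCustomQueries=true"
```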
diff --git a/testing/kuttl/e2e-other/exporter-append-custom-queries/files/exporter-append-queries-cluster-checks.yaml b/testing/kuttl/e2e-other/exporter-append-custom-queries/files/exporter-append-queries-cluster-checks.yaml new file mode 100644 index 0000000000..459356ddfc --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-append-custom-queries/files/exporter-append-queries-cluster-checks.yaml @@ -0,0 +1,29 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-append-queries +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: exporter-append-queries + postgres-operator.crunchydata.com/crunchy-postgres-exporter: "true" +status: + phase: Running +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: exporter-append-queries-exporter-queries-config +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: custom-queries-test diff --git a/testing/kuttl/e2e-other/exporter-append-custom-queries/files/exporter-append-queries-cluster.yaml b/testing/kuttl/e2e-other/exporter-append-custom-queries/files/exporter-append-queries-cluster.yaml new file mode 100644 index 0000000000..c4f75771aa --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-append-custom-queries/files/exporter-append-queries-cluster.yaml @@ -0,0 +1,21 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-append-queries +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + monitoring: + pgmonitor: + exporter: + configuration: + - configMap: + name: custom-queries-test diff --git a/testing/kuttl/e2e-other/exporter-append-custom-queries/files/exporter-append-queries-configmap.yaml b/testing/kuttl/e2e-other/exporter-append-custom-queries/files/exporter-append-queries-configmap.yaml new file mode 100644 index 0000000000..9964d6bc1e --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-append-custom-queries/files/exporter-append-queries-configmap.yaml @@ -0,0 +1,6 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: custom-queries-test +data: + queries.yml: "# This is a test." 
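The `custom-queries-test` ConfigMap above deliberately uses a bare comment as its `queries.yml` so the assert step can confirm the custom content is appended to the generated file. For context, a real custom query would follow the postgres_exporter queries format; the sketch below is a hypothetical example (the metric name and query are assumptions and are not used by this test):

```yaml
# Hypothetical postgres_exporter custom query -- not part of this changeset.
ccp_custom_backends:
  query: "SELECT count(*) AS total FROM pg_stat_activity"
  metrics:
    - total:
        usage: "GAUGE"
        description: "Number of connected backends"
```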
diff --git a/testing/kuttl/e2e-other/exporter-replica/00--create-cluster.yaml b/testing/kuttl/e2e-other/exporter-replica/00--create-cluster.yaml new file mode 100644 index 0000000000..2abec0814e --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-replica/00--create-cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/exporter-replica-cluster.yaml +assert: +- files/exporter-replica-cluster-checks.yaml diff --git a/testing/kuttl/e2e-other/exporter-replica/00-assert.yaml b/testing/kuttl/e2e-other/exporter-replica/00-assert.yaml new file mode 100644 index 0000000000..280be2d395 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-replica/00-assert.yaml @@ -0,0 +1,45 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# First, check that all containers in the instance(s) pod are ready +# Then, grab the exporter metrics output and check that there were no scrape errors +# Finally, ensure the monitoring user exists and is configured +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + check_containers_ready() { bash -ceu 'echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@"; } + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + + replica=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=exporter-replica \ + -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true \ + -l postgres-operator.crunchydata.com/role=replica) + [ "$replica" = "" ] && retry "Replica Pod not found" && exit 1 + + replica_condition_json=$(kubectl get "${replica}" -n "${NAMESPACE}" -o jsonpath="{.status.conditions}") + [ "$replica_condition_json" = "" ] && retry "Replica conditions not found" && exit 1 + { + check_containers_ready "$replica_condition_json" + } || { + retry "containers not ready" + exit 1 + } + + scrape_metrics=$(kubectl exec ${replica} -c exporter -n ${NAMESPACE} -- \ + curl --silent http://localhost:9187/metrics | grep "pg_exporter_last_scrape_error") + { + contains "${scrape_metrics}" 'pg_exporter_last_scrape_error 0'; + } || { + retry "${scrape_metrics}" + exit 1 + } + + kubectl exec --stdin "${replica}" --namespace "${NAMESPACE}" -c database \ + -- psql -qb --set ON_ERROR_STOP=1 --file=- <<'SQL' + DO $$ + DECLARE + result record; + BEGIN + SELECT * INTO result FROM pg_catalog.pg_roles WHERE rolname = 'ccp_monitoring'; + ASSERT FOUND, 'user not found'; + END $$ + SQL diff --git a/testing/kuttl/e2e-other/exporter-replica/files/exporter-replica-cluster-checks.yaml b/testing/kuttl/e2e-other/exporter-replica/files/exporter-replica-cluster-checks.yaml new file mode 100644 index 0000000000..7c775b47b1 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-replica/files/exporter-replica-cluster-checks.yaml @@ -0,0 +1,24 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-replica +status: + instances: + - name: instance1 + readyReplicas: 2 + replicas: 2 + updatedReplicas: 2 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: exporter-replica + postgres-operator.crunchydata.com/crunchy-postgres-exporter: "true" +status: + phase: Running +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: exporter-replica-exporter-queries-config diff --git a/testing/kuttl/e2e-other/exporter-replica/files/exporter-replica-cluster.yaml b/testing/kuttl/e2e-other/exporter-replica/files/exporter-replica-cluster.yaml new file mode 100644 index 
0000000000..504d33bc3a --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-replica/files/exporter-replica-cluster.yaml @@ -0,0 +1,19 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-replica +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + monitoring: + pgmonitor: + exporter: {} diff --git a/testing/kuttl/e2e-other/exporter-standby/00--create-certs.yaml b/testing/kuttl/e2e-other/exporter-standby/00--create-certs.yaml new file mode 100644 index 0000000000..9c9cd140ac --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/00--create-certs.yaml @@ -0,0 +1,4 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/cluster-certs.yaml diff --git a/testing/kuttl/e2e-other/exporter-standby/01--create-primary.yaml b/testing/kuttl/e2e-other/exporter-standby/01--create-primary.yaml new file mode 100644 index 0000000000..6b5b721d4e --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/01--create-primary.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/primary-cluster.yaml +assert: +- files/primary-cluster-checks.yaml diff --git a/testing/kuttl/e2e-other/exporter-standby/01-assert.yaml b/testing/kuttl/e2e-other/exporter-standby/01-assert.yaml new file mode 100644 index 0000000000..cd2d16c783 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/01-assert.yaml @@ -0,0 +1,22 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# Store the exporter pid as an annotation on the pod +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + check_containers_ready() { bash -ceu 'echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@"; } + + pod=$(kubectl get pods -o name -n $NAMESPACE \ + -l postgres-operator.crunchydata.com/cluster=primary-cluster \ + -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + condition_json=$(kubectl get ${pod} -n ${NAMESPACE} -o jsonpath="{.status.conditions}") + [ "$condition_json" = "" ] && retry "conditions not found" && exit 1 + { check_containers_ready "$condition_json"; } || { + retry "containers not ready" + exit 1 + } + + pid=$(kubectl exec ${pod} -n ${NAMESPACE} -c exporter -- cat /tmp/postgres_exporter.pid) + kubectl annotate --overwrite -n ${NAMESPACE} ${pod} oldpid=${pid} diff --git a/testing/kuttl/e2e-other/exporter-standby/02--set-primary-password.yaml b/testing/kuttl/e2e-other/exporter-standby/02--set-primary-password.yaml new file mode 100644 index 0000000000..4e613a277f --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/02--set-primary-password.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/update-primary-password.yaml +assert: +- files/update-primary-password-checks.yaml diff --git a/testing/kuttl/e2e-other/exporter-standby/03--create-standby.yaml b/testing/kuttl/e2e-other/exporter-standby/03--create-standby.yaml new file mode 100644 index 0000000000..fa2e653353 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/03--create-standby.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/standby-cluster.yaml +assert: 
+- files/standby-cluster-checks.yaml diff --git a/testing/kuttl/e2e-other/exporter-standby/03-assert.yaml b/testing/kuttl/e2e-other/exporter-standby/03-assert.yaml new file mode 100644 index 0000000000..327e5562fa --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/03-assert.yaml @@ -0,0 +1,16 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# Grab the exporter pod +# Check that the postgres_exporter pid is running +# Store the exporter pid as an annotation on the pod +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + check_containers_ready() { bash -ceu 'echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@"; } + + pod=$(kubectl get pods -o name -n $NAMESPACE \ + -l postgres-operator.crunchydata.com/cluster=standby-cluster,postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + pid=$(kubectl exec ${pod} -n ${NAMESPACE} -c exporter -- cat /tmp/postgres_exporter.pid) + kubectl annotate --overwrite -n ${NAMESPACE} ${pod} oldpid=${pid} diff --git a/testing/kuttl/e2e-other/exporter-standby/04--set-standby-password.yaml b/testing/kuttl/e2e-other/exporter-standby/04--set-standby-password.yaml new file mode 100644 index 0000000000..18c98e423e --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/04--set-standby-password.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/update-standby-password.yaml +assert: +- files/update-standby-password-checks.yaml diff --git a/testing/kuttl/e2e-other/exporter-standby/04-assert.yaml b/testing/kuttl/e2e-other/exporter-standby/04-assert.yaml new file mode 100644 index 0000000000..7e77784a65 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/04-assert.yaml @@ -0,0 +1,38 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# Grab the exporter pod +# Check that the postgres_exporter pid is running +# Store the exporter pid as an annotation on the pod +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + check_containers_ready() { bash -ceu ' echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@";} + + pod=$(kubectl get pods -o name -n $NAMESPACE \ + -l postgres-operator.crunchydata.com/cluster=standby-cluster,postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + oldPid=$(kubectl get ${pod} -n ${NAMESPACE} -o jsonpath="{.metadata.annotations.oldpid}") + newPid=$(kubectl exec ${pod} -n ${NAMESPACE} -c exporter -- cat /tmp/postgres_exporter.pid) + [ "${oldPid}" -eq "${newPid}" ] && retry "pid should have changed" && exit 1 + + password=$(kubectl exec -n ${NAMESPACE} ${pod} -c exporter -- bash -c 'cat /opt/crunchy/password') + { contains "${password}" "password"; } || { + retry "unexpected password: ${password}" + exit 1 + } + + condition_json=$(kubectl get ${pod} -n ${NAMESPACE} -o jsonpath="{.status.conditions}") + [ "$condition_json" = "" ] && retry "conditions not found" && exit 1 + { check_containers_ready "$condition_json"; } || { + retry "containers not ready" + exit 1 + } + + scrape_metrics=$(kubectl exec ${pod} -c exporter -n ${NAMESPACE} -- \ + curl --silent http://localhost:9187/metrics | grep "pg_exporter_last_scrape_error") + { contains "${scrape_metrics}" 'pg_exporter_last_scrape_error 0'; } || { + retry "${scrape_metrics}" + exit 1 + } diff 
--git a/testing/kuttl/e2e-other/exporter-standby/README.md b/testing/kuttl/e2e-other/exporter-standby/README.md new file mode 100644 index 0000000000..34df4e5b7a --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/README.md @@ -0,0 +1,9 @@ +# Exporter connection on standby cluster + +The exporter standby test will deploy two clusters, one primary and one standby. +Both clusters have monitoring enabled and are created in the same namespace to +allow for easy connections over the network. + +The `ccp_monitoring` password for both clusters are updated to match allowing +the exporter on the standby cluster to query postgres using the proper `ccp_monitoring` +password. diff --git a/testing/kuttl/e2e-other/exporter-standby/files/cluster-certs.yaml b/testing/kuttl/e2e-other/exporter-standby/files/cluster-certs.yaml new file mode 100644 index 0000000000..1f8dd06ccf --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/files/cluster-certs.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +data: + ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJnakNDQVNlZ0F3SUJBZ0lRZUpacWMxMmR3TDh6cDNRVjZVMzg0ekFLQmdncWhrak9QUVFEQXpBZk1SMHcKR3dZRFZRUURFeFJ3YjNOMFozSmxjeTF2Y0dWeVlYUnZjaTFqWVRBZUZ3MHlNekEwTVRFeE56UTFNemhhRncwegpNekEwTURneE9EUTFNemhhTUI4eEhUQWJCZ05WQkFNVEZIQnZjM1JuY21WekxXOXdaWEpoZEc5eUxXTmhNRmt3CkV3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFWEZwMU1nOFQ0aWxFRFlleVh4Nm5hRU0weEtNUStNZU0KWnM3dUtockdmTnY1cVd3N0puNzJEMEZNWE9raVNTN1BsZUhtN1lwYk1lelZ4UytjLzV6a2NLTkZNRU13RGdZRApWUjBQQVFIL0JBUURBZ0VHTUJJR0ExVWRFd0VCL3dRSU1BWUJBZjhDQVFBd0hRWURWUjBPQkJZRUZGU2JSZzdXCnpIZFdIODN2aEtTcld3dGV4K2FtTUFvR0NDcUdTTTQ5QkFNREEwa0FNRVlDSVFDK3pXTHh4bmpna1ZYYzBFOVAKbWlmZm9jeTIrM3AxREZMUkJRcHlZNFE0RVFJaEFPSDhQVEtvWnRZUWlobVlqTkd3Q1J3aTgvVFRaYWIxSnVIMAo2YnpodHZobgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNQakNDQWVXZ0F3SUJBZ0lSQU93NURHaGVVZnVNY25KYVdKNkllall3Q2dZSUtvWkl6ajBFQXdNd0h6RWQKTUJzR0ExVUVBeE1VY0c5emRHZHlaWE10YjNCbGNtRjBiM0l0WTJFd0hoY05Nak13TkRFeE1UYzBOVE01V2hjTgpNek13TkRBNE1UZzBOVE01V2pBOU1Uc3dPUVlEVlFRREV6SndjbWx0WVhKNUxXTnNkWE4wWlhJdGNISnBiV0Z5CmVTNWtaV1poZFd4MExuTjJZeTVqYkhWemRHVnlMbXh2WTJGc0xqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDkKQXdFSEEwSUFCT3RlNytQWFlDci9RQVJkcHlwYTFHcEpkbW5wOFN3ZG9FOTIzUXoraWt4UllTalgwUHBXcytqUQpVNXlKZ0NDdGxyZmxFZVZ4S2YzaVpiVHdadFlIaHVxamdlTXdnZUF3RGdZRFZSMFBBUUgvQkFRREFnV2dNQXdHCkExVWRFd0VCL3dRQ01BQXdId1lEVlIwakJCZ3dGb0FVVkp0R0R0Yk1kMVlmemUrRXBLdGJDMTdINXFZd2daNEcKQTFVZEVRU0JsakNCazRJeWNISnBiV0Z5ZVMxamJIVnpkR1Z5TFhCeWFXMWhjbmt1WkdWbVlYVnNkQzV6ZG1NdQpZMngxYzNSbGNpNXNiMk5oYkM2Q0kzQnlhVzFoY25rdFkyeDFjM1JsY2kxd2NtbHRZWEo1TG1SbFptRjFiSFF1CmMzWmpnaDl3Y21sdFlYSjVMV05zZFhOMFpYSXRjSEpwYldGeWVTNWtaV1poZFd4MGdoZHdjbWx0WVhKNUxXTnMKZFhOMFpYSXRjSEpwYldGeWVUQUtCZ2dxaGtqT1BRUURBd05IQURCRUFpQjA3Q3YzRHJTNXUxRFdaek1MQjdvbAppcjFFWEpQTnFaOXZWQUF5ZTdDMGJRSWdWQVlDM2F0ekl4a0syNHlQUU1TSjU1OGFaN3JEdkZGZXdOaVpmdSt0CjdETT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= + tls.key: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUoxYkNXMTByR3o2VWQ1K2R3WmZWcGNUNFlqck9XVG1iVW9XNXRxYTA2b1ZvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFNjE3djQ5ZGdLdjlBQkYybktsclVha2wyYWVueExCMmdUM2JkRFA2S1RGRmhLTmZRK2xhego2TkJUbkltQUlLMld0K1VSNVhFcC9lSmx0UEJtMWdlRzZnPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo= +kind: Secret +metadata: + name: cluster-cert +type: Opaque +--- +apiVersion: v1 +data: + ca.crt: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJnakNDQVNlZ0F3SUJBZ0lRZUpacWMxMmR3TDh6cDNRVjZVMzg0ekFLQmdncWhrak9QUVFEQXpBZk1SMHcKR3dZRFZRUURFeFJ3YjNOMFozSmxjeTF2Y0dWeVlYUnZjaTFqWVRBZUZ3MHlNekEwTVRFeE56UTFNemhhRncwegpNekEwTURneE9EUTFNemhhTUI4eEhUQWJCZ05WQkFNVEZIQnZjM1JuY21WekxXOXdaWEpoZEc5eUxXTmhNRmt3CkV3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFWEZwMU1nOFQ0aWxFRFlleVh4Nm5hRU0weEtNUStNZU0KWnM3dUtockdmTnY1cVd3N0puNzJEMEZNWE9raVNTN1BsZUhtN1lwYk1lelZ4UytjLzV6a2NLTkZNRU13RGdZRApWUjBQQVFIL0JBUURBZ0VHTUJJR0ExVWRFd0VCL3dRSU1BWUJBZjhDQVFBd0hRWURWUjBPQkJZRUZGU2JSZzdXCnpIZFdIODN2aEtTcld3dGV4K2FtTUFvR0NDcUdTTTQ5QkFNREEwa0FNRVlDSVFDK3pXTHh4bmpna1ZYYzBFOVAKbWlmZm9jeTIrM3AxREZMUkJRcHlZNFE0RVFJaEFPSDhQVEtvWnRZUWlobVlqTkd3Q1J3aTgvVFRaYWIxSnVIMAo2YnpodHZobgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== + tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJqekNDQVRTZ0F3SUJBZ0lRRzA0MEprWjYwZkZtanpaVG1SekhyakFLQmdncWhrak9QUVFEQXpBZk1SMHcKR3dZRFZRUURFeFJ3YjNOMFozSmxjeTF2Y0dWeVlYUnZjaTFqWVRBZUZ3MHlNekEwTVRFeE56UTFNemhhRncwegpNekEwTURneE9EUTFNemhhTUJjeEZUQVRCZ05WQkFNTURGOWpjblZ1WTJoNWNtVndiREJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQk5HVHcvSmVtaGxGK28xUlRBb0VXSndzdjJ6WjIyc1p4N2NjT2VmL1NXdjYKeXphYkpaUmkvREFyK0kwUHNyTlhmand3a0xMa3hERGZsTklvcFZMNVYwT2pXakJZTUE0R0ExVWREd0VCL3dRRQpBd0lGb0RBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkZTYlJnN1d6SGRXSDgzdmhLU3JXd3RlCngrYW1NQmNHQTFVZEVRUVFNQTZDREY5amNuVnVZMmg1Y21Wd2JEQUtCZ2dxaGtqT1BRUURBd05KQURCR0FpRUEKcWVsYmUvdTQzRFRPWFdlell1b3Nva0dUbHg1U2ljUFRkNk05Q3pwU2VoWUNJUUNOOS91Znc0SUZzdDZOM1RtYQo4MmZpSElKSUpQY0RjM2ZKUnFna01RQmF0QT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K + tls.key: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSVBxeTVzNVJxWThKUmdycjJreE9zaG9hc25yTWhUUkJPYjZ0alI3T2ZqTFlvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFMFpQRDhsNmFHVVg2alZGTUNnUlluQ3kvYk5uYmF4bkh0eHc1NS85SmEvckxOcHNsbEdMOApNQ3Y0alEreXMxZCtQRENRc3VURU1OK1UwaWlsVXZsWFF3PT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo= +kind: Secret +metadata: + name: replication-cert +type: Opaque diff --git a/testing/kuttl/e2e-other/exporter-standby/files/primary-cluster-checks.yaml b/testing/kuttl/e2e-other/exporter-standby/files/primary-cluster-checks.yaml new file mode 100644 index 0000000000..c2a59244a5 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/files/primary-cluster-checks.yaml @@ -0,0 +1,20 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: primary-cluster +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: primary-cluster + postgres-operator.crunchydata.com/crunchy-postgres-exporter: "true" +status: + phase: Running diff --git a/testing/kuttl/e2e-other/exporter-standby/files/primary-cluster.yaml b/testing/kuttl/e2e-other/exporter-standby/files/primary-cluster.yaml new file mode 100644 index 0000000000..8f51632f5b --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/files/primary-cluster.yaml @@ -0,0 +1,22 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: primary-cluster +spec: + postgresVersion: ${KUTTL_PG_VERSION} + customTLSSecret: + name: cluster-cert + customReplicationTLSSecret: + name: replication-cert + instances: + - name: instance1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: 
{ storage: 1Gi } } } + monitoring: + pgmonitor: + exporter: {} diff --git a/testing/kuttl/e2e-other/exporter-standby/files/standby-cluster-checks.yaml b/testing/kuttl/e2e-other/exporter-standby/files/standby-cluster-checks.yaml new file mode 100644 index 0000000000..237dec721e --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/files/standby-cluster-checks.yaml @@ -0,0 +1,21 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: standby-cluster +status: + instances: + - name: instance1 + replicas: 1 + updatedReplicas: 1 + # The cluster should not become fully ready in this step, the ccp_monitoring password + # on the standby does not match the primary +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: standby-cluster + postgres-operator.crunchydata.com/crunchy-postgres-exporter: "true" +status: + phase: Running diff --git a/testing/kuttl/e2e-other/exporter-standby/files/standby-cluster.yaml b/testing/kuttl/e2e-other/exporter-standby/files/standby-cluster.yaml new file mode 100644 index 0000000000..33e9ec2c2c --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/files/standby-cluster.yaml @@ -0,0 +1,25 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: standby-cluster +spec: + postgresVersion: ${KUTTL_PG_VERSION} + standby: + enabled: true + host: primary-cluster-primary + customTLSSecret: + name: cluster-cert + customReplicationTLSSecret: + name: replication-cert + instances: + - name: instance1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + monitoring: + pgmonitor: + exporter: {} diff --git a/testing/kuttl/e2e-other/exporter-standby/files/update-primary-password-checks.yaml b/testing/kuttl/e2e-other/exporter-standby/files/update-primary-password-checks.yaml new file mode 100644 index 0000000000..1ef72b49c9 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/files/update-primary-password-checks.yaml @@ -0,0 +1,18 @@ +apiVersion: v1 +kind: Secret +metadata: + name: primary-cluster-monitoring + labels: + postgres-operator.crunchydata.com/cluster: primary-cluster + postgres-operator.crunchydata.com/role: monitoring + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: primary-cluster +data: + # ensure the password is encoded to 'password' + password: cGFzc3dvcmQ= +--- +# TODO: Check that password is set as a file diff --git a/testing/kuttl/e2e-other/exporter-standby/files/update-primary-password.yaml b/testing/kuttl/e2e-other/exporter-standby/files/update-primary-password.yaml new file mode 100644 index 0000000000..a66450b103 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/files/update-primary-password.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Secret +metadata: + name: primary-cluster-monitoring + labels: + postgres-operator.crunchydata.com/cluster: primary-cluster + postgres-operator.crunchydata.com/role: monitoring +stringData: + password: password +data: +# Ensure data field is deleted so that password/verifier will be regenerated diff --git a/testing/kuttl/e2e-other/exporter-standby/files/update-standby-password-checks.yaml 
b/testing/kuttl/e2e-other/exporter-standby/files/update-standby-password-checks.yaml new file mode 100644 index 0000000000..34d5357318 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/files/update-standby-password-checks.yaml @@ -0,0 +1,18 @@ +apiVersion: v1 +kind: Secret +metadata: + name: standby-cluster-monitoring + labels: + postgres-operator.crunchydata.com/cluster: standby-cluster + postgres-operator.crunchydata.com/role: monitoring + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: standby-cluster +data: + # ensure the password is encoded to 'password' + password: cGFzc3dvcmQ= +--- +# TODO: Check that password is set as a file diff --git a/testing/kuttl/e2e-other/exporter-standby/files/update-standby-password.yaml b/testing/kuttl/e2e-other/exporter-standby/files/update-standby-password.yaml new file mode 100644 index 0000000000..57371fce93 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-standby/files/update-standby-password.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Secret +metadata: + name: standby-cluster-monitoring + labels: + postgres-operator.crunchydata.com/cluster: standby-cluster + postgres-operator.crunchydata.com/role: monitoring +stringData: + password: password +data: +# Ensure data field is deleted so that password/verifier will be regenerated diff --git a/testing/kuttl/e2e-other/exporter-upgrade/00--cluster.yaml b/testing/kuttl/e2e-other/exporter-upgrade/00--cluster.yaml new file mode 100644 index 0000000000..0e53eab2de --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-upgrade/00--cluster.yaml @@ -0,0 +1,30 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter +spec: + postgresVersion: 14 + image: us.gcr.io/container-suite/crunchy-postgres:ubi8-14.0-5.0.3-0 + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + monitoring: + pgmonitor: + exporter: + image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:ubi8-5.3.1-0 diff --git a/testing/kuttl/e2e-other/exporter-upgrade/00-assert.yaml b/testing/kuttl/e2e-other/exporter-upgrade/00-assert.yaml new file mode 100644 index 0000000000..c569c97454 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-upgrade/00-assert.yaml @@ -0,0 +1,10 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 diff --git a/testing/kuttl/e2e-other/exporter-upgrade/01--check-exporter.yaml b/testing/kuttl/e2e-other/exporter-upgrade/01--check-exporter.yaml new file mode 100644 index 0000000000..0e72f2a0bf --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-upgrade/01--check-exporter.yaml @@ -0,0 +1,31 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + set -e + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=exporter, + postgres-operator.crunchydata.com/role=master' + ) + + # Ensure that the metrics endpoint is available from inside the exporter container + for i in {1..5}; do + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" -c exporter -- curl 
http://localhost:9187/metrics + sleep 2 + done + + # Ensure that the monitoring user exists and is configured. + kubectl exec --stdin --namespace "${NAMESPACE}" "${PRIMARY}" \ + -- psql -qb --set ON_ERROR_STOP=1 --file=- <<'SQL' + DO $$ + DECLARE + result record; + BEGIN + SELECT * INTO result FROM pg_catalog.pg_roles WHERE rolname = 'ccp_monitoring'; + ASSERT FOUND, 'user not found'; + ASSERT result.rolconfig @> '{jit=off}', format('got config: %L', result.rolconfig); + END $$ + SQL diff --git a/testing/kuttl/e2e-other/exporter-upgrade/02--update-cluster.yaml b/testing/kuttl/e2e-other/exporter-upgrade/02--update-cluster.yaml new file mode 100644 index 0000000000..cde17d80b4 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-upgrade/02--update-cluster.yaml @@ -0,0 +1,7 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter +spec: + postgresVersion: 14 + image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.5-1 diff --git a/testing/kuttl/e2e-other/exporter-upgrade/02-assert.yaml b/testing/kuttl/e2e-other/exporter-upgrade/02-assert.yaml new file mode 100644 index 0000000000..9ad238b944 --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-upgrade/02-assert.yaml @@ -0,0 +1,24 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: exporter + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create +status: + succeeded: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: exporter-primary diff --git a/testing/kuttl/e2e-other/exporter-upgrade/03--check-exporter.yaml b/testing/kuttl/e2e-other/exporter-upgrade/03--check-exporter.yaml new file mode 100644 index 0000000000..8161e463fc --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-upgrade/03--check-exporter.yaml @@ -0,0 +1,21 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=exporter, + postgres-operator.crunchydata.com/role=master' + ) + + # Get errors from the exporter + # See the README.md for a discussion of these errors + ERR=$(kubectl logs --namespace "${NAMESPACE}" "${PRIMARY}" -c exporter | grep -e "Error running query on database") + ERR_COUNT=$(echo "$ERR" | wc -l) + + if [[ "$ERR_COUNT" -gt 2 ]]; then + echo "Errors in log from exporter: ${ERR}" + exit 1 + fi diff --git a/testing/kuttl/e2e-other/exporter-upgrade/README.md b/testing/kuttl/e2e-other/exporter-upgrade/README.md new file mode 100644 index 0000000000..fefe28a95c --- /dev/null +++ b/testing/kuttl/e2e-other/exporter-upgrade/README.md @@ -0,0 +1,31 @@ +The exporter-upgrade test makes sure that PGO updates an extension used for monitoring. This +avoids an error where a user might update to a new PG image with a newer extension, but with an +older extension operative. + +Note: This test relies on two `crunchy-postgres` images with known, different `pgnodemx` extensions: +the image created in 00--cluster.yaml has `pgnodemx` 1.1; the image we update the cluster to in +02--update-cluster.yaml has `pgnodemx` 1.3. + +00-01 +This starts up a cluster with a purposely outdated `pgnodemx` extension. 
Because we want a specific +extension, the image used here is hard-coded (and so outdated it's not publicly available). + +(This image is so outdated that it doesn't finish creating a backup with the current PGO, which is +why the 00-assert.yaml only checks that the pod is ready; and why 01--check-exporter.yaml wraps the +call in a retry loop.) + +02-03 +The cluster is updated with a newer (and hardcoded) image with a newer version of `pgnodemx`. Due +to the change made in https://github.com/CrunchyData/postgres-operator/pull/3400, this should no +longer produce multiple errors. + +Note: a few errors may be logged after the `exporter` container attempts to run the `pgnodemx` +functions but before the extension is updated. So this checks that there are no more than 2 errors, +since that was the observed maximum number of printed errors during manual tests of the check. + +For instance, using these hardcoded images (with `pgnodemx` versions 1.1 and 1.3), those errors were: + +``` +Error running query on database \"localhost:5432\": ccp_nodemx_disk_activity pq: query-specified return tuple and function return type are not compatible" +Error running query on database \"localhost:5432\": ccp_nodemx_data_disk pq: query-specified return tuple and function return type are not compatible +``` diff --git a/testing/kuttl/e2e-other/gssapi/00-assert.yaml b/testing/kuttl/e2e-other/gssapi/00-assert.yaml new file mode 100644 index 0000000000..ea828be0c4 --- /dev/null +++ b/testing/kuttl/e2e-other/gssapi/00-assert.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: krb5 +--- +apiVersion: v1 +kind: Secret +metadata: + name: krb5-keytab diff --git a/testing/kuttl/e2e-other/gssapi/00-krb5-keytab.yaml b/testing/kuttl/e2e-other/gssapi/00-krb5-keytab.yaml new file mode 100644 index 0000000000..6311193d55 --- /dev/null +++ b/testing/kuttl/e2e-other/gssapi/00-krb5-keytab.yaml @@ -0,0 +1,4 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +- command: kubectl exec -n krb5 -it krb5-kdc-0 -- /krb5-scripts/krb5.sh "${NAMESPACE}" diff --git a/testing/kuttl/e2e-other/gssapi/01-assert.yaml b/testing/kuttl/e2e-other/gssapi/01-assert.yaml new file mode 100644 index 0000000000..dbda953ead --- /dev/null +++ b/testing/kuttl/e2e-other/gssapi/01-assert.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: gssapi +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: gssapi-primary diff --git a/testing/kuttl/e2e-other/gssapi/01-cluster.yaml b/testing/kuttl/e2e-other/gssapi/01-cluster.yaml new file mode 100644 index 0000000000..8acfe46c4d --- /dev/null +++ b/testing/kuttl/e2e-other/gssapi/01-cluster.yaml @@ -0,0 +1,41 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: gssapi +spec: + config: + files: + - secret: + name: krb5-keytab + - configMap: + name: krb5 + patroni: + dynamicConfiguration: + postgresql: + pg_hba: + - host postgres postgres 0.0.0.0/0 scram-sha-256 + - host all krb5hippo@PGO.CRUNCHYDATA.COM 0.0.0.0/0 gss + parameters: + krb_server_keyfile: /etc/postgres/krb5.keytab + users: + - name: postgres + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + 
resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e-other/gssapi/02-assert.yaml b/testing/kuttl/e2e-other/gssapi/02-assert.yaml new file mode 100644 index 0000000000..36f85d95d4 --- /dev/null +++ b/testing/kuttl/e2e-other/gssapi/02-assert.yaml @@ -0,0 +1,6 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-gssapi +status: + succeeded: 1 diff --git a/testing/kuttl/e2e-other/gssapi/02-psql-connect.yaml b/testing/kuttl/e2e-other/gssapi/02-psql-connect.yaml new file mode 100644 index 0000000000..30f02b3b19 --- /dev/null +++ b/testing/kuttl/e2e-other/gssapi/02-psql-connect.yaml @@ -0,0 +1,47 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-gssapi +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - bash + - -c + - -- + - |- + psql -c 'create user "krb5hippo@PGO.CRUNCHYDATA.COM";' + kinit -k -t /krb5-conf/krb5.keytab krb5hippo@PGO.CRUNCHYDATA.COM + psql -U krb5hippo@PGO.CRUNCHYDATA.COM -h gssapi-primary.$(NAMESPACE).svc.cluster.local -d postgres \ + -c 'select version();' + env: + - name: NAMESPACE + valueFrom: { fieldRef: { fieldPath: metadata.namespace } } + - name: PGHOST + valueFrom: { secretKeyRef: { name: gssapi-pguser-postgres, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: gssapi-pguser-postgres, key: port } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: gssapi-pguser-postgres, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: gssapi-pguser-postgres, key: password } } + - name: PGDATABASE + value: postgres + - name: KRB5_CONFIG + value: /krb5-conf/krb5.conf + volumeMounts: + - name: krb5-conf + mountPath: /krb5-conf + volumes: + - name: krb5-conf + projected: + sources: + - configMap: + name: krb5 + - secret: + name: krb5-keytab diff --git a/testing/kuttl/e2e-other/gssapi/README.md b/testing/kuttl/e2e-other/gssapi/README.md new file mode 100644 index 0000000000..72d8d2b997 --- /dev/null +++ b/testing/kuttl/e2e-other/gssapi/README.md @@ -0,0 +1,14 @@ +# GSSAPI Authentication + +This test verifies that it is possible to properly configure PostgreSQL for GSSAPI +authentication. This is done by configuring a PostgresCluster for GSSAPI authentication, +and then utilizing a Kerberos ticket that has been issued by a Kerberos KDC server to log into +PostgreSQL. 
+ +## Assumptions + +- A Kerberos Key Distribution Center (KDC) Pod named `krb5-kdc-0` is deployed inside of a `krb5` +namespace within the Kubernetes cluster +- The KDC server (`krb5-kdc-0`) contains a `/krb5-conf/krb5.sh` script that can be run as part +of the test to create the Kerberos principals, keytab secret and client configuration needed to +successfully run the test diff --git a/testing/kuttl/e2e-other/postgis-cluster/00--cluster.yaml b/testing/kuttl/e2e-other/postgis-cluster/00--cluster.yaml new file mode 100644 index 0000000000..8dc88788bc --- /dev/null +++ b/testing/kuttl/e2e-other/postgis-cluster/00--cluster.yaml @@ -0,0 +1,26 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: postgis +spec: + postgresVersion: ${KUTTL_PG_VERSION} + postGISVersion: "${KUTTL_POSTGIS_VERSION}" + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e-other/postgis-cluster/00-assert.yaml b/testing/kuttl/e2e-other/postgis-cluster/00-assert.yaml new file mode 100644 index 0000000000..b0bda7753f --- /dev/null +++ b/testing/kuttl/e2e-other/postgis-cluster/00-assert.yaml @@ -0,0 +1,24 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: postgis +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: postgis + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create +status: + succeeded: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: postgis-primary diff --git a/testing/kuttl/e2e-other/postgis-cluster/01--psql-connect.yaml b/testing/kuttl/e2e-other/postgis-cluster/01--psql-connect.yaml new file mode 100644 index 0000000000..814958a9f6 --- /dev/null +++ b/testing/kuttl/e2e-other/postgis-cluster/01--psql-connect.yaml @@ -0,0 +1,132 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-postgis-connect +spec: + backoffLimit: 6 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGHOST + valueFrom: { secretKeyRef: { name: postgis-pguser-postgis, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: postgis-pguser-postgis, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: postgis-pguser-postgis, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: postgis-pguser-postgis, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: postgis-pguser-postgis, key: password } } + - { name: GIS_VERSION, value: "${KUTTL_POSTGIS_VERSION}" } + # Do not wait indefinitely. 
+ - { name: PGCONNECT_TIMEOUT, value: '5' } + command: + - bash + - -c + - | + # Ensure PostGIS version is set + GIS_VERSION=${KUTTL_POSTGIS_VERSION} + GIS_VERSION=${GIS_VERSION:-notset} + + # check version + RESULT=$(psql -c "DO \$\$ + DECLARE + result boolean; + BEGIN + SELECT postgis_version() LIKE '%${GIS_VERSION}%' INTO result; + ASSERT result = 't', 'PostGIS version incorrect'; + END \$\$;" 2>&1) + + if [[ "$RESULT" == *"ERROR"* ]]; then + echo "$RESULT" + exit 1 + fi + + # check full version + RESULT=$(psql -c "DO \$\$ + DECLARE + result boolean; + BEGIN + SELECT postgis_full_version() LIKE 'POSTGIS=\"%${GIS_VERSION}%' INTO result; + ASSERT result = 't', 'PostGIS full version incorrect'; + END \$\$;" 2>&1) + + if [[ "$RESULT" == *"ERROR"* ]]; then + echo "$RESULT" + exit 1 + fi + + # check expected schemas (tiger, tiger_data and topology) + # - https://www.postgresql.org/docs/current/catalog-pg-namespace.html + RESULT=$(psql -c "DO \$\$ + DECLARE + result text; + BEGIN + SELECT nspname FROM pg_catalog.pg_namespace WHERE nspname='tiger' INTO result; + ASSERT result = 'tiger', 'PostGIS tiger schema missing'; + END \$\$;" 2>&1) + + if [[ "$RESULT" == *"ERROR"* ]]; then + echo "$RESULT" + exit 1 + fi + + RESULT=$(psql -c "DO \$\$ + DECLARE + result text; + BEGIN + SELECT nspname FROM pg_catalog.pg_namespace WHERE nspname='tiger_data' INTO result; + ASSERT result = 'tiger_data', 'PostGIS tiger_data schema missing'; + END \$\$;" 2>&1) + + if [[ "$RESULT" == *"ERROR"* ]]; then + echo "$RESULT" + exit 1 + fi + + RESULT=$(psql -c "DO \$\$ + DECLARE + result text; + BEGIN + SELECT nspname FROM pg_catalog.pg_namespace WHERE nspname='topology' INTO result; + ASSERT result = 'topology', 'PostGIS topology schema missing'; + END \$\$;" 2>&1) + + if [[ "$RESULT" == *"ERROR"* ]]; then + echo "$RESULT" + exit 1 + fi + + # check point creation + RESULT=$(psql -c "DO \$\$ + DECLARE + result text; + BEGIN + SELECT pg_typeof(ST_MakePoint(28.385200,-81.563900)) INTO result; + ASSERT result = 'geometry', 'Unable to create PostGIS point'; + END \$\$;" 2>&1) + + if [[ "$RESULT" == *"ERROR"* ]]; then + echo "$RESULT" + exit 1 + fi + + # check GeoJSON function + RESULT=$(psql -c "DO \$\$ + DECLARE + result text; + BEGIN + SELECT ST_AsGeoJSON('SRID=4326;POINT(-118.4079 33.9434)'::geography) INTO result; + ASSERT result = '{\"type\":\"Point\",\"coordinates\":[-118.4079,33.9434]}', FORMAT('GeoJSON check failed, got %L', result); + END \$\$;" 2>&1) + + if [[ "$RESULT" == *"ERROR"* ]]; then + echo "$RESULT" + exit 1 + fi diff --git a/testing/kuttl/e2e-other/postgis-cluster/01-assert.yaml b/testing/kuttl/e2e-other/postgis-cluster/01-assert.yaml new file mode 100644 index 0000000000..22e9e6f9de --- /dev/null +++ b/testing/kuttl/e2e-other/postgis-cluster/01-assert.yaml @@ -0,0 +1,6 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-postgis-connect +status: + succeeded: 1 diff --git a/testing/kuttl/e2e-other/replica-service/00-base-cluster.yaml b/testing/kuttl/e2e-other/replica-service/00-base-cluster.yaml new file mode 100644 index 0000000000..725f40de14 --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/00-base-cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/base-cluster.yaml +assert: +- files/base-check.yaml diff --git a/testing/kuttl/e2e-other/replica-service/01-node-port.yaml b/testing/kuttl/e2e-other/replica-service/01-node-port.yaml new file mode 100644 index 0000000000..c80e947e40 --- /dev/null +++ 
b/testing/kuttl/e2e-other/replica-service/01-node-port.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/np-cluster.yaml +assert: +- files/np-check.yaml diff --git a/testing/kuttl/e2e-other/replica-service/02-loadbalancer.yaml b/testing/kuttl/e2e-other/replica-service/02-loadbalancer.yaml new file mode 100644 index 0000000000..f1433111db --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/02-loadbalancer.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/lb-cluster.yaml +assert: +- files/lb-check.yaml diff --git a/testing/kuttl/e2e-other/replica-service/03-cluster-ip.yaml b/testing/kuttl/e2e-other/replica-service/03-cluster-ip.yaml new file mode 100644 index 0000000000..de6055ea6b --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/03-cluster-ip.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/cip-cluster.yaml +assert: +- files/cip-check.yaml diff --git a/testing/kuttl/e2e-other/replica-service/files/base-check.yaml b/testing/kuttl/e2e-other/replica-service/files/base-check.yaml new file mode 100644 index 0000000000..a83fce0f57 --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/files/base-check.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: service +status: + instances: + - name: instance1 + readyReplicas: 2 + replicas: 2 + updatedReplicas: 2 +--- +apiVersion: v1 +kind: Service +metadata: + name: service-replicas diff --git a/testing/kuttl/e2e-other/replica-service/files/base-cluster.yaml b/testing/kuttl/e2e-other/replica-service/files/base-cluster.yaml new file mode 100644 index 0000000000..67c4481d2f --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/files/base-cluster.yaml @@ -0,0 +1,28 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: service +spec: + postgresVersion: ${KUTTL_PG_VERSION} + replicaService: + type: ClusterIP + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 0.5Gi + replicas: 2 + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 0.5Gi diff --git a/testing/kuttl/e2e-other/replica-service/files/cip-check.yaml b/testing/kuttl/e2e-other/replica-service/files/cip-check.yaml new file mode 100644 index 0000000000..5bf5422bb8 --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/files/cip-check.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Service +metadata: + name: service-replicas +spec: + type: ClusterIP + selector: + postgres-operator.crunchydata.com/cluster: service + postgres-operator.crunchydata.com/role: replica diff --git a/testing/kuttl/e2e-other/replica-service/files/cip-cluster.yaml b/testing/kuttl/e2e-other/replica-service/files/cip-cluster.yaml new file mode 100644 index 0000000000..8545aa8223 --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/files/cip-cluster.yaml @@ -0,0 +1,8 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: service +spec: + replicaService: + type: ClusterIP + nodePort: null diff --git a/testing/kuttl/e2e-other/replica-service/files/lb-check.yaml b/testing/kuttl/e2e-other/replica-service/files/lb-check.yaml new file mode 100644 index 0000000000..b8519491c7 --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/files/lb-check.yaml @@ -0,0 +1,9 @@ 
+apiVersion: v1 +kind: Service +metadata: + name: service-replicas +spec: + type: LoadBalancer + selector: + postgres-operator.crunchydata.com/cluster: service + postgres-operator.crunchydata.com/role: replica diff --git a/testing/kuttl/e2e-other/replica-service/files/lb-cluster.yaml b/testing/kuttl/e2e-other/replica-service/files/lb-cluster.yaml new file mode 100644 index 0000000000..5e18f71dcd --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/files/lb-cluster.yaml @@ -0,0 +1,8 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: service +spec: + replicaService: + type: LoadBalancer + nodePort: null diff --git a/testing/kuttl/e2e-other/replica-service/files/np-check.yaml b/testing/kuttl/e2e-other/replica-service/files/np-check.yaml new file mode 100644 index 0000000000..c7d791e36a --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/files/np-check.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: Service +metadata: + name: service-replicas +spec: + type: NodePort + ports: + - name: postgres + port: 5432 + protocol: TCP + targetPort: postgres + selector: + postgres-operator.crunchydata.com/cluster: service + postgres-operator.crunchydata.com/role: replica diff --git a/testing/kuttl/e2e-other/replica-service/files/np-cluster.yaml b/testing/kuttl/e2e-other/replica-service/files/np-cluster.yaml new file mode 100644 index 0000000000..0b20ae63ad --- /dev/null +++ b/testing/kuttl/e2e-other/replica-service/files/np-cluster.yaml @@ -0,0 +1,7 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: service +spec: + replicaService: + type: NodePort diff --git a/testing/kuttl/e2e-other/resize-volume/00-assert.yaml b/testing/kuttl/e2e-other/resize-volume/00-assert.yaml new file mode 100644 index 0000000000..b4372b75e7 --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/00-assert.yaml @@ -0,0 +1,7 @@ +# Ensure that the default StorageClass supports VolumeExpansion +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + annotations: + storageclass.kubernetes.io/is-default-class: "true" +allowVolumeExpansion: true diff --git a/testing/kuttl/e2e-other/resize-volume/01--cluster.yaml b/testing/kuttl/e2e-other/resize-volume/01--cluster.yaml new file mode 100644 index 0000000000..4737fb25f4 --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/01--cluster.yaml @@ -0,0 +1,25 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: resize-volume-up +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e-other/resize-volume/01-assert.yaml b/testing/kuttl/e2e-other/resize-volume/01-assert.yaml new file mode 100644 index 0000000000..ea72af469c --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/01-assert.yaml @@ -0,0 +1,59 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: resize-volume-up +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: resize-volume-up + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create +status: + succeeded: 1 +--- 
+apiVersion: v1 +kind: Service +metadata: + name: resize-volume-up-primary +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: resize-volume-up + postgres-operator.crunchydata.com/instance-set: instance1 +spec: + resources: + requests: + storage: 1Gi +status: + accessModes: + - ReadWriteOnce + capacity: + storage: 1Gi + phase: Bound +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: resize-volume-up + postgres-operator.crunchydata.com/data: pgbackrest + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 +spec: + resources: + requests: + storage: 1Gi +status: + accessModes: + - ReadWriteOnce + capacity: + storage: 1Gi + phase: Bound diff --git a/testing/kuttl/e2e-other/resize-volume/02--create-data.yaml b/testing/kuttl/e2e-other/resize-volume/02--create-data.yaml new file mode 100644 index 0000000000..c41a6f80c4 --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/02--create-data.yaml @@ -0,0 +1,31 @@ +--- +# Create some data that should be present after resizing. +apiVersion: batch/v1 +kind: Job +metadata: + name: create-data + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: resize-volume-up-pguser-resize-volume-up, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + command: + - psql + - $(PGURI) + - --set=ON_ERROR_STOP=1 + - --command + - | + CREATE TABLE important (data) AS VALUES ('treasure'); diff --git a/testing/kuttl/e2e-other/resize-volume/02-assert.yaml b/testing/kuttl/e2e-other/resize-volume/02-assert.yaml new file mode 100644 index 0000000000..fdb42e68f5 --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/02-assert.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: create-data +status: + succeeded: 1 diff --git a/testing/kuttl/e2e-other/resize-volume/03--resize.yaml b/testing/kuttl/e2e-other/resize-volume/03--resize.yaml new file mode 100644 index 0000000000..dd7c96901f --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/03--resize.yaml @@ -0,0 +1,25 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: resize-volume-up +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 2Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 2Gi diff --git a/testing/kuttl/e2e-other/resize-volume/03-assert.yaml b/testing/kuttl/e2e-other/resize-volume/03-assert.yaml new file mode 100644 index 0000000000..11aa230cd4 --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/03-assert.yaml @@ -0,0 +1,37 @@ +# We know that the PVC sizes have change so now we can check that they have been +# updated to have the expected size +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: resize-volume-up + postgres-operator.crunchydata.com/instance-set: instance1 +spec: + resources: + requests: + storage: 2Gi +status: + accessModes: + - ReadWriteOnce + capacity: + storage: 2Gi + phase: Bound +--- +apiVersion: v1 +kind: PersistentVolumeClaim 
+metadata: + labels: + postgres-operator.crunchydata.com/cluster: resize-volume-up + postgres-operator.crunchydata.com/data: pgbackrest + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 +spec: + resources: + requests: + storage: 2Gi +status: + accessModes: + - ReadWriteOnce + capacity: + storage: 2Gi + phase: Bound diff --git a/testing/kuttl/e2e-other/resize-volume/06--check-data.yaml b/testing/kuttl/e2e-other/resize-volume/06--check-data.yaml new file mode 100644 index 0000000000..682a46ef4d --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/06--check-data.yaml @@ -0,0 +1,40 @@ +--- +# Confirm that all the data still exists. +apiVersion: batch/v1 +kind: Job +metadata: + name: check-data + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: resize-volume-up-pguser-resize-volume-up, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + # Confirm that all the data still exists. + # Note: the `$$$$` is reduced to `$$` by Kubernetes. + # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - $(PGURI) + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + DECLARE + keep_data jsonb; + BEGIN + SELECT jsonb_agg(important) INTO keep_data FROM important; + ASSERT keep_data = '[{"data":"treasure"}]', format('got %L', keep_data); + END $$$$; diff --git a/testing/kuttl/e2e-other/resize-volume/06-assert.yaml b/testing/kuttl/e2e-other/resize-volume/06-assert.yaml new file mode 100644 index 0000000000..cf743b8701 --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/06-assert.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: check-data +status: + succeeded: 1 diff --git a/testing/kuttl/e2e-other/resize-volume/11--cluster.yaml b/testing/kuttl/e2e-other/resize-volume/11--cluster.yaml new file mode 100644 index 0000000000..8d2d602ca6 --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/11--cluster.yaml @@ -0,0 +1,25 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: resize-volume-down +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 2Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 2Gi diff --git a/testing/kuttl/e2e-other/resize-volume/11-assert.yaml b/testing/kuttl/e2e-other/resize-volume/11-assert.yaml new file mode 100644 index 0000000000..666b4a85c7 --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/11-assert.yaml @@ -0,0 +1,59 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: resize-volume-down +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: resize-volume-down + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create +status: + succeeded: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: resize-volume-down-primary +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: resize-volume-down + 
postgres-operator.crunchydata.com/instance-set: instance1 +spec: + resources: + requests: + storage: 2Gi +status: + accessModes: + - ReadWriteOnce + capacity: + storage: 2Gi + phase: Bound +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: resize-volume-down + postgres-operator.crunchydata.com/data: pgbackrest + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 +spec: + resources: + requests: + storage: 2Gi +status: + accessModes: + - ReadWriteOnce + capacity: + storage: 2Gi + phase: Bound diff --git a/testing/kuttl/e2e-other/resize-volume/13--resize.yaml b/testing/kuttl/e2e-other/resize-volume/13--resize.yaml new file mode 100644 index 0000000000..77af2f2aa3 --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/13--resize.yaml @@ -0,0 +1,25 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: resize-volume-down +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e-other/resize-volume/13-assert.yaml b/testing/kuttl/e2e-other/resize-volume/13-assert.yaml new file mode 100644 index 0000000000..4210214fd6 --- /dev/null +++ b/testing/kuttl/e2e-other/resize-volume/13-assert.yaml @@ -0,0 +1,43 @@ +apiVersion: v1 +kind: Event +type: Warning +involvedObject: + apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: resize-volume-down +reason: PersistentVolumeError +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: resize-volume-down + postgres-operator.crunchydata.com/instance-set: instance1 +spec: + resources: + requests: + storage: 2Gi +status: + accessModes: + - ReadWriteOnce + capacity: + storage: 2Gi + phase: Bound +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: resize-volume-down + postgres-operator.crunchydata.com/data: pgbackrest + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 +spec: + resources: + requests: + storage: 2Gi +status: + accessModes: + - ReadWriteOnce + capacity: + storage: 2Gi + phase: Bound diff --git a/testing/kuttl/e2e/cluster-pause/00--cluster.yaml b/testing/kuttl/e2e/cluster-pause/00--cluster.yaml new file mode 100644 index 0000000000..801a22d460 --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/00--cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/00-create-cluster.yaml +assert: +- files/00-cluster-created.yaml diff --git a/testing/kuttl/e2e/cluster-pause/00-assert.yaml b/testing/kuttl/e2e/cluster-pause/00-assert.yaml new file mode 100644 index 0000000000..a51dd3ab4a --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/00-assert.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n $NAMESPACE describe pods --selector postgres-operator.crunchydata.com/cluster=cluster-pause +- namespace: $NAMESPACE + selector: postgres-operator.crunchydata.com/cluster=cluster-pause diff --git a/testing/kuttl/e2e/cluster-pause/01--cluster-paused.yaml b/testing/kuttl/e2e/cluster-pause/01--cluster-paused.yaml new file mode 100644 index 0000000000..deab5e0228 --- /dev/null +++ 
b/testing/kuttl/e2e/cluster-pause/01--cluster-paused.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/01-pause-cluster.yaml +assert: +- files/01-cluster-paused.yaml diff --git a/testing/kuttl/e2e/cluster-pause/01-assert.yaml b/testing/kuttl/e2e/cluster-pause/01-assert.yaml new file mode 100644 index 0000000000..a51dd3ab4a --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/01-assert.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n $NAMESPACE describe pods --selector postgres-operator.crunchydata.com/cluster=cluster-pause +- namespace: $NAMESPACE + selector: postgres-operator.crunchydata.com/cluster=cluster-pause diff --git a/testing/kuttl/e2e/cluster-pause/02--cluster-resume.yaml b/testing/kuttl/e2e/cluster-pause/02--cluster-resume.yaml new file mode 100644 index 0000000000..bb1def96c5 --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/02--cluster-resume.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/02-resume-cluster.yaml +assert: +- files/02-cluster-resumed.yaml diff --git a/testing/kuttl/e2e/cluster-pause/02-assert.yaml b/testing/kuttl/e2e/cluster-pause/02-assert.yaml new file mode 100644 index 0000000000..a51dd3ab4a --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/02-assert.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n $NAMESPACE describe pods --selector postgres-operator.crunchydata.com/cluster=cluster-pause +- namespace: $NAMESPACE + selector: postgres-operator.crunchydata.com/cluster=cluster-pause diff --git a/testing/kuttl/e2e/cluster-pause/files/00-cluster-created.yaml b/testing/kuttl/e2e/cluster-pause/files/00-cluster-created.yaml new file mode 100644 index 0000000000..a5fe982b1a --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/files/00-cluster-created.yaml @@ -0,0 +1,10 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: cluster-pause +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 diff --git a/testing/kuttl/e2e/cluster-pause/files/00-create-cluster.yaml b/testing/kuttl/e2e/cluster-pause/files/00-create-cluster.yaml new file mode 100644 index 0000000000..9f687a1dfa --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/files/00-create-cluster.yaml @@ -0,0 +1,14 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: cluster-pause +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/cluster-pause/files/01-cluster-paused.yaml b/testing/kuttl/e2e/cluster-pause/files/01-cluster-paused.yaml new file mode 100644 index 0000000000..6776fc542b --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/files/01-cluster-paused.yaml @@ -0,0 +1,22 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: cluster-pause +status: + conditions: + - message: No spec changes will be applied and no other statuses will be updated. 
+ reason: Paused + status: "False" + type: Progressing + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: cluster-pause-ha +spec: + type: ClusterIP diff --git a/testing/kuttl/e2e/cluster-pause/files/01-pause-cluster.yaml b/testing/kuttl/e2e/cluster-pause/files/01-pause-cluster.yaml new file mode 100644 index 0000000000..6a21b00b22 --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/files/01-pause-cluster.yaml @@ -0,0 +1,17 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: cluster-pause +spec: + # We change the service, but this won't result in a change until we resume + service: + type: LoadBalancer + paused: true + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/cluster-pause/files/02-cluster-resumed.yaml b/testing/kuttl/e2e/cluster-pause/files/02-cluster-resumed.yaml new file mode 100644 index 0000000000..82062fb908 --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/files/02-cluster-resumed.yaml @@ -0,0 +1,17 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: cluster-pause +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: cluster-pause-ha +spec: + type: LoadBalancer diff --git a/testing/kuttl/e2e/cluster-pause/files/02-resume-cluster.yaml b/testing/kuttl/e2e/cluster-pause/files/02-resume-cluster.yaml new file mode 100644 index 0000000000..2f5665e146 --- /dev/null +++ b/testing/kuttl/e2e/cluster-pause/files/02-resume-cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: cluster-pause +spec: + paused: false diff --git a/testing/kuttl/e2e/cluster-start/00--cluster.yaml b/testing/kuttl/e2e/cluster-start/00--cluster.yaml new file mode 100644 index 0000000000..801a22d460 --- /dev/null +++ b/testing/kuttl/e2e/cluster-start/00--cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/00-create-cluster.yaml +assert: +- files/00-cluster-created.yaml diff --git a/testing/kuttl/e2e/cluster-start/00-assert.yaml b/testing/kuttl/e2e/cluster-start/00-assert.yaml new file mode 100644 index 0000000000..b513f5ffda --- /dev/null +++ b/testing/kuttl/e2e/cluster-start/00-assert.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n $NAMESPACE describe pods --selector postgres-operator.crunchydata.com/cluster=cluster-start +- namespace: $NAMESPACE + selector: postgres-operator.crunchydata.com/cluster=cluster-start diff --git a/testing/kuttl/e2e/cluster-start/01--connect.yaml b/testing/kuttl/e2e/cluster-start/01--connect.yaml new file mode 100644 index 0000000000..9586a772ad --- /dev/null +++ b/testing/kuttl/e2e/cluster-start/01--connect.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/01-connect-psql.yaml +assert: +- files/01-psql-connected.yaml diff --git a/testing/kuttl/e2e/cluster-start/01-assert.yaml b/testing/kuttl/e2e/cluster-start/01-assert.yaml new file mode 100644 index 0000000000..b513f5ffda --- /dev/null +++ b/testing/kuttl/e2e/cluster-start/01-assert.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n $NAMESPACE describe pods 
--selector postgres-operator.crunchydata.com/cluster=cluster-start +- namespace: $NAMESPACE + selector: postgres-operator.crunchydata.com/cluster=cluster-start diff --git a/testing/kuttl/e2e/cluster-start/files/00-cluster-created.yaml b/testing/kuttl/e2e/cluster-start/files/00-cluster-created.yaml new file mode 100644 index 0000000000..4eebece89e --- /dev/null +++ b/testing/kuttl/e2e/cluster-start/files/00-cluster-created.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: cluster-start +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: cluster-start-primary diff --git a/testing/kuttl/e2e/cluster-start/files/00-create-cluster.yaml b/testing/kuttl/e2e/cluster-start/files/00-create-cluster.yaml new file mode 100644 index 0000000000..713cd14eb3 --- /dev/null +++ b/testing/kuttl/e2e/cluster-start/files/00-create-cluster.yaml @@ -0,0 +1,14 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: cluster-start +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/cluster-start/files/01-connect-psql.yaml b/testing/kuttl/e2e/cluster-start/files/01-connect-psql.yaml new file mode 100644 index 0000000000..b4cef74941 --- /dev/null +++ b/testing/kuttl/e2e/cluster-start/files/01-connect-psql.yaml @@ -0,0 +1,29 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - -c + - "select version();" + env: + - name: PGHOST + valueFrom: { secretKeyRef: { name: cluster-start-pguser-cluster-start, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: cluster-start-pguser-cluster-start, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: cluster-start-pguser-cluster-start, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: cluster-start-pguser-cluster-start, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: cluster-start-pguser-cluster-start, key: password } } + # Do not wait indefinitely. 
+ - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/cluster-start/files/01-psql-connected.yaml b/testing/kuttl/e2e/cluster-start/files/01-psql-connected.yaml new file mode 100644 index 0000000000..e4d8bbb37a --- /dev/null +++ b/testing/kuttl/e2e/cluster-start/files/01-psql-connected.yaml @@ -0,0 +1,6 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/delete-namespace/00-assert.yaml b/testing/kuttl/e2e/delete-namespace/00-assert.yaml new file mode 100644 index 0000000000..78aea811c3 --- /dev/null +++ b/testing/kuttl/e2e/delete-namespace/00-assert.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n ${KUTTL_TEST_DELETE_NAMESPACE} describe pods --selector postgres-operator.crunchydata.com/cluster=delete-namespace +- namespace: ${KUTTL_TEST_DELETE_NAMESPACE} + selector: postgres-operator.crunchydata.com/cluster=delete-namespace diff --git a/testing/kuttl/e2e/delete-namespace/00-create-cluster.yaml b/testing/kuttl/e2e/delete-namespace/00-create-cluster.yaml new file mode 100644 index 0000000000..2245df00c8 --- /dev/null +++ b/testing/kuttl/e2e/delete-namespace/00-create-cluster.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/00-create-namespace.yaml +- files/00-create-cluster.yaml +assert: +- files/00-created.yaml diff --git a/testing/kuttl/e2e/delete-namespace/01-assert.yaml b/testing/kuttl/e2e/delete-namespace/01-assert.yaml new file mode 100644 index 0000000000..78aea811c3 --- /dev/null +++ b/testing/kuttl/e2e/delete-namespace/01-assert.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n ${KUTTL_TEST_DELETE_NAMESPACE} describe pods --selector postgres-operator.crunchydata.com/cluster=delete-namespace +- namespace: ${KUTTL_TEST_DELETE_NAMESPACE} + selector: postgres-operator.crunchydata.com/cluster=delete-namespace diff --git a/testing/kuttl/e2e/delete-namespace/01-delete-namespace.yaml b/testing/kuttl/e2e/delete-namespace/01-delete-namespace.yaml new file mode 100644 index 0000000000..8fed721e5e --- /dev/null +++ b/testing/kuttl/e2e/delete-namespace/01-delete-namespace.yaml @@ -0,0 +1,10 @@ +--- +# Remove the namespace. +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: + - apiVersion: v1 + kind: Namespace + name: ${KUTTL_TEST_DELETE_NAMESPACE} +error: +- files/01-errors.yaml diff --git a/testing/kuttl/e2e/delete-namespace/README.md b/testing/kuttl/e2e/delete-namespace/README.md new file mode 100644 index 0000000000..697e2ae915 --- /dev/null +++ b/testing/kuttl/e2e/delete-namespace/README.md @@ -0,0 +1,11 @@ +### Delete namespace test + +* Create a namespace +* Start a regular cluster in that namespace +* Delete the namespace +* Check that nothing remains. + +Note: KUTTL provides a `$NAMESPACE` var that can be used in scripts/commands, +but which cannot be used in object definition yamls (like `01--cluster.yaml`). +Therefore, we use a given, non-random namespace that is defined in the makefile +and generated with `generate-kuttl`. 
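As a concrete illustration of that substitution (a sketch only: the real wiring lives in the repository Makefile's `generate-kuttl` target, and the namespace value below is just an example), the placeholder can be rendered with `envsubst` before the generated tests run:

```bash
# Illustrative sketch: generate-kuttl performs the real substitution.
# The namespace value is an example; the variable name matches the tests above.
export KUTTL_TEST_DELETE_NAMESPACE=kuttl-test-delete-namespace

# Render one object definition with the fixed namespace baked in...
envsubst < testing/kuttl/e2e/delete-namespace/files/00-create-namespace.yaml \
  > /tmp/00-create-namespace.yaml

# ...and confirm the placeholder was replaced before running the generated tests.
grep "name: ${KUTTL_TEST_DELETE_NAMESPACE}" /tmp/00-create-namespace.yaml
```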
diff --git a/testing/kuttl/e2e/delete-namespace/files/00-create-cluster.yaml b/testing/kuttl/e2e/delete-namespace/files/00-create-cluster.yaml new file mode 100644 index 0000000000..fe6392d75a --- /dev/null +++ b/testing/kuttl/e2e/delete-namespace/files/00-create-cluster.yaml @@ -0,0 +1,18 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-namespace + namespace: ${KUTTL_TEST_DELETE_NAMESPACE} +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/delete-namespace/files/00-create-namespace.yaml b/testing/kuttl/e2e/delete-namespace/files/00-create-namespace.yaml new file mode 100644 index 0000000000..617c1e5399 --- /dev/null +++ b/testing/kuttl/e2e/delete-namespace/files/00-create-namespace.yaml @@ -0,0 +1,5 @@ +--- +apiVersion: v1 +kind: Namespace +metadata: + name: ${KUTTL_TEST_DELETE_NAMESPACE} diff --git a/testing/kuttl/e2e/delete-namespace/files/00-created.yaml b/testing/kuttl/e2e/delete-namespace/files/00-created.yaml new file mode 100644 index 0000000000..3d2c7ec936 --- /dev/null +++ b/testing/kuttl/e2e/delete-namespace/files/00-created.yaml @@ -0,0 +1,22 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-namespace + namespace: ${KUTTL_TEST_DELETE_NAMESPACE} +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + namespace: ${KUTTL_TEST_DELETE_NAMESPACE} + labels: + postgres-operator.crunchydata.com/cluster: delete-namespace + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/delete-namespace/files/01-errors.yaml b/testing/kuttl/e2e/delete-namespace/files/01-errors.yaml new file mode 100644 index 0000000000..ee6f31178c --- /dev/null +++ b/testing/kuttl/e2e/delete-namespace/files/01-errors.yaml @@ -0,0 +1,49 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + namespace: ${KUTTL_TEST_DELETE_NAMESPACE} + name: delete-namespace +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + namespace: ${KUTTL_TEST_DELETE_NAMESPACE} + labels: + postgres-operator.crunchydata.com/cluster: delete-namespace +--- +# Patroni DCS objects are not owned by the PostgresCluster. 
+apiVersion: v1 +kind: Endpoints +metadata: + namespace: ${KUTTL_TEST_DELETE_NAMESPACE} + labels: + postgres-operator.crunchydata.com/cluster: delete-namespace +--- +apiVersion: v1 +kind: Pod +metadata: + namespace: ${KUTTL_TEST_DELETE_NAMESPACE} + labels: + postgres-operator.crunchydata.com/cluster: delete-namespace +--- +apiVersion: v1 +kind: Service +metadata: + namespace: ${KUTTL_TEST_DELETE_NAMESPACE} + labels: + postgres-operator.crunchydata.com/cluster: delete-namespace +--- +apiVersion: v1 +kind: Secret +metadata: + namespace: ${KUTTL_TEST_DELETE_NAMESPACE} + labels: + postgres-operator.crunchydata.com/cluster: delete-namespace +--- +apiVersion: v1 +kind: ConfigMap +metadata: + namespace: ${KUTTL_TEST_DELETE_NAMESPACE} + labels: + postgres-operator.crunchydata.com/cluster: delete-namespace diff --git a/testing/kuttl/e2e/delete/00-assert.yaml b/testing/kuttl/e2e/delete/00-assert.yaml new file mode 100644 index 0000000000..e4d88b3031 --- /dev/null +++ b/testing/kuttl/e2e/delete/00-assert.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n $NAMESPACE describe pods --selector postgres-operator.crunchydata.com/cluster=delete +- namespace: $NAMESPACE + selector: postgres-operator.crunchydata.com/cluster=delete diff --git a/testing/kuttl/e2e/delete/00-create-cluster.yaml b/testing/kuttl/e2e/delete/00-create-cluster.yaml new file mode 100644 index 0000000000..801a22d460 --- /dev/null +++ b/testing/kuttl/e2e/delete/00-create-cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/00-create-cluster.yaml +assert: +- files/00-cluster-created.yaml diff --git a/testing/kuttl/e2e/delete/01-delete-cluster.yaml b/testing/kuttl/e2e/delete/01-delete-cluster.yaml new file mode 100644 index 0000000000..a1f26b39c4 --- /dev/null +++ b/testing/kuttl/e2e/delete/01-delete-cluster.yaml @@ -0,0 +1,8 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: delete +error: +- files/01-cluster-deleted.yaml diff --git a/testing/kuttl/e2e/delete/10-assert.yaml b/testing/kuttl/e2e/delete/10-assert.yaml new file mode 100644 index 0000000000..a2c226cc7a --- /dev/null +++ b/testing/kuttl/e2e/delete/10-assert.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n $NAMESPACE describe pods --selector postgres-operator.crunchydata.com/cluster=delete-with-replica +- namespace: $NAMESPACE + selector: postgres-operator.crunchydata.com/cluster=delete-with-replica diff --git a/testing/kuttl/e2e/delete/10-create-cluster-with-replicas.yaml b/testing/kuttl/e2e/delete/10-create-cluster-with-replicas.yaml new file mode 100644 index 0000000000..678a09c710 --- /dev/null +++ b/testing/kuttl/e2e/delete/10-create-cluster-with-replicas.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/10-create-cluster-with-replicas.yaml +assert: +- files/10-cluster-with-replicas-created.yaml diff --git a/testing/kuttl/e2e/delete/11-delete-cluster-with-replicas.yaml b/testing/kuttl/e2e/delete/11-delete-cluster-with-replicas.yaml new file mode 100644 index 0000000000..b2f04ea7ed --- /dev/null +++ b/testing/kuttl/e2e/delete/11-delete-cluster-with-replicas.yaml @@ -0,0 +1,10 @@ +--- +# Remove the cluster. 
+apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: delete-with-replica +error: +- files/11-cluster-with-replicas-deleted.yaml diff --git a/testing/kuttl/e2e/delete/20-assert.yaml b/testing/kuttl/e2e/delete/20-assert.yaml new file mode 100644 index 0000000000..d85d96101f --- /dev/null +++ b/testing/kuttl/e2e/delete/20-assert.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n $NAMESPACE describe pods --selector postgres-operator.crunchydata.com/cluster=delete-not-running +# This shouldn't be running, so skip logs; if there's an error, we'll be able to see it in the describe diff --git a/testing/kuttl/e2e/delete/20-create-broken-cluster.yaml b/testing/kuttl/e2e/delete/20-create-broken-cluster.yaml new file mode 100644 index 0000000000..9db684036e --- /dev/null +++ b/testing/kuttl/e2e/delete/20-create-broken-cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/20-create-broken-cluster.yaml +error: +- files/20-broken-cluster-not-created.yaml diff --git a/testing/kuttl/e2e/delete/21-delete-broken-cluster.yaml b/testing/kuttl/e2e/delete/21-delete-broken-cluster.yaml new file mode 100644 index 0000000000..3e159f17d4 --- /dev/null +++ b/testing/kuttl/e2e/delete/21-delete-broken-cluster.yaml @@ -0,0 +1,10 @@ +--- +# Remove the cluster. +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: delete-not-running +error: +- files/21-broken-cluster-deleted.yaml diff --git a/testing/kuttl/e2e/delete/README.md b/testing/kuttl/e2e/delete/README.md new file mode 100644 index 0000000000..7e99680162 --- /dev/null +++ b/testing/kuttl/e2e/delete/README.md @@ -0,0 +1,19 @@ +### Delete test + +#### Regular cluster delete (00-01) + +* Start a regular cluster +* Delete it +* Check that nothing remains. 
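The "nothing remains" check in `files/01-cluster-deleted.yaml` asserts that no labeled StatefulSets, Pods, Services, Endpoints, Secrets, or ConfigMaps survive the delete. A rough manual equivalent (a sketch, not part of the KUTTL suite) looks like this:

```bash
# List anything still carrying the cluster label in the test namespace.
# An empty result ("No resources found") means the operator cleaned up after itself.
kubectl get statefulsets,pods,services,endpoints,secrets,configmaps \
  --namespace "$NAMESPACE" \
  --selector postgres-operator.crunchydata.com/cluster=delete
```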
+ +#### Delete cluster with replica (10-11) + +* Start a regular cluster with 2 replicas +* Delete it +* Check that nothing remains + +#### Delete a cluster that never started (20-21) + +* Start a cluster with a bad image +* Delete it +* Check that nothing remains diff --git a/testing/kuttl/e2e/delete/files/00-cluster-created.yaml b/testing/kuttl/e2e/delete/files/00-cluster-created.yaml new file mode 100644 index 0000000000..6130475c07 --- /dev/null +++ b/testing/kuttl/e2e/delete/files/00-cluster-created.yaml @@ -0,0 +1,20 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/delete/files/00-create-cluster.yaml b/testing/kuttl/e2e/delete/files/00-create-cluster.yaml new file mode 100644 index 0000000000..0dbcb08204 --- /dev/null +++ b/testing/kuttl/e2e/delete/files/00-create-cluster.yaml @@ -0,0 +1,27 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/delete/files/01-cluster-deleted.yaml b/testing/kuttl/e2e/delete/files/01-cluster-deleted.yaml new file mode 100644 index 0000000000..091bc96b7b --- /dev/null +++ b/testing/kuttl/e2e/delete/files/01-cluster-deleted.yaml @@ -0,0 +1,42 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete +--- +# Patroni DCS objects are not owned by the PostgresCluster. +apiVersion: v1 +kind: Endpoints +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete +--- +apiVersion: v1 +kind: Service +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete +--- +apiVersion: v1 +kind: Secret +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete +--- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete diff --git a/testing/kuttl/e2e/delete/files/10-cluster-with-replicas-created.yaml b/testing/kuttl/e2e/delete/files/10-cluster-with-replicas-created.yaml new file mode 100644 index 0000000000..1940fc680a --- /dev/null +++ b/testing/kuttl/e2e/delete/files/10-cluster-with-replicas-created.yaml @@ -0,0 +1,36 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-with-replica +status: + instances: + - name: instance1 + readyReplicas: 2 + replicas: 2 + updatedReplicas: 2 +--- +# Patroni labels and readiness happen separately. +# The next step expects to find pods by their role label; wait for them here. 
+apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-with-replica + postgres-operator.crunchydata.com/role: master +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-with-replica + postgres-operator.crunchydata.com/role: replica +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-with-replica + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/delete/files/10-create-cluster-with-replicas.yaml b/testing/kuttl/e2e/delete/files/10-create-cluster-with-replicas.yaml new file mode 100644 index 0000000000..53c4fc434d --- /dev/null +++ b/testing/kuttl/e2e/delete/files/10-create-cluster-with-replicas.yaml @@ -0,0 +1,29 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-with-replica +spec: + postgresVersion: ${KUTTL_PG_VERSION} + patroni: + switchover: + enabled: true + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/delete/files/11-cluster-with-replicas-deleted.yaml b/testing/kuttl/e2e/delete/files/11-cluster-with-replicas-deleted.yaml new file mode 100644 index 0000000000..cc14b60d3d --- /dev/null +++ b/testing/kuttl/e2e/delete/files/11-cluster-with-replicas-deleted.yaml @@ -0,0 +1,42 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-with-replica +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-with-replica +--- +# Patroni DCS objects are not owned by the PostgresCluster. 
+apiVersion: v1 +kind: Endpoints +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-with-replica +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-with-replica +--- +apiVersion: v1 +kind: Service +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-with-replica +--- +apiVersion: v1 +kind: Secret +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-with-replica +--- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-with-replica diff --git a/testing/kuttl/e2e/delete/files/20-broken-cluster-not-created.yaml b/testing/kuttl/e2e/delete/files/20-broken-cluster-not-created.yaml new file mode 100644 index 0000000000..f910fa9811 --- /dev/null +++ b/testing/kuttl/e2e/delete/files/20-broken-cluster-not-created.yaml @@ -0,0 +1,10 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-not-running +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 diff --git a/testing/kuttl/e2e/delete/files/20-create-broken-cluster.yaml b/testing/kuttl/e2e/delete/files/20-create-broken-cluster.yaml new file mode 100644 index 0000000000..2b7d34f3f6 --- /dev/null +++ b/testing/kuttl/e2e/delete/files/20-create-broken-cluster.yaml @@ -0,0 +1,27 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-not-running +spec: + postgresVersion: ${KUTTL_PG_VERSION} + image: "example.com/does-not-exist" + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/delete/files/21-broken-cluster-deleted.yaml b/testing/kuttl/e2e/delete/files/21-broken-cluster-deleted.yaml new file mode 100644 index 0000000000..4527a3659d --- /dev/null +++ b/testing/kuttl/e2e/delete/files/21-broken-cluster-deleted.yaml @@ -0,0 +1,42 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: delete-not-running +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-not-running +--- +# Patroni DCS objects are not owned by the PostgresCluster. 
+apiVersion: v1 +kind: Endpoints +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-not-running +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-not-running +--- +apiVersion: v1 +kind: Service +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-not-running +--- +apiVersion: v1 +kind: Secret +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-not-running +--- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + postgres-operator.crunchydata.com/cluster: delete-not-running diff --git a/testing/kuttl/e2e/exporter-custom-queries/00--create-cluster.yaml b/testing/kuttl/e2e/exporter-custom-queries/00--create-cluster.yaml new file mode 100644 index 0000000000..975567b066 --- /dev/null +++ b/testing/kuttl/e2e/exporter-custom-queries/00--create-cluster.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/exporter-custom-queries-configmap.yaml +- files/exporter-custom-queries-cluster.yaml +assert: +- files/exporter-custom-queries-cluster-checks.yaml diff --git a/testing/kuttl/e2e/exporter-custom-queries/00-assert.yaml b/testing/kuttl/e2e/exporter-custom-queries/00-assert.yaml new file mode 100644 index 0000000000..bbf5c051fd --- /dev/null +++ b/testing/kuttl/e2e/exporter-custom-queries/00-assert.yaml @@ -0,0 +1,54 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# First, check that all containers in the instance pod are ready +# Then, list the query files mounted to the exporter and check for expected files +# Then, check the contents of the queries to ensure queries.yml was generated correctly +# Finally, store the current exporter pid as an annotation +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + check_containers_ready() { bash -ceu 'echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@"; } + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=exporter-custom-queries \ + -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + condition_json=$(kubectl get "${pod}" -n "${NAMESPACE}" -o jsonpath="{.status.conditions}") + [ "$condition_json" = "" ] && retry "conditions not found" && exit 1 + { check_containers_ready "$condition_json"; } || { + retry "containers not ready" + exit 1 + } + + queries_files=$( + kubectl exec --namespace "${NAMESPACE}" "${pod}" -c exporter \ + -- ls /conf + ) + + { + contains "${queries_files}" "queries.yml" && + !(contains "${queries_files}" "defaultQueries.yml") + } || { + echo >&2 'The /conf directory should contain the queries.yml file. Instead it has:' + echo "${queries_files}" + exit 1 + } + + master_queries_contents=$( + kubectl exec --namespace "${NAMESPACE}" "${pod}" -c exporter \ + -- cat /tmp/queries.yml + ) + + { + contains "${master_queries_contents}" "# This is a test." && + !(contains "${master_queries_contents}" "ccp_postgresql_version") + } || { + echo >&2 'The master queries.yml file should only contain the contents of the custom queries.yml file. 
Instead it contains:' + echo "${master_queries_contents}" + exit 1 + } + + pid=$(kubectl exec ${pod} -n ${NAMESPACE} -c exporter -- cat /tmp/postgres_exporter.pid) + kubectl annotate --overwrite -n ${NAMESPACE} ${pod} oldpid=${pid} diff --git a/testing/kuttl/e2e/exporter-custom-queries/01--change-custom-queries.yaml b/testing/kuttl/e2e/exporter-custom-queries/01--change-custom-queries.yaml new file mode 100644 index 0000000000..7a28d431d1 --- /dev/null +++ b/testing/kuttl/e2e/exporter-custom-queries/01--change-custom-queries.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/exporter-custom-queries-configmap-update.yaml +assert: +- files/exporter-custom-queries-configmap-update-checks.yaml diff --git a/testing/kuttl/e2e/exporter-custom-queries/01-assert.yaml b/testing/kuttl/e2e/exporter-custom-queries/01-assert.yaml new file mode 100644 index 0000000000..db5a4757cb --- /dev/null +++ b/testing/kuttl/e2e/exporter-custom-queries/01-assert.yaml @@ -0,0 +1,33 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# First, check that all containers in the instance pod are ready +# Then, check that the exporter pid has changed +# Finally, check the contents of the queries to ensure queries.yml was generated correctly +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + check_containers_ready() { bash -ceu 'echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@"; } + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=exporter-custom-queries \ + -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + oldPid=$(kubectl get ${pod} -n ${NAMESPACE} -o jsonpath="{.metadata.annotations.oldpid}") + newPid=$(kubectl exec ${pod} -n ${NAMESPACE} -c exporter -- cat /tmp/postgres_exporter.pid) + [ "${oldPid}" -eq "${newPid}" ] && retry "pid should have changed" && exit 1 + + master_queries_contents=$( + kubectl exec --namespace "${NAMESPACE}" "${pod}" -c exporter \ + -- cat /tmp/queries.yml + ) + + { + contains "${master_queries_contents}" "# This is a different test." && + !(contains "${master_queries_contents}" "ccp_postgresql_version") + } || { + echo >&2 'The master queries.yml file should only contain the contents of the custom queries.yml file. Instead it contains:' + echo "${master_queries_contents}" + exit 1 + } diff --git a/testing/kuttl/e2e/exporter-custom-queries/README.md b/testing/kuttl/e2e/exporter-custom-queries/README.md new file mode 100644 index 0000000000..801b6d02a8 --- /dev/null +++ b/testing/kuttl/e2e/exporter-custom-queries/README.md @@ -0,0 +1,3 @@ +# Exporter + +**Note**: This series of tests depends on PGO being deployed with the `AppendCustomQueries` feature gate OFF. There is a separate set of tests in `e2e-other` that tests the `AppendCustomQueries` functionality. 
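Before running this suite, it can help to confirm how the operator was deployed. The sketch below is hedged: the Deployment name `pgo`, its `postgres-operator` namespace, and the `PGO_FEATURE_GATES` environment variable are assumptions about a typical install, so adjust them to your environment.

```bash
# List the operator's environment and look for the feature gate setting.
# If AppendCustomQueries=true appears, run the e2e-other variant of this test instead.
kubectl set env deployment/pgo --namespace postgres-operator --list \
  | grep PGO_FEATURE_GATES
```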
diff --git a/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-cluster-checks.yaml b/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-cluster-checks.yaml new file mode 100644 index 0000000000..ed6fd22b7c --- /dev/null +++ b/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-cluster-checks.yaml @@ -0,0 +1,31 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-custom-queries +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: exporter-custom-queries + postgres-operator.crunchydata.com/crunchy-postgres-exporter: "true" +status: + phase: Running +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: exporter-custom-queries-exporter-queries-config +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: custom-queries-test +data: + queries.yml: "# This is a test." diff --git a/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-cluster.yaml b/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-cluster.yaml new file mode 100644 index 0000000000..5356b83be9 --- /dev/null +++ b/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-cluster.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-custom-queries +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + monitoring: + pgmonitor: + exporter: + configuration: + - configMap: + name: custom-queries-test diff --git a/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-configmap-update-checks.yaml b/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-configmap-update-checks.yaml new file mode 100644 index 0000000000..72af1103af --- /dev/null +++ b/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-configmap-update-checks.yaml @@ -0,0 +1,6 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: custom-queries-test +data: + queries.yml: "# This is a different test." diff --git a/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-configmap-update.yaml b/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-configmap-update.yaml new file mode 100644 index 0000000000..72af1103af --- /dev/null +++ b/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-configmap-update.yaml @@ -0,0 +1,6 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: custom-queries-test +data: + queries.yml: "# This is a different test." diff --git a/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-configmap.yaml b/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-configmap.yaml new file mode 100644 index 0000000000..9964d6bc1e --- /dev/null +++ b/testing/kuttl/e2e/exporter-custom-queries/files/exporter-custom-queries-configmap.yaml @@ -0,0 +1,6 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: custom-queries-test +data: + queries.yml: "# This is a test." 
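For a quick interactive check that mirrors what `00-assert.yaml` above automates (a sketch; the container name, `/conf` mount path, and generated `/tmp/queries.yml` path are taken from that assert script):

```bash
# Find the instance pod that runs the exporter sidecar.
POD=$(kubectl get pods -o name -n "$NAMESPACE" \
  -l postgres-operator.crunchydata.com/cluster=exporter-custom-queries \
  -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true)

# The mounted ConfigMap should provide queries.yml (and no defaultQueries.yml)...
kubectl exec -n "$NAMESPACE" "$POD" -c exporter -- ls /conf

# ...and the generated master file should contain only the custom queries.
kubectl exec -n "$NAMESPACE" "$POD" -c exporter -- cat /tmp/queries.yml
```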
diff --git a/testing/kuttl/e2e/exporter-no-tls/00--create-cluster.yaml b/testing/kuttl/e2e/exporter-no-tls/00--create-cluster.yaml new file mode 100644 index 0000000000..8209623cf8 --- /dev/null +++ b/testing/kuttl/e2e/exporter-no-tls/00--create-cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/exporter-no-tls-cluster.yaml +assert: +- files/exporter-no-tls-cluster-checks.yaml diff --git a/testing/kuttl/e2e/exporter-no-tls/00-assert.yaml b/testing/kuttl/e2e/exporter-no-tls/00-assert.yaml new file mode 100644 index 0000000000..c6bbea051b --- /dev/null +++ b/testing/kuttl/e2e/exporter-no-tls/00-assert.yaml @@ -0,0 +1,47 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# First, check that all containers in the instance pod are ready +# Then, check the exporter logs for the 'TLS is disabled' line +# Then, grab the exporter metrics output and check that there were no scrape errors +# Finally, ensure the monitoring user exists and is configured +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + check_containers_ready() { bash -ceu 'echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@"; } + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=exporter-no-tls \ + -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + condition_json=$(kubectl get "${pod}" -n "${NAMESPACE}" -o jsonpath="{.status.conditions}") + [ "$condition_json" = "" ] && retry "conditions not found" && exit 1 + { check_containers_ready "$condition_json"; } || { + retry "containers not ready" + exit 1 + } + + logs=$(kubectl logs "${pod}" --namespace "${NAMESPACE}" -c exporter) + { contains "${logs}" 'TLS is disabled'; } || { + echo 'tls is not disabled - it should be' + exit 1 + } + + scrape_metrics=$(kubectl exec "${pod}" -c exporter -n "${NAMESPACE}" -- \ + curl --silent http://localhost:9187/metrics | grep "pg_exporter_last_scrape_error") + { contains "${scrape_metrics}" 'pg_exporter_last_scrape_error 0'; } || { + retry "${scrape_metrics}" + exit 1 + } + + kubectl exec --stdin "${pod}" --namespace "${NAMESPACE}" -c database \ + -- psql -qb --set ON_ERROR_STOP=1 --file=- <<'SQL' + DO $$ + DECLARE + result record; + BEGIN + SELECT * INTO result FROM pg_catalog.pg_roles WHERE rolname = 'ccp_monitoring'; + ASSERT FOUND, 'user not found'; + END $$ + SQL diff --git a/testing/kuttl/e2e/exporter-no-tls/files/exporter-no-tls-cluster-checks.yaml b/testing/kuttl/e2e/exporter-no-tls/files/exporter-no-tls-cluster-checks.yaml new file mode 100644 index 0000000000..eab02c6888 --- /dev/null +++ b/testing/kuttl/e2e/exporter-no-tls/files/exporter-no-tls-cluster-checks.yaml @@ -0,0 +1,24 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-no-tls +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: exporter-no-tls + postgres-operator.crunchydata.com/crunchy-postgres-exporter: "true" +status: + phase: Running +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: exporter-no-tls-exporter-queries-config diff --git a/testing/kuttl/e2e/exporter-no-tls/files/exporter-no-tls-cluster.yaml b/testing/kuttl/e2e/exporter-no-tls/files/exporter-no-tls-cluster.yaml 
new file mode 100644 index 0000000000..690d5b505d --- /dev/null +++ b/testing/kuttl/e2e/exporter-no-tls/files/exporter-no-tls-cluster.yaml @@ -0,0 +1,12 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-no-tls +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + monitoring: + pgmonitor: + exporter: {} diff --git a/testing/kuttl/e2e/exporter-password-change/00--create-cluster.yaml b/testing/kuttl/e2e/exporter-password-change/00--create-cluster.yaml new file mode 100644 index 0000000000..4c60626fa5 --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/00--create-cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/initial-postgrescluster.yaml +assert: +- files/initial-postgrescluster-checks.yaml diff --git a/testing/kuttl/e2e/exporter-password-change/00-assert.yaml b/testing/kuttl/e2e/exporter-password-change/00-assert.yaml new file mode 100644 index 0000000000..df2a331f10 --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/00-assert.yaml @@ -0,0 +1,22 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# Check that all containers in the instance pod are ready +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + check_containers_ready() { bash -ceu 'echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@"; } + + pod=$(kubectl get pods -o name -n $NAMESPACE \ + -l postgres-operator.crunchydata.com/cluster=exporter-password-change \ + -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + condition_json=$(kubectl get ${pod} -n ${NAMESPACE} -o jsonpath="{.status.conditions}") + [ "$condition_json" = "" ] && retry "conditions not found" && exit 1 + { check_containers_ready "$condition_json"; } || { + retry "containers not ready" + exit 1 + } +collectors: +- type: command + command: kubectl -n $NAMESPACE describe pods --selector postgres-operator.crunchydata.com/cluster=exporter-password-change,postgres-operator.crunchydata.com/crunchy-postgres-exporter=true diff --git a/testing/kuttl/e2e/exporter-password-change/01-assert.yaml b/testing/kuttl/e2e/exporter-password-change/01-assert.yaml new file mode 100644 index 0000000000..c3b25bd16c --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/01-assert.yaml @@ -0,0 +1,27 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# Grab the exporter metrics output and check that there were no scrape errors +# Store the exporter pid as an annotation on the pod +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + + pod=$(kubectl get pods -o name -n $NAMESPACE \ + -l postgres-operator.crunchydata.com/cluster=exporter-password-change \ + -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + scrape_metrics=$(kubectl exec ${pod} -c exporter -n ${NAMESPACE} -- \ + curl --silent http://localhost:9187/metrics | grep "pg_exporter_last_scrape_error") + { contains "${scrape_metrics}" 'pg_exporter_last_scrape_error 0'; } || { + retry "${scrape_metrics}" + exit 1 + } + + pid=$(kubectl exec ${pod} -n ${NAMESPACE} -c exporter -- cat /tmp/postgres_exporter.pid) + kubectl annotate --overwrite -n 
${NAMESPACE} ${pod} oldpid=${pid} +collectors: +- type: pod + selector: "postgres-operator.crunchydata.com/cluster=exporter-password-change,postgres-operator.crunchydata.com/crunchy-postgres-exporter=true" + container: exporter diff --git a/testing/kuttl/e2e/exporter-password-change/02--change-password.yaml b/testing/kuttl/e2e/exporter-password-change/02--change-password.yaml new file mode 100644 index 0000000000..e16e473f62 --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/02--change-password.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/update-monitoring-password.yaml +assert: +- files/update-monitoring-password-checks.yaml diff --git a/testing/kuttl/e2e/exporter-password-change/02-assert.yaml b/testing/kuttl/e2e/exporter-password-change/02-assert.yaml new file mode 100644 index 0000000000..a06b350cdc --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/02-assert.yaml @@ -0,0 +1,34 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# Ensure that the password has been updated in the exporter and it can still access +# Postgres. +# - Check that the exporter pid has changed meaning the current process should have the correct password +# - Check that the DATA_SOURCE_PASS_FILE contains the expected password (`password`) +# - Grab the scrape_error output from exporter metrics and check that there were no scrape errors +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + check_containers_ready() { bash -ceu 'echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@";} + + pod=$(kubectl get pods -o name -n $NAMESPACE \ + -l postgres-operator.crunchydata.com/cluster=exporter-password-change \ + -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + oldPid=$(kubectl get ${pod} -n ${NAMESPACE} -o jsonpath="{.metadata.annotations.oldpid}") + newPid=$(kubectl exec ${pod} -n ${NAMESPACE} -c exporter -- cat /tmp/postgres_exporter.pid) + [ "${oldPid}" -eq "${newPid}" ] && retry "pid should have changed" && exit 1 + + password=$(kubectl exec -n ${NAMESPACE} ${pod} -c exporter -- bash -c 'cat /opt/crunchy/password') + { contains "${password}" "password"; } || { + retry "unexpected password: ${password}" + exit 1 + } + + scrape_metrics=$(kubectl exec ${pod} -c exporter -n ${NAMESPACE} -- \ + curl --silent http://localhost:9187/metrics | grep "pg_exporter_last_scrape_error") + { contains "${scrape_metrics}" 'pg_exporter_last_scrape_error 0'; } || { + retry "${scrape_metrics}" + exit 1 + } diff --git a/testing/kuttl/e2e/exporter-password-change/README.md b/testing/kuttl/e2e/exporter-password-change/README.md new file mode 100644 index 0000000000..2a5b596309 --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/README.md @@ -0,0 +1,36 @@ +# Exporter Password Change + +## 00--create-cluster: +The TestStep will: + +1) Apply the `files/initial-postgrescluster.yaml` file to create a cluster with monitoring enabled +2) Assert that conditions outlined in `files/initial-postgrescluster-checks.yaml` are met + - PostgresCluster exists with a single ready replica + - A pod with `cluster` and `crunchy-postgres-exporter` labels has the status `{phase: Running}` + - A `-monitoring` secret exists with correct labels and ownerReferences + +## 00-assert: + +This TestAssert will loop through a script until: +1) the instance pod has the `ContainersReady`
condition with status `true` +2) the asserts from `00--create-cluster` are met. + +## 01-assert: + +This TestAssert will loop through a script until: +1) The metrics endpoint returns `pg_exporter_last_scrape_error 0` meaning the exporter was able to access postgres metrics +2) It is able to store the pid of the running postgres_exporter process + +## 02-change-password: + +This TestStep will: +1) Apply the `files/update-monitoring-password.yaml` file to set the monitoring password to `password` +2) Assert that conditions outlined in `files/update-monitoring-password-checks.yaml` are met + - A `-monitoring` secret exists with `data.password` set to the encoded value for `password` + +## 02-assert: + +This TestAssert will loop through a script until: +1) An exec command can confirm that `/opt/crunchy/password` file contains the updated password +2) It can confirm that the pid of the postgres_exporter process has changed +3) The metrics endpoint returns `pg_exporter_last_scrape_error 0` meaning the exporter was able to access postgres metrics using the updated password diff --git a/testing/kuttl/e2e/exporter-password-change/files/check-restarted-pod.yaml b/testing/kuttl/e2e/exporter-password-change/files/check-restarted-pod.yaml new file mode 100644 index 0000000000..012dafa41c --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/files/check-restarted-pod.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: exporter-password-change + postgres-operator.crunchydata.com/crunchy-postgres-exporter: "true" +status: + phase: Running diff --git a/testing/kuttl/e2e/exporter-password-change/files/initial-postgrescluster-checks.yaml b/testing/kuttl/e2e/exporter-password-change/files/initial-postgrescluster-checks.yaml new file mode 100644 index 0000000000..19887a0e10 --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/files/initial-postgrescluster-checks.yaml @@ -0,0 +1,33 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-password-change +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: exporter-password-change + postgres-operator.crunchydata.com/crunchy-postgres-exporter: "true" +status: + phase: Running +--- +apiVersion: v1 +kind: Secret +metadata: + name: exporter-password-change-monitoring + labels: + postgres-operator.crunchydata.com/cluster: exporter-password-change + postgres-operator.crunchydata.com/role: monitoring + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: exporter-password-change diff --git a/testing/kuttl/e2e/exporter-password-change/files/initial-postgrescluster.yaml b/testing/kuttl/e2e/exporter-password-change/files/initial-postgrescluster.yaml new file mode 100644 index 0000000000..d16c898ac2 --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/files/initial-postgrescluster.yaml @@ -0,0 +1,12 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-password-change +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + monitoring: + pgmonitor: + exporter: {} diff --git 
a/testing/kuttl/e2e/exporter-password-change/files/update-monitoring-password-checks.yaml b/testing/kuttl/e2e/exporter-password-change/files/update-monitoring-password-checks.yaml new file mode 100644 index 0000000000..dcf1703861 --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/files/update-monitoring-password-checks.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: Secret +metadata: + name: exporter-password-change-monitoring + labels: + postgres-operator.crunchydata.com/cluster: exporter-password-change + postgres-operator.crunchydata.com/role: monitoring + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + blockOwnerDeletion: true + controller: true + kind: PostgresCluster + name: exporter-password-change +data: + # ensure the password is encoded to 'password' + password: cGFzc3dvcmQ= diff --git a/testing/kuttl/e2e/exporter-password-change/files/update-monitoring-password.yaml b/testing/kuttl/e2e/exporter-password-change/files/update-monitoring-password.yaml new file mode 100644 index 0000000000..7832c89f69 --- /dev/null +++ b/testing/kuttl/e2e/exporter-password-change/files/update-monitoring-password.yaml @@ -0,0 +1,11 @@ +apiVersion: v1 +kind: Secret +metadata: + name: exporter-password-change-monitoring + labels: + postgres-operator.crunchydata.com/cluster: exporter-password-change + postgres-operator.crunchydata.com/role: monitoring +stringData: + password: password +data: +# Ensure data field is deleted so that password/verifier will be regenerated diff --git a/testing/kuttl/e2e/exporter-tls/00--create-cluster.yaml b/testing/kuttl/e2e/exporter-tls/00--create-cluster.yaml new file mode 100644 index 0000000000..fbb92cbf0e --- /dev/null +++ b/testing/kuttl/e2e/exporter-tls/00--create-cluster.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/exporter-tls-certs.yaml +- files/exporter-tls-cluster.yaml +assert: +- files/exporter-tls-cluster-checks.yaml diff --git a/testing/kuttl/e2e/exporter-tls/00-assert.yaml b/testing/kuttl/e2e/exporter-tls/00-assert.yaml new file mode 100644 index 0000000000..9ea53266c9 --- /dev/null +++ b/testing/kuttl/e2e/exporter-tls/00-assert.yaml @@ -0,0 +1,48 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# First, check that all containers in the instance pod are ready +# Then, check the exporter logs for the 'TLS is enabled' line +# Then, grab the exporter metrics output and check that there were no scrape errors +# Finally, ensure the monitoring user exists and is configured +- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + check_containers_ready() { bash -ceu 'echo "$1" | jq -e ".[] | select(.type==\"ContainersReady\") | .status==\"True\""' - "$@"; } + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=exporter-tls \ + -l postgres-operator.crunchydata.com/crunchy-postgres-exporter=true) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + condition_json=$(kubectl get "${pod}" -n "${NAMESPACE}" -o jsonpath="{.status.conditions}") + [ "$condition_json" = "" ] && retry "conditions not found" && exit 1 + { check_containers_ready "$condition_json"; } || { + retry "containers not ready" + exit 1 + } + + logs=$(kubectl logs "${pod}" --namespace "${NAMESPACE}" -c exporter) + { contains "${logs}" 'TLS is enabled'; } || { + echo >&2 'TLS is not enabled - it should be' + echo "${logs}" + exit 1 + } + + scrape_metrics=$(kubectl exec 
"${pod}" -c exporter -n "${NAMESPACE}" -- \ + curl --insecure --silent https://localhost:9187/metrics | grep "pg_exporter_last_scrape_error") + { contains "${scrape_metrics}" 'pg_exporter_last_scrape_error 0'; } || { + retry "${scrape_metrics}" + exit 1 + } + + kubectl exec --stdin "${pod}" --namespace "${NAMESPACE}" -c database \ + -- psql -qb --set ON_ERROR_STOP=1 --file=- <<'SQL' + DO $$ + DECLARE + result record; + BEGIN + SELECT * INTO result FROM pg_catalog.pg_roles WHERE rolname = 'ccp_monitoring'; + ASSERT FOUND, 'user not found'; + END $$ + SQL diff --git a/testing/kuttl/e2e/exporter-tls/files/exporter-tls-certs.yaml b/testing/kuttl/e2e/exporter-tls/files/exporter-tls-certs.yaml new file mode 100644 index 0000000000..1a1340a7b3 --- /dev/null +++ b/testing/kuttl/e2e/exporter-tls/files/exporter-tls-certs.yaml @@ -0,0 +1,12 @@ +# Generated certs using openssl +# openssl req -x509 -nodes -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \ +# -pkeyopt ec_param_enc:named_curve -sha384 -keyout ca.key -out ca.crt \ +# -days 365 -subj "/CN=*" +apiVersion: v1 +kind: Secret +metadata: + name: cluster-cert +type: Opaque +data: + tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJiakNDQVJPZ0F3SUJBZ0lVUUU3T0pqRDM5WHUvelZlenZQYjdSQ0ZTcE1Jd0NnWUlLb1pJemowRUF3TXcKRERFS01BZ0dBMVVFQXd3QktqQWVGdzB5TWpFd01USXhPRE14TURoYUZ3MHlNekV3TVRJeE9ETXhNRGhhTUF3eApDakFJQmdOVkJBTU1BU293V1RBVEJnY3Foa2pPUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVJjaUYyckNlbmg4UFFLClZGUWJaRVcvWi9XUGgwZkk1aHhVb1ZkVVpuRTBTNGhCK1U3aGV5L3QvQVJNbDF3cXovazQ0cmlBa1g1ckFMakgKei9hTm16bnJvMU13VVRBZEJnTlZIUTRFRmdRVTQvUFc2MEdUcWFQdGpYWXdsMk56d0RGMFRmY3dId1lEVlIwagpCQmd3Rm9BVTQvUFc2MEdUcWFQdGpYWXdsMk56d0RGMFRmY3dEd1lEVlIwVEFRSC9CQVV3QXdFQi96QUtCZ2dxCmhrak9QUVFEQXdOSkFEQkdBaUVBbG9iemo3Uml4NkU0OW8yS2JjOUdtYlRSbWE1SVdGb0k4Uk1zcGZDQzVOUUMKSVFET0hzLzhLNVkxeWhoWDc3SGIxSUpsdnFaVVNjdm5NTjBXeS9JUWRuemJ4QT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K + tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JR0hBZ0VBTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEJHMHdhd0lCQVFRZ1preDQ4cktidnZtUVRLSC8KSTN4STZzYW45Wk55MjQrOUQ4ODd5a2svb1l1aFJBTkNBQVJjaUYyckNlbmg4UFFLVkZRYlpFVy9aL1dQaDBmSQo1aHhVb1ZkVVpuRTBTNGhCK1U3aGV5L3QvQVJNbDF3cXovazQ0cmlBa1g1ckFMakh6L2FObXpucgotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg== diff --git a/testing/kuttl/e2e/exporter-tls/files/exporter-tls-cluster-checks.yaml b/testing/kuttl/e2e/exporter-tls/files/exporter-tls-cluster-checks.yaml new file mode 100644 index 0000000000..e192191fcd --- /dev/null +++ b/testing/kuttl/e2e/exporter-tls/files/exporter-tls-cluster-checks.yaml @@ -0,0 +1,29 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-tls +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: exporter-tls + postgres-operator.crunchydata.com/crunchy-postgres-exporter: "true" +status: + phase: Running +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: exporter-tls-exporter-queries-config +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: exporter-tls-exporter-web-config diff --git a/testing/kuttl/e2e/exporter-tls/files/exporter-tls-cluster.yaml b/testing/kuttl/e2e/exporter-tls/files/exporter-tls-cluster.yaml new file mode 100644 index 0000000000..4fa420664a --- /dev/null +++ b/testing/kuttl/e2e/exporter-tls/files/exporter-tls-cluster.yaml @@ -0,0 +1,14 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: exporter-tls +spec: + 
postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + monitoring: + pgmonitor: + exporter: + customTLSSecret: + name: cluster-cert diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/01--valid-upgrade.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/01--valid-upgrade.yaml new file mode 100644 index 0000000000..741efead41 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/01--valid-upgrade.yaml @@ -0,0 +1,11 @@ +--- +# This upgrade is valid, but has no pgcluster to work on and should get that condition +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: empty-image-upgrade +spec: + # postgres version that is no longer available + fromPostgresVersion: 11 + toPostgresVersion: ${KUTTL_PG_UPGRADE_TO_VERSION} + postgresClusterName: major-upgrade-empty-image diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/01-assert.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/01-assert.yaml new file mode 100644 index 0000000000..b7d0f936fb --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/01-assert.yaml @@ -0,0 +1,10 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: empty-image-upgrade +status: + conditions: + - type: "Progressing" + status: "False" + reason: "PGClusterNotFound" diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/10--cluster.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/10--cluster.yaml new file mode 100644 index 0000000000..f5ef8c029e --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/10--cluster.yaml @@ -0,0 +1,23 @@ +--- +# Create the cluster we will do an actual upgrade on, but set the postgres version +# to '10' to force a missing image scenario +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade-empty-image +spec: + # postgres version that is no longer available + postgresVersion: 11 + patroni: + dynamicConfiguration: + postgresql: + parameters: + shared_preload_libraries: pgaudit, set_user, pg_stat_statements, pgnodemx, pg_cron + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/10-assert.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/10-assert.yaml new file mode 100644 index 0000000000..72e9ff6387 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/10-assert.yaml @@ -0,0 +1,12 @@ +--- +# The cluster is not running due to the missing image, not due to a proper +# shutdown status. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: empty-image-upgrade +status: + conditions: + - type: "Progressing" + status: "False" + reason: "PGClusterNotShutdown" diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/11--shutdown-cluster.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/11--shutdown-cluster.yaml new file mode 100644 index 0000000000..316f3a5472 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/11--shutdown-cluster.yaml @@ -0,0 +1,8 @@ +--- +# Shutdown the cluster -- but without the annotation. 
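+# Illustration only (not applied by the test): the asserts around this step check the
+# PGUpgrade's Progressing condition; assuming the PGUpgrade CRD is served under the
+# `pgupgrade` resource name, the current reason can be read directly with
+#   kubectl -n "${NAMESPACE}" get pgupgrade empty-image-upgrade \
+#     -o jsonpath='{.status.conditions[?(@.type=="Progressing")].reason}'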
+apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade-empty-image +spec: + shutdown: true diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/11-assert.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/11-assert.yaml new file mode 100644 index 0000000000..5bd9d447cb --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/11-assert.yaml @@ -0,0 +1,11 @@ +--- +# Since the cluster is missing the annotation, we get this condition +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: empty-image-upgrade +status: + conditions: + - type: "Progressing" + status: "False" + reason: "PGClusterPrimaryNotIdentified" diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/12--start-and-update-version.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/12--start-and-update-version.yaml new file mode 100644 index 0000000000..fcdf4f62e3 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/12--start-and-update-version.yaml @@ -0,0 +1,17 @@ +--- +# Update the postgres version and restart the cluster. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade-empty-image +spec: + shutdown: false + postgresVersion: ${KUTTL_PG_UPGRADE_FROM_VERSION} +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: empty-image-upgrade +spec: + # update postgres version + fromPostgresVersion: ${KUTTL_PG_UPGRADE_FROM_VERSION} diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/12-assert.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/12-assert.yaml new file mode 100644 index 0000000000..14c33cccfe --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/12-assert.yaml @@ -0,0 +1,31 @@ +--- +# Wait for the instances to be ready and the replica backup to complete +# by waiting for the status to signal pods ready and pgbackrest stanza created +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade-empty-image +spec: + postgresVersion: ${KUTTL_PG_UPGRADE_FROM_VERSION} +status: + instances: + - name: '00' + replicas: 1 + readyReplicas: 1 + updatedReplicas: 1 + pgbackrest: + repos: + - name: repo1 + replicaCreateBackupComplete: true + stanzaCreated: true +--- +# Even when the cluster exists, the pgupgrade is not progressing because the cluster is not shutdown +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: empty-image-upgrade +status: + conditions: + - type: "Progressing" + status: "False" + reason: "PGClusterNotShutdown" diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/13--shutdown-cluster.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/13--shutdown-cluster.yaml new file mode 100644 index 0000000000..316f3a5472 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/13--shutdown-cluster.yaml @@ -0,0 +1,8 @@ +--- +# Shutdown the cluster -- but without the annotation. 
+apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade-empty-image +spec: + shutdown: true diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/13-assert.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/13-assert.yaml new file mode 100644 index 0000000000..78e51e566a --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/13-assert.yaml @@ -0,0 +1,11 @@ +--- +# Since the cluster is missing the annotation, we get this condition +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: empty-image-upgrade +status: + conditions: + - type: "Progressing" + status: "False" + reason: "PGClusterMissingRequiredAnnotation" diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/14--annotate-cluster.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/14--annotate-cluster.yaml new file mode 100644 index 0000000000..2fa2c949a9 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/14--annotate-cluster.yaml @@ -0,0 +1,8 @@ +--- +# Annotate the cluster for an upgrade. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade-empty-image + annotations: + postgres-operator.crunchydata.com/allow-upgrade: empty-image-upgrade diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/14-assert.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/14-assert.yaml new file mode 100644 index 0000000000..bd828180f4 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/14-assert.yaml @@ -0,0 +1,22 @@ +--- +# Now that the postgres cluster is shut down and annotated, the pgupgrade +# can finish reconciling. We know the reconciliation is complete when +# the pgupgrade status is succeeded and the postgres cluster status +# has the updated version. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: empty-image-upgrade +status: + conditions: + - type: "Progressing" + status: "False" + - type: "Succeeded" + status: "True" +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade-empty-image +status: + postgresVersion: ${KUTTL_PG_UPGRADE_TO_VERSION} diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/15--start-cluster.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/15--start-cluster.yaml new file mode 100644 index 0000000000..e5f270fb2f --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/15--start-cluster.yaml @@ -0,0 +1,10 @@ +--- +# Once the pgupgrade is finished, update the version and set shutdown to false +# in the postgres cluster +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade-empty-image +spec: + postgresVersion: ${KUTTL_PG_UPGRADE_TO_VERSION} + shutdown: false diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/15-assert.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/15-assert.yaml new file mode 100644 index 0000000000..dfcbd4c819 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/15-assert.yaml @@ -0,0 +1,18 @@ +--- +# Wait for the instances to be ready with the target Postgres version. 
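+# Illustration only (not part of the assert): the status fields checked below can also be
+# inspected directly with, e.g.
+#   kubectl -n "${NAMESPACE}" get postgrescluster major-upgrade-empty-image \
+#     -o jsonpath='{.status.postgresVersion} {.status.instances[0].readyReplicas}'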
+apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade-empty-image +status: + postgresVersion: ${KUTTL_PG_UPGRADE_TO_VERSION} + instances: + - name: '00' + replicas: 1 + readyReplicas: 1 + updatedReplicas: 1 + pgbackrest: + repos: + - name: repo1 + replicaCreateBackupComplete: true + stanzaCreated: true diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/16-check-pgbackrest.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/16-check-pgbackrest.yaml new file mode 100644 index 0000000000..969e7f0ac3 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/16-check-pgbackrest.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +# Check that the pgbackrest setup has successfully completed +- script: | + kubectl -n "${NAMESPACE}" exec "statefulset.apps/major-upgrade-empty-image-repo-host" -c pgbackrest -- pgbackrest check --stanza=db diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/17--check-version.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/17--check-version.yaml new file mode 100644 index 0000000000..5315c1d14f --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/17--check-version.yaml @@ -0,0 +1,39 @@ +--- +# Check the version reported by PostgreSQL +apiVersion: batch/v1 +kind: Job +metadata: + name: major-upgrade-empty-image-after + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 6 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: major-upgrade-empty-image-pguser-major-upgrade-empty-image, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + # Note: the `$$$$` is reduced to `$$` by Kubernetes. + # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - $(PGURI) + - --quiet + - --echo-errors + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + BEGIN + ASSERT current_setting('server_version_num') LIKE '${KUTTL_PG_UPGRADE_TO_VERSION}%', + format('got %L', current_setting('server_version_num')); + END $$$$; diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/17-assert.yaml b/testing/kuttl/e2e/major-upgrade-missing-image/17-assert.yaml new file mode 100644 index 0000000000..56289c35c1 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/17-assert.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: major-upgrade-empty-image-after +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/major-upgrade-missing-image/README.md b/testing/kuttl/e2e/major-upgrade-missing-image/README.md new file mode 100644 index 0000000000..1053da29ed --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade-missing-image/README.md @@ -0,0 +1,36 @@ +## Major upgrade missing image tests + +This is a variation derived from our major upgrade KUTTL tests designed to +test scenarios where required container images are not defined in either the +PostgresCluster spec or via the RELATED_IMAGES environment variables. + +### Basic PGUpgrade controller and CRD instance validation + +* 01--valid-upgrade: create a valid PGUpgrade instance +* 01-assert: check that the PGUpgrade instance exists and has the expected status + +### Verify new statuses for missing required container images + +* 10--cluster: create the cluster with an unavailable image (i.e. 
Postgres 11) +* 10-assert: check that the PGUpgrade instance has the expected reason: "PGClusterNotShutdown" +* 11-shutdown-cluster: set the spec.shutdown value to 'true' as required for upgrade +* 11-assert: check that the new reason is set, "PGClusterPrimaryNotIdentified" + +### Update to an available Postgres version, start and upgrade PostgresCluster + +* 12--start-and-update-version: update the Postgres version on both CRD instances and set 'shutdown' to false +* 12-assert: verify that the cluster is running and the PGUpgrade instance now has the new status info with reason: "PGClusterNotShutdown" +* 13--shutdown-cluster: set spec.shutdown to 'true' +* 13-assert: check that the PGUpgrade instance has the expected reason: "PGClusterMissingRequiredAnnotation" +* 14--annotate-cluster: set the required annotation +* 14-assert: verify that the upgrade succeeded and the new Postgres version shows in the cluster's status +* 15--start-cluster: set the new Postgres version and spec.shutdown to 'false' + +### Verify upgraded PostgresCluster + +* 15-assert: verify that the cluster is running +* 16-check-pgbackrest: check that the pgbackrest setup has successfully completed +* 17--check-version: check the version reported by PostgreSQL +* 17-assert: assert the Job from the previous step succeeded + + diff --git a/testing/kuttl/e2e/major-upgrade/01--invalid-pgupgrade.yaml b/testing/kuttl/e2e/major-upgrade/01--invalid-pgupgrade.yaml new file mode 100644 index 0000000000..ea90f5718a --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/01--invalid-pgupgrade.yaml @@ -0,0 +1,10 @@ +--- +# This pgupgrade is invalid and should get that condition (even with no cluster) +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: major-upgrade-do-it +spec: + fromPostgresVersion: ${KUTTL_PG_VERSION} + toPostgresVersion: ${KUTTL_PG_VERSION} + postgresClusterName: major-upgrade diff --git a/testing/kuttl/e2e/major-upgrade/01-assert.yaml b/testing/kuttl/e2e/major-upgrade/01-assert.yaml new file mode 100644 index 0000000000..f4cef66aa7 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/01-assert.yaml @@ -0,0 +1,10 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: major-upgrade-do-it +status: + conditions: + - type: "Progressing" + status: "False" + reason: "PGUpgradeInvalid" diff --git a/testing/kuttl/e2e/major-upgrade/02--valid-upgrade.yaml b/testing/kuttl/e2e/major-upgrade/02--valid-upgrade.yaml new file mode 100644 index 0000000000..f76ff06a9f --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/02--valid-upgrade.yaml @@ -0,0 +1,10 @@ +--- +# This upgrade is valid, but has no pgcluster to work on and should get that condition +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: major-upgrade-do-it +spec: + fromPostgresVersion: ${KUTTL_PG_UPGRADE_FROM_VERSION} + toPostgresVersion: ${KUTTL_PG_UPGRADE_TO_VERSION} + postgresClusterName: major-upgrade diff --git a/testing/kuttl/e2e/major-upgrade/02-assert.yaml b/testing/kuttl/e2e/major-upgrade/02-assert.yaml new file mode 100644 index 0000000000..4df0ecc4d9 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/02-assert.yaml @@ -0,0 +1,10 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: major-upgrade-do-it +status: + conditions: + - type: "Progressing" + status: "False" + reason: "PGClusterNotFound" diff --git a/testing/kuttl/e2e/major-upgrade/10--already-updated-cluster.yaml 
b/testing/kuttl/e2e/major-upgrade/10--already-updated-cluster.yaml new file mode 100644 index 0000000000..0591645221 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/10--already-updated-cluster.yaml @@ -0,0 +1,16 @@ +--- +# Create a cluster that is already at the correct version +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade +spec: + postgresVersion: ${KUTTL_PG_UPGRADE_TO_VERSION} + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/major-upgrade/10-assert.yaml b/testing/kuttl/e2e/major-upgrade/10-assert.yaml new file mode 100644 index 0000000000..202864ef09 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/10-assert.yaml @@ -0,0 +1,11 @@ +--- +# pgupgrade should exit since the cluster is already at the requested version +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: major-upgrade-do-it +status: + conditions: + - type: "Progressing" + status: "False" + reason: "PGUpgradeResolved" diff --git a/testing/kuttl/e2e/major-upgrade/11-delete-cluster.yaml b/testing/kuttl/e2e/major-upgrade/11-delete-cluster.yaml new file mode 100644 index 0000000000..14eab0efbb --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/11-delete-cluster.yaml @@ -0,0 +1,8 @@ +--- +# Delete the existing cluster. +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: major-upgrade diff --git a/testing/kuttl/e2e/major-upgrade/20--cluster-with-invalid-version.yaml b/testing/kuttl/e2e/major-upgrade/20--cluster-with-invalid-version.yaml new file mode 100644 index 0000000000..8d73277292 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/20--cluster-with-invalid-version.yaml @@ -0,0 +1,18 @@ +--- +# Create a cluster where the version does not match the pgupgrade's `from` +# TODO(benjaminjb): this isn't quite working out +# apiVersion: postgres-operator.crunchydata.com/v1beta1 +# kind: PostgresCluster +# metadata: +# name: major-upgrade +# spec: +# shutdown: true +# postgresVersion: ${KUTTL_PG_UPGRADE_TOO_EARLY_FROM_VERSION} +# instances: +# - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } +# backups: +# pgbackrest: +# repos: +# - name: repo1 +# volume: +# volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/major-upgrade/20-assert.yaml b/testing/kuttl/e2e/major-upgrade/20-assert.yaml new file mode 100644 index 0000000000..2ea1486284 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/20-assert.yaml @@ -0,0 +1,11 @@ +--- +# # pgupgrade should exit since the cluster is already at the requested version +# apiVersion: postgres-operator.crunchydata.com/v1beta1 +# kind: PGUpgrade +# metadata: +# name: major-upgrade-do-it +# status: +# conditions: +# - type: "Progressing" +# status: "False" +# reason: "PGUpgradeInvalidForCluster" diff --git a/testing/kuttl/e2e/major-upgrade/21-delete-cluster.yaml b/testing/kuttl/e2e/major-upgrade/21-delete-cluster.yaml new file mode 100644 index 0000000000..535c6311a4 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/21-delete-cluster.yaml @@ -0,0 +1,8 @@ +--- +# # Delete the existing cluster. 
+# apiVersion: kuttl.dev/v1beta1 +# kind: TestStep +# delete: +# - apiVersion: postgres-operator.crunchydata.com/v1beta1 +# kind: PostgresCluster +# name: major-upgrade diff --git a/testing/kuttl/e2e/major-upgrade/30--cluster.yaml b/testing/kuttl/e2e/major-upgrade/30--cluster.yaml new file mode 100644 index 0000000000..01e1ef6175 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/30--cluster.yaml @@ -0,0 +1,22 @@ +--- +# Create the cluster we will do an actual upgrade on +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade +spec: + postgresVersion: ${KUTTL_PG_UPGRADE_FROM_VERSION} + patroni: + dynamicConfiguration: + postgresql: + parameters: + shared_preload_libraries: pgaudit, set_user, pg_stat_statements, pgnodemx, pg_cron + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + replicas: 3 + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/major-upgrade/30-assert.yaml b/testing/kuttl/e2e/major-upgrade/30-assert.yaml new file mode 100644 index 0000000000..1db8ec257d --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/30-assert.yaml @@ -0,0 +1,31 @@ +--- +# Wait for the instances to be ready and the replica backup to complete +# by waiting for the status to signal pods ready and pgbackrest stanza created +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade +spec: + postgresVersion: ${KUTTL_PG_UPGRADE_FROM_VERSION} +status: + instances: + - name: '00' + replicas: 3 + readyReplicas: 3 + updatedReplicas: 3 + pgbackrest: + repos: + - name: repo1 + replicaCreateBackupComplete: true + stanzaCreated: true +--- +# Even when the cluster exists, the pgupgrade is not progressing because the cluster is not shutdown +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: major-upgrade-do-it +status: + conditions: + - type: "Progressing" + status: "False" + reason: "PGClusterNotShutdown" diff --git a/testing/kuttl/e2e/major-upgrade/31--create-data.yaml b/testing/kuttl/e2e/major-upgrade/31--create-data.yaml new file mode 100644 index 0000000000..ed8c27b06b --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/31--create-data.yaml @@ -0,0 +1,94 @@ +--- +# Check the version reported by PostgreSQL and create some data. +apiVersion: batch/v1 +kind: Job +metadata: + name: major-upgrade-before + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: major-upgrade-pguser-major-upgrade, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + # Note: the `$$$$` is reduced to `$$` by Kubernetes. 
+ # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - $(PGURI) + - --quiet + - --echo-errors + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + BEGIN + ASSERT current_setting('server_version_num') LIKE '${KUTTL_PG_UPGRADE_FROM_VERSION}%', + format('got %L', current_setting('server_version_num')); + END $$$$; + - --command + - | + CREATE SCHEMA very; + CREATE TABLE very.important (data) AS VALUES ('treasure'); +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: major-upgrade-before-replica + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + # The Replica svc is not held in the user secret, so we hard-code the Service address + # (using the downstream API for the namespace) + - name: NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: PGHOST + value: "major-upgrade-replicas.$(NAMESPACE).svc" + - name: PGPORT + valueFrom: { secretKeyRef: { name: major-upgrade-pguser-major-upgrade, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: major-upgrade-pguser-major-upgrade, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: major-upgrade-pguser-major-upgrade, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: major-upgrade-pguser-major-upgrade, key: password } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + # Note: the `$$$$` is reduced to `$$` by Kubernetes. + # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - --quiet + - --echo-errors + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + BEGIN + ASSERT current_setting('server_version_num') LIKE '${KUTTL_PG_UPGRADE_FROM_VERSION}%', + format('got %L', current_setting('server_version_num')); + END $$$$; diff --git a/testing/kuttl/e2e/major-upgrade/31-assert.yaml b/testing/kuttl/e2e/major-upgrade/31-assert.yaml new file mode 100644 index 0000000000..dab4dc9de0 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/31-assert.yaml @@ -0,0 +1,14 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: major-upgrade-before +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: major-upgrade-before-replica +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/major-upgrade/32--shutdown-cluster.yaml b/testing/kuttl/e2e/major-upgrade/32--shutdown-cluster.yaml new file mode 100644 index 0000000000..9e4a575a3a --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/32--shutdown-cluster.yaml @@ -0,0 +1,8 @@ +--- +# Shutdown the cluster -- but without the annotation. 
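+# Illustration only (not applied by the test): the equivalent imperative change would be a
+# merge patch against the cluster spec, e.g.
+#   kubectl -n "${NAMESPACE}" patch postgrescluster major-upgrade --type merge -p '{"spec":{"shutdown":true}}'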
+apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade +spec: + shutdown: true diff --git a/testing/kuttl/e2e/major-upgrade/32-assert.yaml b/testing/kuttl/e2e/major-upgrade/32-assert.yaml new file mode 100644 index 0000000000..2ad7f2869a --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/32-assert.yaml @@ -0,0 +1,11 @@ +--- +# Since the cluster is missing the annotation, we get this condition +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: major-upgrade-do-it +status: + conditions: + - type: "Progressing" + status: "False" + reason: "PGClusterMissingRequiredAnnotation" diff --git a/testing/kuttl/e2e/major-upgrade/33--annotate-cluster.yaml b/testing/kuttl/e2e/major-upgrade/33--annotate-cluster.yaml new file mode 100644 index 0000000000..35cd269035 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/33--annotate-cluster.yaml @@ -0,0 +1,8 @@ +--- +# Annotate the cluster for an upgrade. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade + annotations: + postgres-operator.crunchydata.com/allow-upgrade: major-upgrade-do-it diff --git a/testing/kuttl/e2e/major-upgrade/33-assert.yaml b/testing/kuttl/e2e/major-upgrade/33-assert.yaml new file mode 100644 index 0000000000..aadb5e3bb1 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/33-assert.yaml @@ -0,0 +1,22 @@ +--- +# Now that the postgres cluster is shut down and annotated, the pgupgrade +# can finish reconciling. We know the reconciling is complete when +# the pgupgrade status is succeeded and the postgres cluster status +# has the updated version. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGUpgrade +metadata: + name: major-upgrade-do-it +status: + conditions: + - type: "Progressing" + status: "False" + - type: "Succeeded" + status: "True" +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade +status: + postgresVersion: ${KUTTL_PG_UPGRADE_TO_VERSION} diff --git a/testing/kuttl/e2e/major-upgrade/34--restart-cluster.yaml b/testing/kuttl/e2e/major-upgrade/34--restart-cluster.yaml new file mode 100644 index 0000000000..ee674151ca --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/34--restart-cluster.yaml @@ -0,0 +1,10 @@ +--- +# Once the pgupgrade is finished, update the version and set shutdown to false +# in the postgres cluster +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade +spec: + postgresVersion: ${KUTTL_PG_UPGRADE_TO_VERSION} + shutdown: false diff --git a/testing/kuttl/e2e/major-upgrade/34-assert.yaml b/testing/kuttl/e2e/major-upgrade/34-assert.yaml new file mode 100644 index 0000000000..aba583f74c --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/34-assert.yaml @@ -0,0 +1,18 @@ +--- +# Wait for the instances to be ready with the target Postgres version. 
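+# Illustration only: while this assert waits for the three replicas, the instance pods can be
+# watched with, e.g.
+#   kubectl -n "${NAMESPACE}" get pods -w \
+#     -l postgres-operator.crunchydata.com/cluster=major-upgrade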
+apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: major-upgrade +status: + postgresVersion: ${KUTTL_PG_UPGRADE_TO_VERSION} + instances: + - name: '00' + replicas: 3 + readyReplicas: 3 + updatedReplicas: 3 + pgbackrest: + repos: + - name: repo1 + replicaCreateBackupComplete: true + stanzaCreated: true diff --git a/testing/kuttl/e2e/major-upgrade/35-check-pgbackrest-and-replica.yaml b/testing/kuttl/e2e/major-upgrade/35-check-pgbackrest-and-replica.yaml new file mode 100644 index 0000000000..be1c3ff357 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/35-check-pgbackrest-and-replica.yaml @@ -0,0 +1,11 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +# Check that the pgbackrest setup has successfully completed +- script: | + kubectl -n "${NAMESPACE}" exec "statefulset.apps/major-upgrade-repo-host" -c pgbackrest -- pgbackrest check --stanza=db +# Check that the replica data dir has been successfully cleaned +- script: | + # Check that the old pg folders do not exist on the replica + REPLICA=$(kubectl get pod -l=postgres-operator.crunchydata.com/role=replica -n "${NAMESPACE}" -o=jsonpath='{ .items[0].metadata.name }') + kubectl -n "${NAMESPACE}" exec "${REPLICA}" -c database -- [ ! -d "pgdata/pg${KUTTL_PG_UPGRADE_FROM_VERSION}" ] diff --git a/testing/kuttl/e2e/major-upgrade/36--check-data-and-version.yaml b/testing/kuttl/e2e/major-upgrade/36--check-data-and-version.yaml new file mode 100644 index 0000000000..135f34c7df --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/36--check-data-and-version.yaml @@ -0,0 +1,108 @@ +--- +# Check the version reported by PostgreSQL and confirm that data was upgraded. +apiVersion: batch/v1 +kind: Job +metadata: + name: major-upgrade-after + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 6 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: major-upgrade-pguser-major-upgrade, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + # Note: the `$$$$` is reduced to `$$` by Kubernetes. 
+ # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - $(PGURI) + - --quiet + - --echo-errors + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + BEGIN + ASSERT current_setting('server_version_num') LIKE '${KUTTL_PG_UPGRADE_TO_VERSION}%', + format('got %L', current_setting('server_version_num')); + END $$$$; + - --command + - | + DO $$$$ + DECLARE + everything jsonb; + BEGIN + SELECT jsonb_agg(important) INTO everything FROM very.important; + ASSERT everything = '[{"data":"treasure"}]', format('got %L', everything); + END $$$$; +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: major-upgrade-after-replica + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + # The Replica svc is not held in the user secret, so we hard-code the Service address + # (using the downstream API for the namespace) + - name: NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: PGHOST + value: "major-upgrade-replicas.$(NAMESPACE).svc" + - name: PGPORT + valueFrom: { secretKeyRef: { name: major-upgrade-pguser-major-upgrade, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: major-upgrade-pguser-major-upgrade, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: major-upgrade-pguser-major-upgrade, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: major-upgrade-pguser-major-upgrade, key: password } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + # Note: the `$$$$` is reduced to `$$` by Kubernetes. + # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - --quiet + - --echo-errors + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + BEGIN + ASSERT current_setting('server_version_num') LIKE '${KUTTL_PG_UPGRADE_TO_VERSION}%', + format('got %L', current_setting('server_version_num')); + END $$$$; + - --command + - | + DO $$$$ + DECLARE + everything jsonb; + BEGIN + SELECT jsonb_agg(important) INTO everything FROM very.important; + ASSERT everything = '[{"data":"treasure"}]', format('got %L', everything); + END $$$$; diff --git a/testing/kuttl/e2e/major-upgrade/36-assert.yaml b/testing/kuttl/e2e/major-upgrade/36-assert.yaml new file mode 100644 index 0000000000..a545bfd756 --- /dev/null +++ b/testing/kuttl/e2e/major-upgrade/36-assert.yaml @@ -0,0 +1,14 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: major-upgrade-after +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: major-upgrade-after-replica +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/optional-backups/00--cluster.yaml b/testing/kuttl/e2e/optional-backups/00--cluster.yaml new file mode 100644 index 0000000000..7b927831e0 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/00--cluster.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: created-without-backups +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + diff --git a/testing/kuttl/e2e/optional-backups/00-assert.yaml b/testing/kuttl/e2e/optional-backups/00-assert.yaml new file mode 100644 index 0000000000..86392d0308 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/00-assert.yaml @@ -0,0 
+1,38 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: created-without-backups +status: + instances: + - name: instance1 + pgbackrest: {} +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: pgdata +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: master +status: + containerStatuses: + - ready: true + - ready: true diff --git a/testing/kuttl/e2e/optional-backups/01-errors.yaml b/testing/kuttl/e2e/optional-backups/01-errors.yaml new file mode 100644 index 0000000000..e702fcddb4 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/01-errors.yaml @@ -0,0 +1,29 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: created-without-backups-repo1 +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: created-without-backups-repo-host +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: created-without-backups-pgbackrest-config +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: created-without-backups-pgbackrest +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: created-without-backups-pgbackrest +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: created-without-backups-pgbackrest diff --git a/testing/kuttl/e2e/optional-backups/02-assert.yaml b/testing/kuttl/e2e/optional-backups/02-assert.yaml new file mode 100644 index 0000000000..eb3f70357f --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/02-assert.yaml @@ -0,0 +1,15 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +- script: | + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=created-without-backups) + + kubectl exec --stdin "${pod}" --namespace "${NAMESPACE}" -c database \ + -- psql -qb --set ON_ERROR_STOP=1 --file=- <<'SQL' + DO $$ + BEGIN + ASSERT current_setting('archive_command') LIKE 'true', + format('expected "true", got %L', current_setting('archive_command')); + END $$ + SQL diff --git a/testing/kuttl/e2e/optional-backups/03-assert.yaml b/testing/kuttl/e2e/optional-backups/03-assert.yaml new file mode 100644 index 0000000000..17ca1e4062 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/03-assert.yaml @@ -0,0 +1,14 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +- script: | + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=created-without-backups) + + kubectl exec --stdin "${pod}" --namespace "${NAMESPACE}" -c database \ + -- psql -qb --set ON_ERROR_STOP=1 \ + -c "CREATE TABLE important (data) AS VALUES ('treasure');" + + kubectl exec --stdin "${pod}" --namespace "${NAMESPACE}" -c database \ + -- psql -qb --set ON_ERROR_STOP=1 \ + -c "CHECKPOINT;" diff --git a/testing/kuttl/e2e/optional-backups/04--cluster.yaml 
b/testing/kuttl/e2e/optional-backups/04--cluster.yaml new file mode 100644 index 0000000000..fc39ff6ebe --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/04--cluster.yaml @@ -0,0 +1,16 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: created-without-backups +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + diff --git a/testing/kuttl/e2e/optional-backups/05-assert.yaml b/testing/kuttl/e2e/optional-backups/05-assert.yaml new file mode 100644 index 0000000000..d346e01a04 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/05-assert.yaml @@ -0,0 +1,12 @@ +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: replica +status: + containerStatuses: + - ready: true + - ready: true diff --git a/testing/kuttl/e2e/optional-backups/06-assert.yaml b/testing/kuttl/e2e/optional-backups/06-assert.yaml new file mode 100644 index 0000000000..c366545508 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/06-assert.yaml @@ -0,0 +1,18 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +- script: | + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=created-without-backups \ + -l postgres-operator.crunchydata.com/role=replica) + + kubectl exec --stdin "${pod}" --namespace "${NAMESPACE}" -c database \ + -- psql -qb --set ON_ERROR_STOP=1 --file=- <<'SQL' + DO $$ + DECLARE + everything jsonb; + BEGIN + SELECT jsonb_agg(important) INTO everything FROM important; + ASSERT everything = '[{"data":"treasure"}]', format('got %L', everything); + END $$ + SQL diff --git a/testing/kuttl/e2e/optional-backups/10--cluster.yaml b/testing/kuttl/e2e/optional-backups/10--cluster.yaml new file mode 100644 index 0000000000..6da85c93f9 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/10--cluster.yaml @@ -0,0 +1,27 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: created-without-backups +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + diff --git a/testing/kuttl/e2e/optional-backups/10-assert.yaml b/testing/kuttl/e2e/optional-backups/10-assert.yaml new file mode 100644 index 0000000000..7b740b310d --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/10-assert.yaml @@ -0,0 +1,79 @@ +# It should be possible to turn backups back on. 
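+# Illustration only (not part of this assert): once backups are re-enabled, the
+# archive_command checked in 11-assert switches from 'true' to the pgbackrest archive-push
+# command; with $PRIMARY_POD standing in for the primary pod name, it can be inspected with
+#   kubectl -n "${NAMESPACE}" exec "$PRIMARY_POD" -c database -- psql -Atc 'show archive_command'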
+apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: created-without-backups +status: + pgbackrest: + repoHost: + apiVersion: apps/v1 + kind: StatefulSet + ready: true + repos: + - bound: true + name: repo1 + replicaCreateBackupComplete: true + stanzaCreated: true +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: pgdata +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: created-without-backups-repo1 +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: created-without-backups-repo-host +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: created-without-backups-pgbackrest-config +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: created-without-backups-pgbackrest +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: created-without-backups-pgbackrest +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: created-without-backups-pgbackrest +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/patroni: created-without-backups-ha + postgres-operator.crunchydata.com/role: master +status: + containerStatuses: + - ready: true + - ready: true + - ready: true + - ready: true diff --git a/testing/kuttl/e2e/optional-backups/11-assert.yaml b/testing/kuttl/e2e/optional-backups/11-assert.yaml new file mode 100644 index 0000000000..5976d03f41 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/11-assert.yaml @@ -0,0 +1,18 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +- script: | + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=created-without-backups \ + -l postgres-operator.crunchydata.com/instance-set=instance1 \ + -l postgres-operator.crunchydata.com/patroni=created-without-backups-ha \ + -l postgres-operator.crunchydata.com/role=master) + + kubectl exec --stdin "${pod}" --namespace "${NAMESPACE}" -c database \ + -- psql -qb --set ON_ERROR_STOP=1 --file=- <<'SQL' + DO $$ + BEGIN + ASSERT current_setting('archive_command') LIKE 'pgbackrest --stanza=db archive-push "%p"', + format('expected "pgbackrest --stanza=db archive-push \"%p\"", got %L', current_setting('archive_command')); + END $$ + SQL diff --git a/testing/kuttl/e2e/optional-backups/20--cluster.yaml b/testing/kuttl/e2e/optional-backups/20--cluster.yaml new file mode 100644 index 0000000000..8e0d01cbf8 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/20--cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +- command: |- + kubectl patch postgrescluster created-without-backups --type 'merge' -p '{"spec":{"backups": null}}' + namespaced: true diff --git a/testing/kuttl/e2e/optional-backups/20-assert.yaml b/testing/kuttl/e2e/optional-backups/20-assert.yaml new file mode 100644 index 0000000000..b469e277f8 --- 
/dev/null +++ b/testing/kuttl/e2e/optional-backups/20-assert.yaml @@ -0,0 +1,63 @@ +# It should be possible to turn backups back on. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: created-without-backups +status: + pgbackrest: + repoHost: + apiVersion: apps/v1 + kind: StatefulSet + ready: true + repos: + - bound: true + name: repo1 + replicaCreateBackupComplete: true + stanzaCreated: true +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: pgdata +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: created-without-backups-repo1 +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: created-without-backups-repo-host +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: created-without-backups-pgbackrest-config +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: created-without-backups-pgbackrest +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: created-without-backups-pgbackrest +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: created-without-backups-pgbackrest diff --git a/testing/kuttl/e2e/optional-backups/21-assert.yaml b/testing/kuttl/e2e/optional-backups/21-assert.yaml new file mode 100644 index 0000000000..5976d03f41 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/21-assert.yaml @@ -0,0 +1,18 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +- script: | + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=created-without-backups \ + -l postgres-operator.crunchydata.com/instance-set=instance1 \ + -l postgres-operator.crunchydata.com/patroni=created-without-backups-ha \ + -l postgres-operator.crunchydata.com/role=master) + + kubectl exec --stdin "${pod}" --namespace "${NAMESPACE}" -c database \ + -- psql -qb --set ON_ERROR_STOP=1 --file=- <<'SQL' + DO $$ + BEGIN + ASSERT current_setting('archive_command') LIKE 'pgbackrest --stanza=db archive-push "%p"', + format('expected "pgbackrest --stanza=db archive-push \"%p\"", got %L', current_setting('archive_command')); + END $$ + SQL diff --git a/testing/kuttl/e2e/optional-backups/22--cluster.yaml b/testing/kuttl/e2e/optional-backups/22--cluster.yaml new file mode 100644 index 0000000000..2e25309886 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/22--cluster.yaml @@ -0,0 +1,5 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +- command: kubectl annotate postgrescluster created-without-backups postgres-operator.crunchydata.com/authorizeBackupRemoval="true" + namespaced: true diff --git a/testing/kuttl/e2e/optional-backups/23-assert.yaml b/testing/kuttl/e2e/optional-backups/23-assert.yaml new file mode 100644 index 0000000000..8748ea015c --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/23-assert.yaml @@ -0,0 +1,26 @@ +# It should be possible to turn backups back on. 
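+# Illustration only: once removal is authorized (step 22), the pgBackRest objects listed in
+# 24-errors.yaml should disappear, which can be spot-checked with, e.g.
+#   kubectl -n "${NAMESPACE}" get statefulset created-without-backups-repo-host   # expect NotFound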
+apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: created-without-backups +status: + instances: + - name: instance1 + pgbackrest: {} +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: pgdata +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + postgres-operator.crunchydata.com/cluster: created-without-backups + postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 diff --git a/testing/kuttl/e2e/optional-backups/24-errors.yaml b/testing/kuttl/e2e/optional-backups/24-errors.yaml new file mode 100644 index 0000000000..e702fcddb4 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/24-errors.yaml @@ -0,0 +1,29 @@ +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: created-without-backups-repo1 +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: created-without-backups-repo-host +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: created-without-backups-pgbackrest-config +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: created-without-backups-pgbackrest +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: created-without-backups-pgbackrest +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: created-without-backups-pgbackrest diff --git a/testing/kuttl/e2e/optional-backups/25-assert.yaml b/testing/kuttl/e2e/optional-backups/25-assert.yaml new file mode 100644 index 0000000000..eb3f70357f --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/25-assert.yaml @@ -0,0 +1,15 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +- script: | + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=created-without-backups) + + kubectl exec --stdin "${pod}" --namespace "${NAMESPACE}" -c database \ + -- psql -qb --set ON_ERROR_STOP=1 --file=- <<'SQL' + DO $$ + BEGIN + ASSERT current_setting('archive_command') LIKE 'true', + format('expected "true", got %L', current_setting('archive_command')); + END $$ + SQL diff --git a/testing/kuttl/e2e/optional-backups/README.md b/testing/kuttl/e2e/optional-backups/README.md new file mode 100644 index 0000000000..92c52d4136 --- /dev/null +++ b/testing/kuttl/e2e/optional-backups/README.md @@ -0,0 +1,13 @@ +## Optional backups + +### Steps + +00-02. Create cluster without backups, check that expected K8s objects do/don't exist, e.g., repo-host sts doesn't exist; check that the archive command is `true` + +03-06. Add data and a replica; check that the data successfully replicates to the replica. + +10-11. Update cluster to add backups, check that expected K8s objects do/don't exist, e.g., repo-host sts exists; check that the archive command is set to the usual + +20-21. Update cluster to remove backups but without annotation, check that no changes were made, including to the archive command + +22-25. 
Annotate cluster to remove existing backups, check that expected K8s objects do/don't exist, e.g., repo-host sts doesn't exist; check that the archive command is `true` diff --git a/testing/kuttl/e2e/password-change/00--cluster.yaml b/testing/kuttl/e2e/password-change/00--cluster.yaml new file mode 100644 index 0000000000..d7b7019b62 --- /dev/null +++ b/testing/kuttl/e2e/password-change/00--cluster.yaml @@ -0,0 +1,14 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: password-change +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/password-change/00-assert.yaml b/testing/kuttl/e2e/password-change/00-assert.yaml new file mode 100644 index 0000000000..bfedc0b25e --- /dev/null +++ b/testing/kuttl/e2e/password-change/00-assert.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: password-change +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: password-change-primary diff --git a/testing/kuttl/e2e/password-change/01--psql-connect-uri.yaml b/testing/kuttl/e2e/password-change/01--psql-connect-uri.yaml new file mode 100644 index 0000000000..2c9b769f89 --- /dev/null +++ b/testing/kuttl/e2e/password-change/01--psql-connect-uri.yaml @@ -0,0 +1,23 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - "$(PGURI)" + - -c + - "select version();" + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/01--psql-connect.yaml b/testing/kuttl/e2e/password-change/01--psql-connect.yaml new file mode 100644 index 0000000000..28ffa3a0e5 --- /dev/null +++ b/testing/kuttl/e2e/password-change/01--psql-connect.yaml @@ -0,0 +1,30 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - -c + - "select version();" + env: + - name: PGHOST + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: password } } + + # Do not wait indefinitely. 
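+          # PGCONNECT_TIMEOUT is the standard libpq environment variable; keeping it
+          # small means a bad connection fails fast and the Job retries via backoffLimit.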
+ - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/01-assert.yaml b/testing/kuttl/e2e/password-change/01-assert.yaml new file mode 100644 index 0000000000..f9e5dca807 --- /dev/null +++ b/testing/kuttl/e2e/password-change/01-assert.yaml @@ -0,0 +1,13 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/password-change/02--secret.yaml b/testing/kuttl/e2e/password-change/02--secret.yaml new file mode 100644 index 0000000000..03e4816e91 --- /dev/null +++ b/testing/kuttl/e2e/password-change/02--secret.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: password-change-pguser-password-change +data: + # Hardcoding the password as "datalake" + password: ZGF0YWxha2U= + verifier: "" diff --git a/testing/kuttl/e2e/password-change/02-errors.yaml b/testing/kuttl/e2e/password-change/02-errors.yaml new file mode 100644 index 0000000000..300ace7737 --- /dev/null +++ b/testing/kuttl/e2e/password-change/02-errors.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Secret +metadata: + name: password-change-pguser-password-change +data: + # `02-secret.yaml` changes the password and removes the verifier field, + # so when PGO reconciles the secret, it should fill in the empty verifier field; + # if it does not fill in the verifier field by a certain time this step will error + # and KUTTL will mark the test as failed. + verifier: "" diff --git a/testing/kuttl/e2e/password-change/03--psql-connect-uri.yaml b/testing/kuttl/e2e/password-change/03--psql-connect-uri.yaml new file mode 100644 index 0000000000..175482704a --- /dev/null +++ b/testing/kuttl/e2e/password-change/03--psql-connect-uri.yaml @@ -0,0 +1,26 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri2 +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - "$(PGURI)" + - -c + - "select version();" + env: + # The ./02-errors.yaml checks that the secret is not in the state that we set it to + # in the ./02-secret.yaml file, i.e., the secret has been reconciled by PGO, + # so the uri field of the secret should be updated with the new password by this time + - name: PGURI + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: uri } } + + # Do not wait indefinitely. 
+ - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/03--psql-connect.yaml b/testing/kuttl/e2e/password-change/03--psql-connect.yaml new file mode 100644 index 0000000000..fc03215183 --- /dev/null +++ b/testing/kuttl/e2e/password-change/03--psql-connect.yaml @@ -0,0 +1,34 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect2 +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - -c + - "select version();" + env: + - name: PGHOST + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: user } } + # Hardcoding the password here to be equal to what we changed the password to in + # ./02-secret.yaml + # The ./02-errors.yaml checks that the secret is not in the state that we set it to + # in the ./02-secret.yaml file, i.e., the secret has been reconciled by PGO + - name: PGPASSWORD + value: datalake + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/03-assert.yaml b/testing/kuttl/e2e/password-change/03-assert.yaml new file mode 100644 index 0000000000..9db69d0367 --- /dev/null +++ b/testing/kuttl/e2e/password-change/03-assert.yaml @@ -0,0 +1,13 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect2 +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri2 +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/password-change/04--secret.yaml b/testing/kuttl/e2e/password-change/04--secret.yaml new file mode 100644 index 0000000000..f5cd1537c9 --- /dev/null +++ b/testing/kuttl/e2e/password-change/04--secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + name: password-change-pguser-password-change +# Updating the password with the stringData field and an md5-based verifier +stringData: + password: infopond + verifier: "md585eb8fa4f697b2ea949d3aba788e8631" + uri: "" diff --git a/testing/kuttl/e2e/password-change/04-errors.yaml b/testing/kuttl/e2e/password-change/04-errors.yaml new file mode 100644 index 0000000000..f23cdded80 --- /dev/null +++ b/testing/kuttl/e2e/password-change/04-errors.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Secret +metadata: + name: password-change-pguser-password-change +data: + # `04-secret.yaml` changes the password and removes the verifier field, + # so when PGO reconciles the secret, it should fill in the empty verifier field; + # if it does not fill in the verifier field by a certain time this step will error + # and KUTTL will mark the test as failed. 
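+  # (In this step the sentinel is actually the `uri` key: 04--secret.yaml blanks it,
+  # and PGO should regenerate it to include the new password.)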
+ uri: "" diff --git a/testing/kuttl/e2e/password-change/05--psql-connect-uri.yaml b/testing/kuttl/e2e/password-change/05--psql-connect-uri.yaml new file mode 100644 index 0000000000..8e96ccfde5 --- /dev/null +++ b/testing/kuttl/e2e/password-change/05--psql-connect-uri.yaml @@ -0,0 +1,26 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri3 +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - "$(PGURI)" + - -c + - "select version();" + env: + # The ./04-errors.yaml checks that the secret is not in the state that we set it to + # in the ./04-secret.yaml file, i.e., the secret has been reconciled by PGO, + # so the uri field of the secret should be updated with the new password by this time + - name: PGURI + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/05--psql-connect.yaml b/testing/kuttl/e2e/password-change/05--psql-connect.yaml new file mode 100644 index 0000000000..7209235f31 --- /dev/null +++ b/testing/kuttl/e2e/password-change/05--psql-connect.yaml @@ -0,0 +1,34 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect3 +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - -c + - "select version();" + env: + - name: PGHOST + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: password-change-pguser-password-change, key: user } } + # Hardcoding the password here to be equal to what we changed the password to in + # ./04-secret.yaml + # The ./04-errors.yaml checks that the secret is not in the state that we set it to + # in the ./04-secret.yaml file, i.e., the secret has been reconciled by PGO + - name: PGPASSWORD + value: infopond + + # Do not wait indefinitely. 
+ - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/05-assert.yaml b/testing/kuttl/e2e/password-change/05-assert.yaml new file mode 100644 index 0000000000..07c2349b06 --- /dev/null +++ b/testing/kuttl/e2e/password-change/05-assert.yaml @@ -0,0 +1,13 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect3 +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri3 +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/password-change/06--cluster.yaml b/testing/kuttl/e2e/password-change/06--cluster.yaml new file mode 100644 index 0000000000..4cb70defdd --- /dev/null +++ b/testing/kuttl/e2e/password-change/06--cluster.yaml @@ -0,0 +1,10 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: password-change +spec: + # Adding a custom user to the spec + users: + - name: rhino + databases: + - rhino diff --git a/testing/kuttl/e2e/password-change/06-assert.yaml b/testing/kuttl/e2e/password-change/06-assert.yaml new file mode 100644 index 0000000000..bfedc0b25e --- /dev/null +++ b/testing/kuttl/e2e/password-change/06-assert.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: password-change +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: password-change-primary diff --git a/testing/kuttl/e2e/password-change/07--psql-connect-uri.yaml b/testing/kuttl/e2e/password-change/07--psql-connect-uri.yaml new file mode 100644 index 0000000000..2fb8057021 --- /dev/null +++ b/testing/kuttl/e2e/password-change/07--psql-connect-uri.yaml @@ -0,0 +1,23 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri4 +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - "$(PGURI)" + - -c + - "select version();" + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/07--psql-connect.yaml b/testing/kuttl/e2e/password-change/07--psql-connect.yaml new file mode 100644 index 0000000000..277cce24c4 --- /dev/null +++ b/testing/kuttl/e2e/password-change/07--psql-connect.yaml @@ -0,0 +1,30 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect4 +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - -c + - "select version();" + env: + - name: PGHOST + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: password } } + + # Do not wait indefinitely. 
+ - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/07-assert.yaml b/testing/kuttl/e2e/password-change/07-assert.yaml new file mode 100644 index 0000000000..4f6afd5d98 --- /dev/null +++ b/testing/kuttl/e2e/password-change/07-assert.yaml @@ -0,0 +1,13 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect4 +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri4 +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/password-change/08--secret.yaml b/testing/kuttl/e2e/password-change/08--secret.yaml new file mode 100644 index 0000000000..b104ce7ae7 --- /dev/null +++ b/testing/kuttl/e2e/password-change/08--secret.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: Secret +metadata: + name: password-change-pguser-rhino +data: + # Hardcoding the password as "datalake" + password: ZGF0YWxha2U= + verifier: "" diff --git a/testing/kuttl/e2e/password-change/08-errors.yaml b/testing/kuttl/e2e/password-change/08-errors.yaml new file mode 100644 index 0000000000..a7ab60c9eb --- /dev/null +++ b/testing/kuttl/e2e/password-change/08-errors.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Secret +metadata: + name: password-change-pguser-rhino +data: + # `08-secret.yaml` changes the password and removes the verifier field, + # so when PGO reconciles the secret, it should fill in the empty verifier field; + # if it does not fill in the verifier field by a certain time this step will error + # and KUTTL will mark the test as failed. + verifier: "" diff --git a/testing/kuttl/e2e/password-change/09--psql-connect-uri.yaml b/testing/kuttl/e2e/password-change/09--psql-connect-uri.yaml new file mode 100644 index 0000000000..5d83af7933 --- /dev/null +++ b/testing/kuttl/e2e/password-change/09--psql-connect-uri.yaml @@ -0,0 +1,26 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri5 +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - "$(PGURI)" + - -c + - "select version();" + env: + # The ./08-errors.yaml checks that the secret is not in the state that we set it to + # in the ./08-secret.yaml file, i.e., the secret has been reconciled by PGO, + # so the uri field of the secret should be updated with the new password by this time + - name: PGURI + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: uri } } + + # Do not wait indefinitely. 
+ - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/09--psql-connect.yaml b/testing/kuttl/e2e/password-change/09--psql-connect.yaml new file mode 100644 index 0000000000..912fb33561 --- /dev/null +++ b/testing/kuttl/e2e/password-change/09--psql-connect.yaml @@ -0,0 +1,34 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect5 +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - -c + - "select version();" + env: + - name: PGHOST + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: user } } + # Hardcoding the password here to be equal to what we changed the password to in + # ./08-secret.yaml + # The ./08-errors.yaml checks that the secret is not in the state that we set it to + # in the ./08-secret.yaml file, i.e., the secret has been reconciled by PGO + - name: PGPASSWORD + value: datalake + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/09-assert.yaml b/testing/kuttl/e2e/password-change/09-assert.yaml new file mode 100644 index 0000000000..399b7cb17d --- /dev/null +++ b/testing/kuttl/e2e/password-change/09-assert.yaml @@ -0,0 +1,13 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect5 +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri5 +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/password-change/10--secret.yaml b/testing/kuttl/e2e/password-change/10--secret.yaml new file mode 100644 index 0000000000..7002cc622e --- /dev/null +++ b/testing/kuttl/e2e/password-change/10--secret.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Secret +metadata: + name: password-change-pguser-rhino +# Updating the password with the stringData field and a scram verifier +stringData: + password: infopond + verifier: "SCRAM-SHA-256$4096:RI03PMRQH2oAFMH6AOQHdA==$D74VOn98ErW3J8CIiFYldUVO+kjsXj+Ju7jhmMURHQo=:c5hC/1V2TYNnoJ6VcaSJCcoGQ2eTcYJBP/pfKFv+k54=" + uri: "" diff --git a/testing/kuttl/e2e/password-change/10-errors.yaml b/testing/kuttl/e2e/password-change/10-errors.yaml new file mode 100644 index 0000000000..16d7b1642a --- /dev/null +++ b/testing/kuttl/e2e/password-change/10-errors.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Secret +metadata: + name: password-change-pguser-rhino +data: + # `10-secret.yaml` changes the password and removes the verifier field, + # so when PGO reconciles the secret, it should fill in the empty verifier field; + # if it does not fill in the verifier field by a certain time this step will error + # and KUTTL will mark the test as failed. 
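+  # (As in step 04, the `uri` key is the sentinel: 10--secret.yaml blanks it, and PGO
+  # should rewrite it once the new password and verifier are reconciled.)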
+ uri: "" diff --git a/testing/kuttl/e2e/password-change/11--psql-connect-uri.yaml b/testing/kuttl/e2e/password-change/11--psql-connect-uri.yaml new file mode 100644 index 0000000000..f7f6d8287a --- /dev/null +++ b/testing/kuttl/e2e/password-change/11--psql-connect-uri.yaml @@ -0,0 +1,26 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri6 +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - "$(PGURI)" + - -c + - "select version();" + env: + # The ./10-errors.yaml checks that the secret is not in the state that we set it to + # in the ./10-secret.yaml file, i.e., the secret has been reconciled by PGO, + # so the uri field of the secret should be updated with the new password by this time + - name: PGURI + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/11--psql-connect.yaml b/testing/kuttl/e2e/password-change/11--psql-connect.yaml new file mode 100644 index 0000000000..420de82024 --- /dev/null +++ b/testing/kuttl/e2e/password-change/11--psql-connect.yaml @@ -0,0 +1,34 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect6 +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - -c + - "select version();" + env: + - name: PGHOST + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: password-change-pguser-rhino, key: user } } + # Hardcoding the password here to be equal to what we changed the password to in + # ./10-secret.yaml + # The ./10-errors.yaml checks that the secret is not in the state that we set it to + # in the ./10-secret.yaml file, i.e., the secret has been reconciled by PGO + - name: PGPASSWORD + value: infopond + + # Do not wait indefinitely. 
+ - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/password-change/11-assert.yaml b/testing/kuttl/e2e/password-change/11-assert.yaml new file mode 100644 index 0000000000..589c2cbf21 --- /dev/null +++ b/testing/kuttl/e2e/password-change/11-assert.yaml @@ -0,0 +1,13 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect6 +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect-uri6 +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/password-change/README.md b/testing/kuttl/e2e/password-change/README.md new file mode 100644 index 0000000000..e898bd5ac2 --- /dev/null +++ b/testing/kuttl/e2e/password-change/README.md @@ -0,0 +1,27 @@ +### Password Change Test with Kuttl + +This Kuttl routine runs through the following steps: + +#### Create cluster and test connection + +- 00: Creates the cluster and verifies that it exists and is ready for connection +- 01: Connects to the cluster with the PGO-generated password (both with env vars and with the URI) + +#### Default user connection tests + +- 02: Change the password (using Kuttl's update object method on the secret's `data` field) and verify that the password changes by asserting that the `verifier` field is not blank (using KUTTL's `errors` method, which makes sure that a state is _not_ met by a certain time) +- 03: Connects to the cluster with the user-defined password (both with env vars and with the URI) +- 04: Change the password and verifier (using Kuttl's update object method on the secret's `stringData` field) and verify that the password changes by asserting that the `uri` field is not blank (using KUTTL's `errors` method, which makes sure that a state is _not_ met by a certain time) +- 05: Connects to the cluster with the second user-defined password (both with env vars and with the URI) + +#### Create custom user and test connection + +- 06: Updates the postgrescluster spec with a custom user and password +- 07: Connects to the cluster with the PGO-generated password (both with env vars and with the URI) for the custom user + +#### Custom user connection tests + +- 08: Change the custom user's password (using Kuttl's update object method on the secret's `data` field) and verify that the password changes by asserting that the `verifier` field is not blank (using KUTTL's `errors` method, which makes sure that a state is _not_ met by a certain time) +- 09: Connects to the cluster with the user-defined password (both with env vars and with the URI) for the custom user +- 10: Change the custom user's password and verifier (using Kuttl's update object method on the secret's `stringData` field) and verify that the password changes by asserting that the `uri` field is not blank (using KUTTL's `errors` method, which makes sure that a state is _not_ met by a certain time) +- 11: Connects to the cluster with the second user-defined password (both with env vars and with the URI) for the custom user diff --git a/testing/kuttl/e2e/pgadmin/01--cluster.yaml b/testing/kuttl/e2e/pgadmin/01--cluster.yaml new file mode 100644 index 0000000000..d1afb7be04 --- /dev/null +++ b/testing/kuttl/e2e/pgadmin/01--cluster.yaml @@ -0,0 +1,40 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: test-cm +data: + configMap: config +--- +apiVersion: v1 +kind: Secret +metadata: + name: test-secret +type: Opaque +stringData: + password: myPassword +--- +# Create a cluster with a configured pgAdmin UI. 
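+# The Secret and ConfigMap above are projected into the pgAdmin Pod under
+# /etc/pgadmin/conf.d, and `settings` is rendered into the pgadmin.json configuration;
+# step 02 verifies both locations.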
+apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: interfaced + labels: { postgres-operator-test: kuttl } +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + userInterface: + pgAdmin: + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + config: + files: + - secret: + name: test-secret + - configMap: + name: test-cm + settings: + SHOW_GRAVATAR_IMAGE: False + LOGIN_BANNER: | + Custom KUTTL Login Banner diff --git a/testing/kuttl/e2e/pgadmin/01-assert.yaml b/testing/kuttl/e2e/pgadmin/01-assert.yaml new file mode 100644 index 0000000000..e4192a1217 --- /dev/null +++ b/testing/kuttl/e2e/pgadmin/01-assert.yaml @@ -0,0 +1,32 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: interfaced +status: + instances: + - name: instance1 + replicas: 1 + readyReplicas: 1 + updatedReplicas: 1 + +--- +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: interfaced-pgadmin +status: + replicas: 1 + readyReplicas: 1 + updatedReplicas: 1 + +--- +apiVersion: v1 +kind: Secret +metadata: + name: test-secret +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: test-cm diff --git a/testing/kuttl/e2e/pgadmin/02--check-settings.yaml b/testing/kuttl/e2e/pgadmin/02--check-settings.yaml new file mode 100644 index 0000000000..c68d032d1e --- /dev/null +++ b/testing/kuttl/e2e/pgadmin/02--check-settings.yaml @@ -0,0 +1,56 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + # Log the amount of space on the startup volume. Assert that 4KiB are used. + - script: | + kubectl exec --namespace "${NAMESPACE}" statefulset.apps/interfaced-pgadmin \ + -- df --block-size=1K /etc/pgadmin | + awk '{ print } END { exit ($3 != "4") }' + + # Assert that current settings contain values from the spec. + - script: | + SETTINGS=$( + kubectl exec --namespace "${NAMESPACE}" statefulset.apps/interfaced-pgadmin \ + -- cat /etc/pgadmin/conf.d/~postgres-operator/pgadmin.json + ) + + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + { + contains "${SETTINGS}" '"LOGIN_BANNER": "Custom KUTTL Login Banner\n"' && + contains "${SETTINGS}" '"SHOW_GRAVATAR_IMAGE": false' + } || { + echo >&2 'Wrong settings!' + echo "${SETTINGS}" + exit 1 + } + + - script: | + CONTENTS=$( + kubectl exec --namespace "${NAMESPACE}" statefulset.apps/interfaced-pgadmin \ + -- cat /etc/pgadmin/conf.d/configMap + ) + + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + { + contains "${CONTENTS}" 'config' + } || { + echo >&2 'Wrong settings!' + echo "${CONTENTS}" + exit 1 + } + + - script: | + CONTENTS=$( + kubectl exec --namespace "${NAMESPACE}" statefulset.apps/interfaced-pgadmin \ + -- cat /etc/pgadmin/conf.d/password + ) + + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + { + contains "${CONTENTS}" 'myPassword' + } || { + echo >&2 'Wrong settings!' 
+ echo "${CONTENTS}" + exit 1 + } diff --git a/testing/kuttl/e2e/pgbackrest-backup-standby/00--cluster.yaml b/testing/kuttl/e2e/pgbackrest-backup-standby/00--cluster.yaml new file mode 100644 index 0000000000..9665fac665 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-backup-standby/00--cluster.yaml @@ -0,0 +1,28 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: pgbackrest-backup-standby +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + global: + backup-standby: "y" + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/pgbackrest-backup-standby/00-assert.yaml b/testing/kuttl/e2e/pgbackrest-backup-standby/00-assert.yaml new file mode 100644 index 0000000000..d69a3c68b5 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-backup-standby/00-assert.yaml @@ -0,0 +1,23 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: pgbackrest-backup-standby +status: + pgbackrest: + repoHost: + apiVersion: apps/v1 + kind: StatefulSet + ready: true + repos: + - bound: true + name: repo1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: pgbackrest-backup-standby + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 +status: + phase: Failed diff --git a/testing/kuttl/e2e/pgbackrest-backup-standby/01--check-backup-logs.yaml b/testing/kuttl/e2e/pgbackrest-backup-standby/01--check-backup-logs.yaml new file mode 100644 index 0000000000..72d2050d4a --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-backup-standby/01--check-backup-logs.yaml @@ -0,0 +1,20 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +# First, find at least one backup job pod. +# Then, check the logs for the 'unable to find standby cluster' line. +# If this line isn't found, exit 1. 
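+# With backup-standby=y and only one instance there is no standby to back up from,
+# so the replica-create backup Pod is expected to fail (see 00-assert.yaml).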
+- script: | + retry() { bash -ceu 'printf "$1\nSleeping...\n" && sleep 5' - "$@"; } + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + + pod=$(kubectl get pods -o name -n "${NAMESPACE}" \ + -l postgres-operator.crunchydata.com/cluster=pgbackrest-backup-standby \ + -l postgres-operator.crunchydata.com/pgbackrest-backup=replica-create) + [ "$pod" = "" ] && retry "Pod not found" && exit 1 + + logs=$(kubectl logs "${pod}" --namespace "${NAMESPACE}") + { contains "${logs}" 'unable to find standby cluster - cannot proceed'; } || { + echo 'did not find expected standby cluster error ' + exit 1 + } diff --git a/testing/kuttl/e2e/pgbackrest-backup-standby/02--cluster.yaml b/testing/kuttl/e2e/pgbackrest-backup-standby/02--cluster.yaml new file mode 100644 index 0000000000..c986f4a9de --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-backup-standby/02--cluster.yaml @@ -0,0 +1,28 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: pgbackrest-backup-standby +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + global: + backup-standby: "y" + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/pgbackrest-backup-standby/02-assert.yaml b/testing/kuttl/e2e/pgbackrest-backup-standby/02-assert.yaml new file mode 100644 index 0000000000..92f7b12f5a --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-backup-standby/02-assert.yaml @@ -0,0 +1,25 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: pgbackrest-backup-standby +status: + pgbackrest: + repoHost: + apiVersion: apps/v1 + kind: StatefulSet + ready: true + repos: + - bound: true + name: repo1 + replicaCreateBackupComplete: true + stanzaCreated: true +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: pgbackrest-backup-standby + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbackrest-backup-standby/README.md b/testing/kuttl/e2e/pgbackrest-backup-standby/README.md new file mode 100644 index 0000000000..39fb8707a8 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-backup-standby/README.md @@ -0,0 +1,5 @@ +### pgBackRest backup-standby test + +* 00: Create a cluster with 'backup-standby' set to 'y' but with only one replica. +* 01: Check the backup Job Pod logs for the expected error. +* 02: Update the cluster to have 2 replicas and verify that the cluster can initialize successfully and the backup job can complete. 
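+
+For reference, the log check in step 01 amounts to something like the following sketch
+(the same labels and error string used by `01--check-backup-logs.yaml`; `$NAMESPACE` is
+whatever namespace KUTTL created for the test):
+
+```shell
+pod=$(kubectl get pods -o name -n "${NAMESPACE}" \
+  -l postgres-operator.crunchydata.com/cluster=pgbackrest-backup-standby \
+  -l postgres-operator.crunchydata.com/pgbackrest-backup=replica-create)
+kubectl logs -n "${NAMESPACE}" "${pod}" | grep 'unable to find standby cluster'
+```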
diff --git a/testing/kuttl/e2e/pgbackrest-init/00--cluster.yaml b/testing/kuttl/e2e/pgbackrest-init/00--cluster.yaml new file mode 100644 index 0000000000..03391359a1 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/00--cluster.yaml @@ -0,0 +1,38 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: init-pgbackrest +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + manual: + repoName: repo2 + options: + - --type=full + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + # Adding a second PVC repo for testing, rather than test with S3/GCS/Azure + - name: repo2 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/pgbackrest-init/00-assert.yaml b/testing/kuttl/e2e/pgbackrest-init/00-assert.yaml new file mode 100644 index 0000000000..5181c95993 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/00-assert.yaml @@ -0,0 +1,68 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: init-pgbackrest +status: + pgbackrest: + repoHost: + apiVersion: apps/v1 + kind: StatefulSet + ready: true + repos: +# Assert that the status has the two repos, with only the first having the `replicaCreateBackupComplete` field + - bound: true + name: repo1 + replicaCreateBackupComplete: true + stanzaCreated: true + - bound: true + name: repo2 + stanzaCreated: true +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: init-pgbackrest + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 +status: + succeeded: 1 +--- +# Assert the existence of two PVCs +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: init-pgbackrest + postgres-operator.crunchydata.com/data: pgbackrest + postgres-operator.crunchydata.com/pgbackrest: "" + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 + postgres-operator.crunchydata.com/pgbackrest-volume: "" + name: init-pgbackrest-repo1 +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +status: + phase: Bound +--- +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + labels: + postgres-operator.crunchydata.com/cluster: init-pgbackrest + postgres-operator.crunchydata.com/data: pgbackrest + postgres-operator.crunchydata.com/pgbackrest: "" + postgres-operator.crunchydata.com/pgbackrest-repo: repo2 + postgres-operator.crunchydata.com/pgbackrest-volume: "" + name: init-pgbackrest-repo2 +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +status: + phase: Bound diff --git a/testing/kuttl/e2e/pgbackrest-init/01-pgbackrest-connect.yaml b/testing/kuttl/e2e/pgbackrest-init/01-pgbackrest-connect.yaml new file mode 100644 index 0000000000..94fa317da1 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/01-pgbackrest-connect.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +# When the cluster comes up, only the repo in the 0th position has activated with a backup, +# so the pgbackrest status should be "mixed" and there should be only one backup +- script: CLUSTER=init-pgbackrest ../../scripts/pgbackrest-initialization.sh "mixed" 1 
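+# (pgbackrest-initialization.sh is not included in this diff; per the test README it
+# verifies the reported pgBackRest status, "mixed" at this point because repo2 has no
+# backup yet, and the number of full backups, presumably by running `pgbackrest info`.)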
diff --git a/testing/kuttl/e2e/pgbackrest-init/02--cluster.yaml b/testing/kuttl/e2e/pgbackrest-init/02--cluster.yaml new file mode 100644 index 0000000000..606272257d --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/02--cluster.yaml @@ -0,0 +1,5 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +- command: kubectl annotate postgrescluster init-pgbackrest postgres-operator.crunchydata.com/pgbackrest-backup="manual" + namespaced: true diff --git a/testing/kuttl/e2e/pgbackrest-init/02-assert.yaml b/testing/kuttl/e2e/pgbackrest-init/02-assert.yaml new file mode 100644 index 0000000000..589a04e738 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/02-assert.yaml @@ -0,0 +1,10 @@ +# Manual backup job should have pushed to repo2 +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: init-pgbackrest + postgres-operator.crunchydata.com/pgbackrest-backup: manual + postgres-operator.crunchydata.com/pgbackrest-repo: repo2 +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbackrest-init/03-pgbackrest-connect.yaml b/testing/kuttl/e2e/pgbackrest-init/03-pgbackrest-connect.yaml new file mode 100644 index 0000000000..9c5cbc9154 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/03-pgbackrest-connect.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +# Now that a manual backup has been pushed to repo2, the pgbackrest status should be "ok" +# and there should be two backups +- script: CLUSTER=init-pgbackrest ../../scripts/pgbackrest-initialization.sh "ok" 2 diff --git a/testing/kuttl/e2e/pgbackrest-init/04--cluster.yaml b/testing/kuttl/e2e/pgbackrest-init/04--cluster.yaml new file mode 100644 index 0000000000..e732f1fd9a --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/04--cluster.yaml @@ -0,0 +1,40 @@ +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: init-pgbackrest +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + manual: + repoName: repo2 + options: + - --type=full + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + # Adding a second PVC repo for testing, rather than test with S3/GCS/Azure + - name: repo2 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/pgbackrest-init/04-assert.yaml b/testing/kuttl/e2e/pgbackrest-init/04-assert.yaml new file mode 100644 index 0000000000..04a38ac9f4 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/04-assert.yaml @@ -0,0 +1,34 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: init-pgbackrest +status: + instances: + - name: instance1 + readyReplicas: 2 + replicas: 2 + updatedReplicas: 2 + pgbackrest: + repoHost: + apiVersion: apps/v1 + kind: StatefulSet + ready: true + repos: +# Assert that the status has the two repos, with only the first having the `replicaCreateBackupComplete` field + - bound: true + name: repo1 + replicaCreateBackupComplete: true + stanzaCreated: true + - bound: true + name: repo2 + stanzaCreated: true +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: init-pgbackrest + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create + 
postgres-operator.crunchydata.com/pgbackrest-repo: repo1 +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbackrest-init/05-pgbackrest-connect.yaml b/testing/kuttl/e2e/pgbackrest-init/05-pgbackrest-connect.yaml new file mode 100644 index 0000000000..d8b9cd6758 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/05-pgbackrest-connect.yaml @@ -0,0 +1,25 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +- script: | + # Assumes the cluster only has a single replica + NEW_REPLICA=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=init-pgbackrest, + postgres-operator.crunchydata.com/role=replica' + ) + + LIST=$( + kubectl exec --namespace "${NAMESPACE}" "${NEW_REPLICA}" -- \ + ls /pgdata/pg${KUTTL_PG_VERSION}/ + ) + + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + { + !(contains "${LIST}" 'recovery.signal') + } || { + echo >&2 'Signal file(s) found' + echo "${LIST}" + exit 1 + } diff --git a/testing/kuttl/e2e/pgbackrest-init/06--check-spool-path.yaml b/testing/kuttl/e2e/pgbackrest-init/06--check-spool-path.yaml new file mode 100644 index 0000000000..e32cc2fc87 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/06--check-spool-path.yaml @@ -0,0 +1,17 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +- script: | + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/role=master' + ) + + LIST=$( + kubectl exec --namespace "${NAMESPACE}" -c database "${PRIMARY}" -- \ + ls -l /pgdata + ) + + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + contains "$LIST" "pgbackrest-spool" || exit 1 diff --git a/testing/kuttl/e2e/pgbackrest-init/README.md b/testing/kuttl/e2e/pgbackrest-init/README.md new file mode 100644 index 0000000000..d319a31b09 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-init/README.md @@ -0,0 +1,6 @@ +### pgBackRest Init test + +* 00: Create a cluster with two PVC repos and set up for manual backups to go to the second; verify that the PVCs exist and that the backup job completed successfully +* 01: Run pgbackrest-initialization.sh, which checks that the status matches the expected status of `mixed` (because the second repo in the repo list has not yet been pushed to) and that there is only one full backup +* 02: Use `kubectl` to annotate the cluster to initiate a manual backup; verify that the job completed successfully +* 03: Rerun pgbackrest-initialization.sh, now expecting the status to be `ok` since both repos have been pushed to and there to be two full backups diff --git a/testing/kuttl/e2e/pgbackrest-restore/01--create-cluster.yaml b/testing/kuttl/e2e/pgbackrest-restore/01--create-cluster.yaml new file mode 100644 index 0000000000..c414806892 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/01--create-cluster.yaml @@ -0,0 +1,26 @@ +--- +# Create a cluster with a single pgBackRest repository and some parameters that +# require attention during PostgreSQL recovery. 
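+# (max_connections is one such parameter: a server recovering with hot standby enabled
+# refuses to start if its value is lower than the value recorded in the backup, so the
+# restore steps below have to account for it.)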
+apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: original + labels: { postgres-operator-test: kuttl } +spec: + postgresVersion: ${KUTTL_PG_VERSION} + patroni: + dynamicConfiguration: + postgresql: + parameters: + max_connections: 200 + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + replicas: 2 + backups: + pgbackrest: + manual: + repoName: repo1 + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/pgbackrest-restore/01-assert.yaml b/testing/kuttl/e2e/pgbackrest-restore/01-assert.yaml new file mode 100644 index 0000000000..25b5bbee76 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/01-assert.yaml @@ -0,0 +1,12 @@ +--- +# Wait for the replica backup to complete. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: original +status: + pgbackrest: + repos: + - name: repo1 + replicaCreateBackupComplete: true + stanzaCreated: true diff --git a/testing/kuttl/e2e/pgbackrest-restore/02--create-data.yaml b/testing/kuttl/e2e/pgbackrest-restore/02--create-data.yaml new file mode 100644 index 0000000000..6801edbf61 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/02--create-data.yaml @@ -0,0 +1,32 @@ +--- +# Create some data that will be restored. +apiVersion: batch/v1 +kind: Job +metadata: + name: original-data + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: original-pguser-original, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + command: + - psql + - $(PGURI) + - --set=ON_ERROR_STOP=1 + - --command + - | + CREATE SCHEMA "original"; + CREATE TABLE important (data) AS VALUES ('treasure'); diff --git a/testing/kuttl/e2e/pgbackrest-restore/02-assert.yaml b/testing/kuttl/e2e/pgbackrest-restore/02-assert.yaml new file mode 100644 index 0000000000..5115ba97c9 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/02-assert.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: original-data +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbackrest-restore/03--backup.yaml b/testing/kuttl/e2e/pgbackrest-restore/03--backup.yaml new file mode 100644 index 0000000000..b759dd0fc4 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/03--backup.yaml @@ -0,0 +1,8 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + # Annotate the cluster to trigger a backup. + - script: | + kubectl annotate --namespace="${NAMESPACE}" postgrescluster/original \ + 'postgres-operator.crunchydata.com/pgbackrest-backup=one' diff --git a/testing/kuttl/e2e/pgbackrest-restore/03-assert.yaml b/testing/kuttl/e2e/pgbackrest-restore/03-assert.yaml new file mode 100644 index 0000000000..a2c5b3bb22 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/03-assert.yaml @@ -0,0 +1,13 @@ +--- +# Wait for the backup job to complete. 
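+# The Job name is generated, so this assert matches on labels and the annotation only;
+# KUTTL treats it as satisfied once any Job with these fields reports success.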
+apiVersion: batch/v1 +kind: Job +metadata: + annotations: + postgres-operator.crunchydata.com/pgbackrest-backup: one + labels: + postgres-operator.crunchydata.com/cluster: original + postgres-operator.crunchydata.com/pgbackrest-backup: manual + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbackrest-restore/04--clone-cluster.yaml b/testing/kuttl/e2e/pgbackrest-restore/04--clone-cluster.yaml new file mode 100644 index 0000000000..4bc1ce56a9 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/04--clone-cluster.yaml @@ -0,0 +1,22 @@ +--- +# Clone the cluster using a pgBackRest restore. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: clone-one + labels: { postgres-operator-test: kuttl } +spec: + dataSource: + postgresCluster: + clusterName: original + repoName: repo1 + + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/pgbackrest-restore/04-assert.yaml b/testing/kuttl/e2e/pgbackrest-restore/04-assert.yaml new file mode 100644 index 0000000000..8aa51fc440 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/04-assert.yaml @@ -0,0 +1,12 @@ +--- +# Wait for the clone cluster to come online. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: clone-one +status: + instances: + - name: '00' + replicas: 1 + readyReplicas: 1 + updatedReplicas: 1 diff --git a/testing/kuttl/e2e/pgbackrest-restore/05--check-data.yaml b/testing/kuttl/e2e/pgbackrest-restore/05--check-data.yaml new file mode 100644 index 0000000000..1ee6fe9c32 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/05--check-data.yaml @@ -0,0 +1,49 @@ +--- +# Confirm that all the data was restored. +apiVersion: batch/v1 +kind: Job +metadata: + name: clone-one-data + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + # Connect to the cluster using the restored database and original credentials. + - name: PGHOST + valueFrom: { secretKeyRef: { name: clone-one-pguser-clone-one, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: clone-one-pguser-clone-one, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: original-pguser-original, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: original-pguser-original, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: original-pguser-original, key: password } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + # Confirm that all the data was restored. + # Note: the `$$$$` is reduced to `$$` by Kubernetes. 
+ # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - -qa + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + DECLARE + restored jsonb; + BEGIN + SELECT jsonb_agg(important) INTO restored FROM important; + ASSERT restored = '[{"data":"treasure"}]', format('got %L', restored); + END $$$$; diff --git a/testing/kuttl/e2e/pgbackrest-restore/05-assert.yaml b/testing/kuttl/e2e/pgbackrest-restore/05-assert.yaml new file mode 100644 index 0000000000..1b6fad318b --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/05-assert.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: clone-one-data +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbackrest-restore/06--delete-clone.yaml b/testing/kuttl/e2e/pgbackrest-restore/06--delete-clone.yaml new file mode 100644 index 0000000000..69ebc06c9d --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/06--delete-clone.yaml @@ -0,0 +1,8 @@ +--- +# Remove the cloned cluster. +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: clone-one diff --git a/testing/kuttl/e2e/pgbackrest-restore/07--annotate.yaml b/testing/kuttl/e2e/pgbackrest-restore/07--annotate.yaml new file mode 100644 index 0000000000..279c216ed0 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/07--annotate.yaml @@ -0,0 +1,18 @@ +--- +# Annotate the cluster with the timestamp at which PostgreSQL last started. +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=original, + postgres-operator.crunchydata.com/role=master' + ) + START=$( + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" \ + -- psql -qAt --command 'SELECT pg_postmaster_start_time()' + ) + kubectl annotate --namespace "${NAMESPACE}" postgrescluster/original \ + "testing/start-before=${START}" diff --git a/testing/kuttl/e2e/pgbackrest-restore/07--update-cluster.yaml b/testing/kuttl/e2e/pgbackrest-restore/07--update-cluster.yaml new file mode 100644 index 0000000000..f83a02c7c6 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/07--update-cluster.yaml @@ -0,0 +1,25 @@ +--- +# Update the cluster with PostgreSQL parameters that require attention during recovery. 
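+# (Raising max_connections from 200 to 1000 requires a PostgreSQL restart, which is why
+# the surrounding steps record the postmaster start time and wait for it to change.)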
+apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: original + labels: { postgres-operator-test: kuttl } +spec: + postgresVersion: ${KUTTL_PG_VERSION} + patroni: + dynamicConfiguration: + postgresql: + parameters: + max_connections: 1000 + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + replicas: 2 + backups: + pgbackrest: + manual: + repoName: repo1 + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/pgbackrest-restore/08--wait-restart.yaml b/testing/kuttl/e2e/pgbackrest-restore/08--wait-restart.yaml new file mode 100644 index 0000000000..305d757386 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/08--wait-restart.yaml @@ -0,0 +1,29 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + BEFORE=$( + kubectl get --namespace "${NAMESPACE}" postgrescluster/original \ + --output 'go-template={{ index .metadata.annotations "testing/start-before" }}' + ) + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=original, + postgres-operator.crunchydata.com/role=master' + ) + + # Wait for PostgreSQL to restart. + while true; do + START=$( + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" \ + -- psql -qAt --command 'SELECT pg_postmaster_start_time()' + ) + if [ "${START}" ] && [ "${START}" != "${BEFORE}" ]; then break; else sleep 1; fi + done + echo "${START} != ${BEFORE}" + + # Reset counters in the "pg_stat_archiver" view. + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" \ + -- psql -qb --command "SELECT pg_stat_reset_shared('archiver')" diff --git a/testing/kuttl/e2e/pgbackrest-restore/09--add-data.yaml b/testing/kuttl/e2e/pgbackrest-restore/09--add-data.yaml new file mode 100644 index 0000000000..41c2255239 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/09--add-data.yaml @@ -0,0 +1,31 @@ +--- +# Add more data to the WAL archive. +apiVersion: batch/v1 +kind: Job +metadata: + name: original-more-data + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: original-pguser-original, key: uri } } + + # Do not wait indefinitely. 
+ - { name: PGCONNECT_TIMEOUT, value: '5' } + + command: + - psql + - $(PGURI) + - --set=ON_ERROR_STOP=1 + - --command + - | + INSERT INTO important (data) VALUES ('water'), ('socks'); diff --git a/testing/kuttl/e2e/pgbackrest-restore/09-assert.yaml b/testing/kuttl/e2e/pgbackrest-restore/09-assert.yaml new file mode 100644 index 0000000000..a60cd9ab8f --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/09-assert.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: original-more-data +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbackrest-restore/10--wait-archived.yaml b/testing/kuttl/e2e/pgbackrest-restore/10--wait-archived.yaml new file mode 100644 index 0000000000..446886ead3 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/10--wait-archived.yaml @@ -0,0 +1,18 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=original, + postgres-operator.crunchydata.com/role=master' + ) + + # Wait for the data to be sent to the WAL archive. A prior step reset the + # "pg_stat_archiver" counters, so anything more than zero should suffice. + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" -- psql -c 'SELECT pg_switch_wal()' + while [ 0 = "$( + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" -- psql -qAt -c 'SELECT archived_count FROM pg_stat_archiver' + )" ]; do sleep 1; done diff --git a/testing/kuttl/e2e/pgbackrest-restore/11--clone-cluster.yaml b/testing/kuttl/e2e/pgbackrest-restore/11--clone-cluster.yaml new file mode 100644 index 0000000000..fcbdde4ea7 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/11--clone-cluster.yaml @@ -0,0 +1,22 @@ +--- +# Clone the cluster using a pgBackRest restore. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: clone-two + labels: { postgres-operator-test: kuttl } +spec: + dataSource: + postgresCluster: + clusterName: original + repoName: repo1 + + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/pgbackrest-restore/11-assert.yaml b/testing/kuttl/e2e/pgbackrest-restore/11-assert.yaml new file mode 100644 index 0000000000..0ad9669a62 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/11-assert.yaml @@ -0,0 +1,12 @@ +--- +# Wait for the clone cluster to come online. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: clone-two +status: + instances: + - name: '00' + replicas: 1 + readyReplicas: 1 + updatedReplicas: 1 diff --git a/testing/kuttl/e2e/pgbackrest-restore/12--check-data.yaml b/testing/kuttl/e2e/pgbackrest-restore/12--check-data.yaml new file mode 100644 index 0000000000..2cd2e4932b --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/12--check-data.yaml @@ -0,0 +1,51 @@ +--- +# Confirm that all the data was restored. 
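+# clone-two was created after step 09 added more rows and they reached the WAL archive,
+# so the restored table should contain 'water' and 'socks' in addition to 'treasure'.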
+apiVersion: batch/v1 +kind: Job +metadata: + name: clone-two-data + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + # Connect to the cluster using the restored database and original credentials. + - name: PGHOST + valueFrom: { secretKeyRef: { name: clone-two-pguser-clone-two, key: host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: clone-two-pguser-clone-two, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: original-pguser-original, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: original-pguser-original, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: original-pguser-original, key: password } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + # Confirm that all the data was restored. + # Note: the `$$$$` is reduced to `$$` by Kubernetes. + # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - -qa + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + DECLARE + restored jsonb; + BEGIN + SELECT jsonb_agg(important) INTO restored FROM important; + ASSERT restored = '[ + {"data":"treasure"}, {"data":"water"}, {"data":"socks"} + ]', format('got %L', restored); + END $$$$; diff --git a/testing/kuttl/e2e/pgbackrest-restore/12-assert.yaml b/testing/kuttl/e2e/pgbackrest-restore/12-assert.yaml new file mode 100644 index 0000000000..198d196836 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/12-assert.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: clone-two-data +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbackrest-restore/13--delete-clone.yaml b/testing/kuttl/e2e/pgbackrest-restore/13--delete-clone.yaml new file mode 100644 index 0000000000..9646f66f35 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/13--delete-clone.yaml @@ -0,0 +1,8 @@ +--- +# Remove the cloned cluster. +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: clone-two diff --git a/testing/kuttl/e2e/pgbackrest-restore/14--lose-data.yaml b/testing/kuttl/e2e/pgbackrest-restore/14--lose-data.yaml new file mode 100644 index 0000000000..4f1eaeaa53 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/14--lose-data.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=original, + postgres-operator.crunchydata.com/role=master' + ) + OBJECTIVE=$( + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" \ + -- psql -qAt --command 'SELECT clock_timestamp()' + ) + + # Store the recovery objective for later steps. + kubectl annotate --namespace "${NAMESPACE}" postgrescluster/original \ + "testing/objective=${OBJECTIVE}" + + # A reason to restore. Wait for the change to be sent to the WAL archive. 
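+      # Dropping the table after recording the objective gives the
+      # point-in-time restore something to recover: the target time was taken
+      # just before the drop. Resetting the archiver counters and switching
+      # WAL let the loop below confirm the change reached the archive.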
+ kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" \ + -- psql -qb original --set ON_ERROR_STOP=1 \ + --command 'DROP TABLE original.important' \ + --command "SELECT pg_stat_reset_shared('archiver')" \ + --command 'SELECT pg_switch_wal()' + + while [ 0 = "$( + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" -- psql -qAt -c 'SELECT archived_count FROM pg_stat_archiver' + )" ]; do sleep 1; done + + # The replica should also need to be restored. + - script: | + REPLICA=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=original, + postgres-operator.crunchydata.com/role=replica' + ) + + kubectl exec --stdin --namespace "${NAMESPACE}" "${REPLICA}" \ + -- psql -qb original --set ON_ERROR_STOP=1 \ + --file=- <<'SQL' + DO $$ + BEGIN + ASSERT to_regclass('important') IS NULL, 'expected no table'; + PERFORM * FROM information_schema.tables WHERE table_name = 'important'; + ASSERT NOT FOUND, 'expected no table'; + END $$ + SQL diff --git a/testing/kuttl/e2e/pgbackrest-restore/15--in-place-pitr.yaml b/testing/kuttl/e2e/pgbackrest-restore/15--in-place-pitr.yaml new file mode 100644 index 0000000000..3e647946db --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/15--in-place-pitr.yaml @@ -0,0 +1,50 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + TARGET_JSON=$( + kubectl get --namespace "${NAMESPACE}" postgrescluster/original \ + --output 'go-template={{ index .metadata.annotations "testing/objective" | printf "--target=%q" | printf "%q" }}' + ) + + # Configure the cluster for an in-place point-in-time restore (PITR). + kubectl patch --namespace "${NAMESPACE}" postgrescluster/original \ + --type 'merge' --patch ' + {"spec":{"backups":{"pgbackrest":{"restore":{ + "enabled": true, + "repoName": "repo1", + "options": ["--type=time", '"${TARGET_JSON}"'] + }}}}}' + + # Annotate the cluster to trigger the restore. 
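+      # PGO runs the configured restore when this annotation's value changes;
+      # the same value, "one", is what a later assert expects in
+      # "status.pgbackrest.restore.id".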
+ kubectl annotate --namespace="${NAMESPACE}" postgrescluster/original \ + 'postgres-operator.crunchydata.com/pgbackrest-restore=one' + + # TODO(benjaminjb): remove this when PG10 is no longer being supported + # For PG10, we need to run a patronictl reinit for the replica when that is running + # Get the replica name--the replica will exist during the PITR process so we don't need to wait + if [[ ${KUTTL_PG_VERSION} == 10 ]]; then + # Find replica + REPLICA=$(kubectl get pods --namespace "${NAMESPACE}" \ + --selector=' + postgres-operator.crunchydata.com/cluster=original, + postgres-operator.crunchydata.com/data=postgres, + postgres-operator.crunchydata.com/role!=master' \ + --output=jsonpath={.items..metadata.name}) + + # Wait for replica to be deleted + kubectl wait pod/"${REPLICA}" --namespace "${NAMESPACE}" --for=delete --timeout=-1s + + # Wait for the restarted replica to be started + NOT_RUNNING="" + while [[ "${NOT_RUNNING}" == "" ]]; do + kubectl get pods --namespace "${NAMESPACE}" "${REPLICA}" || (sleep 1 && continue) + + NOT_RUNNING=$(kubectl get pods --namespace "${NAMESPACE}" "${REPLICA}" \ + --output jsonpath="{.status.containerStatuses[?(@.name=='database')].state.running.startedAt}") + sleep 1 + done + + kubectl exec --namespace "${NAMESPACE}" "${REPLICA}" -- patronictl reinit original-ha "${REPLICA}" --force + fi diff --git a/testing/kuttl/e2e/pgbackrest-restore/15-assert.yaml b/testing/kuttl/e2e/pgbackrest-restore/15-assert.yaml new file mode 100644 index 0000000000..c408b75a60 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/15-assert.yaml @@ -0,0 +1,16 @@ +--- +# Wait for the restore to complete and the cluster to come online. +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: original +status: + instances: + - name: '00' + replicas: 2 + readyReplicas: 2 + updatedReplicas: 2 + pgbackrest: + restore: + id: one + finished: true diff --git a/testing/kuttl/e2e/pgbackrest-restore/16--check-data.yaml b/testing/kuttl/e2e/pgbackrest-restore/16--check-data.yaml new file mode 100644 index 0000000000..b0ae252831 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/16--check-data.yaml @@ -0,0 +1,100 @@ +--- +# Confirm that data was restored to the point-in-time. +apiVersion: batch/v1 +kind: Job +metadata: + name: original-pitr-primary-data + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGURI + valueFrom: { secretKeyRef: { name: original-pguser-original, key: uri } } + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + # Note: the `$$$$` is reduced to `$$` by Kubernetes. + # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - $(PGURI) + - -qa + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + DECLARE + restored jsonb; + BEGIN + SELECT jsonb_agg(important) INTO restored FROM important; + ASSERT restored = '[ + {"data":"treasure"}, {"data":"water"}, {"data":"socks"} + ]', format('got %L', restored); + END $$$$; + +--- +# Confirm that replicas are also restored and streaming from the primary. 
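+# This Job targets the "original-replicas" Service instead of the primary and
+# repeats the data check while asserting pg_is_in_recovery().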
+apiVersion: batch/v1 +kind: Job +metadata: + name: original-pitr-replica-data + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 3 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGPORT + valueFrom: { secretKeyRef: { name: original-pguser-original, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: original-pguser-original, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: original-pguser-original, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: original-pguser-original, key: password } } + + # The user secret does not contain the replica service. + - name: NAMESPACE + valueFrom: { fieldRef: { fieldPath: metadata.namespace } } + - name: PGHOST + value: "original-replicas.$(NAMESPACE).svc" + + # Do not wait indefinitely. + - { name: PGCONNECT_TIMEOUT, value: '5' } + + # Note: the `$$$$` is reduced to `$$` by Kubernetes. + # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - -qa + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + DECLARE + restored jsonb; + BEGIN + ASSERT pg_is_in_recovery(), 'expected replica'; + -- only users with "pg_read_all_settings" role may examine "primary_conninfo" + -- ASSERT current_setting('primary_conninfo') <> '', 'expected streaming'; + + SELECT jsonb_agg(important) INTO restored FROM important; + ASSERT restored = '[ + {"data":"treasure"}, {"data":"water"}, {"data":"socks"} + ]', format('got %L', restored); + END $$$$; diff --git a/testing/kuttl/e2e/pgbackrest-restore/16-assert.yaml b/testing/kuttl/e2e/pgbackrest-restore/16-assert.yaml new file mode 100644 index 0000000000..0baadef25b --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/16-assert.yaml @@ -0,0 +1,15 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: original-pitr-primary-data +status: + succeeded: 1 + +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: original-pitr-replica-data +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbackrest-restore/17--check-replication.yaml b/testing/kuttl/e2e/pgbackrest-restore/17--check-replication.yaml new file mode 100644 index 0000000000..f6c813c8b1 --- /dev/null +++ b/testing/kuttl/e2e/pgbackrest-restore/17--check-replication.yaml @@ -0,0 +1,22 @@ +--- +# Confirm that the replica is streaming from the primary. 
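+# A pg_stat_wal_receiver row with status 'streaming' shows the replica has
+# reconnected to the restored primary and is receiving WAL again.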
+apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + REPLICA=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=original, + postgres-operator.crunchydata.com/role=replica' + ) + + kubectl exec --stdin --namespace "${NAMESPACE}" "${REPLICA}" \ + -- psql -qb original --set ON_ERROR_STOP=1 \ + --file=- <<'SQL' + DO $$ + BEGIN + PERFORM * FROM pg_stat_wal_receiver WHERE status = 'streaming'; + ASSERT FOUND, 'expected streaming replication'; + END $$ + SQL diff --git a/testing/kuttl/e2e/pgbouncer/00--cluster.yaml b/testing/kuttl/e2e/pgbouncer/00--cluster.yaml new file mode 100644 index 0000000000..4699d90171 --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/00--cluster.yaml @@ -0,0 +1,19 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: proxied + labels: { postgres-operator-test: kuttl } +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + proxy: + pgBouncer: + replicas: 1 + config: + # Set the pgBouncer verbosity level to debug to print connection logs + # --https://www.pgbouncer.org/config.html#log-settings + global: + verbose: '1' diff --git a/testing/kuttl/e2e/pgbouncer/00-assert.yaml b/testing/kuttl/e2e/pgbouncer/00-assert.yaml new file mode 100644 index 0000000000..6c3a33079f --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/00-assert.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: proxied +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Service +metadata: + name: proxied-pgbouncer diff --git a/testing/kuttl/e2e/pgbouncer/01--psql-connect.yaml b/testing/kuttl/e2e/pgbouncer/01--psql-connect.yaml new file mode 100644 index 0000000000..0f7099d4e8 --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/01--psql-connect.yaml @@ -0,0 +1,41 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 6 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + - psql + - -c + - "select version();" + env: + - name: PGSSLMODE + value: verify-full + - name: PGSSLROOTCERT + value: "/tmp/certs/ca.crt" + - name: PGHOST + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: pgbouncer-host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: pgbouncer-port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: password } } + volumeMounts: + - name: certs + mountPath: "/tmp/certs" + volumes: + - name: certs + secret: + secretName: proxied-cluster-cert diff --git a/testing/kuttl/e2e/pgbouncer/01-assert.yaml b/testing/kuttl/e2e/pgbouncer/01-assert.yaml new file mode 100644 index 0000000000..e4d8bbb37a --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/01-assert.yaml @@ -0,0 +1,6 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-connect +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbouncer/10--read-certificate.yaml 
b/testing/kuttl/e2e/pgbouncer/10--read-certificate.yaml new file mode 100644 index 0000000000..87739116ae --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/10--read-certificate.yaml @@ -0,0 +1,28 @@ +--- +# Print the certificate presented by PgBouncer. +apiVersion: batch/v1 +kind: Job +metadata: + name: read-cert-before + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 1 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: openssl + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGHOST + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: pgbouncer-host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: pgbouncer-port } } + command: + - bash + - -ceu + - | + openssl s_client --connect '$(PGHOST):$(PGPORT)' --starttls postgres < /dev/null 2> /dev/null | + openssl x509 --noout --text diff --git a/testing/kuttl/e2e/pgbouncer/10-assert.yaml b/testing/kuttl/e2e/pgbouncer/10-assert.yaml new file mode 100644 index 0000000000..87d1a262fb --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/10-assert.yaml @@ -0,0 +1,8 @@ +--- +# Wait for the job to complete. +apiVersion: batch/v1 +kind: Job +metadata: + name: read-cert-before +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbouncer/11--open-connection.yaml b/testing/kuttl/e2e/pgbouncer/11--open-connection.yaml new file mode 100644 index 0000000000..f43c586e7f --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/11--open-connection.yaml @@ -0,0 +1,43 @@ +--- +# Connect through PgBouncer and wait long enough for TLS certificates to rotate. +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-open-connection + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 1 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + volumes: + # TODO(cbandy): Provide a CA bundle that clients can use for verification. + - { name: tls, secret: { secretName: proxied-cluster-cert } } + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + # Connect through PgBouncer. + - name: PGURI + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: pgbouncer-uri } } + + # Verify the certificate presented by PgBouncer. + - { name: PGSSLMODE, value: verify-full } + - { name: PGSSLROOTCERT, value: /mnt/ca.crt } + + volumeMounts: + - { name: tls, mountPath: /mnt } + + command: + - psql + - $(PGURI) + - -qAt + - --set=ON_ERROR_STOP=1 + + # Print connection details. + - --command=SELECT pid, backend_start FROM pg_stat_activity WHERE pid = pg_backend_pid(); + + # Wait here so later test steps can see this open connection. + - --command=SELECT pg_sleep_for('5 minutes'); diff --git a/testing/kuttl/e2e/pgbouncer/11-assert.yaml b/testing/kuttl/e2e/pgbouncer/11-assert.yaml new file mode 100644 index 0000000000..4c1f3a752d --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/11-assert.yaml @@ -0,0 +1,18 @@ +--- +# Wait for the job to start. +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-open-connection +status: + active: 1 + +--- +# Wait for the pod to start. 
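+# The certificate rotation in later steps only proves the connection survived
+# if this long-lived session is already established here.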
+apiVersion: v1 +kind: Pod +metadata: + labels: + job-name: psql-open-connection +status: + phase: Running diff --git a/testing/kuttl/e2e/pgbouncer/12--rotate-certificate.yaml b/testing/kuttl/e2e/pgbouncer/12--rotate-certificate.yaml new file mode 100644 index 0000000000..67e8f31c84 --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/12--rotate-certificate.yaml @@ -0,0 +1,31 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + BEFORE=$(date -u +%FT%TZ) + + # Wipe out the stored PgBouncer certificate. + kubectl patch --namespace "${NAMESPACE}" secret/proxied-pgbouncer \ + --patch '{"data":{"pgbouncer-frontend.crt":""}}' + + # Wait for the certificate to be regenerated then loaded. + # Changing this from "wait until timeout" to "try X times" + # so that we can get the logs before exiting 1 in case we cannot find the reload. + for _ in $(seq 120); do + kubectl logs --namespace "${NAMESPACE}" deployment.apps/proxied-pgbouncer \ + --container pgbouncer-config --since-time "${BEFORE}" | grep 'Loaded' && \ + found=true && break + sleep 1 + done + + # This test has been flaky in the past, potentially around rotating/reloading the cert. + # To help debug, we set the pgBouncer verbosity to 1 (debug) and print the logs + kubectl logs --namespace "${NAMESPACE}" deployment.apps/proxied-pgbouncer \ + --all-containers --prefix --timestamps + + # If we haven't found the `Loaded` log statement, exit with an error + if [ -z "$found" ]; then + echo "pgbouncer-config has failed to reload in time" + exit 1; + fi diff --git a/testing/kuttl/e2e/pgbouncer/13--read-certificate.yaml b/testing/kuttl/e2e/pgbouncer/13--read-certificate.yaml new file mode 100644 index 0000000000..5134c75ab0 --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/13--read-certificate.yaml @@ -0,0 +1,28 @@ +--- +# Print the certificate presented by PgBouncer. +apiVersion: batch/v1 +kind: Job +metadata: + name: read-cert-after + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 1 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + containers: + - name: openssl + image: ${KUTTL_PSQL_IMAGE} + env: + - name: PGHOST + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: pgbouncer-host } } + - name: PGPORT + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: pgbouncer-port } } + command: + - bash + - -ceu + - | + openssl s_client --connect '$(PGHOST):$(PGPORT)' --starttls postgres < /dev/null 2> /dev/null | + openssl x509 --noout --text diff --git a/testing/kuttl/e2e/pgbouncer/13-assert.yaml b/testing/kuttl/e2e/pgbouncer/13-assert.yaml new file mode 100644 index 0000000000..ca9eae62a0 --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/13-assert.yaml @@ -0,0 +1,8 @@ +--- +# Wait for the job to complete. +apiVersion: batch/v1 +kind: Job +metadata: + name: read-cert-after +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/pgbouncer/14--compare-certificate.yaml b/testing/kuttl/e2e/pgbouncer/14--compare-certificate.yaml new file mode 100644 index 0000000000..4d60a4eb6e --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/14--compare-certificate.yaml @@ -0,0 +1,14 @@ +--- +# Confirm that PgBouncer is serving a new certificate. +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + bash -c '! diff -u \ + <(kubectl logs --namespace "${NAMESPACE}" job.batch/read-cert-before) \ + <(kubectl logs --namespace "${NAMESPACE}" job.batch/read-cert-after) \ + ' || { + echo 'Certificate did not change!' 
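+        # Show the certificate served after rotation to aid debugging.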
+ kubectl logs --namespace "${NAMESPACE}" job.batch/read-cert-after + exit 1 + } diff --git a/testing/kuttl/e2e/pgbouncer/15--check-connection.yaml b/testing/kuttl/e2e/pgbouncer/15--check-connection.yaml new file mode 100644 index 0000000000..6055dc4910 --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/15--check-connection.yaml @@ -0,0 +1,35 @@ +--- +# Confirm that the open connection is encrypted and remained open through rotation. +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + CONNECTION=$( + kubectl logs --namespace "${NAMESPACE}" job.batch/psql-open-connection + ) + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=proxied, + postgres-operator.crunchydata.com/role=master' + ) + + kubectl exec --stdin --namespace "${NAMESPACE}" "${PRIMARY}" \ + -- psql -qb --set ON_ERROR_STOP=1 --set CONNECTION="${CONNECTION}" \ + --file=- <<'SQL' + SELECT + set_config('testing.pid', (string_to_array(:'CONNECTION', '|'))[1], false) AS "testing.pid", + set_config('testing.start', (string_to_array(:'CONNECTION', '|'))[2], false) AS "testing.start"; + + DO $$ + BEGIN + PERFORM * FROM pg_stat_ssl + WHERE ssl AND pid = current_setting('testing.pid')::integer; + ASSERT FOUND, 'expected TLS end-to-end'; + + PERFORM * FROM pg_stat_activity + WHERE pid = current_setting('testing.pid')::integer + AND backend_start = current_setting('testing.start')::timestamptz; + ASSERT FOUND, 'expected to stay connected'; + END $$; + SQL diff --git a/testing/kuttl/e2e/pgbouncer/16--reconnect.yaml b/testing/kuttl/e2e/pgbouncer/16--reconnect.yaml new file mode 100644 index 0000000000..e070430169 --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/16--reconnect.yaml @@ -0,0 +1,46 @@ +--- +# Verify the new PgBouncer certificate and transport encryption. +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-tls-after + labels: { postgres-operator-test: kuttl } +spec: + backoffLimit: 1 + template: + metadata: + labels: { postgres-operator-test: kuttl } + spec: + restartPolicy: Never + volumes: + # TODO(cbandy): Provide a CA bundle that clients can use for verification. + - { name: tls, secret: { secretName: proxied-cluster-cert } } + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + env: + # Connect through PgBouncer. + - name: PGURI + valueFrom: { secretKeyRef: { name: proxied-pguser-proxied, key: pgbouncer-uri } } + + # Verify the certificate presented by PgBouncer. + - { name: PGSSLMODE, value: verify-full } + - { name: PGSSLROOTCERT, value: /mnt/ca.crt } + + volumeMounts: + - { name: tls, mountPath: /mnt } + + # Note: the `$$$$` is reduced to `$$` by Kubernetes. + # - https://kubernetes.io/docs/tasks/inject-data-application/ + command: + - psql + - $(PGURI) + - -qb + - --set=ON_ERROR_STOP=1 + - --command + - | + DO $$$$ + BEGIN + PERFORM * FROM pg_stat_ssl WHERE ssl AND pid = pg_backend_pid(); + ASSERT FOUND, 'expected TLS end-to-end'; + END $$$$; diff --git a/testing/kuttl/e2e/pgbouncer/16-assert.yaml b/testing/kuttl/e2e/pgbouncer/16-assert.yaml new file mode 100644 index 0000000000..b6fbbf95f2 --- /dev/null +++ b/testing/kuttl/e2e/pgbouncer/16-assert.yaml @@ -0,0 +1,8 @@ +--- +# Wait for the job to complete. 
+apiVersion: batch/v1 +kind: Job +metadata: + name: psql-tls-after +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/replica-read/00--cluster.yaml b/testing/kuttl/e2e/replica-read/00--cluster.yaml new file mode 100644 index 0000000000..c62f5418cd --- /dev/null +++ b/testing/kuttl/e2e/replica-read/00--cluster.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: replica-read +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + replicas: 2 diff --git a/testing/kuttl/e2e/replica-read/00-assert.yaml b/testing/kuttl/e2e/replica-read/00-assert.yaml new file mode 100644 index 0000000000..17c2942eb0 --- /dev/null +++ b/testing/kuttl/e2e/replica-read/00-assert.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: replica-read +status: + instances: + - name: instance1 + readyReplicas: 2 + replicas: 2 + updatedReplicas: 2 +--- +apiVersion: v1 +kind: Service +metadata: + name: replica-read-replicas diff --git a/testing/kuttl/e2e/replica-read/01--psql-replica-read.yaml b/testing/kuttl/e2e/replica-read/01--psql-replica-read.yaml new file mode 100644 index 0000000000..3d000aee85 --- /dev/null +++ b/testing/kuttl/e2e/replica-read/01--psql-replica-read.yaml @@ -0,0 +1,44 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-replica-read +spec: + backoffLimit: 6 + template: + spec: + restartPolicy: Never + containers: + - name: psql + image: ${KUTTL_PSQL_IMAGE} + command: + # https://www.postgresql.org/docs/current/plpgsql-errors-and-messages.html#PLPGSQL-STATEMENTS-ASSERT + # If run on a non-replica, this assertion fails and the Pod exits with an error + # Note: the `$$$$` is reduced to `$$` by Kubernetes. + # - https://kubernetes.io/docs/tasks/inject-data-application/ + - psql + - -qc + - | + DO $$$$ + BEGIN + ASSERT pg_is_in_recovery(); + END $$$$; + env: + # The Replica svc is not held in the user secret, so we hard-code the Service address + # (using the downward API for the namespace) + - name: NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: PGHOST + value: "replica-read-replicas.$(NAMESPACE).svc" + - name: PGPORT + valueFrom: { secretKeyRef: { name: replica-read-pguser-replica-read, key: port } } + - name: PGDATABASE + valueFrom: { secretKeyRef: { name: replica-read-pguser-replica-read, key: dbname } } + - name: PGUSER + valueFrom: { secretKeyRef: { name: replica-read-pguser-replica-read, key: user } } + - name: PGPASSWORD + valueFrom: { secretKeyRef: { name: replica-read-pguser-replica-read, key: password } } + + # Do not wait indefinitely.
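+        # Failing fast lets the Job's backoffLimit retries handle the case
+        # where the replicas Service is not ready to accept connections yet.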
+ - { name: PGCONNECT_TIMEOUT, value: '5' } diff --git a/testing/kuttl/e2e/replica-read/01-assert.yaml b/testing/kuttl/e2e/replica-read/01-assert.yaml new file mode 100644 index 0000000000..97ea0972c3 --- /dev/null +++ b/testing/kuttl/e2e/replica-read/01-assert.yaml @@ -0,0 +1,6 @@ +apiVersion: batch/v1 +kind: Job +metadata: + name: psql-replica-read +status: + succeeded: 1 diff --git a/testing/kuttl/e2e/root-cert-ownership/00--cluster.yaml b/testing/kuttl/e2e/root-cert-ownership/00--cluster.yaml new file mode 100644 index 0000000000..2d23e1e3d3 --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/00--cluster.yaml @@ -0,0 +1,23 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: owner1 + labels: { postgres-operator-test: kuttl } +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: owner2 + labels: { postgres-operator-test: kuttl } +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/root-cert-ownership/00-assert.yaml b/testing/kuttl/e2e/root-cert-ownership/00-assert.yaml new file mode 100644 index 0000000000..406465b691 --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/00-assert.yaml @@ -0,0 +1,26 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: owner1 +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: owner2 +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Secret +metadata: + name: pgo-root-cacert diff --git a/testing/kuttl/e2e/root-cert-ownership/01--check-owners.yaml b/testing/kuttl/e2e/root-cert-ownership/01--check-owners.yaml new file mode 100644 index 0000000000..ea8353427c --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/01--check-owners.yaml @@ -0,0 +1,17 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + # Get a list of the current owners of the root ca cert secret and verify that + # both owners are listed. 
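+  # PGO adds each PostgresCluster to the Secret's ownerReferences, so Kubernetes
+  # garbage collection removes the Secret only after its last owner is gone.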
+ - script: | + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + while true; do + sleep 1 # this sleep allows time for the owner reference list to be updated + CURRENT_OWNERS=$(kubectl --namespace="${NAMESPACE}" get secret \ + pgo-root-cacert -o jsonpath='{.metadata.ownerReferences[*].name}') + # If owner1 and owner2 are both listed, exit successfully + if contains "${CURRENT_OWNERS}" "owner1" && contains "${CURRENT_OWNERS}" "owner2"; then + exit 0 + fi + done diff --git a/testing/kuttl/e2e/root-cert-ownership/02--delete-owner1.yaml b/testing/kuttl/e2e/root-cert-ownership/02--delete-owner1.yaml new file mode 100644 index 0000000000..14d9532d8d --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/02--delete-owner1.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: +- apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: owner1 diff --git a/testing/kuttl/e2e/root-cert-ownership/02-assert.yaml b/testing/kuttl/e2e/root-cert-ownership/02-assert.yaml new file mode 100644 index 0000000000..839f6a9b29 --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/02-assert.yaml @@ -0,0 +1,9 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: owner2 +--- +apiVersion: v1 +kind: Secret +metadata: + name: pgo-root-cacert diff --git a/testing/kuttl/e2e/root-cert-ownership/02-errors.yaml b/testing/kuttl/e2e/root-cert-ownership/02-errors.yaml new file mode 100644 index 0000000000..d8f159d59c --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/02-errors.yaml @@ -0,0 +1,4 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: owner1 diff --git a/testing/kuttl/e2e/root-cert-ownership/03--check-owners.yaml b/testing/kuttl/e2e/root-cert-ownership/03--check-owners.yaml new file mode 100644 index 0000000000..951f9fce68 --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/03--check-owners.yaml @@ -0,0 +1,17 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + # Get a list of the current owners of the root ca cert secret and verify that + # owner1 is no longer listed and owner2 is found. 
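+  # The Secret itself must survive the deletion of owner1 because owner2 still
+  # references it.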
+ - script: | + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + while true; do + sleep 1 # this sleep allows time for the owner reference list to be updated + CURRENT_OWNERS=$(kubectl --namespace="${NAMESPACE}" get secret \ + pgo-root-cacert -o jsonpath='{.metadata.ownerReferences[*].name}') + # If owner1 is removed and owner2 is still listed, exit successfully + if !(contains "${CURRENT_OWNERS}" "owner1") && contains "${CURRENT_OWNERS}" "owner2"; then + exit 0 + fi + done diff --git a/testing/kuttl/e2e/root-cert-ownership/04--delete-owner2.yaml b/testing/kuttl/e2e/root-cert-ownership/04--delete-owner2.yaml new file mode 100644 index 0000000000..df1d55d3bb --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/04--delete-owner2.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: +- apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: owner2 diff --git a/testing/kuttl/e2e/root-cert-ownership/04-errors.yaml b/testing/kuttl/e2e/root-cert-ownership/04-errors.yaml new file mode 100644 index 0000000000..b117c4561b --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/04-errors.yaml @@ -0,0 +1,9 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: owner1 +--- +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: owner2 diff --git a/testing/kuttl/e2e/root-cert-ownership/05--check-secret.yaml b/testing/kuttl/e2e/root-cert-ownership/05--check-secret.yaml new file mode 100644 index 0000000000..9c432f02b2 --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/05--check-secret.yaml @@ -0,0 +1,36 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + # If there are other PostgresClusters in the namespace, ensure that 'owner1' + # and 'owner2' are not listed. + # If there are no other PostgresClusters in the namespace, the 'pgo-root-cacert' + # secret should be deleted. + - script: | + NUM_CLUSTERS=$(kubectl --namespace="${NAMESPACE}" get postgrescluster --output name | wc -l) + echo "Found ${NUM_CLUSTERS} clusters" + if [ "$NUM_CLUSTERS" != 0 ]; then + # Continue checking until Kuttl times out + # If at least one owner is never removed the test fails + while true; do + sleep 5 # This sleep allows time for the owner reference list to be updated + CURRENT_OWNERS=$(kubectl --namespace="${NAMESPACE}" get secret \ + pgo-root-cacert -o jsonpath='{.metadata.ownerReferences[*].name}') + # If neither owner is listed, exit successfully + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + if ! contains "${CURRENT_OWNERS}" "owner1" && ! 
contains "${CURRENT_OWNERS}" "owner2"; then + exit 0 + fi + done + else + # Continue checking until Kuttl times out + # If the secret is never removed, the test fails + while true; do + sleep 5 # this sleep allows time for garbage collector to delete the secret + ROOT_SECRET=$(kubectl --namespace="${NAMESPACE}" get --ignore-not-found \ + secret pgo-root-cacert --output name | wc -l) + if [ "$ROOT_SECRET" = 0 ]; then + exit 0 + fi + done + fi diff --git a/testing/kuttl/e2e/root-cert-ownership/README.md b/testing/kuttl/e2e/root-cert-ownership/README.md new file mode 100644 index 0000000000..fe29596938 --- /dev/null +++ b/testing/kuttl/e2e/root-cert-ownership/README.md @@ -0,0 +1,23 @@ +### Root Certificate Ownership Test + +This Kuttl routine runs through the following steps: + +#### Create two clusters and verify the root certificate secret ownership + +- 00: Creates the two clusters and verifies they and the root cert secret exist +- 01: Check that the secret shows both clusters as owners + +#### Delete the first cluster and verify the root certificate secret ownership + +- 02: Delete the first cluster, assert that the second cluster and the root cert +secret are still present and that the first cluster is not present +- 03: Check that the secret shows the second cluster as an owner but does not show +the first cluster as an owner + +#### Delete the second cluster and verify the root certificate secret ownership + +- 04: Delete the second cluster, assert that both clusters are not present +- 05: Check the number of clusters in the namespace. If there are any remaining +clusters, ensure that the secret shows neither the first nor second cluster as an +owner. If there are no clusters remaining in the namespace, ensure the root cert +secret has been deleted. 
diff --git a/testing/kuttl/e2e/scaledown/00--create-cluster.yaml b/testing/kuttl/e2e/scaledown/00--create-cluster.yaml new file mode 100644 index 0000000000..50377c2fb6 --- /dev/null +++ b/testing/kuttl/e2e/scaledown/00--create-cluster.yaml @@ -0,0 +1,32 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + - name: instance2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/scaledown/00-assert.yaml b/testing/kuttl/e2e/scaledown/00-assert.yaml new file mode 100644 index 0000000000..b5fa5a9051 --- /dev/null +++ b/testing/kuttl/e2e/scaledown/00-assert.yaml @@ -0,0 +1,14 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 + - name: instance2 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 diff --git a/testing/kuttl/e2e/scaledown/01--update-cluster.yaml b/testing/kuttl/e2e/scaledown/01--update-cluster.yaml new file mode 100644 index 0000000000..d6409a8fd1 --- /dev/null +++ b/testing/kuttl/e2e/scaledown/01--update-cluster.yaml @@ -0,0 +1,14 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/scaledown/01-assert.yaml b/testing/kuttl/e2e/scaledown/01-assert.yaml new file mode 100644 index 0000000000..45bb0b6d04 --- /dev/null +++ b/testing/kuttl/e2e/scaledown/01-assert.yaml @@ -0,0 +1,10 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 diff --git a/testing/kuttl/e2e/scaledown/02--delete-cluster.yaml b/testing/kuttl/e2e/scaledown/02--delete-cluster.yaml new file mode 100644 index 0000000000..fc23731cd3 --- /dev/null +++ b/testing/kuttl/e2e/scaledown/02--delete-cluster.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: scaledown diff --git a/testing/kuttl/e2e/scaledown/10--create-cluster.yaml b/testing/kuttl/e2e/scaledown/10--create-cluster.yaml new file mode 100644 index 0000000000..3847e588c0 --- /dev/null +++ b/testing/kuttl/e2e/scaledown/10--create-cluster.yaml @@ -0,0 +1,26 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown1 +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/scaledown/10-assert.yaml b/testing/kuttl/e2e/scaledown/10-assert.yaml new file mode 100644 index 
0000000000..cf8bcb461a --- /dev/null +++ b/testing/kuttl/e2e/scaledown/10-assert.yaml @@ -0,0 +1,30 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown1 +status: + instances: + - name: instance1 + readyReplicas: 2 + replicas: 2 + updatedReplicas: 2 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: scaledown1 + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: master +status: + phase: Running +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: scaledown1 + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: replica +status: + phase: Running diff --git a/testing/kuttl/e2e/scaledown/11-annotate.yaml b/testing/kuttl/e2e/scaledown/11-annotate.yaml new file mode 100644 index 0000000000..a4bc743b3f --- /dev/null +++ b/testing/kuttl/e2e/scaledown/11-annotate.yaml @@ -0,0 +1,13 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + # Label instance pods with their current role. + - script: | + kubectl label --namespace="${NAMESPACE}" pods \ + --selector='postgres-operator.crunchydata.com/role=master' \ + 'testing/role-before=master' + - script: | + kubectl label --namespace="${NAMESPACE}" pods \ + --selector='postgres-operator.crunchydata.com/role=replica' \ + 'testing/role-before=replica' diff --git a/testing/kuttl/e2e/scaledown/12--update-cluster.yaml b/testing/kuttl/e2e/scaledown/12--update-cluster.yaml new file mode 100644 index 0000000000..3b4f62094a --- /dev/null +++ b/testing/kuttl/e2e/scaledown/12--update-cluster.yaml @@ -0,0 +1,15 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown1 +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/scaledown/12-assert.yaml b/testing/kuttl/e2e/scaledown/12-assert.yaml new file mode 100644 index 0000000000..079435b67d --- /dev/null +++ b/testing/kuttl/e2e/scaledown/12-assert.yaml @@ -0,0 +1,21 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown1 +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: scaledown1 + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: master + testing/role-before: master +status: + phase: Running diff --git a/testing/kuttl/e2e/scaledown/13--delete-cluster.yaml b/testing/kuttl/e2e/scaledown/13--delete-cluster.yaml new file mode 100644 index 0000000000..ddcdb20910 --- /dev/null +++ b/testing/kuttl/e2e/scaledown/13--delete-cluster.yaml @@ -0,0 +1,7 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: scaledown1 diff --git a/testing/kuttl/e2e/scaledown/20--create-cluster.yaml b/testing/kuttl/e2e/scaledown/20--create-cluster.yaml new file mode 100644 index 0000000000..796f88db3c --- /dev/null +++ b/testing/kuttl/e2e/scaledown/20--create-cluster.yaml @@ -0,0 +1,33 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown2 +spec: + postgresVersion: 
${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + - name: instance2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/scaledown/20-assert.yaml b/testing/kuttl/e2e/scaledown/20-assert.yaml new file mode 100644 index 0000000000..f65cef60b8 --- /dev/null +++ b/testing/kuttl/e2e/scaledown/20-assert.yaml @@ -0,0 +1,14 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown2 +status: + instances: + - name: instance1 + readyReplicas: 2 + replicas: 2 + updatedReplicas: 2 + - name: instance2 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 diff --git a/testing/kuttl/e2e/scaledown/21--update-cluster.yaml b/testing/kuttl/e2e/scaledown/21--update-cluster.yaml new file mode 100644 index 0000000000..02d8936d0b --- /dev/null +++ b/testing/kuttl/e2e/scaledown/21--update-cluster.yaml @@ -0,0 +1,21 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown2 +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + - name: instance2 + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/scaledown/21-assert.yaml b/testing/kuttl/e2e/scaledown/21-assert.yaml new file mode 100644 index 0000000000..f137a616b8 --- /dev/null +++ b/testing/kuttl/e2e/scaledown/21-assert.yaml @@ -0,0 +1,14 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: scaledown2 +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 + - name: instance2 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 diff --git a/testing/kuttl/e2e/scaledown/readme.MD b/testing/kuttl/e2e/scaledown/readme.MD new file mode 100644 index 0000000000..44fd880ed1 --- /dev/null +++ b/testing/kuttl/e2e/scaledown/readme.MD @@ -0,0 +1,31 @@ +## Scaledown tests + +This is a KUTTL version of a previous `TestScaleDown` test that was prone to flaky behavior. +The KUTTL test captures the three test cases enumerated in that test, and for ease of reading, +all three tests exist in this folder, which necessitates a clean-up step after tests one and two. +This test makes extensive use of `status.instances` to make sure that the expected instances +have the expected number of pods.
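+
+For reference, the assert files in this folder compare `status.instances`
+roughly like this (an excerpt from `20-assert.yaml`):
+
+```yaml
+status:
+  instances:
+  - name: instance1
+    readyReplicas: 2
+    replicas: 2
+    updatedReplicas: 2
+```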
+ +### From two sets to one set + +* 00--create-cluster: create the cluster with two instance sets, one replica each +* 00-assert: check that the cluster exists with the expected status +* 01--update-cluster: update the cluster to remove one instance set +* 01-assert: check that the cluster exists with the expected status +* 02--delete-cluster + +### From one set with multiple replicas to one set with one replica + +* 10--create-cluster: create the cluster with one instance set with two replicas +* 10-assert: check that the cluster exists with the expected status +* 11-annotate: set the roles as labels on the pods +* 12--update-cluster: update the cluster to remove one replica +* 12-assert: check that the cluster exists with the expected status; and that the `master` pod that exists was the `master` before the scaledown +* 13--delete-cluster: delete the cluster + +### From two sets with variable replicas to two sets with one replica each + +* 20--create-cluster: create the cluster with two instance sets, with two and one replica +* 20-assert: check that the cluster exists with the expected status +* 21--update-cluster: update the cluster to reduce the two-replica instance to one replica +* 21-assert: check that the cluster exists with the expected status diff --git a/testing/kuttl/e2e/security-context/00--cluster.yaml b/testing/kuttl/e2e/security-context/00--cluster.yaml new file mode 100644 index 0000000000..5155eb4fc6 --- /dev/null +++ b/testing/kuttl/e2e/security-context/00--cluster.yaml @@ -0,0 +1,26 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: security-context + labels: { postgres-operator-test: kuttl } +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + replicas: 1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + backups: + pgbackrest: + repos: + - name: repo1 + volume: + volumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + proxy: + pgBouncer: + replicas: 1 + userInterface: + pgAdmin: + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } + monitoring: + pgmonitor: + exporter: {} diff --git a/testing/kuttl/e2e/security-context/00-assert.yaml b/testing/kuttl/e2e/security-context/00-assert.yaml new file mode 100644 index 0000000000..a6a5f48b6a --- /dev/null +++ b/testing/kuttl/e2e/security-context/00-assert.yaml @@ -0,0 +1,186 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: security-context +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + labels: + postgres-operator.crunchydata.com/cluster: security-context + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create +status: + succeeded: 1 +--- +# initial pgBackRest backup +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: security-context + postgres-operator.crunchydata.com/pgbackrest: "" + postgres-operator.crunchydata.com/pgbackrest-backup: replica-create + postgres-operator.crunchydata.com/pgbackrest-repo: repo1 +spec: + containers: + - name: pgbackrest + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true +--- +# instance +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: security-context +
postgres-operator.crunchydata.com/data: postgres + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/patroni: security-context-ha + postgres-operator.crunchydata.com/role: master +spec: + containers: + - name: database + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + - name: replication-cert-copy + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + - name: pgbackrest + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + - name: pgbackrest-config + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + - name: exporter + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + initContainers: + - name: postgres-startup + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + - name: nss-wrapper-init + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true +--- +# pgAdmin +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: security-context + postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/role: pgadmin + statefulset.kubernetes.io/pod-name: security-context-pgadmin-0 + name: security-context-pgadmin-0 +spec: + containers: + - name: pgadmin + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + initContainers: + - name: pgadmin-startup + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + - name: nss-wrapper-init + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true +--- +# pgBouncer +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: security-context + postgres-operator.crunchydata.com/role: pgbouncer +spec: + containers: + - name: pgbouncer + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + - name: pgbouncer-config + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true +--- +# pgBackRest repo +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: security-context + postgres-operator.crunchydata.com/data: pgbackrest + postgres-operator.crunchydata.com/pgbackrest: "" + postgres-operator.crunchydata.com/pgbackrest-dedicated: "" + statefulset.kubernetes.io/pod-name: security-context-repo-host-0 + name: security-context-repo-host-0 +spec: + containers: + - name: pgbackrest + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + - name: pgbackrest-config + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + initContainers: + - name: pgbackrest-log-dir + securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true + - name: nss-wrapper-init + 
securityContext: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: true diff --git a/testing/kuttl/e2e/security-context/01--security-context.yaml b/testing/kuttl/e2e/security-context/01--security-context.yaml new file mode 100644 index 0000000000..a8dd098697 --- /dev/null +++ b/testing/kuttl/e2e/security-context/01--security-context.yaml @@ -0,0 +1,48 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: + - script: | + # Check that every container has the correct capabilities. + + # Capture every container name alongside its list of dropped capabilities. + CONTAINERS_DROP_CAPS=$( + kubectl --namespace "${NAMESPACE}" get pods --output "jsonpath={\ + range .items[*].spec.containers[*]\ + }{ @.name }{'\t\t'}{ @.securityContext.capabilities.drop }{'\n'}{\ + end\ + }" + ) || exit + + WRONG=$( ! echo "${CONTAINERS_DROP_CAPS}" | grep -Fv '"ALL"' ) || { + echo 'Not all containers have dropped "ALL" capabilities!' + echo "${WRONG}" + exit 1 + } + + - script: | + # Check that every Pod is assigned to the "restricted" SecurityContextConstraint + # in OpenShift. + + SCC=$( + kubectl api-resources --cached | + grep -F 'security.openshift.io/v1' | + grep -F 'SecurityContextConstraint' + ) + + # Skip this check when the API has no notion of SecurityContextConstraint. + [ -z "${SCC}" ] && exit + + PODS_SCC=$( + kubectl --namespace "${NAMESPACE}" get pods --no-headers \ + --output "custom-columns=\ + NAME:.metadata.name,\ + SCC:.metadata.annotations['openshift\.io/scc']\ + " + ) || exit + + WRONG=$( ! echo "${PODS_SCC}" | grep -Ev -e '\ policies.yaml + kyverno apply --cluster --namespace "${NAMESPACE}" policies.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/00--create-cluster.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/00--create-cluster.yaml new file mode 100644 index 0000000000..c86a544166 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/00--create-cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/00-cluster.yaml +assert: +- files/00-cluster-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/01--user-schema.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/01--user-schema.yaml new file mode 100644 index 0000000000..bbddba56c2 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/01--user-schema.yaml @@ -0,0 +1,14 @@ +--- +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +commands: +# ensure the user schema is created for pgAdmin to use + - script: | + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=elephant, + postgres-operator.crunchydata.com/role=master' + ) + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" \ + -- psql -qAt -d elephant --command 'CREATE SCHEMA elephant AUTHORIZATION elephant' diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/02--create-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/02--create-pgadmin.yaml new file mode 100644 index 0000000000..0ef15853af --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/02--create-pgadmin.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/02-pgadmin.yaml +assert: +- files/02-pgadmin-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/03-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/03-assert.yaml new file mode 100644 index 0000000000..6a25871f63 --- /dev/null +++ 
b/testing/kuttl/e2e/standalone-pgadmin-db-uri/03-assert.yaml @@ -0,0 +1,21 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +- script: | + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=elephant, + postgres-operator.crunchydata.com/role=master' + ) + + NUM_USERS=$( + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" -- \ + psql -qAt -d elephant --command 'select count(*) from elephant.user' \ + ) + + if [[ ${NUM_USERS} != 1 ]]; then + echo >&2 'Expected 1 user' + echo "got ${NUM_USERS}" + exit 1 + fi diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/04--update-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/04--update-pgadmin.yaml new file mode 100644 index 0000000000..f8aaf480fd --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/04--update-pgadmin.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/04-pgadmin.yaml +assert: +- files/04-pgadmin-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/05-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/05-assert.yaml new file mode 100644 index 0000000000..4d31c5db18 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/05-assert.yaml @@ -0,0 +1,36 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +# timeout: 120 +commands: +- script: | + PRIMARY=$( + kubectl get pod --namespace "${NAMESPACE}" \ + --output name --selector ' + postgres-operator.crunchydata.com/cluster=elephant, + postgres-operator.crunchydata.com/role=master' + ) + + NUM_USERS=$( + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" -- \ + psql -qAt -d elephant --command 'select count(*) from elephant.user' \ + ) + + if [[ ${NUM_USERS} != 2 ]]; then + echo >&2 'Expected 2 user' + echo "got ${NUM_USERS}" + exit 1 + fi + + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + USER_LIST=$( + kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" -- \ + psql -qAt -d elephant --command 'select email from elephant.user;' \ + ) + + { + contains "${USER_LIST}" "john.doe@example.com" + } || { + echo >&2 'User john.doe@example.com not found. Got:' + echo "${USER_LIST}" + exit 1 + } diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/README.md b/testing/kuttl/e2e/standalone-pgadmin-db-uri/README.md new file mode 100644 index 0000000000..2d7688ae3b --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/README.md @@ -0,0 +1,26 @@ +# pgAdmin external database tests + +Notes: +- Due to the (random) namespace being part of the host, we cannot check the configmap using the usual assert/file pattern. +- These tests will only work with pgAdmin version v8 and higher + +## create postgrescluster and add user schema +* 00: + * create a postgrescluster with a label; + * check that the cluster has the label and that the expected user secret is created. +* 01: + * create the user schema for pgAdmin to use + + ## create pgadmin and verify connection to database +* 02: + * create a pgadmin with a selector for the existing cluster's label; + * check the correct existence of the secret, configmap, and pod. +* 03: + * check that pgAdmin only has one user + + ## add a pgadmin user and verify it in the database +* 04: + * update pgadmin with a new user; + * check that the pod is still running as expected. +* 05: + * check that pgAdmin now has two users and that the defined user is present. 
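+
+For reference, steps 03 and 05 verify the users pgAdmin wrote to the external
+configuration database with a query along these lines (simplified from the
+assert scripts above):
+
+```sh
+kubectl exec --namespace "${NAMESPACE}" "${PRIMARY}" -- \
+  psql -qAt -d elephant --command 'select email from elephant.user'
+```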
diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/00-cluster-check.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/00-cluster-check.yaml new file mode 100644 index 0000000000..8ae250152f --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/00-cluster-check.yaml @@ -0,0 +1,31 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: elephant + labels: + sometest: test1 +status: + instances: + - name: instance1 + readyReplicas: 1 + replicas: 1 + updatedReplicas: 1 +--- +apiVersion: v1 +kind: Secret +metadata: + labels: + postgres-operator.crunchydata.com/cluster: elephant + postgres-operator.crunchydata.com/pguser: elephant + postgres-operator.crunchydata.com/role: pguser +type: Opaque +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/cluster: elephant + postgres-operator.crunchydata.com/instance-set: instance1 + postgres-operator.crunchydata.com/role: master +status: + phase: Running diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/00-cluster.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/00-cluster.yaml new file mode 100644 index 0000000000..5f8678e5e9 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/00-cluster.yaml @@ -0,0 +1,11 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PostgresCluster +metadata: + name: elephant + labels: + sometest: test1 +spec: + postgresVersion: ${KUTTL_PG_VERSION} + instances: + - name: instance1 + dataVolumeClaimSpec: { accessModes: [ReadWriteOnce], resources: { requests: { storage: 1Gi } } } diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/02-pgadmin-check.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/02-pgadmin-check.yaml new file mode 100644 index 0000000000..6457b2ca20 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/02-pgadmin-check.yaml @@ -0,0 +1,29 @@ +--- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin1 +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin1 +status: + containerStatuses: + - name: pgadmin + ready: true + started: true + phase: Running +--- +apiVersion: v1 +kind: Secret +metadata: + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin1 +type: Opaque diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/02-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/02-pgadmin.yaml new file mode 100644 index 0000000000..f1e251b949 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/02-pgadmin.yaml @@ -0,0 +1,20 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: pgadmin1 +spec: + config: + configDatabaseURI: + name: elephant-pguser-elephant + key: uri + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + serverGroups: + - name: kuttl-test + postgresClusterSelector: + matchLabels: + sometest: test1 diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/04-pgadmin-check.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/04-pgadmin-check.yaml new file mode 100644 index 0000000000..3a3f459441 --- /dev/null +++ 
b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/04-pgadmin-check.yaml @@ -0,0 +1,14 @@ +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin1 +status: + containerStatuses: + - name: pgadmin + ready: true + started: true + phase: Running diff --git a/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/04-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/04-pgadmin.yaml new file mode 100644 index 0000000000..2c62b58b4b --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-db-uri/files/04-pgadmin.yaml @@ -0,0 +1,33 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: pgadmin1 +spec: + users: + - username: "john.doe@example.com" + passwordRef: + name: john-doe-password + key: password + config: + configDatabaseURI: + name: elephant-pguser-elephant + key: uri + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + serverGroups: + - name: kuttl-test + postgresClusterSelector: + matchLabels: + sometest: test1 +--- +apiVersion: v1 +kind: Secret +metadata: + name: john-doe-password +type: Opaque +stringData: + password: password diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/00--pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/00--pgadmin.yaml new file mode 100644 index 0000000000..9372467a93 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/00--pgadmin.yaml @@ -0,0 +1,12 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: pgadmin +spec: + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + serviceName: pgadmin-service diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/00-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/00-assert.yaml new file mode 100644 index 0000000000..758814cad2 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/00-assert.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: Service +metadata: + name: pgadmin-service + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + controller: true + kind: PGAdmin + name: pgadmin +spec: + selector: + postgres-operator.crunchydata.com/pgadmin: pgadmin + ports: + - port: 5050 + targetPort: 5050 + protocol: TCP + name: pgadmin-port + type: ClusterIP diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/01--update-service.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/01--update-service.yaml new file mode 100644 index 0000000000..81db248fd4 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/01--update-service.yaml @@ -0,0 +1,12 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: pgadmin +spec: + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + serviceName: pgadmin-service-updated diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/01-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/01-assert.yaml new file mode 100644 index 0000000000..2303ebe9bb --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/01-assert.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: Service +metadata: + name: pgadmin-service-updated + labels: + postgres-operator.crunchydata.com/role: pgadmin + 
postgres-operator.crunchydata.com/pgadmin: pgadmin +spec: + selector: + postgres-operator.crunchydata.com/pgadmin: pgadmin + ports: + - port: 5050 + targetPort: 5050 + protocol: TCP + name: pgadmin-port + type: ClusterIP diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/02--remove-service.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/02--remove-service.yaml new file mode 100644 index 0000000000..b8cbf4eb41 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/02--remove-service.yaml @@ -0,0 +1,11 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: pgadmin +spec: + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/02-errors.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/02-errors.yaml new file mode 100644 index 0000000000..f2795c106d --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/02-errors.yaml @@ -0,0 +1,16 @@ +apiVersion: v1 +kind: Service +metadata: + name: pgadmin-service + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin +spec: + selector: + postgres-operator.crunchydata.com/pgadmin: pgadmin + ports: + - port: 5050 + targetPort: 5050 + protocol: TCP + name: pgadmin-port + type: ClusterIP diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/10--manual-service.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/10--manual-service.yaml new file mode 100644 index 0000000000..88d8da6718 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/10--manual-service.yaml @@ -0,0 +1,29 @@ +# Manually create a service that should be taken over by pgAdmin +# The manual service is of type LoadBalancer +# Once taken over, the type should change to ClusterIP +apiVersion: v1 +kind: Service +metadata: + name: manual-pgadmin-service +spec: + ports: + - name: pgadmin-port + port: 5050 + protocol: TCP + selector: + postgres-operator.crunchydata.com/pgadmin: rhino + type: LoadBalancer +--- +# Create a pgAdmin that points to an existing un-owned service +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: manual-svc-pgadmin +spec: + serviceName: manual-pgadmin-service + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/10-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/10-assert.yaml new file mode 100644 index 0000000000..95bf241b16 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/10-assert.yaml @@ -0,0 +1,22 @@ +# Check that the manually created service has the correct ownerReference +apiVersion: v1 +kind: Service +metadata: + name: manual-pgadmin-service + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: manual-svc-pgadmin + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + controller: true + kind: PGAdmin + name: manual-svc-pgadmin +spec: + selector: + postgres-operator.crunchydata.com/pgadmin: manual-svc-pgadmin + ports: + - port: 5050 + targetPort: 5050 + protocol: TCP + name: pgadmin-port + type: ClusterIP diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/20--owned-service.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/20--owned-service.yaml new file mode 100644 index 0000000000..04f211ffc7 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/20--owned-service.yaml @@ -0,0 
+1,13 @@ +# Create a pgAdmin that will create and own a service +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: pgadmin-service-owner +spec: + serviceName: pgadmin-owned-service + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/20-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/20-assert.yaml new file mode 100644 index 0000000000..a6ab1653bb --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/20-assert.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: Service +metadata: + name: pgadmin-owned-service + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin-service-owner + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + controller: true + kind: PGAdmin + name: pgadmin-service-owner +spec: + selector: + postgres-operator.crunchydata.com/pgadmin: pgadmin-service-owner + ports: + - port: 5050 + targetPort: 5050 + protocol: TCP + name: pgadmin-port + type: ClusterIP diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/21--service-takeover-fails.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/21--service-takeover-fails.yaml new file mode 100644 index 0000000000..f992521ce8 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/21--service-takeover-fails.yaml @@ -0,0 +1,13 @@ +# Create a second pgAdmin that attempts to steal the service +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: pgadmin-service-thief +spec: + serviceName: pgadmin-owned-service + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi diff --git a/testing/kuttl/e2e/standalone-pgadmin-service/21-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin-service/21-assert.yaml new file mode 100644 index 0000000000..060d669987 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-service/21-assert.yaml @@ -0,0 +1,35 @@ +# Original service should still have owner reference +apiVersion: v1 +kind: Service +metadata: + name: pgadmin-owned-service + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin-service-owner + ownerReferences: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + controller: true + kind: PGAdmin + name: pgadmin-service-owner +spec: + selector: + postgres-operator.crunchydata.com/pgadmin: pgadmin-service-owner + ports: + - port: 5050 + targetPort: 5050 + protocol: TCP + name: pgadmin-port + type: ClusterIP +--- +# An event should be created for the failure to reconcile the Service +apiVersion: v1 +involvedObject: + apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PGAdmin + name: pgadmin-service-thief +kind: Event +message: 'Failed to reconcile Service ServiceName: pgadmin-owned-service' +reason: InvalidServiceWarning +source: + component: pgadmin-controller +type: Warning diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/00--create-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/00--create-pgadmin.yaml new file mode 100644 index 0000000000..ee1a03ec64 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/00--create-pgadmin.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/00-pgadmin.yaml +assert: +- files/00-pgadmin-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/01-assert.yaml 
b/testing/kuttl/e2e/standalone-pgadmin-user-management/01-assert.yaml new file mode 100644 index 0000000000..244533b7ee --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/01-assert.yaml @@ -0,0 +1,26 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# When setup.py returns users in Json, the Role translation is 1 for Admin, 2 for User +- script: | + pod_name=$(kubectl get pod -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + secret_name=$(kubectl get secret -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + + users_in_pgadmin=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c "python3 /usr/local/lib/python3.11/site-packages/pgadmin4/setup.py get-users --json") + + bob_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="bob@example.com") | .role') + dave_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="dave@example.com") | .role') + + [ $bob_role = 1 ] && [ $dave_role = 2 ] || exit 1 + + users_in_secret=$(kubectl get "${secret_name}" -n "${NAMESPACE}" -o 'go-template={{index .data "users.json" }}' | base64 -d) + + bob_is_admin=$(printf '%s\n' $users_in_secret | jq '.[] | select(.username=="bob@example.com") | .isAdmin') + dave_is_admin=$(printf '%s\n' $users_in_secret | jq '.[] | select(.username=="dave@example.com") | .isAdmin') + + $bob_is_admin && ! $dave_is_admin || exit 1 + + bob_password=$(printf '%s\n' $users_in_secret | jq -r '.[] | select(.username=="bob@example.com") | .password') + dave_password=$(printf '%s\n' $users_in_secret | jq -r '.[] | select(.username=="dave@example.com") | .password') + + [ "$bob_password" = "password123" ] && [ "$dave_password" = "password456" ] || exit 1 diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/02--edit-pgadmin-users.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/02--edit-pgadmin-users.yaml new file mode 100644 index 0000000000..0ef15853af --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/02--edit-pgadmin-users.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/02-pgadmin.yaml +assert: +- files/02-pgadmin-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/03-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/03-assert.yaml new file mode 100644 index 0000000000..01aff25b3b --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/03-assert.yaml @@ -0,0 +1,29 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# When setup.py returns users in Json, the Role translation is 1 for Admin, 2 for User +- script: | + pod_name=$(kubectl get pod -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + secret_name=$(kubectl get secret -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + + users_in_pgadmin=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c "python3 /usr/local/lib/python3.11/site-packages/pgadmin4/setup.py get-users --json") + + bob_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="bob@example.com") | .role') + dave_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="dave@example.com") | .role') + jimi_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="jimi@example.com") | .role') + + [ $bob_role = 1 ] && [ $dave_role = 1 ] && [ $jimi_role = 2 ] || exit 1 + + users_in_secret=$(kubectl get "${secret_name}" -n "${NAMESPACE}" -o 
'go-template={{index .data "users.json" }}' | base64 -d) + + bob_is_admin=$(printf '%s\n' $users_in_secret | jq '.[] | select(.username=="bob@example.com") | .isAdmin') + dave_is_admin=$(printf '%s\n' $users_in_secret | jq '.[] | select(.username=="dave@example.com") | .isAdmin') + jimi_is_admin=$(printf '%s\n' $users_in_secret | jq '.[] | select(.username=="jimi@example.com") | .isAdmin') + + $bob_is_admin && $dave_is_admin && ! $jimi_is_admin || exit 1 + + bob_password=$(printf '%s\n' $users_in_secret | jq -r '.[] | select(.username=="bob@example.com") | .password') + dave_password=$(printf '%s\n' $users_in_secret | jq -r '.[] | select(.username=="dave@example.com") | .password') + jimi_password=$(printf '%s\n' $users_in_secret | jq -r '.[] | select(.username=="jimi@example.com") | .password') + + [ "$bob_password" = "password123" ] && [ "$dave_password" = "password456" ] && [ "$jimi_password" = "password789" ] || exit 1 diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/04--change-pgadmin-user-passwords.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/04--change-pgadmin-user-passwords.yaml new file mode 100644 index 0000000000..f8aaf480fd --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/04--change-pgadmin-user-passwords.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/04-pgadmin.yaml +assert: +- files/04-pgadmin-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/05-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/05-assert.yaml new file mode 100644 index 0000000000..1dca13a7b7 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/05-assert.yaml @@ -0,0 +1,29 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# When setup.py returns users in Json, the Role translation is 1 for Admin, 2 for User +- script: | + pod_name=$(kubectl get pod -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + secret_name=$(kubectl get secret -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + + users_in_pgadmin=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c "python3 /usr/local/lib/python3.11/site-packages/pgadmin4/setup.py get-users --json") + + bob_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="bob@example.com") | .role') + dave_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="dave@example.com") | .role') + jimi_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="jimi@example.com") | .role') + + [ $bob_role = 1 ] && [ $dave_role = 1 ] && [ $jimi_role = 2 ] || exit 1 + + users_in_secret=$(kubectl get "${secret_name}" -n "${NAMESPACE}" -o 'go-template={{index .data "users.json" }}' | base64 -d) + + bob_is_admin=$(printf '%s\n' $users_in_secret | jq '.[] | select(.username=="bob@example.com") | .isAdmin') + dave_is_admin=$(printf '%s\n' $users_in_secret | jq '.[] | select(.username=="dave@example.com") | .isAdmin') + jimi_is_admin=$(printf '%s\n' $users_in_secret | jq '.[] | select(.username=="jimi@example.com") | .isAdmin') + + $bob_is_admin && $dave_is_admin && ! 
$jimi_is_admin || exit 1 + + bob_password=$(printf '%s\n' $users_in_secret | jq -r '.[] | select(.username=="bob@example.com") | .password') + dave_password=$(printf '%s\n' $users_in_secret | jq -r '.[] | select(.username=="dave@example.com") | .password') + jimi_password=$(printf '%s\n' $users_in_secret | jq -r '.[] | select(.username=="jimi@example.com") | .password') + + [ "$bob_password" = "NEWpassword123" ] && [ "$dave_password" = "NEWpassword456" ] && [ "$jimi_password" = "NEWpassword789" ] || exit 1 diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/06--delete-pgadmin-users.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/06--delete-pgadmin-users.yaml new file mode 100644 index 0000000000..a538b7dca4 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/06--delete-pgadmin-users.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/06-pgadmin.yaml +assert: +- files/06-pgadmin-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/07-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/07-assert.yaml new file mode 100644 index 0000000000..5c0e7267e6 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/07-assert.yaml @@ -0,0 +1,19 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +# When setup.py returns users in Json, the Role translation is 1 for Admin, 2 for User +- script: | + pod_name=$(kubectl get pod -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + secret_name=$(kubectl get secret -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + + users_in_pgadmin=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c "python3 /usr/local/lib/python3.11/site-packages/pgadmin4/setup.py get-users --json") + + bob_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="bob@example.com") | .role') + dave_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="dave@example.com") | .role') + jimi_role=$(printf '%s\n' $users_in_pgadmin | jq '.[] | select(.username=="jimi@example.com") | .role') + + [ $bob_role = 1 ] && [ $dave_role = 1 ] && [ $jimi_role = 2 ] || exit 1 + + users_in_secret=$(kubectl get "${secret_name}" -n "${NAMESPACE}" -o 'go-template={{index .data "users.json" }}' | base64 -d) + + $(printf '%s\n' $users_in_secret | jq '. 
== []') || exit 1 diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/README.md b/testing/kuttl/e2e/standalone-pgadmin-user-management/README.md new file mode 100644 index 0000000000..0bbdfc2893 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/README.md @@ -0,0 +1,21 @@ +# pgAdmin User Management tests + +*Note: These tests will only work with pgAdmin version v8 and higher* + +## Create pgAdmin with users + +* Start pgAdmin with a couple users +* Ensure users exist in pgAdmin with correct settings +* Ensure users exist in the `users.json` file in the pgAdmin secret with the correct settings + +## Edit pgAdmin users + +* Add a user and edit an existing user +* Ensure users exist in pgAdmin with correct settings +* Ensure users exist in the `users.json` file in the pgAdmin secret with the correct settings + +## Delete pgAdmin users + +* Remove users from pgAdmin spec +* Ensure users still exist in pgAdmin with correct settings +* Ensure users have been removed from the `users.json` file in the pgAdmin secret diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/files/00-pgadmin-check.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/00-pgadmin-check.yaml new file mode 100644 index 0000000000..f2c7f28cd1 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/00-pgadmin-check.yaml @@ -0,0 +1,34 @@ +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin +status: + containerStatuses: + - name: pgadmin + ready: true + started: true + phase: Running +--- +apiVersion: v1 +kind: Secret +metadata: + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin +type: Opaque +--- +apiVersion: v1 +kind: Secret +metadata: + name: bob-password-secret +type: Opaque +--- +apiVersion: v1 +kind: Secret +metadata: + name: dave-password-secret +type: Opaque diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/files/00-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/00-pgadmin.yaml new file mode 100644 index 0000000000..ce86d8d894 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/00-pgadmin.yaml @@ -0,0 +1,40 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: pgadmin +spec: + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + serverGroups: [] + users: + - username: bob@example.com + role: Administrator + passwordRef: + name: bob-password-secret + key: password + - username: dave@example.com + passwordRef: + name: dave-password-secret + key: password +--- +apiVersion: v1 +kind: Secret +metadata: + name: bob-password-secret +type: Opaque +data: + # Password is "password123", base64 encoded + password: cGFzc3dvcmQxMjM= +--- +apiVersion: v1 +kind: Secret +metadata: + name: dave-password-secret +type: Opaque +data: + # Password is "password456", base64 encoded + password: cGFzc3dvcmQ0NTY= diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/files/02-pgadmin-check.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/02-pgadmin-check.yaml new file mode 100644 index 0000000000..9a07b0d994 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/02-pgadmin-check.yaml @@ -0,0 +1,40 @@ +--- +apiVersion: v1 +kind: Pod +metadata: + labels: 
+ postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin +status: + containerStatuses: + - name: pgadmin + ready: true + started: true + phase: Running +--- +apiVersion: v1 +kind: Secret +metadata: + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin +type: Opaque +--- +apiVersion: v1 +kind: Secret +metadata: + name: bob-password-secret +type: Opaque +--- +apiVersion: v1 +kind: Secret +metadata: + name: dave-password-secret +type: Opaque +--- +apiVersion: v1 +kind: Secret +metadata: + name: jimi-password-secret +type: Opaque diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/files/02-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/02-pgadmin.yaml new file mode 100644 index 0000000000..88f75d8092 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/02-pgadmin.yaml @@ -0,0 +1,54 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: pgadmin +spec: + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + serverGroups: [] + users: + - username: bob@example.com + role: Administrator + passwordRef: + name: bob-password-secret + key: password + - username: dave@example.com + role: Administrator + passwordRef: + name: dave-password-secret + key: password + - username: jimi@example.com + passwordRef: + name: jimi-password-secret + key: password +--- +apiVersion: v1 +kind: Secret +metadata: + name: bob-password-secret +type: Opaque +data: + # Password is "password123", base64 encoded + password: cGFzc3dvcmQxMjM= +--- +apiVersion: v1 +kind: Secret +metadata: + name: dave-password-secret +type: Opaque +data: + # Password is "password456", base64 encoded + password: cGFzc3dvcmQ0NTY= +--- +apiVersion: v1 +kind: Secret +metadata: + name: jimi-password-secret +type: Opaque +data: + # Password is "password789", base64 encoded + password: cGFzc3dvcmQ3ODk= diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/files/04-pgadmin-check.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/04-pgadmin-check.yaml new file mode 100644 index 0000000000..9a07b0d994 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/04-pgadmin-check.yaml @@ -0,0 +1,40 @@ +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin +status: + containerStatuses: + - name: pgadmin + ready: true + started: true + phase: Running +--- +apiVersion: v1 +kind: Secret +metadata: + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin +type: Opaque +--- +apiVersion: v1 +kind: Secret +metadata: + name: bob-password-secret +type: Opaque +--- +apiVersion: v1 +kind: Secret +metadata: + name: dave-password-secret +type: Opaque +--- +apiVersion: v1 +kind: Secret +metadata: + name: jimi-password-secret +type: Opaque diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/files/04-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/04-pgadmin.yaml new file mode 100644 index 0000000000..32b0081f92 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/04-pgadmin.yaml @@ -0,0 +1,54 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin 
+metadata: + name: pgadmin +spec: + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + serverGroups: [] + users: + - username: bob@example.com + role: Administrator + passwordRef: + name: bob-password-secret + key: password + - username: dave@example.com + role: Administrator + passwordRef: + name: dave-password-secret + key: password + - username: jimi@example.com + passwordRef: + name: jimi-password-secret + key: password +--- +apiVersion: v1 +kind: Secret +metadata: + name: bob-password-secret +type: Opaque +data: + # Password is "NEWpassword123", base64 encoded + password: TkVXcGFzc3dvcmQxMjM= +--- +apiVersion: v1 +kind: Secret +metadata: + name: dave-password-secret +type: Opaque +data: + # Password is "NEWpassword456", base64 encoded + password: TkVXcGFzc3dvcmQ0NTY= +--- +apiVersion: v1 +kind: Secret +metadata: + name: jimi-password-secret +type: Opaque +data: + # Password is "NEWpassword789", base64 encoded + password: TkVXcGFzc3dvcmQ3ODk= diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/files/06-pgadmin-check.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/06-pgadmin-check.yaml new file mode 100644 index 0000000000..04481fb4d1 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/06-pgadmin-check.yaml @@ -0,0 +1,22 @@ +--- +apiVersion: v1 +kind: Pod +metadata: + labels: + postgres-operator.crunchydata.com/data: pgadmin + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin +status: + containerStatuses: + - name: pgadmin + ready: true + started: true + phase: Running +--- +apiVersion: v1 +kind: Secret +metadata: + labels: + postgres-operator.crunchydata.com/role: pgadmin + postgres-operator.crunchydata.com/pgadmin: pgadmin +type: Opaque diff --git a/testing/kuttl/e2e/standalone-pgadmin-user-management/files/06-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/06-pgadmin.yaml new file mode 100644 index 0000000000..0513edf050 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin-user-management/files/06-pgadmin.yaml @@ -0,0 +1,13 @@ +apiVersion: postgres-operator.crunchydata.com/v1beta1 +kind: PGAdmin +metadata: + name: pgadmin +spec: + dataVolumeClaimSpec: + accessModes: + - "ReadWriteOnce" + resources: + requests: + storage: 1Gi + serverGroups: [] + users: [] diff --git a/testing/kuttl/e2e/standalone-pgadmin/00--create-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin/00--create-pgadmin.yaml new file mode 100644 index 0000000000..ee1a03ec64 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/00--create-pgadmin.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/00-pgadmin.yaml +assert: +- files/00-pgadmin-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin/00-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin/00-assert.yaml new file mode 100644 index 0000000000..5b95b46964 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/00-assert.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +collectors: +- type: command + command: kubectl -n $NAMESPACE describe pods --selector postgres-operator.crunchydata.com/pgadmin=pgadmin +- namespace: $NAMESPACE + selector: postgres-operator.crunchydata.com/pgadmin=pgadmin diff --git a/testing/kuttl/e2e/standalone-pgadmin/01-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin/01-assert.yaml new file mode 100644 index 0000000000..6b7c8c8794 --- /dev/null +++ 
b/testing/kuttl/e2e/standalone-pgadmin/01-assert.yaml @@ -0,0 +1,17 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +commands: +- script: | + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + + pod_name=$(kubectl get pod -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + + clusters_actual=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c "python3 /usr/local/lib/python3.11/site-packages/pgadmin4/setup.py dump-servers /tmp/dumped.json --user admin@pgadmin.${NAMESPACE}.svc && cat /tmp/dumped.json") + + clusters_expected="\"Servers\": {}" + { + contains "${clusters_actual}" "${clusters_expected}" + } || { + echo "Wrong servers dumped: got ${clusters_actual}" + exit 1 + } diff --git a/testing/kuttl/e2e/standalone-pgadmin/02--create-cluster.yaml b/testing/kuttl/e2e/standalone-pgadmin/02--create-cluster.yaml new file mode 100644 index 0000000000..bee91ce0a4 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/02--create-cluster.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/02-cluster.yaml +- files/02-pgadmin.yaml +assert: +- files/02-cluster-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin/03-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin/03-assert.yaml new file mode 100644 index 0000000000..169a8261eb --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/03-assert.yaml @@ -0,0 +1,76 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +# Check the configmap is updated; +# Check the file is updated on the pod; +# Check the server dump is accurate. +# Because we have to wait for the configmap reload, make sure we have enough time. +timeout: 120 +commands: +- script: | + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + diff_comp() { bash -ceu 'diff <(echo "$1" ) <(echo "$2")' - "$@"; } + + data_expected='"pgadmin-shared-clusters.json": "{\n \"Servers\": {\n \"1\": {\n \"Group\": \"groupOne\",\n \"Host\": \"pgadmin1-primary.'${NAMESPACE}.svc'\",\n \"MaintenanceDB\": \"postgres\",\n \"Name\": \"pgadmin1\",\n \"Port\": 5432,\n \"SSLMode\": \"prefer\",\n \"Shared\": true,\n \"Username\": \"pgadmin1\"\n }\n }\n}\n"' + + data_actual=$(kubectl get cm -l postgres-operator.crunchydata.com/pgadmin=pgadmin -n "${NAMESPACE}" -o json | jq .items[0].data) + + { + contains "${data_actual}" "${data_expected}" + } || { + echo "Wrong configmap: got ${data_actual}" + exit 1 + } + + pod_name=$(kubectl get pod -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + + config_updated=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c 'cat /etc/pgadmin/conf.d/~postgres-operator/pgadmin-shared-clusters.json') + config_expected='"Servers": { + "1": { + "Group": "groupOne", + "Host": "pgadmin1-primary.'${NAMESPACE}.svc'", + "MaintenanceDB": "postgres", + "Name": "pgadmin1", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "pgadmin1" + } + }' + { + contains "${config_updated}" "${config_expected}" + } || { + echo "Wrong file mounted: got ${config_updated}" + echo "Wrong file mounted: expected ${config_expected}" + sleep 10 + exit 1 + } + + clusters_actual=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c "python3 /usr/local/lib/python3.11/site-packages/pgadmin4/setup.py dump-servers /tmp/dumped.json --user admin@pgadmin.${NAMESPACE}.svc && cat /tmp/dumped.json") + + clusters_expected=' + { + "Servers": { + "1": { + "Name": "pgadmin1", + "Group": "groupOne", + "Host": "pgadmin1-primary.'${NAMESPACE}.svc'", + "Port": 5432, + "MaintenanceDB": 
"postgres", + "Username": "pgadmin1", + "Shared": true, + "TunnelPort": "22", + "KerberosAuthentication": false, + "ConnectionParameters": { + "sslmode": "prefer" + } + } + } + }' + { + contains "${clusters_actual}" "${clusters_expected}" + } || { + echo "Wrong servers dumped: got ${clusters_actual}" + echo "Wrong servers dumped: expected ${clusters_expected}" + diff_comp "${clusters_actual}" "${clusters_expected}" + exit 1 + } diff --git a/testing/kuttl/e2e/standalone-pgadmin/04--create-cluster.yaml b/testing/kuttl/e2e/standalone-pgadmin/04--create-cluster.yaml new file mode 100644 index 0000000000..5701678501 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/04--create-cluster.yaml @@ -0,0 +1,6 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/04-cluster.yaml +assert: +- files/04-cluster-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin/05-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin/05-assert.yaml new file mode 100644 index 0000000000..7fe5b69dc2 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/05-assert.yaml @@ -0,0 +1,102 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +# Check the configmap is updated; +# Check the file is updated on the pod; +# Check the server dump is accurate. +# Because we have to wait for the configmap reload, make sure we have enough time. +timeout: 120 +commands: +- script: | + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + diff_comp() { bash -ceu 'diff <(echo "$1" ) <(echo "$2")' - "$@"; } + + data_expected='"pgadmin-shared-clusters.json": "{\n \"Servers\": {\n \"1\": {\n \"Group\": \"groupOne\",\n \"Host\": \"pgadmin1-primary.'${NAMESPACE}.svc'\",\n \"MaintenanceDB\": \"postgres\",\n \"Name\": \"pgadmin1\",\n \"Port\": 5432,\n \"SSLMode\": \"prefer\",\n \"Shared\": true,\n \"Username\": \"pgadmin1\"\n },\n \"2\": {\n \"Group\": \"groupOne\",\n \"Host\": \"pgadmin2-primary.'${NAMESPACE}.svc'\",\n \"MaintenanceDB\": \"postgres\",\n \"Name\": \"pgadmin2\",\n \"Port\": 5432,\n \"SSLMode\": \"prefer\",\n \"Shared\": true,\n \"Username\": \"pgadmin2\"\n }\n }\n}\n"' + + data_actual=$(kubectl get cm -l postgres-operator.crunchydata.com/pgadmin=pgadmin -n "${NAMESPACE}" -o json | jq .items[0].data) + + { + contains "${data_actual}" "${data_expected}" + } || { + echo "Wrong configmap: got ${data_actual}" + diff_comp "${data_actual}" "${data_expected}" + exit 1 + } + + pod_name=$(kubectl get pod -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + + config_updated=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c 'cat /etc/pgadmin/conf.d/~postgres-operator/pgadmin-shared-clusters.json') + config_expected='"Servers": { + "1": { + "Group": "groupOne", + "Host": "pgadmin1-primary.'${NAMESPACE}.svc'", + "MaintenanceDB": "postgres", + "Name": "pgadmin1", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "pgadmin1" + }, + "2": { + "Group": "groupOne", + "Host": "pgadmin2-primary.'${NAMESPACE}.svc'", + "MaintenanceDB": "postgres", + "Name": "pgadmin2", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "pgadmin2" + } + }' + { + contains "${config_updated}" "${config_expected}" + } || { + echo "Wrong file mounted: got ${config_updated}" + echo "Wrong file mounted: expected ${config_expected}" + diff_comp "${config_updated}" "${config_expected}" + sleep 10 + exit 1 + } + + clusters_actual=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c "python3 /usr/local/lib/python3.11/site-packages/pgadmin4/setup.py dump-servers 
/tmp/dumped.json --user admin@pgadmin.${NAMESPACE}.svc && cat /tmp/dumped.json") + + clusters_expected=' + { + "Servers": { + "1": { + "Name": "pgadmin1", + "Group": "groupOne", + "Host": "pgadmin1-primary.'${NAMESPACE}.svc'", + "Port": 5432, + "MaintenanceDB": "postgres", + "Username": "pgadmin1", + "Shared": true, + "TunnelPort": "22", + "KerberosAuthentication": false, + "ConnectionParameters": { + "sslmode": "prefer" + } + }, + "2": { + "Name": "pgadmin2", + "Group": "groupOne", + "Host": "pgadmin2-primary.'${NAMESPACE}.svc'", + "Port": 5432, + "MaintenanceDB": "postgres", + "Username": "pgadmin2", + "Shared": true, + "TunnelPort": "22", + "KerberosAuthentication": false, + "ConnectionParameters": { + "sslmode": "prefer" + } + } + } + }' + { + contains "${clusters_actual}" "${clusters_expected}" + } || { + echo "Wrong servers dumped: got ${clusters_actual}" + echo "Wrong servers dumped: expected ${clusters_expected}" + diff_comp "${clusters_actual}" "${clusters_expected}" + exit 1 + } diff --git a/testing/kuttl/e2e/standalone-pgadmin/06--create-cluster.yaml b/testing/kuttl/e2e/standalone-pgadmin/06--create-cluster.yaml new file mode 100644 index 0000000000..86b5f8bf04 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/06--create-cluster.yaml @@ -0,0 +1,7 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +apply: +- files/06-cluster.yaml +- files/06-pgadmin.yaml +assert: +- files/06-cluster-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin/07-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin/07-assert.yaml new file mode 100644 index 0000000000..323237cad4 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/07-assert.yaml @@ -0,0 +1,126 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +# Check the configmap is updated; +# Check the file is updated on the pod; +# Check the server dump is accurate. +# Because we have to wait for the configmap reload, make sure we have enough time. 
+timeout: 120 +commands: +- script: | + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + diff_comp() { bash -ceu 'diff <(echo "$1" ) <(echo "$2")' - "$@"; } + + data_expected='"pgadmin-shared-clusters.json": "{\n \"Servers\": {\n \"1\": {\n \"Group\": \"groupOne\",\n \"Host\": \"pgadmin1-primary.'${NAMESPACE}.svc'\",\n \"MaintenanceDB\": \"postgres\",\n \"Name\": \"pgadmin1\",\n \"Port\": 5432,\n \"SSLMode\": \"prefer\",\n \"Shared\": true,\n \"Username\": \"pgadmin1\"\n },\n \"2\": {\n \"Group\": \"groupOne\",\n \"Host\": \"pgadmin2-primary.'${NAMESPACE}.svc'\",\n \"MaintenanceDB\": \"postgres\",\n \"Name\": \"pgadmin2\",\n \"Port\": 5432,\n \"SSLMode\": \"prefer\",\n \"Shared\": true,\n \"Username\": \"pgadmin2\"\n },\n \"3\": {\n \"Group\": \"groupTwo\",\n \"Host\": \"pgadmin3-primary.'${NAMESPACE}.svc'\",\n \"MaintenanceDB\": \"postgres\",\n \"Name\": \"pgadmin3\",\n \"Port\": 5432,\n \"SSLMode\": \"prefer\",\n \"Shared\": true,\n \"Username\": \"pgadmin3\"\n }\n }\n}\n"' + + data_actual=$(kubectl get cm -l postgres-operator.crunchydata.com/pgadmin=pgadmin -n "${NAMESPACE}" -o json | jq .items[0].data) + + { + contains "${data_actual}" "${data_expected}" + } || { + echo "Wrong configmap: got ${data_actual}" + diff_comp "${data_actual}" "${data_expected}" + exit 1 + } + + pod_name=$(kubectl get pod -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + + config_updated=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c 'cat /etc/pgadmin/conf.d/~postgres-operator/pgadmin-shared-clusters.json') + config_expected='"Servers": { + "1": { + "Group": "groupOne", + "Host": "pgadmin1-primary.'${NAMESPACE}.svc'", + "MaintenanceDB": "postgres", + "Name": "pgadmin1", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "pgadmin1" + }, + "2": { + "Group": "groupOne", + "Host": "pgadmin2-primary.'${NAMESPACE}.svc'", + "MaintenanceDB": "postgres", + "Name": "pgadmin2", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "pgadmin2" + }, + "3": { + "Group": "groupTwo", + "Host": "pgadmin3-primary.'${NAMESPACE}.svc'", + "MaintenanceDB": "postgres", + "Name": "pgadmin3", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "pgadmin3" + } + }' + { + contains "${config_updated}" "${config_expected}" + } || { + echo "Wrong file mounted: got ${config_updated}" + echo "Wrong file mounted: expected ${config_expected}" + diff_comp "${config_updated}" "${config_expected}" + sleep 10 + exit 1 + } + + clusters_actual=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c "python3 /usr/local/lib/python3.11/site-packages/pgadmin4/setup.py dump-servers /tmp/dumped.json --user admin@pgadmin.${NAMESPACE}.svc && cat /tmp/dumped.json") + + clusters_expected=' + { + "Servers": { + "1": { + "Name": "pgadmin1", + "Group": "groupOne", + "Host": "pgadmin1-primary.'${NAMESPACE}.svc'", + "Port": 5432, + "MaintenanceDB": "postgres", + "Username": "pgadmin1", + "Shared": true, + "TunnelPort": "22", + "KerberosAuthentication": false, + "ConnectionParameters": { + "sslmode": "prefer" + } + }, + "2": { + "Name": "pgadmin2", + "Group": "groupOne", + "Host": "pgadmin2-primary.'${NAMESPACE}.svc'", + "Port": 5432, + "MaintenanceDB": "postgres", + "Username": "pgadmin2", + "Shared": true, + "TunnelPort": "22", + "KerberosAuthentication": false, + "ConnectionParameters": { + "sslmode": "prefer" + } + }, + "3": { + "Name": "pgadmin3", + "Group": "groupTwo", + "Host": "pgadmin3-primary.'${NAMESPACE}.svc'", + "Port": 5432, + "MaintenanceDB": 
"postgres", + "Username": "pgadmin3", + "Shared": true, + "TunnelPort": "22", + "KerberosAuthentication": false, + "ConnectionParameters": { + "sslmode": "prefer" + } + } + } + }' + { + contains "${clusters_actual}" "${clusters_expected}" + } || { + echo "Wrong servers dumped: got ${clusters_actual}" + echo "Wrong servers dumped: expected ${clusters_expected}" + diff_comp "${clusters_actual}" "${clusters_expected}" + exit 1 + } diff --git a/testing/kuttl/e2e/standalone-pgadmin/08--delete-cluster.yaml b/testing/kuttl/e2e/standalone-pgadmin/08--delete-cluster.yaml new file mode 100644 index 0000000000..bc11ea62f4 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/08--delete-cluster.yaml @@ -0,0 +1,8 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +delete: + - apiVersion: postgres-operator.crunchydata.com/v1beta1 + kind: PostgresCluster + name: pgadmin2 +error: +- files/04-cluster-check.yaml diff --git a/testing/kuttl/e2e/standalone-pgadmin/09-assert.yaml b/testing/kuttl/e2e/standalone-pgadmin/09-assert.yaml new file mode 100644 index 0000000000..eca5581cb7 --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/09-assert.yaml @@ -0,0 +1,102 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestAssert +# Check the configmap is updated; +# Check the file is updated on the pod; +# Check the server dump is accurate. +# Because we have to wait for the configmap reload, make sure we have enough time. +timeout: 120 +commands: +- script: | + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + diff_comp() { bash -ceu 'diff <(echo "$1" ) <(echo "$2")' - "$@"; } + + data_expected='"pgadmin-shared-clusters.json": "{\n \"Servers\": {\n \"1\": {\n \"Group\": \"groupOne\",\n \"Host\": \"pgadmin1-primary.'${NAMESPACE}.svc'\",\n \"MaintenanceDB\": \"postgres\",\n \"Name\": \"pgadmin1\",\n \"Port\": 5432,\n \"SSLMode\": \"prefer\",\n \"Shared\": true,\n \"Username\": \"pgadmin1\"\n },\n \"2\": {\n \"Group\": \"groupTwo\",\n \"Host\": \"pgadmin3-primary.'${NAMESPACE}.svc'\",\n \"MaintenanceDB\": \"postgres\",\n \"Name\": \"pgadmin3\",\n \"Port\": 5432,\n \"SSLMode\": \"prefer\",\n \"Shared\": true,\n \"Username\": \"pgadmin3\"\n }\n }\n}\n"' + + data_actual=$(kubectl get cm -l postgres-operator.crunchydata.com/pgadmin=pgadmin -n "${NAMESPACE}" -o json | jq .items[0].data) + + { + contains "${data_actual}" "${data_expected}" + } || { + echo "Wrong configmap: got ${data_actual}" + diff_comp "${data_actual}" "${data_expected}" + exit 1 + } + + pod_name=$(kubectl get pod -n "${NAMESPACE}" -l postgres-operator.crunchydata.com/pgadmin=pgadmin -o name) + + config_updated=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c 'cat /etc/pgadmin/conf.d/~postgres-operator/pgadmin-shared-clusters.json') + config_expected='"Servers": { + "1": { + "Group": "groupOne", + "Host": "pgadmin1-primary.'${NAMESPACE}.svc'", + "MaintenanceDB": "postgres", + "Name": "pgadmin1", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "pgadmin1" + }, + "2": { + "Group": "groupTwo", + "Host": "pgadmin3-primary.'${NAMESPACE}.svc'", + "MaintenanceDB": "postgres", + "Name": "pgadmin3", + "Port": 5432, + "SSLMode": "prefer", + "Shared": true, + "Username": "pgadmin3" + } + }' + { + contains "${config_updated}" "${config_expected}" + } || { + echo "Wrong file mounted: got ${config_updated}" + echo "Wrong file mounted: expected ${config_expected}" + diff_comp "${config_updated}" "${config_expected}" + sleep 10 + exit 1 + } + + clusters_actual=$(kubectl exec -n "${NAMESPACE}" "${pod_name}" -- bash -c "python3 
/usr/local/lib/python3.11/site-packages/pgadmin4/setup.py dump-servers /tmp/dumped.json --user admin@pgadmin.${NAMESPACE}.svc && cat /tmp/dumped.json") + + clusters_expected=' + { + "Servers": { + "1": { + "Name": "pgadmin1", + "Group": "groupOne", + "Host": "pgadmin1-primary.'${NAMESPACE}.svc'", + "Port": 5432, + "MaintenanceDB": "postgres", + "Username": "pgadmin1", + "Shared": true, + "TunnelPort": "22", + "KerberosAuthentication": false, + "ConnectionParameters": { + "sslmode": "prefer" + } + }, + "2": { + "Name": "pgadmin3", + "Group": "groupTwo", + "Host": "pgadmin3-primary.'${NAMESPACE}.svc'", + "Port": 5432, + "MaintenanceDB": "postgres", + "Username": "pgadmin3", + "Shared": true, + "TunnelPort": "22", + "KerberosAuthentication": false, + "ConnectionParameters": { + "sslmode": "prefer" + } + } + } + }' + { + contains "${clusters_actual}" "${clusters_expected}" + } || { + echo "Wrong servers dumped: got ${clusters_actual}" + echo "Wrong servers dumped: expected ${clusters_expected}" + diff_comp "${clusters_actual}" "${clusters_expected}" + exit 1 + } diff --git a/testing/kuttl/e2e/standalone-pgadmin/10-invalid-pgadmin.yaml b/testing/kuttl/e2e/standalone-pgadmin/10-invalid-pgadmin.yaml new file mode 100644 index 0000000000..118b8d06ef --- /dev/null +++ b/testing/kuttl/e2e/standalone-pgadmin/10-invalid-pgadmin.yaml @@ -0,0 +1,37 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestStep +# Check that invalid spec cannot be applied. +commands: +- script: | + contains() { bash -ceu '[[ "$1" == *"$2"* ]]' - "$@"; } + diff_comp() { bash -ceu 'diff <(echo "$1" ) <(echo "$2")' - "$@"; } + + data_expected='"pgadmin2" is invalid: spec.serverGroups[0]: Invalid value: "object": exactly one of "postgresClusterName" or "postgresClusterSelector" is required' + + data_actual=$(kubectl apply -f - 2>&1 < /pgwal/pgbackrest-spool" || exit 1 diff --git a/testing/kuttl/e2e/wal-pvc-pgupgrade/06-assert.yaml b/testing/kuttl/e2e/wal-pvc-pgupgrade/06-assert.yaml new file mode 100644 index 0000000000..f7575212e0 --- /dev/null +++ b/testing/kuttl/e2e/wal-pvc-pgupgrade/06-assert.yaml @@ -0,0 +1,14 @@ +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: wal-pvc-pgupgrade-after +status: + succeeded: 1 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: wal-pvc-pgupgrade-after-replica +status: + succeeded: 1 diff --git a/testing/kuttl/kuttl-test.yaml b/testing/kuttl/kuttl-test.yaml new file mode 100644 index 0000000000..6733707507 --- /dev/null +++ b/testing/kuttl/kuttl-test.yaml @@ -0,0 +1,14 @@ +apiVersion: kuttl.dev/v1beta1 +kind: TestSuite +testDirs: +- testing/kuttl/e2e-generated/ +timeout: 300 +parallel: 2 +# by default kuttl will run in a generated namespace to override +# that functionality simply uncomment the line below and replace +# postgres-operator with the desired namespace to run in. +# namespace: postgres-operator +# By default kuttl deletes the resources created during a test. +# For debugging, it may be helpful to uncomment the following line +# in order to inspect the resources. 
+# skipDelete: true diff --git a/testing/kuttl/scripts/pgbackrest-initialization.sh b/testing/kuttl/scripts/pgbackrest-initialization.sh new file mode 100755 index 0000000000..ba6cd4a7e5 --- /dev/null +++ b/testing/kuttl/scripts/pgbackrest-initialization.sh @@ -0,0 +1,24 @@ +#!/bin/bash + +EXPECTED_STATUS=$1 +EXPECTED_NUM_BACKUPS=$2 + +CLUSTER=${CLUSTER:-default} + +INFO=$(kubectl -n "${NAMESPACE}" exec "statefulset.apps/${CLUSTER}-repo-host" -c pgbackrest -- pgbackrest info) + +# Grab the `status` line from `pgbackrest info`, remove whitespace with `xargs`, +# and trim the string to only include the status in order to +# validate the status matches the expected status. +STATUS=$(grep "status" <<< "$INFO" | xargs | cut -d' ' -f 2) +if [[ "$STATUS" != "$EXPECTED_STATUS" ]]; then + echo "Expected ${EXPECTED_STATUS} but got ${STATUS}" + exit 1 +fi + +# Count the lines with `full backup` to validate that the expected number of backups are found. +NUM_BACKUPS=$(grep -c "full backup:" <<< "$INFO") +if [[ "$NUM_BACKUPS" != "$EXPECTED_NUM_BACKUPS" ]]; then + echo "Expected ${EXPECTED_NUM_BACKUPS} but got ${NUM_BACKUPS}" + exit 1 +fi diff --git a/testing/pgo_cli/README.md b/testing/pgo_cli/README.md deleted file mode 100644 index 03c7c79b1e..0000000000 --- a/testing/pgo_cli/README.md +++ /dev/null @@ -1,95 +0,0 @@ - -This directory contains a suite of basic regression scenarios that exercise the -PostgreSQL Operator through the PostgreSQL Operator Client. It uses the -[`testing` package](https://pkg.go.dev/testing) of Go and can be driven using `go test`. - - -## Configuration - -The environment variables of the `go test` process are passed to `pgo` so it can -be tested under different configurations. - -When `PGO_OPERATOR_NAMESPACE` is set, some of the [variables that affect `pgo`][pgo-env] -are given defaults: - -- `PGO_CA_CERT`, `PGO_CLIENT_CERT`, and `PGO_CLIENT_KEY` default to paths under - `~/.pgo/${PGO_OPERATOR_NAMESPACE}/output` which is usually populated by the - Ansible installer. - -- `PGO_APISERVER_URL` defaults to a random local port that forwards to the - PostgreSQL Operator API using the same mechanism as `kubectl port-forward`. - -When `PGO_NAMESPACE` is set, any objects created by tests will appear there. -When it is not set, each test creates a new random namespace, runs there, and -deletes it afterward. These random namespaces all have the `pgo-test` label. - -`PGO_TEST_TIMEOUT_SCALE` can be used to adjust the amount of time a test waits -for asynchronous events to take place. A setting of `1.2` waits 20% longer than -usual. - -[pgo-env]: ../../docs/content/pgo-client/_index.md#global-environment-variables - - -### Kubernetes - -The suite expects to be able to call the Kubernetes API that is running the -Operator. If it cannot find a [kubeconfig][] file in the typical places, it will -try the [in-cluster API][k8s-in-cluster]. Use the `KUBECONFIG` environment -variable to configure the Kubernetes API client. - -[k8s-in-cluster]: https://pkg.go.dev/k8s.io/client-go/rest#InClusterConfig -[kubeconfig]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/ - - -## Execution - -The suite uses whatever `pgo` executable is on your `PATH`. To use a recently -built one you can set your `PATH` with the following: - -```sh -cd testing/pgo_cli -PATH="$(dirname $(dirname $(pwd)))/bin:$PATH" -``` - -The following will run the entire suite printing the names of tests as it goes. -The results for every test will appear as they finish. 
- -```sh -cd testing/pgo_cli -GO111MODULE=on go test -count=1 -parallel=2 -timeout=30m -v . -``` - -Use the [`-run` argument][go-test-run] to select some subset of tests: - -```sh -go test -run '/failover' -``` - -[go-test-run]: https://pkg.go.dev/testing#hdr-Subtests_and_Sub_benchmarks - - -## Test Descriptions - -**operator_test.go** executes the pgo version, pgo status and pgo show config commands and verifies the correct responses are returned. - -**cluster_create_test.go** executes the pgo create cluster, pgo show workflow, pgo show cluster and pgo show user commands. The test will create a cluster named mycluster and verify the cluster is created. This test will also verify correct responses are returned for the pgo show workflow command as well as pgo show cluster and pgo show user commands. - -**cluster_label_test.go** executes the pgo label and various pgo show cluster commands as well as the pgo delete label command. This test will add a label to a cluster then exercise various ways to show the cluster via the label as well as verify the label was applied to the cluster. Pgo delete label is also executed and verifies the label is successfully removed from the cluster. - -**cluster_test_test.go** executes the pgo test command and verifies the cluster services and instances are "UP" - -**operator_rbac_test.go** executes various pgo pgouser and pgo pgorole commands. This test verifies operator user creation scenarios which include creating an operator user and assigning roles and namespace access and verifying operator users can only access namespaces they are assigned to as well as being able to run commands that are assigned to them via the pgo pgorole command. - -**cluster_user_test.go** executes various pgo create user, pgo show user, pgo update user and pgo delete commands. This test verifies the operator creates a PostgreSQL user correctly as well as showing the correct user data and updates the user correctly. This test also verifies a variety of flags that are passed in with the create update and delete user commands. - -**cluster_df_test.go** executed the pgo df command and verifies the capacity of a cluster is returned. - -**cluster_policy_test.go** executes the policy functionality utilizing the commands pgo create policy, pgo apply policy, pgo schedule policy and pgo delete policy as well as various flags and will verify the appropriate values are returned and created, updated, applied or deleted. - -**cluster_scale_test.go** executes pgo scale which will scale the cluster with an additional replica. This test will verify the cluster has successfully scaled the cluster up and verify the replica is available and ready. - -**cluster_pgbouncer_test.go** executes pgo create pgbouncer to onboard a pgbouncer and verifies pgbouncer has been added and is available and running. This test also executes the pgo show cluster command as well to verify pgbouncer has been onboarded as well as pgo test to ensure all of the clusters services and instances are "UP" and available. Lastly, this test will remove the pgbouncer from the cluster by running the pgo delete pgbouncer command and verify pgbouncer was indeed removed from the cluster. - -**cluster_delete_test.go** executes the pgo delete command with scenarios such as completely delete the cluster including all of the backups and delete the cluster but keep the backup data and verifies the deletions occurred. 
- - diff --git a/testing/pgo_cli/cluster_backup_test.go b/testing/pgo_cli/cluster_backup_test.go deleted file mode 100644 index d2f8508c3d..0000000000 --- a/testing/pgo_cli/cluster_backup_test.go +++ /dev/null @@ -1,278 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "strings" - "testing" - "time" - - "github.com/crunchydata/postgres-operator/testing/kubeapi" - "github.com/stretchr/testify/require" -) - -// TC60 ✓ -// TC122 ✓ -// TC130 ✓ -func TestClusterBackup(t *testing.T) { - t.Parallel() - - teardownSchedule := func(t *testing.T, namespace, schedule string) { - pgo("delete", "schedule", "-n", namespace, "--no-prompt", "--schedule-name="+schedule).Exec(t) - } - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("show backup", func(t *testing.T) { - t.Run("shows something", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - - // BUG(cbandy): cannot check too soon. - waitFor(t, func() bool { return false }, 5*time.Second, time.Second) - - output, err := pgo("show", "backup", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - }) - }) - - t.Run("backup", func(t *testing.T) { - t.Run("creates an incremental backup", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - requireStanzaExists(t, namespace(), cluster(), 2*time.Minute) - - // BUG(cbandy): cannot create a backup too soon. - waitFor(t, func() bool { return false }, 5*time.Second, time.Second) - - output, err := pgo("backup", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "created") - - exists := func() bool { - output, err := pgo("show", "backup", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - return strings.Contains(output, "incr backup") - } - requireWaitFor(t, exists, time.Minute, time.Second, - "timeout waiting for backup of %q in %q", cluster(), namespace()) - }) - - t.Run("accepts options", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - requireStanzaExists(t, namespace(), cluster(), 2*time.Minute) - - // BUG(cbandy): cannot create a backup too soon. 
- waitFor(t, func() bool { return false }, 5*time.Second, time.Second) - - output, err := pgo("backup", cluster(), "-n", namespace(), - "--backup-opts=--type=diff", - ).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "created") - - exists := func() bool { - output, err := pgo("show", "backup", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - return strings.Contains(output, "diff backup") - } - requireWaitFor(t, exists, time.Minute, time.Second, - "timeout waiting for backup of %q in %q", cluster(), namespace()) - }) - }) - - t.Run("create schedule", func(t *testing.T) { - t.Run("creates a backup", func(t *testing.T) { - output, err := pgo("create", "schedule", "--selector=name="+cluster(), "-n", namespace(), - "--schedule-type=pgbackrest", "--schedule=* * * * *", "--pgbackrest-backup-type=full", - ).Exec(t) - defer teardownSchedule(t, namespace(), cluster()+"-pgbackrest-full") - require.NoError(t, err) - require.Contains(t, output, "created") - - output, err = pgo("show", "schedule", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "pgbackrest-full") - - requireClusterReady(t, namespace(), cluster(), time.Minute) - requireStanzaExists(t, namespace(), cluster(), 2*time.Minute) - - output, err = pgo("show", "backup", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - before := strings.Count(output, "full backup") - - more := func() bool { - output, err := pgo("show", "backup", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - return strings.Count(output, "full backup") > before - } - requireWaitFor(t, more, 75*time.Second, time.Second, - "timeout waiting for backup to execute on %q in %q", cluster(), namespace()) - }) - }) - - t.Run("delete schedule", func(t *testing.T) { - requireSchedule := func(t *testing.T, kind string) { - _, err := pgo("create", "schedule", "--selector=name="+cluster(), "-n", namespace(), - "--schedule-type=pgbackrest", "--schedule=* * * * *", "--pgbackrest-backup-type="+kind, - ).Exec(t) - require.NoError(t, err) - } - - t.Run("removes all schedules", func(t *testing.T) { - requireSchedule(t, "diff") - requireSchedule(t, "full") - - output, err := pgo("delete", "schedule", cluster(), "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "deleted") - require.Contains(t, output, "pgbackrest-diff") - require.Contains(t, output, "pgbackrest-full") - - output, err = pgo("show", "schedule", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotContains(t, output, "pgbackrest-diff") - require.NotContains(t, output, "pgbackrest-full") - }) - - t.Run("accepts schedule name", func(t *testing.T) { - requireSchedule(t, "diff") - requireSchedule(t, "full") - - output, err := pgo("delete", "schedule", "-n", namespace(), - "--schedule-name="+cluster()+"-pgbackrest-diff", "--no-prompt", - ).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "deleted") - require.Contains(t, output, "pgbackrest-diff") - require.NotContains(t, output, "pgbackrest-full") - - output, err = pgo("show", "schedule", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotContains(t, output, "pgbackrest-diff") - require.Contains(t, output, "pgbackrest-full") - }) - }) - }) - - t.Run("restore", func(t *testing.T) { - t.Run("replaces the cluster", func(t *testing.T) { - t.Parallel() - withCluster(t, namespace, func(cluster func() string) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - 
requireStanzaExists(t, namespace(), cluster(), 2*time.Minute) - - before := clusterPVCs(t, namespace(), cluster()) - require.NotEmpty(t, before, "expected volumes to exist") - - // find the creation timestamp for the primary PVC, which wll have the same - // name as the cluster - var primaryPVCCreationTimestamp time.Time - for _, pvc := range before { - if pvc.GetName() == cluster() { - primaryPVCCreationTimestamp = pvc.GetCreationTimestamp().Time - } - } - - output, err := pgo("restore", cluster(), "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "restore request") - - // wait for primary PVC to be recreated - more := func() bool { - after := clusterPVCs(t, namespace(), cluster()) - for _, pvc := range after { - // check to see if the PVC for the primary is bound, and has a timestamp - // after the original timestamp for the primary PVC timestamp captured above, - // indicating that it been re-created - if pvc.GetName() == cluster() && kubeapi.IsPVCBound(pvc) && - pvc.GetCreationTimestamp().Time.After(primaryPVCCreationTimestamp) { - return true - } - } - return false - } - requireWaitFor(t, more, time.Minute, time.Second, - "timeout waiting for restore to begin on %q in %q", cluster(), namespace()) - - requireClusterReady(t, namespace(), cluster(), 2*time.Minute) - }) - }) - - t.Run("accepts point-in-time options", func(t *testing.T) { - t.Parallel() - withCluster(t, namespace, func(cluster func() string) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - requireStanzaExists(t, namespace(), cluster(), 2*time.Minute) - - // data that will need to be restored - _, stderr := clusterPSQL(t, namespace(), cluster(), - `CREATE TABLE important (data) AS VALUES ('treasure')`) - require.Empty(t, stderr) - - // point to at which to restore - recoveryObjective, stderr := clusterPSQL(t, namespace(), cluster(), ` - \set QUIET yes - \pset format unaligned - \pset tuples_only yes - SELECT clock_timestamp()`) - recoveryObjective = strings.TrimSpace(recoveryObjective) - require.Empty(t, stderr) - - // a reason to restore followed by a WAL flush - _, stderr = clusterPSQL(t, namespace(), cluster(), ` - DROP TABLE important; - DO $$ BEGIN IF current_setting('server_version_num')::int > 100000 - THEN PERFORM pg_switch_wal(); - ELSE PERFORM pg_switch_xlog(); - END IF; END $$`) - require.Empty(t, stderr) - - output, err := pgo("restore", cluster(), "-n", namespace(), - "--backup-opts=--type=time", - "--pitr-target="+recoveryObjective, - "--no-prompt", - ).Exec(t) - require.NoError(t, err) - require.Contains(t, output, recoveryObjective) - - restored := func() bool { - pods, err := TestContext.Kubernetes.ListPods( - namespace(), map[string]string{ - "pg-cluster": cluster(), - "pgo-pg-database": "true", - }) - - if err != nil || len(pods) == 0 { - return false - } - - stdout, stderr, err := TestContext.Kubernetes.PodExec( - pods[0].Namespace, pods[0].Name, - strings.NewReader(`TABLE important`), - "psql", "-U", "postgres", "-f-") - - return err == nil && len(stderr) == 0 && - strings.Contains(stdout, "(1 row)") - } - requireWaitFor(t, restored, 2*time.Minute, time.Second, - "timeout waiting for restore to finish on %q in %q", cluster(), namespace()) - - requireClusterReady(t, namespace(), cluster(), time.Minute) - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_cat_test.go b/testing/pgo_cli/cluster_cat_test.go deleted file mode 100644 index 4cb159be8d..0000000000 --- a/testing/pgo_cli/cluster_cat_test.go +++ /dev/null @@ -1,43 +0,0 @@ 
-package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -func TestClusterCat(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("cat", func(t *testing.T) { - t.Run("shows something", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - - output, err := pgo("cat", cluster(), "-n", namespace(), - "/pgdata/"+cluster()+"/postgresql.conf", - ).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_clone_test.go b/testing/pgo_cli/cluster_clone_test.go deleted file mode 100644 index 606139e820..0000000000 --- a/testing/pgo_cli/cluster_clone_test.go +++ /dev/null @@ -1,59 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -func TestClusterClone(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("clone", func(t *testing.T) { - t.Run("creates a copy of a cluster", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - requireStanzaExists(t, namespace(), cluster(), 2*time.Minute) - - // data in the origin cluster followed by a WAL flush - _, stderr := clusterPSQL(t, namespace(), cluster(), ` - CREATE TABLE original (data) AS VALUES ('one'), ('two'); - DO $$ BEGIN IF current_setting('server_version_num')::int > 100000 - THEN PERFORM pg_switch_wal(); - ELSE PERFORM pg_switch_xlog(); - END IF; END $$`) - require.Empty(t, stderr) - - output, err := pgo("clone", cluster(), "rex", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - defer teardownCluster(t, namespace(), "rex", time.Now()) - requireClusterReady(t, namespace(), "rex", 4*time.Minute) - - stdout, stderr := clusterPSQL(t, namespace(), "rex", `TABLE original`) - require.Empty(t, stderr) - require.Contains(t, stdout, "(2 rows)", - "expected original data to be present in the clone") - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_create_test.go b/testing/pgo_cli/cluster_create_test.go deleted file mode 100644 index 26f0c2be4f..0000000000 --- a/testing/pgo_cli/cluster_create_test.go +++ /dev/null @@ -1,63 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "regexp" - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -// TC41 ✓ -func TestClusterCreate(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - t.Run("create cluster", func(t *testing.T) { - t.Run("creates a workflow", func(t *testing.T) { - workflow := regexp.MustCompile(`(?m:^workflow id.*?(\S+)$)`) - - output, err := pgo("create", "cluster", "mycluster", "-n", namespace()).Exec(t) - defer teardownCluster(t, namespace(), "mycluster", time.Now()) - require.NoError(t, err) - require.Regexp(t, workflow, output, "expected pgo to show the workflow") - - _, err = pgo("show", "workflow", workflow.FindStringSubmatch(output)[1], "-n", namespace()).Exec(t) - require.NoError(t, err) - }) - }) - - withCluster(t, namespace, func(cluster func() string) { - t.Run("show cluster", func(t *testing.T) { - t.Run("shows something", func(t *testing.T) { - output, err := pgo("show", "cluster", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - }) - }) - - t.Run("show user", func(t *testing.T) { - t.Run("shows something", func(t *testing.T) { - output, err := pgo("show", "user", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_delete_test.go b/testing/pgo_cli/cluster_delete_test.go deleted file mode 100644 index cba99408f7..0000000000 --- a/testing/pgo_cli/cluster_delete_test.go +++ /dev/null @@ -1,104 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -// TC115 ✓ -// TC116 ✓ -// TC119 ✓ -func TestClusterDelete(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - t.Run("delete cluster", func(t *testing.T) { - t.Run("removes data and backups", func(t *testing.T) { - t.Parallel() - withCluster(t, namespace, func(cluster func() string) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - require.NotEmpty(t, clusterPVCs(t, namespace(), cluster()), "expected data to exist") - - output, err := pgo("delete", "cluster", cluster(), "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "deleted") - - gone := func() bool { - return len(clusterPVCs(t, namespace(), cluster())) == 0 - } - requireWaitFor(t, gone, time.Minute, time.Second, - "timeout waiting for data of %q in %q", cluster(), namespace()) - - output, err = pgo("show", "cluster", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotContains(t, output, cluster()) - }) - }) - - t.Run("can keep backups", func(t *testing.T) { - t.Parallel() - withCluster(t, namespace, func(cluster func() string) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - require.NotEmpty(t, clusterPVCs(t, namespace(), cluster()), "expected data to exist") - - output, err := pgo("delete", "cluster", cluster(), "--keep-backups", "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "deleted") - - gone := func() bool { - return len(clusterPVCs(t, namespace(), cluster())) == 1 - } - requireWaitFor(t, gone, time.Minute, time.Second, - "timeout waiting for data of %q in %q", cluster(), namespace()) - - pvcs := clusterPVCs(t, namespace(), cluster()) - require.NotEmpty(t, pvcs) - require.Contains(t, pvcs[0].Name, "pgbr-repo") - - output, err = pgo("show", "cluster", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotContains(t, output, cluster()) - }) - }) - - t.Run("can keep data", func(t *testing.T) { - t.Parallel() - withCluster(t, namespace, func(cluster func() string) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - require.NotEmpty(t, clusterPVCs(t, namespace(), cluster()), "expected data to exist") - - output, err := pgo("delete", "cluster", cluster(), "--keep-data", "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "deleted") - - gone := func() bool { - return len(clusterPVCs(t, namespace(), cluster())) == 1 - } - requireWaitFor(t, gone, time.Minute, time.Second, - "timeout waiting for data of %q in %q", cluster(), namespace()) - - pvcs := clusterPVCs(t, namespace(), cluster()) - require.NotEmpty(t, pvcs) - require.Equal(t, cluster(), pvcs[0].Name) - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_df_test.go b/testing/pgo_cli/cluster_df_test.go deleted file mode 100644 index 8171a7aa45..0000000000 --- a/testing/pgo_cli/cluster_df_test.go +++ /dev/null @@ -1,42 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -// TC44 ✓ -func TestClusterDF(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("df", func(t *testing.T) { - t.Run("shows something", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - - output, err := pgo("df", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_failover_test.go b/testing/pgo_cli/cluster_failover_test.go deleted file mode 100644 index d35a6d87f5..0000000000 --- a/testing/pgo_cli/cluster_failover_test.go +++ /dev/null @@ -1,106 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "strings" - "sync" - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -// TC48 ✓ -// TC99 ✓ -// TC100 ✓ -// TC101 ✓ -// TC102 ✓ -// TC103 ✓ -func TestClusterFailover(t *testing.T) { - t.Parallel() - - var replicaOnce sync.Once - requireReplica := func(t *testing.T, namespace, cluster string) { - replicaOnce.Do(func() { - _, err := pgo("scale", cluster, "--no-prompt", "-n", namespace).Exec(t) - require.NoError(t, err) - requireReplicasReady(t, namespace, cluster, 3*time.Minute) - }) - } - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("failover", func(t *testing.T) { - t.Run("shows replicas", func(t *testing.T) { - requireReplica(t, namespace(), cluster()) - - pods := replicaPods(t, namespace(), cluster()) - require.NotEmpty(t, pods, "expected replicas to exist") - - output, err := pgo("failover", cluster(), "-n", namespace(), - "--query", - ).Exec(t) - require.NoError(t, err) - require.Contains(t, output, pods[0].Labels["deployment-name"]) - }) - - t.Run("swaps primary with replica", func(t *testing.T) { - requireReplica(t, namespace(), cluster()) - - before := replicaPods(t, namespace(), cluster()) - require.NotEmpty(t, before, "expected replicas to exist") - - output, err := pgo("failover", cluster(), "-n", namespace(), - "--target="+before[0].Labels["deployment-name"], "--no-prompt", - ).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "created") - - replaced := func() bool { - after := replicaPods(t, namespace(), cluster()) - return len(after) > 0 && - after[0].Labels["deployment-name"] != before[0].Labels["deployment-name"] - } - requireWaitFor(t, replaced, time.Minute, time.Second, - "timeout waiting for failover of %q in %q", cluster(), namespace()) - - requireReplicasReady(t, namespace(), cluster(), 5*time.Second) - - { - var stdout, stderr string - streaming := func() bool { - primaries := primaryPods(t, namespace(), cluster()) - require.Len(t, primaries, 1) - - stdout, stderr, err = TestContext.Kubernetes.PodExec( - primaries[0].Namespace, primaries[0].Name, - strings.NewReader(`SELECT to_json(pg_stat_replication) FROM pg_stat_replication`), - "psql", "-U", "postgres", "-f-") - require.NoError(t, err) - require.Empty(t, stderr) - - return strings.Contains(stdout, `"state":"streaming"`) - } - if !waitFor(t, streaming, 10*time.Second, time.Second) { - require.Contains(t, stdout, `"state":"streaming"`) - } - } - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_label_test.go b/testing/pgo_cli/cluster_label_test.go deleted file mode 100644 index 0f54f7af93..0000000000 --- a/testing/pgo_cli/cluster_label_test.go +++ /dev/null @@ -1,63 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "testing" - - "github.com/stretchr/testify/require" -) - -// TC42 ✓ -// TC115 ✓ -func TestClusterLabel(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("label", func(t *testing.T) { - t.Run("modifies the cluster", func(t *testing.T) { - output, err := pgo("label", cluster(), "--label=villain=hordak", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "applied") - - output, err = pgo("show", "cluster", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "villain=hordak") - - output, err = pgo("show", "cluster", "--selector=villain=hordak", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, cluster()) - }) - }) - - t.Run("delete label", func(t *testing.T) { - t.Run("modifies the cluster", func(t *testing.T) { - _, err := pgo("label", cluster(), "--label=etheria=yes", "-n", namespace()).Exec(t) - require.NoError(t, err) - - output, err := pgo("delete", "label", cluster(), "--label=etheria=yes", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "deleting") - - output, err = pgo("show", "cluster", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotContains(t, output, "etheria=yes") - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_pgbouncer_test.go b/testing/pgo_cli/cluster_pgbouncer_test.go deleted file mode 100644 index 9c5b72ba66..0000000000 --- a/testing/pgo_cli/cluster_pgbouncer_test.go +++ /dev/null @@ -1,102 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "strings" - "sync" - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -// TC51 ✓ -func TestClusterPgBouncer(t *testing.T) { - t.Parallel() - - var pgbouncerOnce sync.Once - requirePgBouncer := func(t *testing.T, namespace, cluster string) { - pgbouncerOnce.Do(func() { - output, err := pgo("create", "pgbouncer", cluster, "-n", namespace).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "added") - }) - } - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("create pgbouncer", func(t *testing.T) { - t.Run("starts PgBouncer", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - requirePgBouncer(t, namespace(), cluster()) - - // PgBouncer does not appear immediately. 
- requirePgBouncerReady(t, namespace(), cluster(), time.Minute) - - output, err := pgo("show", "cluster", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "pgbouncer") - - output, err = pgo("test", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "pgbouncer", "expected PgBouncer to be discoverable") - - for _, line := range strings.Split(output, "\n") { - if strings.Contains(line, "pgbouncer") { - require.Contains(t, line, "UP", "expected PgBouncer to be accessible") - } - } - }) - }) - - t.Run("delete pgbouncer", func(t *testing.T) { - t.Run("stops PgBouncer", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - requirePgBouncer(t, namespace(), cluster()) - requirePgBouncerReady(t, namespace(), cluster(), time.Minute) - - output, err := pgo("delete", "pgbouncer", cluster(), "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "deleted") - - gone := func() bool { - deployments, err := TestContext.Kubernetes.ListDeployments(namespace(), map[string]string{ - "pg-cluster": cluster(), - "crunchy-pgbouncer": "true", - }) - require.NoError(t, err) - return len(deployments) == 0 - } - requireWaitFor(t, gone, time.Minute, time.Second, - "timeout waiting for PgBouncer of %q in %q", cluster(), namespace()) - - output, err = pgo("show", "cluster", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - - //require.NotContains(t, output, "pgbouncer") - for _, line := range strings.Split(output, "\n") { - // The service and deployment should be gone. The only remaining - // reference could be in the labels. - if strings.Contains(line, "pgbouncer") { - require.Contains(t, line, "pgbouncer=false") - } - } - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_policy_test.go b/testing/pgo_cli/cluster_policy_test.go deleted file mode 100644 index df7197deb4..0000000000 --- a/testing/pgo_cli/cluster_policy_test.go +++ /dev/null @@ -1,192 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "io/ioutil" - "strings" - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -// TC45 ✓ -// TC52 ✓ -// TC115 ✓ -func TestClusterPolicy(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - t.Run("create policy", func(t *testing.T) { - t.Run("requires argument", func(t *testing.T) { - t.Skip("BUG: exits zero") - output, err := pgo("create", "policy", "hello", "-n", namespace()).Exec(t) - require.Error(t, err) - require.Contains(t, output, "flags are required") - }) - - t.Run("keeps content", func(t *testing.T) { - const policyPath = "../testdata/policy1.sql" - policyContent, err := ioutil.ReadFile(policyPath) - if err != nil { - t.Fatalf("bug in test: %v", err) - } - - output, err := pgo("create", "policy", "hello", "--in-file="+policyPath, "-n", namespace()).Exec(t) - defer pgo("delete", "policy", "hello", "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - output, err = pgo("show", "policy", "hello", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "hello") - require.Contains(t, output, string(policyContent)) - }) - }) - - withCluster(t, namespace, func(cluster func() string) { - t.Run("apply", func(t *testing.T) { - t.Run("requires selector", func(t *testing.T) { - t.Skip("BUG: exits zero") - output, err := pgo("apply", "nope", "-n", namespace()).Exec(t) - require.Error(t, err) - require.Contains(t, output, "required") - }) - - t.Run("executes a policy", func(t *testing.T) { - t.Skip("BUG: how to choose a database") - const policyPath = "../testdata/policy1.sql" - - _, err := pgo("create", "policy", "p1-apply", "--in-file="+policyPath, "-n", namespace()).Exec(t) - defer pgo("delete", "policy", "p1-apply", "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - - requireClusterReady(t, namespace(), cluster(), time.Minute) - - output, err := pgo("apply", "p1-apply", "--selector=name="+cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - stdout, stderr := clusterPSQL(t, namespace(), cluster(), ` - \c userdb - \dt policy1 - `) - require.Empty(t, stderr) - require.Contains(t, stdout, "(1 row)") - }) - }) - - t.Run("create schedule", func(t *testing.T) { - t.Run("executes a policy", func(t *testing.T) { - t.Skip("BUG: how to choose a database") - const policyPath = "../testdata/policy2-setup.sql" - const insertPath = "../testdata/policy2-insert.sql" - - _, err := pgo("create", "policy", "p2-schedule-setup", "--in-file="+policyPath, "-n", namespace()).Exec(t) - defer pgo("delete", "policy", "p2-schedule-setup", "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - - _, err = pgo("create", "policy", "p2-schedule-insert", "--in-file="+insertPath, "-n", namespace()).Exec(t) - defer pgo("delete", "policy", "p2-schedule-insert", "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - - output, err := pgo("create", "schedule", "--selector=name="+cluster(), "-n", namespace(), - "--schedule-type=policy", "--schedule=* * * * *", "--policy=p2-schedule-insert", - "--database=userdb", "--secret="+cluster()+"-postgres-secret", - ).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "created") - - output, err = pgo("show", "schedule", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "p2-schedule-insert") - - requireClusterReady(t, namespace(), cluster(), time.Minute) - - _, err = pgo("apply", "p2-schedule-setup", 
"--selector=name="+cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - - executed := func() bool { - stdout, stderr := clusterPSQL(t, namespace(), cluster(), ` - \c userdb - TABLE policy2; - `) - return len(stderr) == 0 && !strings.Contains(stdout, "(0 rows)") - } - requireWaitFor(t, executed, 75*time.Second, time.Second, - "timeout waiting for policy to execute on %q in %q", cluster(), namespace()) - }) - }) - - t.Run("delete schedule", func(t *testing.T) { - requirePolicyAndSchedule := func(t *testing.T, policy string) { - const policyPath = "../testdata/policy1.sql" - - _, err := pgo("create", "policy", policy, "--in-file="+policyPath, "-n", namespace()).Exec(t) - require.NoError(t, err) - - _, err = pgo("create", "schedule", "--selector=name="+cluster(), "-n", namespace(), - "--schedule-type=policy", "--schedule=* * * * *", "--policy="+policy, - "--database=userdb", "--secret="+cluster()+"-postgres-secret", - ).Exec(t) - require.NoError(t, err) - } - - t.Run("removes all schedules", func(t *testing.T) { - requirePolicyAndSchedule(t, "p1-delete-all") - requirePolicyAndSchedule(t, "p2-delete-all") - defer pgo("delete", "policy", "p1-delete-all", "--no-prompt", "-n", namespace()).Exec(t) - defer pgo("delete", "policy", "p2-delete-all", "--no-prompt", "-n", namespace()).Exec(t) - - output, err := pgo("delete", "schedule", cluster(), "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "deleted") - require.Contains(t, output, "p1-delete-all") - require.Contains(t, output, "p2-delete-all") - - output, err = pgo("show", "schedule", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotContains(t, output, "p1-delete-all") - require.NotContains(t, output, "p2-delete-all") - }) - - t.Run("accepts schedule name", func(t *testing.T) { - requirePolicyAndSchedule(t, "p1-delete-one") - requirePolicyAndSchedule(t, "p2-delete-one") - defer pgo("delete", "policy", "p1-delete-one", "--no-prompt", "-n", namespace()).Exec(t) - defer pgo("delete", "policy", "p2-delete-one", "--no-prompt", "-n", namespace()).Exec(t) - defer pgo("delete", "schedule", "-n", namespace(), - "--schedule-name="+cluster()+"-policy-p2-delete-one", "--no-prompt", - ).Exec(t) - - output, err := pgo("delete", "schedule", "-n", namespace(), - "--schedule-name="+cluster()+"-policy-p1-delete-one", "--no-prompt", - ).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "deleted") - require.Contains(t, output, "p1-delete-one") - require.NotContains(t, output, "p2-delete-one") - - output, err = pgo("show", "schedule", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotContains(t, output, "p1-delete-one") - require.Contains(t, output, "p2-delete-one") - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_pvc_test.go b/testing/pgo_cli/cluster_pvc_test.go deleted file mode 100644 index bd91b28435..0000000000 --- a/testing/pgo_cli/cluster_pvc_test.go +++ /dev/null @@ -1,41 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -func TestClusterPVC(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("show pvc", func(t *testing.T) { - t.Run("shows something", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - - output, err := pgo("show", "pvc", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, cluster()) - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_reload_test.go b/testing/pgo_cli/cluster_reload_test.go deleted file mode 100644 index e2900ee4fb..0000000000 --- a/testing/pgo_cli/cluster_reload_test.go +++ /dev/null @@ -1,71 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "strings" - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -func TestClusterReload(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("reload", func(t *testing.T) { - t.Run("applies PostgreSQL configuration", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - - _, stderr := clusterPSQL(t, namespace(), cluster(), - `ALTER SYSTEM SET checkpoint_completion_target = 1`) - require.Empty(t, stderr) - - stdout, stderr := clusterPSQL(t, namespace(), cluster(), ` - SELECT name, s.setting, fs.setting - FROM pg_settings s JOIN pg_file_settings fs USING (name) - WHERE name = 'checkpoint_completion_target' - AND (fs.sourcefile, fs.sourceline, fs.setting) - IS NOT DISTINCT FROM (s.sourcefile, s.sourceline, s.setting) - `) - require.Empty(t, stderr) - require.Contains(t, stdout, "(0 rows)", - "bug in test: expected ALTER SYSTEM to change settings") - - output, err := pgo("reload", cluster(), "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "reload") - - applied := func() bool { - stdout, stderr := clusterPSQL(t, namespace(), cluster(), ` - SELECT name, s.setting, fs.setting - FROM pg_settings s JOIN pg_file_settings fs USING (name) - WHERE name = 'checkpoint_completion_target' - AND (fs.sourcefile, fs.sourceline, fs.setting) - IS DISTINCT FROM (s.sourcefile, s.sourceline, s.setting) - `) - require.Empty(t, stderr) - return strings.Contains(stdout, "(0 rows)") - } - requireWaitFor(t, applied, 20*time.Second, time.Second, - "expected settings to take effect") - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_restart_test.go b/testing/pgo_cli/cluster_restart_test.go deleted file mode 100644 index 3f438b6570..0000000000 --- a/testing/pgo_cli/cluster_restart_test.go +++ /dev/null @@ -1,249 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. 
- Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "encoding/json" - "fmt" - "sync" - "testing" - "time" - - "github.com/stretchr/testify/require" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" -) - -func TestRestart(t *testing.T) { - t.Parallel() - - var replicaOnce sync.Once - requireReplica := func(t *testing.T, namespace, cluster string) { - replicaOnce.Do(func() { - _, err := pgo("scale", cluster, "--no-prompt", "-n", namespace).Exec(t) - require.NoError(t, err) - requireReplicasReady(t, namespace, cluster, 5*time.Minute) - }) - } - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - - t.Run("restart", func(t *testing.T) { - - t.Run("query instances", func(t *testing.T) { - // require a single replica - requireReplica(t, namespace(), cluster()) - - primaries := primaryPods(t, namespace(), cluster()) - // only a single primary is expected - require.Len(t, primaries, 1) - - replicas := replicaPods(t, namespace(), cluster()) - require.NotEmpty(t, replicas, "expected replica to exist") - // only a single replica is expected - require.Len(t, replicas, 1) - - // query for restart targets, - output, err := pgo("restart", cluster(), "-n", namespace(), - "--query", - ).Exec(t) - require.NoError(t, err) - - require.Contains(t, output, primaries[0].Labels["deployment-name"]) - require.Contains(t, output, replicas[0].Labels["deployment-name"]) - }) - - type restartTargetSpec struct { - Name string - PendingRestart bool - } - - type queryRestartOutput struct { - Results []restartTargetSpec - } - - t.Run("apply config changes cluster", func(t *testing.T) { - - // require a single replica - requireReplica(t, namespace(), cluster()) - - // wait for DCS config to populate - hasDCSConfig := func() bool { - clusterConf, err := TestContext.Kubernetes.Client.CoreV1().ConfigMaps(namespace()). 
- Get(fmt.Sprintf("%s-pgha-config", cluster()), metav1.GetOptions{}) - require.NoError(t, err) - _, ok := clusterConf.Data[fmt.Sprintf("%s-dcs-config", cluster())] - return ok - } - // wait for the primary and replica to show a pending restart - requireWaitFor(t, hasDCSConfig, time.Minute, time.Second, - "timeout waiting for the DCS config to populate in ConfigMap %s-pgha-config", - cluster()) - - restartQueryCMD := []string{"restart", cluster(), "-n", namespace(), "--query", - "-o", "json"} - - // query for restart targets - output, err := pgo(restartQueryCMD...).Exec(t) - require.NoError(t, err) - - queryOutput := queryRestartOutput{} - err = json.Unmarshal([]byte(output), &queryOutput) - require.NoError(t, err) - - // should return the primary and replica - require.NotEmpty(t, queryOutput.Results) - - // check that the primary is accounted for - for _, queryResult := range queryOutput.Results { - require.False(t, queryResult.PendingRestart) - } - - // now update a PG setting - updatePGConfigDCS(t, cluster(), namespace(), - map[string]string{"unix_socket_directories": "/tmp,/crunchyadm,/tmp/e2e"}) - - requiresRestartPrimaryReplica := func() bool { - output, err := pgo(restartQueryCMD...).Exec(t) - require.NoError(t, err) - - queryOutput := queryRestartOutput{} - err = json.Unmarshal([]byte(output), &queryOutput) - require.NoError(t, err) - - for _, queryResult := range queryOutput.Results { - if queryResult.PendingRestart { - return true - } - } - return false - } - // wait for the primary and replica to show a pending restart - requireWaitFor(t, requiresRestartPrimaryReplica, time.Minute, time.Second, - "timeout waiting for all instances in cluster %s in namespace %s "+ - "to show a pending restart", cluster(), namespace()) - - // now restart the cluster - _, err = pgo("restart", cluster(), "-n", namespace(), "--no-prompt").Exec(t) - require.NoError(t, err) - - output, err = pgo(restartQueryCMD...).Exec(t) - require.NoError(t, err) - - queryOutput = queryRestartOutput{} - err = json.Unmarshal([]byte(output), &queryOutput) - require.NoError(t, err) - - require.NotEmpty(t, queryOutput.Results) - - // ensure pending restarts are no longer required - for _, queryResult := range queryOutput.Results { - require.False(t, queryResult.PendingRestart) - } - }) - - t.Run("apply config changes primary", func(t *testing.T) { - - // require a single replica - requireReplica(t, namespace(), cluster()) - - primaries := primaryPods(t, namespace(), cluster()) - // only a single primary is expected - require.Len(t, primaries, 1) - - // wait for DCS config to populate - hasDCSConfig := func() bool { - clusterConf, err := TestContext.Kubernetes.Client.CoreV1().ConfigMaps(namespace()). 
- Get(fmt.Sprintf("%s-pgha-config", cluster()), metav1.GetOptions{}) - require.NoError(t, err) - _, ok := clusterConf.Data[fmt.Sprintf("%s-dcs-config", cluster())] - return ok - } - // wait for the primary and replica to show a pending restart - requireWaitFor(t, hasDCSConfig, time.Minute, time.Second, - "timeout waiting for the DCS config to populate in ConfigMap %s-pgha-config", - cluster()) - - restartQueryCMD := []string{"restart", cluster(), "-n", namespace(), "--query", - "-o", "json"} - - // query for restart targets - output, err := pgo(restartQueryCMD...).Exec(t) - require.NoError(t, err) - - queryOutput := queryRestartOutput{} - err = json.Unmarshal([]byte(output), &queryOutput) - require.NoError(t, err) - - // query should return the primary and replica - require.NotEmpty(t, queryOutput.Results) - - // check that the primary is accounted for - for _, queryResult := range queryOutput.Results { - require.False(t, queryResult.PendingRestart) - } - - // now update a PG setting - updatePGConfigDCS(t, cluster(), namespace(), - map[string]string{"max_wal_senders": "8"}) - - requiresRestartPrimary := func() bool { - output, err := pgo(restartQueryCMD...).Exec(t) - require.NoError(t, err) - - queryOutput := queryRestartOutput{} - err = json.Unmarshal([]byte(output), &queryOutput) - require.NoError(t, err) - - for _, queryResult := range queryOutput.Results { - if queryResult.Name == primaries[0].Labels["deployment-name"] && - queryResult.PendingRestart { - return true - } - } - return false - } - // wait for the primary to show a pending restart - requireWaitFor(t, requiresRestartPrimary, time.Minute, time.Second, - "timeout waiting for primary in cluster %s namespace %s) "+ - "to show a pending restart", cluster(), namespace()) - - // now restart the cluster - _, err = pgo("restart", cluster(), "-n", namespace(), "--no-prompt", - "--target", primaries[0].Labels["deployment-name"]).Exec(t) - require.NoError(t, err) - - output, err = pgo(restartQueryCMD...).Exec(t) - require.NoError(t, err) - - queryOutput = queryRestartOutput{} - err = json.Unmarshal([]byte(output), &queryOutput) - require.NoError(t, err) - - require.NotEmpty(t, queryOutput.Results) - - // ensure pending restarts are no longer required - for _, queryResult := range queryOutput.Results { - if queryResult.Name == primaries[0].Labels["deployment-name"] { - require.False(t, queryResult.PendingRestart) - } - } - }) - }) - }) - }) - -} diff --git a/testing/pgo_cli/cluster_scale_test.go b/testing/pgo_cli/cluster_scale_test.go deleted file mode 100644 index 11ce9a9c21..0000000000 --- a/testing/pgo_cli/cluster_scale_test.go +++ /dev/null @@ -1,45 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -// TC47 ✓ -// TC49 ✓ -func TestClusterScale(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("scale", func(t *testing.T) { - t.Run("creates replica", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - - output, err := pgo("scale", cluster(), "--no-prompt", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - requireReplicasReady(t, namespace(), cluster(), 2*time.Minute) - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_scaledown_test.go b/testing/pgo_cli/cluster_scaledown_test.go deleted file mode 100644 index f1926a4d4d..0000000000 --- a/testing/pgo_cli/cluster_scaledown_test.go +++ /dev/null @@ -1,77 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "sync" - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -func TestClusterScaleDown(t *testing.T) { - t.Parallel() - - var replicaOnce sync.Once - requireReplica := func(t *testing.T, namespace, cluster string) { - replicaOnce.Do(func() { - _, err := pgo("scale", cluster, "--no-prompt", "-n", namespace).Exec(t) - require.NoError(t, err) - requireReplicasReady(t, namespace, cluster, 3*time.Minute) - }) - } - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("scaledown", func(t *testing.T) { - t.Run("shows replicas", func(t *testing.T) { - requireReplica(t, namespace(), cluster()) - - pods := replicaPods(t, namespace(), cluster()) - require.NotEmpty(t, pods, "expected replicas to exist") - - output, err := pgo("scaledown", cluster(), "-n", namespace(), - "--query", - ).Exec(t) - require.NoError(t, err) - require.Contains(t, output, pods[0].Labels["deployment-name"]) - }) - - t.Run("removes one replica", func(t *testing.T) { - requireReplica(t, namespace(), cluster()) - - before := replicaPods(t, namespace(), cluster()) - require.NotEmpty(t, before, "expected replicas to exist") - - output, err := pgo("scaledown", cluster(), "-n", namespace(), - "--target="+before[0].Labels["deployment-name"], "--no-prompt", - ).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "deleted") - require.Contains(t, output, before[0].Labels["deployment-name"]) - - gone := func() bool { - after := replicaPods(t, namespace(), cluster()) - return len(before) != len(after) - } - requireWaitFor(t, gone, time.Minute, time.Second, - "timeout waiting for replica of %q in %q", cluster(), namespace()) - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_test_test.go b/testing/pgo_cli/cluster_test_test.go deleted file mode 100644 index 153d47f467..0000000000 --- a/testing/pgo_cli/cluster_test_test.go +++ /dev/null @@ -1,56 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. 
- Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "strings" - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -// TC126 ✓ -func TestClusterTest(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("test", func(t *testing.T) { - t.Run("shows something immediately", func(t *testing.T) { - output, err := pgo("test", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, cluster()) - }) - - t.Run("detects the cluster eventually", func(t *testing.T) { - var output string - var err error - - check := func() bool { - output, err = pgo("test", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - return strings.Contains(output, "UP") - } - - if !check() && !waitFor(t, check, time.Minute, time.Second) { - require.Contains(t, output, "UP") - } - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/cluster_user_test.go b/testing/pgo_cli/cluster_user_test.go deleted file mode 100644 index 9e59757a9a..0000000000 --- a/testing/pgo_cli/cluster_user_test.go +++ /dev/null @@ -1,318 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "encoding/json" - "regexp" - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -// TC127 ✓ -func TestClusterUser(t *testing.T) { - t.Parallel() - - clusterDatabase := func(t *testing.T, namespace, cluster string) string { - t.Helper() - - names := clusterDatabases(t, namespace, cluster) - if len(names) > 0 && names[0] == "postgres" { - names = names[:1] - } - require.NotEmpty(t, names, "expected database to exist") - return names[0] - } - - showPassword := func(t *testing.T, namespace, cluster, user string) string { - t.Helper() - - output, err := pgo("show", "user", cluster, "-n", namespace, "--output=json").Exec(t) - require.NoError(t, err) - - var response struct{ Results []map[string]interface{} } - require.NoError(t, json.Unmarshal([]byte(output), &response)) - - for _, result := range response.Results { - if result["Username"].(string) == user { - return result["Password"].(string) - } - } - return "" - } - - withNamespace(t, func(namespace func() string) { - withCluster(t, namespace, func(cluster func() string) { - t.Run("show user", func(t *testing.T) { - t.Run("shows something", func(t *testing.T) { - requireClusterReady(t, namespace(), cluster(), time.Minute) - - output, err := pgo("show", "user", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, cluster()) - }) - }) - - t.Run("create user", func(t *testing.T) { - t.Run("accepts password", func(t *testing.T) { - t.Parallel() - requireClusterReady(t, namespace(), cluster(), time.Minute) - db := clusterDatabase(t, namespace(), cluster()) - - output, err := pgo("create", "user", cluster(), "-n", namespace(), - "--username=gandalf", "--password=wizard", "--managed", - ).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - // Connect using the above credentials - pool := clusterConnection(t, namespace(), cluster(), - "user=gandalf password=wizard database="+db) - pool.Close() - }) - - t.Run("accepts selector", func(t *testing.T) { - t.Parallel() - requireClusterReady(t, namespace(), cluster(), time.Minute) - db := clusterDatabase(t, namespace(), cluster()) - - output, err := pgo("create", "user", - "--selector=name="+cluster(), "-n", namespace(), - "--username=samwise", "--password=hobbit", "--managed", - ).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - // Connect using the above credentials - pool := clusterConnection(t, namespace(), cluster(), - "user=samwise password=hobbit database="+db) - pool.Close() - }) - - t.Run("generates password", func(t *testing.T) { - t.Parallel() - requireClusterReady(t, namespace(), cluster(), time.Minute) - db := clusterDatabase(t, namespace(), cluster()) - password := regexp.MustCompile(`\s+gimli\s+(\S+)`) - - output, err := pgo("create", "user", cluster(), "-n", namespace(), - "--username=gimli", "--password-length=16", "--managed", - ).Exec(t) - require.NoError(t, err) - require.Regexp(t, password, output, "expected pgo to show the generated password") - - // Connect using the above credentials - pool := clusterConnection(t, namespace(), cluster(), - "user=gimli password="+password.FindStringSubmatch(output)[1]+" database="+db) - pool.Close() - }) - - t.Run("does not keep password", func(t *testing.T) { - t.Parallel() - requireClusterReady(t, namespace(), cluster(), time.Minute) - - output, err := pgo("create", "user", cluster(), "-n", namespace(), - "--username=arwen", "--valid-days=60", - ).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - _, err = pgo("show", 
"user", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Empty(t, showPassword(t, namespace(), cluster(), "arwen")) - - stdout, stderr := clusterPSQL(t, namespace(), cluster(), `\du arwen`) - require.Empty(t, stderr) - require.Contains(t, stdout, "arwen", "expected user to exist") - }) - }) - - t.Run("update user", func(t *testing.T) { - t.Run("changes password", func(t *testing.T) { - t.Parallel() - requireClusterReady(t, namespace(), cluster(), time.Minute) - db := clusterDatabase(t, namespace(), cluster()) - - _, err := pgo("create", "user", cluster(), "-n", namespace(), - "--username=howl", "--password=wizard", - ).Exec(t) - require.NoError(t, err) - - output, err := pgo("update", "user", cluster(), "-n", namespace(), - "--username=howl", "--password=jenkins", - ).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - // Connect using the above credentials - pool := clusterConnection(t, namespace(), cluster(), - "user=howl password=jenkins database="+db) - pool.Close() - }) - - t.Run("changes expiration", func(t *testing.T) { - t.Parallel() - requireClusterReady(t, namespace(), cluster(), time.Minute) - db := clusterDatabase(t, namespace(), cluster()) - - _, err := pgo("create", "user", cluster(), "-n", namespace(), - "--username=sophie", "--password=hatter", - ).Exec(t) - require.NoError(t, err) - - { - output, err := pgo("update", "user", cluster(), "-n", namespace(), - "--username=sophie", "--valid-days=10", - ).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - // Connect using the above credentials - pool := clusterConnection(t, namespace(), cluster(), - "user=sophie password=hatter database="+db) - pool.Close() - - stdout, stderr := clusterPSQL(t, namespace(), cluster(), `\du sophie`) - require.Empty(t, stderr) - require.Contains(t, stdout, time.Now().AddDate(0, 0, 10).Format("2006-01-02"), - "expected expiry to be set") - } - - { - _, err := pgo("update", "user", cluster(), "-n", namespace(), - "--username=sophie", "--expire-user", - ).Exec(t) - require.NoError(t, err) - - stdout, stderr := clusterPSQL(t, namespace(), cluster(), `\du sophie`) - require.Empty(t, stderr) - require.Contains(t, stdout, "-infinity", "expected to find an expiry") - - expiry := regexp.MustCompile(`\d{4}-\d{2}-\d{2}`).FindString(stdout) - require.Less(t, expiry, time.Now().Format("2006-01-02"), - "expected expiry to have passed") - } - }) - - t.Run("generates password", func(t *testing.T) { - t.Skip("BUG: --expired silently requires --password-length") - t.Parallel() - requireClusterReady(t, namespace(), cluster(), time.Minute) - db := clusterDatabase(t, namespace(), cluster()) - - _, err := pgo("create", "user", cluster(), "-n", namespace(), - "--username=calcifer", "--valid-days=2", "--managed", - ).Exec(t) - require.NoError(t, err) - - original := showPassword(t, namespace(), cluster(), "calcifer") - - _, err = pgo("update", "user", cluster(), "-n", namespace(), - "--expired=5", - ).Exec(t) - require.NoError(t, err) - - generated := showPassword(t, namespace(), cluster(), "calcifer") - require.True(t, original != generated, - "expected password to be regenerated") - - // Connect using the above credentials - pool := clusterConnection(t, namespace(), cluster(), - "user=calcifer password="+generated+" database="+db) - pool.Close() - }) - }) - - t.Run("delete user", func(t *testing.T) { - t.Run("removes managed", func(t *testing.T) { - t.Parallel() - requireClusterReady(t, namespace(), cluster(), time.Minute) - - _, err := pgo("create", "user", 
cluster(), "-n", namespace(), - "--username=ged", "--managed", - ).Exec(t) - require.NoError(t, err) - - output, err := pgo("delete", "user", cluster(), "-n", namespace(), - "--username=ged", "--no-prompt", - ).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - _, err = pgo("show", "user", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Empty(t, showPassword(t, namespace(), cluster(), "ged"), - "expected pgo to forget about this user") - - stdout, stderr := clusterPSQL(t, namespace(), cluster(), `\du ged`) - require.Empty(t, stderr) - require.NotRegexp(t, `\bged\b`, stdout, - "expected user to be removed") - }) - - t.Run("removes unmanaged", func(t *testing.T) { - t.Parallel() - requireClusterReady(t, namespace(), cluster(), time.Minute) - - _, err := pgo("create", "user", cluster(), "-n", namespace(), - "--username=ogion", - ).Exec(t) - require.NoError(t, err) - - output, err := pgo("delete", "user", cluster(), "-n", namespace(), - "--username=ogion", "--no-prompt", - ).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - stdout, stderr := clusterPSQL(t, namespace(), cluster(), `\du ogion`) - require.Empty(t, stderr) - require.NotRegexp(t, `\bogion\b`, stdout, - "expected user to be removed") - }) - - t.Run("accepts selector", func(t *testing.T) { - t.Parallel() - requireClusterReady(t, namespace(), cluster(), time.Minute) - - _, err := pgo("create", "user", cluster(), "-n", namespace(), - "--username=vetch", "--managed", - ).Exec(t) - require.NoError(t, err) - - output, err := pgo("delete", "user", - "--selector=name="+cluster(), "-n", namespace(), - "--username=vetch", "--no-prompt", - ).Exec(t) - require.NoError(t, err) - require.NotEmpty(t, output) - - _, err = pgo("show", "user", cluster(), "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Empty(t, showPassword(t, namespace(), cluster(), "vetch"), - "expected pgo to forget about this user") - - stdout, stderr := clusterPSQL(t, namespace(), cluster(), `\du vetch`) - require.Empty(t, stderr) - require.NotRegexp(t, `\bvetch\b`, stdout, - "expected user to be removed") - }) - }) - }) - }) -} diff --git a/testing/pgo_cli/operator_namespace_test.go b/testing/pgo_cli/operator_namespace_test.go deleted file mode 100644 index ef327c2ece..0000000000 --- a/testing/pgo_cli/operator_namespace_test.go +++ /dev/null @@ -1,36 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
-*/ - -import ( - "testing" - - "github.com/stretchr/testify/require" -) - -func TestOperatorNamespace(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace func() string) { - t.Run("show namespace", func(t *testing.T) { - t.Run("shows something", func(t *testing.T) { - output, err := pgo("show", "namespace", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, namespace()) - }) - }) - }) -} diff --git a/testing/pgo_cli/operator_rbac_test.go b/testing/pgo_cli/operator_rbac_test.go deleted file mode 100644 index 8fa4609894..0000000000 --- a/testing/pgo_cli/operator_rbac_test.go +++ /dev/null @@ -1,141 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "testing" - "time" - - "github.com/stretchr/testify/require" -) - -// TC110 ✓ -func TestOperatorRBAC(t *testing.T) { - t.Parallel() - - withNamespace(t, func(namespace1 func() string) { - withNamespace(t, func(namespace2 func() string) { - t.Run("create pgouser", func(t *testing.T) { - t.Run("uses namespaces", func(t *testing.T) { - var err error - _, err = pgo("create", "pgouser", "heihei", - "--pgouser-namespaces="+namespace1()+","+namespace2(), - "--pgouser-password=moana", - "--pgouser-roles=pgoadmin", - ).Exec(t) - require.NoError(t, err) - defer pgo("delete", "pgouser", "heihei", "--no-prompt").Exec(t) - - var output string - output, err = pgo("create", "pgouser", "maui", - "--pgouser-namespaces=pgo-test-does-not-exist", - "--pgouser-password=demigod", - "--pgouser-roles=pgoadmin", - ).Exec(t) - require.Error(t, err) - require.Contains(t, output, "not watched by") - - _, err = pgo("create", "pgouser", "pua", - "--all-namespaces", - "--pgouser-password=tafiti", - "--pgouser-roles=pgoadmin", - ).Exec(t) - require.NoError(t, err) - defer pgo("delete", "pgouser", "pua", "--no-prompt").Exec(t) - - output, err = pgo("show", "pgouser", "--all").Exec(t) - require.NoError(t, err) - require.Contains(t, output, "heihei") - require.NotContains(t, output, "maui") - require.Contains(t, output, "pua") - }) - }) - - t.Run("create pgorole", func(t *testing.T) { - var err error - _, err = pgo("create", "pgorole", "junker", "--permissions=CreateCluster").Exec(t) - require.NoError(t, err) - defer pgo("delete", "pgorole", "junker", "--no-prompt").Exec(t) - - var output string - output, err = pgo("show", "pgorole", "--all").Exec(t) - require.NoError(t, err) - require.Contains(t, output, "junker") - }) - - t.Run("update pgouser", func(t *testing.T) { - t.Run("constrains actions", func(t *testing.T) { - var err error - - // initially pgoadmin - _, err = pgo("create", "pgouser", "heihei", - "--pgouser-namespaces="+namespace1()+","+namespace2(), - "--pgouser-password=moana", - "--pgouser-roles=pgoadmin", - ).Exec(t) - require.NoError(t, err) - defer pgo("delete", "pgouser", "heihei", "--no-prompt").Exec(t) - - _, err = pgo("create", "pgorole", "junker", "--permissions=CreateCluster").Exec(t) - require.NoError(t, err) - defer pgo("delete", "pgorole", 
"junker", "--no-prompt").Exec(t) - - // change to junker - _, err = pgo("update", "pgouser", "heihei", "--pgouser-roles=junker").Exec(t) - require.NoError(t, err) - - // allowed - _, err = pgo("create", "cluster", "test-permissions", "-n", namespace1()). - WithEnvironment("PGOUSERNAME", "heihei"). - WithEnvironment("PGOUSERPASS", "moana"). - Exec(t) - require.NoError(t, err) - defer teardownCluster(t, namespace1(), "test-permissions", time.Now()) - - // forbidden - _, err = pgo("update", "namespace", namespace2()). - WithEnvironment("PGOUSERNAME", "heihei"). - WithEnvironment("PGOUSERPASS", "moana"). - Exec(t) - require.Error(t, err) - }) - }) - - t.Run("delete pgouser", func(t *testing.T) { - var err error - _, err = pgo("create", "pgouser", "heihei", - "--pgouser-namespaces="+namespace1()+","+namespace2(), - "--pgouser-password=moana", - "--pgouser-roles=pgoadmin", - ).Exec(t) - require.NoError(t, err) - defer pgo("delete", "pgouser", "heihei", "--no-prompt").Exec(t) - - var output string - output, err = pgo("show", "pgouser", "--all").Exec(t) - require.NoError(t, err) - require.Contains(t, output, "heihei") - - _, err = pgo("delete", "pgouser", "heihei", "--no-prompt").Exec(t) - require.NoError(t, err) - - output, err = pgo("show", "pgouser", "--all").Exec(t) - require.NoError(t, err) - require.NotContains(t, output, "heihei") - }) - }) - }) -} diff --git a/testing/pgo_cli/operator_test.go b/testing/pgo_cli/operator_test.go deleted file mode 100644 index 743b872614..0000000000 --- a/testing/pgo_cli/operator_test.go +++ /dev/null @@ -1,53 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "testing" - - "github.com/stretchr/testify/require" -) - -// TC40 ✓ -func TestOperatorCommands(t *testing.T) { - t.Parallel() - - t.Run("version", func(t *testing.T) { - t.Run("reports the API version", func(t *testing.T) { - output, err := pgo("version").Exec(t) - require.NoError(t, err) - require.Contains(t, output, "pgo-apiserver version") - }) - }) - - withNamespace(t, func(namespace func() string) { - t.Run("status", func(t *testing.T) { - t.Run("shows something", func(t *testing.T) { - output, err := pgo("status", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "Total Volume Size") - }) - }) - - t.Run("show config", func(t *testing.T) { - t.Run("shows something", func(t *testing.T) { - output, err := pgo("show", "config", "-n", namespace()).Exec(t) - require.NoError(t, err) - require.Contains(t, output, "PrimaryStorage") - }) - }) - }) -} diff --git a/testing/pgo_cli/suite_helpers_test.go b/testing/pgo_cli/suite_helpers_test.go deleted file mode 100644 index 4ec5ed613b..0000000000 --- a/testing/pgo_cli/suite_helpers_test.go +++ /dev/null @@ -1,461 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "context" - "encoding/json" - "fmt" - "net" - "os" - "strings" - "sync" - "testing" - "time" - - "github.com/crunchydata/postgres-operator/testing/kubeapi" - "github.com/jackc/pgx/v4/pgxpool" - "github.com/stretchr/testify/assert" - "github.com/stretchr/testify/require" - core_v1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/types" - "k8s.io/apiserver/pkg/storage/names" - "sigs.k8s.io/yaml" -) - -type Pool struct { - *kubeapi.Proxy - *pgxpool.Pool -} - -func (p *Pool) Close() error { p.Pool.Close(); return p.Proxy.Close() } - -// clusterConnection opens a PostgreSQL connection to a database pod. Any error -// will cause t to FailNow. -func clusterConnection(t testing.TB, namespace, cluster, dsn string) *Pool { - t.Helper() - - pods, err := TestContext.Kubernetes.ListPods(namespace, map[string]string{ - "pg-cluster": cluster, - "pgo-pg-database": "true", - }) - require.NoError(t, err) - require.NotEmpty(t, pods) - - proxy, err := TestContext.Kubernetes.PodPortForward(pods[0].Namespace, pods[0].Name, "5432") - require.NoError(t, err) - - host, port, err := net.SplitHostPort(proxy.LocalAddr()) - if err != nil { - proxy.Close() - require.NoError(t, err) - } - - pool, err := pgxpool.Connect(context.Background(), dsn+" host="+host+" port="+port) - if err != nil { - proxy.Close() - require.NoError(t, err) - } - - return &Pool{proxy, pool} -} - -// clusterDatabases returns the names of all non-template databases in cluster. -// Any error will cause t to FailNow. -func clusterDatabases(t testing.TB, namespace, cluster string) []string { - stdout, stderr := clusterPSQL(t, namespace, cluster, ` - \set QUIET yes - \pset format unaligned - \pset tuples_only yes - SELECT datname FROM pg_database WHERE NOT datistemplate; - `) - require.Empty(t, stderr) - return strings.FieldsFunc(stdout, func(c rune) bool { return c == '\r' || c == '\n' }) -} - -// clusterPSQL executes psql commands and/or SQL on a database pod. Any error -// will cause t to FailNow. -func clusterPSQL(t testing.TB, namespace, cluster, psql string) (string, string) { - t.Helper() - - pods, err := TestContext.Kubernetes.ListPods(namespace, map[string]string{ - "pg-cluster": cluster, - "pgo-pg-database": "true", - }) - require.NoError(t, err) - require.NotEmpty(t, pods) - - stdout, stderr, err := TestContext.Kubernetes.PodExec( - pods[0].Namespace, pods[0].Name, - strings.NewReader(psql), "psql", "-U", "postgres", "-f-") - require.NoError(t, err) - - return stdout, stderr -} - -// clusterPVCs returns a list of persistent volume claims for the cluster. Any -// error will cause t to FailNow. -func clusterPVCs(t testing.TB, namespace, cluster string) []core_v1.PersistentVolumeClaim { - t.Helper() - - pvcs, err := TestContext.Kubernetes.ListPVCs(namespace, map[string]string{ - "pg-cluster": cluster, - }) - require.NoError(t, err) - - return pvcs -} - -// primaryPods returns a list of PostgreSQL primary pods for the cluster. Any -// error will cause t to FailNow. 
-func primaryPods(t testing.TB, namespace, cluster string) []core_v1.Pod { - t.Helper() - - pods, err := TestContext.Kubernetes.ListPods(namespace, map[string]string{ - "pg-cluster": cluster, - "role": "master", - }) - require.NoError(t, err) - - return pods -} - -// replicaPods returns a list of PostgreSQL replica pods for the cluster. Any -// error will cause t to FailNow. -func replicaPods(t testing.TB, namespace, cluster string) []core_v1.Pod { - t.Helper() - - pods, err := TestContext.Kubernetes.ListPods(namespace, map[string]string{ - "pg-cluster": cluster, - "role": "replica", - }) - require.NoError(t, err) - - return pods -} - -// requireClusterReady waits until all deployments of cluster are ready. If -// timeout elapses or any error occurs, t will FailNow. -func requireClusterReady(t testing.TB, namespace, cluster string, timeout time.Duration) { - t.Helper() - - // Give up now if some part of setting up the cluster failed. - if t.Failed() || cluster == "" { - t.FailNow() - } - - ready := func() bool { - deployments, err := TestContext.Kubernetes.ListDeployments(namespace, map[string]string{ - "pg-cluster": cluster, - }) - require.NoError(t, err) - - if len(deployments) == 0 { - return false - } - - var database bool - for _, deployment := range deployments { - if *deployment.Spec.Replicas < 1 || - deployment.Status.ReadyReplicas != *deployment.Spec.Replicas || - deployment.Status.UpdatedReplicas != *deployment.Spec.Replicas { - return false - } - if deployment.Labels["pgo-pg-database"] == "true" { - database = true - } - } - return database - } - - if !ready() { - requireWaitFor(t, ready, timeout, time.Second, - "timeout waiting for %q in %q", cluster, namespace) - } -} - -// requirePgBouncerReady waits until all PgBouncer deployments for cluster are -// ready. If timeout elapses or any error occurs, t will FailNow. -func requirePgBouncerReady(t testing.TB, namespace, cluster string, timeout time.Duration) { - t.Helper() - - ready := func() bool { - deployments, err := TestContext.Kubernetes.ListDeployments(namespace, map[string]string{ - "pg-cluster": cluster, - "crunchy-pgbouncer": "true", - }) - require.NoError(t, err) - - if len(deployments) == 0 { - return false - } - for _, deployment := range deployments { - if *deployment.Spec.Replicas < 1 || - deployment.Status.ReadyReplicas != *deployment.Spec.Replicas || - deployment.Status.UpdatedReplicas != *deployment.Spec.Replicas { - return false - } - } - return true - } - - if !ready() { - requireWaitFor(t, ready, timeout, time.Second, - "timeout waiting for PgBouncer of %q in %q", cluster, namespace) - } -} - -// requireReplicasReady waits until all replicas of cluster are ready. If -// timeout elapses or any error occurs, t will FailNow. -func requireReplicasReady(t testing.TB, namespace, cluster string, timeout time.Duration) { - t.Helper() - - ready := func() bool { - pods := replicaPods(t, namespace, cluster) - - if len(pods) == 0 { - return false - } - for _, pod := range pods { - if !kubeapi.IsPodReady(pod) { - return false - } - } - return true - } - - if !ready() { - requireWaitFor(t, ready, timeout, time.Second, - "timeout waiting for replicas of %q in %q", cluster, namespace) - } -} - -// requireStanzaExists waits until pgBackRest reports the stanza is ok. If -// timeout elapses, t will FailNow. 
-func requireStanzaExists(t testing.TB, namespace, cluster string, timeout time.Duration) { - t.Helper() - - var err error - var output string - - ready := func() bool { - output, err = pgo("show", "backup", cluster, "-n", namespace).Exec(t) - return err == nil && strings.Contains(output, "status: ok") - } - - if !ready() { - requireWaitFor(t, ready, timeout, time.Second, - "timeout waiting for stanza of %q in %q:\n%s", cluster, namespace, output) - } -} - -// requireWaitFor calls condition every tick until it returns true. If timeout -// elapses, t will Logf message and args then FailNow. Condition runs in the -// current goroutine so that it may also call t.FailNow. -func requireWaitFor(t testing.TB, - condition func() bool, timeout, tick time.Duration, - message string, args ...interface{}, -) { - t.Helper() - - if !waitFor(t, condition, timeout, tick) { - t.Fatalf(message, args...) - } -} - -// teardownCluster deletes a cluster. It waits sufficiently long after created -// for the delete to go well. -func teardownCluster(t testing.TB, namespace, cluster string, created time.Time) { - minimum := TestContext.Scale(10 * time.Second) - - if elapsed := time.Since(created); elapsed < minimum { - time.Sleep(minimum - elapsed) - } - - _, err := pgo("delete", "cluster", cluster, "-n", namespace, "--no-prompt").Exec(t) - assert.NoError(t, err, "unable to tear down cluster %q in %q", cluster, namespace) -} - -// waitFor calls condition once every tick until it returns true. If timeout -// elapses or t Failed, waitFor returns false. Condition runs in the current -// goroutine so that it may also call t.FailNow. -func waitFor(t testing.TB, condition func() bool, timeout, tick time.Duration) bool { - t.Helper() - - ticker := time.NewTicker(tick) - defer ticker.Stop() - - timer := time.NewTimer(TestContext.Scale(timeout)) - defer timer.Stop() - - for { - select { - case <-timer.C: - return false - case <-ticker.C: - if condition() { - return true - } - if t.Failed() { - return false - } - } - } -} - -// withCluster calls during with a function that returns the name of an existing -// cluster. The cluster may not exist until that function is called. When during -// returns, the cluster might be destroyed. -func withCluster(t testing.TB, namespace func() string, during func(func() string)) { - t.Helper() - - var created time.Time - var name string - var once sync.Once - - defer func() { - if name != "" { - teardownCluster(t, namespace(), name, created) - } - }() - - during(func() string { - once.Do(func() { - generated := names.SimpleNameGenerator.GenerateName("pgo-test-") - _, err := pgo("create", "cluster", generated, "-n", namespace()).Exec(t) - - if assert.NoError(t, err) { - t.Logf("created cluster %q in %q", generated, namespace()) - created = time.Now() - name = generated - } - }) - return name - }) -} - -// withNamespace calls during with a function that returns the name of an -// existing namespace. The namespace may not exist until that function is -// called. When during returns, the namespace and all its contents are destroyed. -func withNamespace(t testing.TB, during func(func() string)) { - t.Helper() - - // Use the namespace specified in the environment. - if name := os.Getenv("PGO_NAMESPACE"); name != "" { - during(func() string { return name }) - return - } - - // Prepare to cleanup a namespace that might be created. 
- var namespace *core_v1.Namespace - var once sync.Once - - defer func() { - if namespace != nil { - err := TestContext.Kubernetes.DeleteNamespace(namespace.Name) - assert.NoErrorf(t, err, "unable to tear down namespace %q", namespace.Name) - } - }() - - during(func() string { - once.Do(func() { - ns, err := TestContext.Kubernetes.GenerateNamespace( - "pgo-test-", map[string]string{"pgo-test": kubeapi.SanitizeLabelValue(t.Name())}) - - if assert.NoError(t, err) { - namespace = ns - _, err = pgo("update", "namespace", namespace.Name).Exec(t) - assert.NoErrorf(t, err, "unable to take ownership of namespace %q", namespace.Name) - } - }) - - return namespace.Name - }) -} - -// updatePGConfigDCS updates PG configuration for cluster via its Distributed Configuration Store -// (DCS) according to the key/value pairs defined in the pgConfig map, specifically by updating -// the -pgha-config ConfigMap. Specifically, the configuration settings specified are -// applied to the entire cluster via the DCS configuration included within this the -// -pgha-config ConfigMap. -func updatePGConfigDCS(t testing.TB, clusterName, namespace string, pgConfig map[string]string) { - t.Helper() - - dcsConfigName := fmt.Sprintf("%s-dcs-config", clusterName) - - type postgresDCS struct { - Parameters map[string]interface{} `json:"parameters,omitempty"` - } - - type dcsConfig struct { - PostgreSQL *postgresDCS `json:"postgresql,omitempty"` - LoopWait interface{} `json:"loop_wait,omitempty"` - TTL interface{} `json:"ttl,omitempty"` - RetryTimeout interface{} `json:"retry_timeout,omitempty"` - MaximumLagOnFailover interface{} `json:"maximum_lag_on_failover,omitempty"` - MasterStartTimeout interface{} `json:"master_start_timeout,omitempty"` - SynchronousMode interface{} `json:"synchronous_mode,omitempty"` - SynchronousModeStrict interface{} `json:"synchronous_mode_strict,omitempty"` - StandbyCluster interface{} `json:"standby_cluster,omitempty"` - Slots interface{} `json:"slots,omitempty"` - } - - clusterConfig, err := TestContext.Kubernetes.Client.CoreV1().ConfigMaps(namespace). - Get(fmt.Sprintf("%s-pgha-config", clusterName), metav1.GetOptions{}) - require.NoError(t, err) - - dcsConf := &dcsConfig{} - if err := yaml.Unmarshal([]byte(clusterConfig.Data[dcsConfigName]), - dcsConf); err != nil { - } - require.NoError(t, err) - -newConf: - for newParamKey, newParamVal := range pgConfig { - for currParamKey := range dcsConf.PostgreSQL.Parameters { - // update setting if it already exists - if newParamKey == currParamKey { - dcsConf.PostgreSQL.Parameters[currParamKey] = newParamVal - // move to the next new setting provided - continue newConf - } - } - // add new setting if doesn't already exist - dcsConf.PostgreSQL.Parameters[newParamKey] = newParamVal - } - - content, err := yaml.Marshal(dcsConf) - require.NoError(t, err) - - jsonOpBytes, err := json.Marshal([]struct { - Op string `json:"op"` - Path string `json:"path"` - Value string `json:"value"` - }{{ - "replace", - fmt.Sprintf("/data/%s", dcsConfigName), - string(content), - }}) - require.NoError(t, err) - - if _, err := TestContext.Kubernetes.Client.CoreV1().ConfigMaps(namespace). 
- Patch(clusterConfig.GetName(), - types.JSONPatchType, jsonOpBytes); err != nil { - require.NoError(t, err) - } - -} diff --git a/testing/pgo_cli/suite_pgo_cmd_test.go b/testing/pgo_cli/suite_pgo_cmd_test.go deleted file mode 100644 index 91aec62228..0000000000 --- a/testing/pgo_cli/suite_pgo_cmd_test.go +++ /dev/null @@ -1,89 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "bytes" - "fmt" - "os" - "os/exec" - "testing" - "time" -) - -type pgoCmd struct { - cmd *exec.Cmd - timeout <-chan time.Time -} - -// pgo returns a builder of an invocation of the PostgreSQL Operator CLI. It -// inherits a copy of any variables set in TestContext.DefaultEnvironment and -// the calling environment. -func pgo(args ...string) *pgoCmd { - c := new(pgoCmd) - c.cmd = exec.Command("pgo", args...) - c.cmd.Env = append(c.cmd.Env, TestContext.DefaultEnvironment...) - c.cmd.Env = append(c.cmd.Env, os.Environ()...) - return c -} - -// WithEnvironment overrides a single environment variable of this invocation. -func (c *pgoCmd) WithEnvironment(key, value string) *pgoCmd { - c.cmd.Env = append(c.cmd.Env, key+"="+value) - return c -} - -// Exec invokes the PostgreSQL Operator CLI, returning its standard output and -// any error encountered. -func (c *pgoCmd) Exec(t testing.TB) (string, error) { - var stdout, stderr bytes.Buffer - - cmd := c.cmd - cmd.Stdout, cmd.Stderr = &stdout, &stderr - - //t.Logf("Running `%s %s`", cmd.Path, strings.Join(cmd.Args[1:], " ")) - if err := cmd.Start(); err != nil { - return "", fmt.Errorf( - "error starting %q: %v\nstdout:\n%v\nstderr:\n%v", - cmd.Path, err, stdout.String(), stderr.String()) - } - - chError := make(chan error, 1) - chTimeout := c.timeout - - if chTimeout == nil { - chTimeout = time.After(time.Minute) - } - - go func() { chError <- cmd.Wait() }() - select { - case err := <-chError: - if err != nil { - //if ee, ok := err.(*exec.ExitError); ok { - // t.Logf("rc: %v", ee.ProcessState.ExitCode()) - //} - return stdout.String(), fmt.Errorf( - "error running %q: %v\nstdout:\n%v\nstderr:\n%v", - cmd.Path, err, stdout.String(), stderr.String()) - } - case <-chTimeout: - cmd.Process.Kill() - return stdout.String(), fmt.Errorf( - "timed out waiting for %q:\nstdout:\n%v\nstderr:\n%v", - cmd.Path, stdout.String(), stderr.String()) - } - return stdout.String(), nil -} diff --git a/testing/pgo_cli/suite_test.go b/testing/pgo_cli/suite_test.go deleted file mode 100644 index 4f2056d08e..0000000000 --- a/testing/pgo_cli/suite_test.go +++ /dev/null @@ -1,106 +0,0 @@ -package pgo_cli_test - -/* - Copyright 2020 Crunchy Data Solutions, Inc. - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. -*/ - -import ( - "fmt" - "os" - "path/filepath" - "strconv" - "testing" - "time" - - "github.com/crunchydata/postgres-operator/testing/kubeapi" -) - -func TestMain(m *testing.M) { - must := func(b bool, message string, args ...interface{}) { - if !b { - panic(fmt.Errorf(message, args...)) - } - } - - os.Exit(func() int { - { - config, err := kubeapi.NewConfig() - must(err == nil, "kubernetes config: %v", err) - - // Nothing's gonna stop us now. - config.QPS = 1000.0 - config.Burst = 2000.0 - - TestContext.Kubernetes, err = kubeapi.NewForConfig(config) - must(err == nil, "kubernetes client: %v", err) - } - - // By default, use a port-forward proxy to talk to the Operator. - if url := os.Getenv("PGO_APISERVER_URL"); url == "" { - if ns := os.Getenv("PGO_OPERATOR_NAMESPACE"); ns != "" { - pods, err := TestContext.Kubernetes.ListPods(ns, map[string]string{"name": "postgres-operator"}) - must(err == nil, "list pods: %v", err) - must(len(pods) > 0, "missing postgres-operator") - - port := "8443" - for _, c := range pods[0].Spec.Containers { - if c.Name == "apiserver" { - must(len(c.Ports) > 0, "missing proxy port") - port = fmt.Sprintf("%d", c.Ports[0].ContainerPort) - } - } - - proxy, err := TestContext.Kubernetes.PodPortForward(pods[0].Namespace, pods[0].Name, port) - must(err == nil, "pod port forward: %v", err) - defer proxy.Close() - - TestContext.DefaultEnvironment = append(TestContext.DefaultEnvironment, - "PGO_APISERVER_URL=https://"+proxy.LocalAddr(), - ) - } - } - - // By default, use files that are generated by the Ansible installer. - if ns := os.Getenv("PGO_OPERATOR_NAMESPACE"); ns != "" { - if home, err := os.UserHomeDir(); err == nil { - TestContext.DefaultEnvironment = append(TestContext.DefaultEnvironment, - "PGO_CA_CERT="+filepath.Join(home, ".pgo", ns, "output", "server.crt"), - "PGO_CLIENT_CERT="+filepath.Join(home, ".pgo", ns, "output", "server.crt"), - "PGO_CLIENT_KEY="+filepath.Join(home, ".pgo", ns, "output", "server.key"), - ) - } - } - - if scale := os.Getenv("PGO_TEST_TIMEOUT_SCALE"); scale != "" { - s, _ := strconv.ParseFloat(scale, 64) - must(s > 0, "PGO_TEST_TIMEOUT_SCALE must be a fractional number greater than zero") - TestContext.Scale = func(d time.Duration) time.Duration { return time.Duration(s * float64(d)) } - } else { - TestContext.Scale = func(d time.Duration) time.Duration { return d } - } - - return m.Run() - }()) -} - -var TestContext struct { - // DefaultEnvironment specifies environment variables to be passed to every - // executed process. Each entry is of the form "key=value", and values in - // os.Environ() take precedence. See https://golang.org/pkg/os/exec/#Cmd. 
- DefaultEnvironment []string - - Kubernetes *kubeapi.KubeAPI - - Scale func(time.Duration) time.Duration -} diff --git a/testing/policies/kyverno/kustomization.yaml b/testing/policies/kyverno/kustomization.yaml new file mode 100644 index 0000000000..88e9775e79 --- /dev/null +++ b/testing/policies/kyverno/kustomization.yaml @@ -0,0 +1,37 @@ +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +bases: + - https://github.com/kyverno/policies/pod-security/restricted + +resources: + # CVE-2020-14386: https://cloud.google.com/anthos/clusters/docs/security-bulletins#gcp-2020-012 + # CVE-2021-22555: https://cloud.google.com/anthos/clusters/docs/security-bulletins#gcp-2021-015 + - https://raw.githubusercontent.com/kyverno/policies/main/best-practices/require-drop-all/require-drop-all.yaml + - https://raw.githubusercontent.com/kyverno/policies/main/best-practices/require-ro-rootfs/require-ro-rootfs.yaml + + # CVE-2020-8554: https://cloud.google.com/anthos/clusters/docs/security-bulletins#gcp-2020-015 + - https://raw.githubusercontent.com/kyverno/policies/main/best-practices/restrict-service-external-ips/restrict-service-external-ips.yaml + +patches: +- target: + group: kyverno.io + kind: ClusterPolicy + patch: |- + # Ensure all policies "audit" rather than "enforce". + - { op: replace, path: /spec/validationFailureAction, value: audit } + +# Issue: [sc-11286] +# OpenShift 4.10 forbids any/all seccomp profiles. Remove the policy for now. +# - https://github.com/openshift/cluster-kube-apiserver-operator/issues/1325 +# - https://github.com/kyverno/policies/tree/main/pod-security/restricted/restrict-seccomp-strict +- target: + group: kyverno.io + kind: ClusterPolicy + name: restrict-seccomp-strict + patch: |- + $patch: delete + apiVersion: kyverno.io/v1 + kind: ClusterPolicy + metadata: + name: restrict-seccomp-strict diff --git a/testing/policies/kyverno/service_links.yaml b/testing/policies/kyverno/service_links.yaml new file mode 100644 index 0000000000..0ae48796ed --- /dev/null +++ b/testing/policies/kyverno/service_links.yaml @@ -0,0 +1,43 @@ +# Copyright 2022 - 2024 Crunchy Data Solutions, Inc. +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +--- +apiVersion: kyverno.io/v1 +kind: ClusterPolicy +metadata: + name: disable-service-links + annotations: + policies.kyverno.io/title: Disable Injection of Service Environment Variables + policies.kyverno.io/category: PGO + policies.kyverno.io/severity: high + policies.kyverno.io/subject: Pod + policies.kyverno.io/description: >- + Kubernetes automatically adds environment variables describing every Service in a Pod's namespace. + This can inadvertently change the behavior of things that read from the environment. For example, + a PodSpec that worked in the past might start to fail when the Pod is recreated with new Services + around. + +spec: + validationFailureAction: audit + background: true + rules: + - name: validate-enableServiceLinks + match: + resources: + kinds: + - Pod + validate: + message: Do not inject Service environment variables. 
+ pattern: + spec: + enableServiceLinks: false diff --git a/testing/testdata/policy1.sql b/testing/testdata/policy1.sql deleted file mode 100644 index 208ef17331..0000000000 --- a/testing/testdata/policy1.sql +++ /dev/null @@ -1,3 +0,0 @@ -\c userdb; -create table policy1 (id text); -grant all on policy1 to primaryuser; diff --git a/testing/testdata/policy2-insert.sql b/testing/testdata/policy2-insert.sql deleted file mode 100644 index 7a1ddf0753..0000000000 --- a/testing/testdata/policy2-insert.sql +++ /dev/null @@ -1 +0,0 @@ -insert into policy2 (select now()); diff --git a/testing/testdata/policy2-setup.sql b/testing/testdata/policy2-setup.sql deleted file mode 100644 index a379f1814a..0000000000 --- a/testing/testdata/policy2-setup.sql +++ /dev/null @@ -1,3 +0,0 @@ -\c userdb; -create table policy2 (id text); -grant all on policy2 to primaryuser; diff --git a/trivy.yaml b/trivy.yaml new file mode 100644 index 0000000000..b2ef32d785 --- /dev/null +++ b/trivy.yaml @@ -0,0 +1,14 @@ +# https://aquasecurity.github.io/trivy/latest/docs/references/configuration/config-file/ +--- +# Specify an exact list of recognized and acceptable licenses. +# [A GitHub workflow](/.github/workflows/trivy.yaml) rejects pull requests that +# import licenses not in this list. +# +# https://aquasecurity.github.io/trivy/latest/docs/scanner/license/ +license: + ignored: + - Apache-2.0 + - BSD-2-Clause + - BSD-3-Clause + - ISC + - MIT
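
A minimal illustration of the disable-service-links policy added above (the Pod name and image below are hypothetical and are not part of this change): because validationFailureAction is set to audit, Kyverno reports rather than rejects Pods whose spec does not explicitly set enableServiceLinks to false, so a compliant Pod spec would look roughly like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: compliant-example          # hypothetical name, for illustration only
    spec:
      enableServiceLinks: false        # satisfies the disable-service-links ClusterPolicy
      containers:
        - name: app
          image: example.com/app:1.0   # hypothetical image

Assuming Kyverno is already installed in the target cluster, the policies under testing/policies/kyverno could then be applied with a command such as kubectl apply -k testing/policies/kyverno.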